Ascend / DrivingSDK
MapTR training hangs
CLOSED · #IC5LJL · Defect
Opened by xy on 2025-05-06 10:38
1. Problem description (with error log context):
Training was run following the MapTR training steps in model_examples; the program hangs. Log below:
[root@fedf3901b7b1 MapTR]# ./MapTR/tools/dist_train.sh ./MapTR/projects/configs/maptr/maptr_tiny_r50_24e_bevformer.py 1
Warning: please export TSAN_OPTIONS='ignore_noninstrumented_modules=1' to avoid false positive reports from the OpenMP runtime!
/usr/local/python3.8/lib/python3.8/site-packages/torch/distributed/launch.py:181: FutureWarning: The module torch.distributed.launch is deprecated and will be removed in future. Use torchrun. Note that --use-env is set by default in torchrun. If your script expects `--local-rank` argument to be set, please change it to read from `os.environ['LOCAL_RANK']` instead. See https://pytorch.org/docs/stable/distributed.html#launch-utility for further instructions
warnings.warn(
Warning: please export TSAN_OPTIONS='ignore_noninstrumented_modules=1' to avoid false positive reports from the OpenMP runtime!
/home/xinyi/2025/MapTR-V2/DrivingSDK/model_examples/MapTR/mmcv/mmcv/__init__.py:20: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove components related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details.
warnings.warn(
/home/xinyi/2025/MapTR-V2/DrivingSDK/model_examples/MapTR/mmdetection3d/mmdet3d/core/evaluation/kitti_utils/eval.py:10: NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.
def get_thresholds(scores: np.ndarray, num_gt, num_sample_pts=41):
/usr/local/python3.8/lib/python3.8/site-packages/torch_npu/contrib/transfer_to_npu.py:293: ImportWarning:
*************************************************************************************************************
The torch.Tensor.cuda and torch.nn.Module.cuda are replaced with torch.Tensor.npu and torch.nn.Module.npu now..
The torch.cuda.DoubleTensor is replaced with torch.npu.FloatTensor cause the double type is not supported now..
The backend in torch.distributed.init_process_group set to hccl now..
The torch.cuda.* and torch.cuda.amp.* are replaced with torch.npu.* and torch.npu.amp.* now..
The device parameters have been replaced with npu in the function below: torch.logspace, torch.randint, torch.hann_window, torch.rand, torch.full_like, torch.ones_like, torch.rand_like, torch.randperm, torch.arange, torch.frombuffer, torch.normal, torch._empty_per_channel_affine_quantized, torch.empty_strided, torch.empty_like, torch.scalar_tensor, torch.tril_indices, torch.bartlett_window, torch.ones, torch.sparse_coo_tensor, torch.randn, torch.kaiser_window, torch.tensor, torch.triu_indices, torch.as_tensor, torch.zeros, torch.randint_like, torch.full, torch.eye, torch._sparse_csr_tensor_unsafe, torch.empty, torch._sparse_coo_tensor_unsafe, torch.blackman_window, torch.zeros_like, torch.range, torch.sparse_csr_tensor, torch.randn_like, torch.from_file, torch._cudnn_init_dropout_state, torch._empty_affine_quantized, torch.linspace, torch.hamming_window, torch.empty_quantized, torch._pin_memory, torch.autocast, torch.load, torch.Generator, torch.set_default_device, torch.Tensor.new_empty, torch.Tensor.new_empty_strided, torch.Tensor.new_full, torch.Tensor.new_ones, torch.Tensor.new_tensor, torch.Tensor.new_zeros, torch.Tensor.to, torch.Tensor.pin_memory, torch.nn.Module.to, torch.nn.Module.to_empty ************************************************************************************************************* warnings.warn(msg, ImportWarning) /usr/local/python3.8/lib/python3.8/site-packages/torch_npu/contrib/transfer_to_npu.py:250: RuntimeWarning: torch.jit.script and torch.jit.script_method will be disabled by transfer_to_npu, which currently does not support them, if you need to enable them, please do not use transfer_to_npu. warnings.warn(msg, RuntimeWarning) projects.mmdet3d_plugin /home/xinyi/2025/MapTR-V2/DrivingSDK/model_examples/MapTR/MapTR/projects/mmdet3d_plugin/bevformer/modules/custom_base_transformer_layer.py:23: ImportWarning: ``MultiScaleDeformableAttention`` has been moved to ``mmcv.ops.multi_scale_deform_attn``, please change original path ``from mmcv.cnn.bricks.transformer import MultiScaleDeformableAttention`` to ``from mmcv.ops.multi_scale_deform_attn import MultiScaleDeformableAttention`` warnings.warn( Warning: please export TSAN_OPTIONS='ignore_noninstrumented_modules=1' to avoid false positive reports from the OpenMP runtime! /home/xinyi/2025/MapTR-V2/DrivingSDK/model_examples/MapTR/mmcv/mmcv/__init__.py:20: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove components related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details. warnings.warn( /home/xinyi/2025/MapTR-V2/DrivingSDK/model_examples/MapTR/mmdetection3d/mmdet3d/core/evaluation/kitti_utils/eval.py:10: NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details. 
def get_thresholds(scores: np.ndarray, num_gt, num_sample_pts=41): /usr/local/python3.8/lib/python3.8/site-packages/torch_npu/contrib/transfer_to_npu.py:293: ImportWarning: ************************************************************************************************************* The torch.Tensor.cuda and torch.nn.Module.cuda are replaced with torch.Tensor.npu and torch.nn.Module.npu now.. The torch.cuda.DoubleTensor is replaced with torch.npu.FloatTensor cause the double type is not supported now.. The backend in torch.distributed.init_process_group set to hccl now.. The torch.cuda.* and torch.cuda.amp.* are replaced with torch.npu.* and torch.npu.amp.* now.. The device parameters have been replaced with npu in the function below: torch.logspace, torch.randint, torch.hann_window, torch.rand, torch.full_like, torch.ones_like, torch.rand_like, torch.randperm, torch.arange, torch.frombuffer, torch.normal, torch._empty_per_channel_affine_quantized, torch.empty_strided, torch.empty_like, torch.scalar_tensor, torch.tril_indices, torch.bartlett_window, torch.ones, torch.sparse_coo_tensor, torch.randn, torch.kaiser_window, torch.tensor, torch.triu_indices, torch.as_tensor, torch.zeros, torch.randint_like, torch.full, torch.eye, torch._sparse_csr_tensor_unsafe, torch.empty, torch._sparse_coo_tensor_unsafe, torch.blackman_window, torch.zeros_like, torch.range, torch.sparse_csr_tensor, torch.randn_like, torch.from_file, torch._cudnn_init_dropout_state, torch._empty_affine_quantized, torch.linspace, torch.hamming_window, torch.empty_quantized, torch._pin_memory, torch.autocast, torch.load, torch.Generator, torch.set_default_device, torch.Tensor.new_empty, torch.Tensor.new_empty_strided, torch.Tensor.new_full, torch.Tensor.new_ones, torch.Tensor.new_tensor, torch.Tensor.new_zeros, torch.Tensor.to, torch.Tensor.pin_memory, torch.nn.Module.to, torch.nn.Module.to_empty ************************************************************************************************************* warnings.warn(msg, ImportWarning) /usr/local/python3.8/lib/python3.8/site-packages/torch_npu/contrib/transfer_to_npu.py:250: RuntimeWarning: torch.jit.script and torch.jit.script_method will be disabled by transfer_to_npu, which currently does not support them, if you need to enable them, please do not use transfer_to_npu. 
warnings.warn(msg, RuntimeWarning) 2025-05-06 02:29:12,801 - mmdet - INFO - Environment info: ------------------------------------------------------------ sys.platform: linux Python: 3.8.17 (default, Apr 29 2025, 07:31:52) [Clang 12.0.1 (openEuler 12.0.1-6.oe2203sp4 f4a7df2c51fa9eb3679555ef2de92c638ba CUDA available: True GPU 0: Ascend910B3 CUDA_HOME: None GCC: clang version 12.0.1 (openEuler 12.0.1-6.oe2203sp4 f4a7df2c51fa9eb3679555ef2de92c638ba9f880) PyTorch: 2.1.0a0+git7bcf7da PyTorch compiling details: PyTorch built with: - GCC 4.2 - C++ Version: 201703 - clang 17.0.6 - OpenMP 202011 - NNPACK is enabled - CPU capability usage: NO AVX - Build settings: BUILD_TYPE=Release, CXX_COMPILER=/home/xinyi/tool/BiShengCompiler-4.1.0-aarch64-linux/bin/clang++, CXX_FLAGS=-flto=thin -fuse-ld=lld -D_GLIBCXX_USE_CXX11_ABI=0 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DLIBKINETO_NOROCTRACER -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=braced-scalar-init -Werror=range-loop-construct -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wvla-extension -Wnewline-eof -Winconsistent-missing-override -Winconsistent-missing-destructor-override -Wno-range-loop-analysis -Wno-pass-failed -Wsuggest-override -Wno-error=pedantic -Wno-error=old-style-cast -Wno-error=inconsistent-missing-override -Wno-error=inconsistent-missing-destructor-override -Wconstant-conversion -Wno-invalid-partial-specialization -Wno-unused-private-field -Wno-missing-braces -Wunused-lambda-capture -Qunused-arguments -fcolor-diagnostics -faligned-new -Wno-unused-but-set-variable -fno-math-errno -fno-trapping-math -Werror=format, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.1.0, USE_CUDA=OFF, USE_CUDNN=OFF, USE_EIGEN_FOR_BLAS=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=OFF, USE_MKLDNN=OFF, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, TorchVision: 0.16.0 OpenCV: 4.11.0 MMCV: 1.7.2 MMCV Compiler: clang 12.0.1 MMCV CUDA Compiler: not available MMDetection: 2.14.0 MMSegmentation: 0.14.1 MMDetection3D: 1.0.0rc4+0ff02cd spconv2.0: False ------------------------------------------------------------ 2025-05-06 02:29:14,649 - mmdet - INFO - Distributed training: True 2025-05-06 02:29:16,207 - mmdet - INFO - Config: point_cloud_range = [-15.0, -30.0, -2.0, 15.0, 30.0, 2.0] class_names = [ 'car', 'truck', 'construction_vehicle', 'bus', 'trailer', 'barrier', 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone' ] dataset_type = 'CustomNuScenesLocalMapDataset' data_root = 'data/nuscenes/' input_modality = dict( use_lidar=False, use_camera=True, use_radar=False, use_map=False, use_external=True) file_client_args = dict(backend='disk') train_pipeline = [ dict(type='LoadMultiViewImageFromFiles', to_float32=True), dict(type='PhotoMetricDistortionMultiViewImage'), dict( type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True, with_attr_label=False), dict( type='ObjectRangeFilter', point_cloud_range=[-15.0, -30.0, -2.0, 15.0, 30.0, 2.0]), dict( type='ObjectNameFilter', classes=[ 'car', 'truck', 'construction_vehicle', 'bus', 'trailer', 'barrier', 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone' ]), dict( type='NormalizeMultiviewImage', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 
57.375], to_rgb=True), dict(type='RandomScaleImageMultiViewImage', scales=[0.5]), dict(type='PadMultiViewImage', size_divisor=32), dict( type='DefaultFormatBundle3D', class_names=[ 'car', 'truck', 'construction_vehicle', 'bus', 'trailer', 'barrier', 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone' ]), dict(type='CustomCollect3D', keys=['gt_bboxes_3d', 'gt_labels_3d', 'img']) ] test_pipeline = [ dict(type='LoadMultiViewImageFromFiles', to_float32=True), dict( type='NormalizeMultiviewImage', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True), dict( type='MultiScaleFlipAug3D', img_scale=(1600, 900), pts_scale_ratio=1, flip=False, transforms=[ dict(type='RandomScaleImageMultiViewImage', scales=[0.5]), dict(type='PadMultiViewImage', size_divisor=32), dict( type='DefaultFormatBundle3D', class_names=[ 'car', 'truck', 'construction_vehicle', 'bus', 'trailer', 'barrier', 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone' ], with_label=False), dict(type='CustomCollect3D', keys=['img']) ]) ] eval_pipeline = [ dict( type='LoadPointsFromFile', coord_type='LIDAR', load_dim=5, use_dim=5, file_client_args=dict(backend='disk')), dict( type='LoadPointsFromMultiSweeps', sweeps_num=10, file_client_args=dict(backend='disk')), dict( type='DefaultFormatBundle3D', class_names=[ 'car', 'truck', 'trailer', 'bus', 'construction_vehicle', 'bicycle', 'motorcycle', 'pedestrian', 'traffic_cone', 'barrier' ], with_label=False), dict(type='Collect3D', keys=['points']) ] data = dict( samples_per_gpu=4, workers_per_gpu=8, train=dict( type='CustomNuScenesLocalMapDataset', data_root='data/nuscenes/', ann_file='data/nuscenes/nuscenes_infos_temporal_train.pkl', pipeline=[ dict(type='LoadMultiViewImageFromFiles', to_float32=True), dict(type='PhotoMetricDistortionMultiViewImage'), dict( type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True, with_attr_label=False), dict( type='ObjectRangeFilter', point_cloud_range=[-15.0, -30.0, -2.0, 15.0, 30.0, 2.0]), dict( type='ObjectNameFilter', classes=[ 'car', 'truck', 'construction_vehicle', 'bus', 'trailer', 'barrier', 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone' ]), dict( type='NormalizeMultiviewImage', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True), dict(type='RandomScaleImageMultiViewImage', scales=[0.5]), dict(type='PadMultiViewImage', size_divisor=32), dict( type='DefaultFormatBundle3D', class_names=[ 'car', 'truck', 'construction_vehicle', 'bus', 'trailer', 'barrier', 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone' ]), dict( type='CustomCollect3D', keys=['gt_bboxes_3d', 'gt_labels_3d', 'img']) ], classes=[ 'car', 'truck', 'construction_vehicle', 'bus', 'trailer', 'barrier', 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone' ], modality=dict( use_lidar=False, use_camera=True, use_radar=False, use_map=False, use_external=True), test_mode=False, box_type_3d='LiDAR', use_valid_flag=True, bev_size=(200, 100), pc_range=[-15.0, -30.0, -2.0, 15.0, 30.0, 2.0], fixed_ptsnum_per_line=20, eval_use_same_gt_sample_num_flag=True, padding_value=-10000, map_classes=['divider', 'ped_crossing', 'boundary'], queue_length=1, gt_shift_pts_pattern='v5'), val=dict( type='CustomNuScenesLocalMapDataset', ann_file='data/nuscenes/nuscenes_infos_temporal_val.pkl', pipeline=[ dict(type='LoadMultiViewImageFromFiles', to_float32=True), dict( type='NormalizeMultiviewImage', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True), dict( type='MultiScaleFlipAug3D', img_scale=(1600, 900), pts_scale_ratio=1, 
flip=False, transforms=[ dict(type='RandomScaleImageMultiViewImage', scales=[0.5]), dict(type='PadMultiViewImage', size_divisor=32), dict( type='DefaultFormatBundle3D', class_names=[ 'car', 'truck', 'construction_vehicle', 'bus', 'trailer', 'barrier', 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone' ], with_label=False), dict(type='CustomCollect3D', keys=['img']) ]) ], classes=[ 'car', 'truck', 'construction_vehicle', 'bus', 'trailer', 'barrier', 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone' ], modality=dict( use_lidar=False, use_camera=True, use_radar=False, use_map=False, use_external=True), test_mode=True, box_type_3d='LiDAR', data_root='data/nuscenes/', map_ann_file='data/nuscenes/nuscenes_map_anns_val.json', bev_size=(200, 100), pc_range=[-15.0, -30.0, -2.0, 15.0, 30.0, 2.0], fixed_ptsnum_per_line=20, eval_use_same_gt_sample_num_flag=True, padding_value=-10000, gt_shift_pts_pattern='v5', map_classes=['divider', 'ped_crossing', 'boundary'], samples_per_gpu=1), test=dict( type='CustomNuScenesLocalMapDataset', data_root='data/nuscenes/', ann_file='data/nuscenes/nuscenes_infos_temporal_val.pkl', pipeline=[ dict(type='LoadMultiViewImageFromFiles', to_float32=True), dict( type='NormalizeMultiviewImage', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True), dict( type='MultiScaleFlipAug3D', img_scale=(1600, 900), pts_scale_ratio=1, flip=False, transforms=[ dict(type='RandomScaleImageMultiViewImage', scales=[0.5]), dict(type='PadMultiViewImage', size_divisor=32), dict( type='DefaultFormatBundle3D', class_names=[ 'car', 'truck', 'construction_vehicle', 'bus', 'trailer', 'barrier', 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone' ], with_label=False), dict(type='CustomCollect3D', keys=['img']) ]) ], classes=[ 'car', 'truck', 'construction_vehicle', 'bus', 'trailer', 'barrier', 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone' ], modality=dict( use_lidar=False, use_camera=True, use_radar=False, use_map=False, use_external=True), test_mode=True, box_type_3d='LiDAR', map_ann_file='data/nuscenes/nuscenes_map_anns_val.json', bev_size=(200, 100), pc_range=[-15.0, -30.0, -2.0, 15.0, 30.0, 2.0], fixed_ptsnum_per_line=20, eval_use_same_gt_sample_num_flag=True, padding_value=-10000, gt_shift_pts_pattern='v5', map_classes=['divider', 'ped_crossing', 'boundary']), pin_memory=True, shuffler_sampler=dict(type='DistributedGroupSampler'), nonshuffler_sampler=dict(type='DistributedSampler')) evaluation = dict( interval=2, pipeline=[ dict(type='LoadMultiViewImageFromFiles', to_float32=True), dict( type='NormalizeMultiviewImage', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True), dict( type='MultiScaleFlipAug3D', img_scale=(1600, 900), pts_scale_ratio=1, flip=False, transforms=[ dict(type='RandomScaleImageMultiViewImage', scales=[0.5]), dict(type='PadMultiViewImage', size_divisor=32), dict( type='DefaultFormatBundle3D', class_names=[ 'car', 'truck', 'construction_vehicle', 'bus', 'trailer', 'barrier', 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone' ], with_label=False), dict(type='CustomCollect3D', keys=['img']) ]) ], metric='chamfer') checkpoint_config = dict(interval=1) log_config = dict( interval=50, hooks=[dict(type='TextLoggerHook'), dict(type='TensorboardLoggerHook')]) dist_params = dict(backend='nccl') log_level = 'INFO' work_dir = './work_dirs/maptr_tiny_r50_24e_bevformer' load_from = None resume_from = None workflow = [('train', 1)] plugin = True plugin_dir = 'projects/mmdet3d_plugin/' voxel_size = [0.15, 0.15, 4] img_norm_cfg = 
dict( mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) map_classes = ['divider', 'ped_crossing', 'boundary'] fixed_ptsnum_per_gt_line = 20 fixed_ptsnum_per_pred_line = 20 eval_use_same_gt_sample_num_flag = True num_map_classes = 3 _dim_ = 256 _pos_dim_ = 128 _ffn_dim_ = 512 _num_levels_ = 1 bev_h_ = 200 bev_w_ = 100 queue_length = 1 model = dict( type='MapTR', use_grid_mask=True, video_test_mode=False, pretrained=dict(img='ckpts/resnet50-19c8e357.pth'), img_backbone=dict( type='ResNet', depth=50, num_stages=4, out_indices=(3, ), frozen_stages=1, norm_cfg=dict(type='BN', requires_grad=False), norm_eval=True, style='pytorch'), img_neck=dict( type='FPN', in_channels=[2048], out_channels=256, start_level=0, add_extra_convs='on_output', num_outs=1, relu_before_extra_convs=True), pts_bbox_head=dict( type='MapTRHead', bev_h=200, bev_w=100, num_query=900, num_vec=50, num_pts_per_vec=20, num_pts_per_gt_vec=20, dir_interval=1, query_embed_type='instance_pts', transform_method='minmax', gt_shift_pts_pattern='v5', num_classes=3, in_channels=256, sync_cls_avg_factor=True, with_box_refine=True, as_two_stage=False, code_size=2, code_weights=[1.0, 1.0, 1.0, 1.0], transformer=dict( type='MapTRPerceptionTransformer', rotate_prev_bev=True, use_shift=True, use_can_bus=True, embed_dims=256, encoder=dict( type='BEVFormerEncoder', num_layers=1, pc_range=[-15.0, -30.0, -2.0, 15.0, 30.0, 2.0], num_points_in_pillar=4, return_intermediate=False, transformerlayers=dict( type='BEVFormerLayer', attn_cfgs=[ dict( type='TemporalSelfAttention', embed_dims=256, num_levels=1), dict( type='SpatialCrossAttention', pc_range=[-15.0, -30.0, -2.0, 15.0, 30.0, 2.0], deformable_attention=dict( type='MSDeformableAttention3D', embed_dims=256, num_points=8, num_levels=1), embed_dims=256) ], feedforward_channels=512, ffn_dropout=0.1, operation_order=('self_attn', 'norm', 'cross_attn', 'norm', 'ffn', 'norm'))), decoder=dict( type='MapTRDecoder', num_layers=6, return_intermediate=True, transformerlayers=dict( type='DetrTransformerDecoderLayer', attn_cfgs=[ dict( type='MultiheadAttention', embed_dims=256, num_heads=8, dropout=0.1), dict( type='CustomMSDeformableAttention', embed_dims=256, num_levels=1) ], feedforward_channels=512, ffn_dropout=0.1, operation_order=('self_attn', 'norm', 'cross_attn', 'norm', 'ffn', 'norm')))), bbox_coder=dict( type='MapTRNMSFreeCoder', post_center_range=[-20, -35, -20, -35, 20, 35, 20, 35], pc_range=[-15.0, -30.0, -2.0, 15.0, 30.0, 2.0], max_num=50, voxel_size=[0.15, 0.15, 4], num_classes=3), positional_encoding=dict( type='LearnedPositionalEncoding', num_feats=128, row_num_embed=200, col_num_embed=100), loss_cls=dict( type='FocalLoss', use_sigmoid=True, gamma=2.0, alpha=0.25, loss_weight=2.0), loss_bbox=dict(type='L1Loss', loss_weight=0.0), loss_iou=dict(type='GIoULoss', loss_weight=0.0), loss_pts=dict(type='PtsL1Loss', loss_weight=5.0), loss_dir=dict(type='PtsDirCosLoss', loss_weight=0.005)), train_cfg=dict( pts=dict( grid_size=[512, 512, 1], voxel_size=[0.15, 0.15, 4], point_cloud_range=[-15.0, -30.0, -2.0, 15.0, 30.0, 2.0], out_size_factor=4, assigner=dict( type='MapTRAssigner', cls_cost=dict(type='FocalLossCost', weight=2.0), reg_cost=dict( type='BBoxL1Cost', weight=0.0, box_format='xywh'), iou_cost=dict(type='IoUCost', iou_mode='giou', weight=0.0), pts_cost=dict(type='OrderedPtsL1Cost', weight=5), pc_range=[-15.0, -30.0, -2.0, 15.0, 30.0, 2.0])))) optimizer = dict( type='AdamW', lr=0.0006, paramwise_cfg=dict(custom_keys=dict(img_backbone=dict(lr_mult=0.1))), weight_decay=0.01) 
optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2)) lr_config = dict( policy='CosineAnnealing', warmup='linear', warmup_iters=500, warmup_ratio=0.3333333333333333, min_lr_ratio=0.001) total_epochs = 24 runner = dict(type='EpochBasedRunner', max_epochs=24) fp16 = dict(loss_scale=512.0) gpu_ids = range(0, 1) 2025-05-06 02:29:16,207 - mmdet - INFO - Set random seed to 0, deterministic: True /home/xinyi/2025/MapTR-V2/DrivingSDK/model_examples/MapTR/MapTR/projects/mmdet3d_plugin/bevformer/modules/custom_base_transformer_layer.py:94: UserWarning: The arguments `feedforward_channels` in BaseTransformerLayer has been deprecated, now you should set `feedforward_channels` and other FFN related arguments to a dict named `ffn_cfgs`. warnings.warn( /home/xinyi/2025/MapTR-V2/DrivingSDK/model_examples/MapTR/MapTR/projects/mmdet3d_plugin/bevformer/modules/custom_base_transformer_layer.py:94: UserWarning: The arguments `ffn_dropout` in BaseTransformerLayer has been deprecated, now you should set `ffn_drop` and other FFN related arguments to a dict named `ffn_cfgs`. warnings.warn( /home/xinyi/2025/MapTR-V2/DrivingSDK/model_examples/MapTR/MapTR/projects/mmdet3d_plugin/bevformer/modules/custom_base_transformer_layer.py:94: UserWarning: The arguments `ffn_num_fcs` in BaseTransformerLayer has been deprecated, now you should set `num_fcs` and other FFN related arguments to a dict named `ffn_cfgs`. warnings.warn( /home/xinyi/2025/MapTR-V2/DrivingSDK/model_examples/MapTR/mmcv/mmcv/cnn/bricks/transformer.py:690: DeprecationWarning: The arguments `feedforward_channels` in BaseTransformerLayer has been deprecated, now you should set `feedforward_channels` and other FFN related arguments to a dict named `ffn_cfgs`. warnings.warn( /home/xinyi/2025/MapTR-V2/DrivingSDK/model_examples/MapTR/mmcv/mmcv/cnn/bricks/transformer.py:690: DeprecationWarning: The arguments `ffn_dropout` in BaseTransformerLayer has been deprecated, now you should set `ffn_drop` and other FFN related arguments to a dict named `ffn_cfgs`. warnings.warn( /home/xinyi/2025/MapTR-V2/DrivingSDK/model_examples/MapTR/mmcv/mmcv/cnn/bricks/transformer.py:690: DeprecationWarning: The arguments `ffn_num_fcs` in BaseTransformerLayer has been deprecated, now you should set `num_fcs` and other FFN related arguments to a dict named `ffn_cfgs`. warnings.warn( /home/xinyi/2025/MapTR-V2/DrivingSDK/model_examples/MapTR/mmcv/mmcv/cnn/bricks/transformer.py:440: DeprecationWarning: The arguments `dropout` in MultiheadAttention has been deprecated, now you can separately set `attn_drop`(float), proj_drop(float), and `dropout_layer`(dict) warnings.warn( /home/xinyi/2025/MapTR-V2/DrivingSDK/model_examples/MapTR/mmdetection3d/mmdet3d/models/detectors/mvx_two_stage.py:88: UserWarning: DeprecationWarning: pretrained is a deprecated key, please consider using init_cfg. warnings.warn('DeprecationWarning: pretrained is a deprecated ' > /home/xinyi/2025/MapTR-V2/DrivingSDK/model_examples/MapTR/mmcv/mmcv/runner/base_module.py(79)init_weights() -> for name, param in self.named_parameters(): (Pdb) c !!!! pts_bbox_head.code_weights !!!! pts_bbox_head.positional_encoding.row_embed.weight !!!! pts_bbox_head.positional_encoding.col_embed.weight !!!! pts_bbox_head.transformer.level_embeds !!!! pts_bbox_head.transformer.cams_embeds !!!! pts_bbox_head.transformer.encoder.layers.0.attentions.0.sampling_offsets.weight !!!! pts_bbox_head.transformer.encoder.layers.0.attentions.0.sampling_offsets.bias !!!! 
pts_bbox_head.transformer.encoder.layers.0.attentions.0.attention_weights.weight !!!! pts_bbox_head.transformer.encoder.layers.0.attentions.0.attention_weights.bias !!!! pts_bbox_head.transformer.encoder.layers.0.attentions.0.value_proj.weight !!!! pts_bbox_head.transformer.encoder.layers.0.attentions.0.value_proj.bias !!!! pts_bbox_head.transformer.encoder.layers.0.attentions.0.output_proj.weight !!!! pts_bbox_head.transformer.encoder.layers.0.attentions.0.output_proj.bias !!!! pts_bbox_head.transformer.encoder.layers.0.attentions.1.deformable_attention.sampling_offsets.weight !!!! pts_bbox_head.transformer.encoder.layers.0.attentions.1.deformable_attention.sampling_offsets.bias !!!! pts_bbox_head.transformer.encoder.layers.0.attentions.1.deformable_attention.attention_weights.weight !!!! pts_bbox_head.transformer.encoder.layers.0.attentions.1.deformable_attention.attention_weights.bias !!!! pts_bbox_head.transformer.encoder.layers.0.attentions.1.deformable_attention.value_proj.weight !!!! pts_bbox_head.transformer.encoder.layers.0.attentions.1.deformable_attention.value_proj.bias !!!! pts_bbox_head.transformer.encoder.layers.0.attentions.1.output_proj.weight !!!! pts_bbox_head.transformer.encoder.layers.0.attentions.1.output_proj.bias !!!! pts_bbox_head.transformer.encoder.layers.0.ffns.0.layers.0.0.weight !!!! pts_bbox_head.transformer.encoder.layers.0.ffns.0.layers.0.0.bias !!!! pts_bbox_head.transformer.encoder.layers.0.ffns.0.layers.1.weight !!!! pts_bbox_head.transformer.encoder.layers.0.ffns.0.layers.1.bias !!!! pts_bbox_head.transformer.encoder.layers.0.norms.0.weight !!!! pts_bbox_head.transformer.encoder.layers.0.norms.0.bias !!!! pts_bbox_head.transformer.encoder.layers.0.norms.1.weight !!!! pts_bbox_head.transformer.encoder.layers.0.norms.1.bias !!!! pts_bbox_head.transformer.encoder.layers.0.norms.2.weight !!!! pts_bbox_head.transformer.encoder.layers.0.norms.2.bias !!!! pts_bbox_head.transformer.decoder.layers.0.attentions.0.attn.in_proj_weight !!!! pts_bbox_head.transformer.decoder.layers.0.attentions.0.attn.in_proj_bias !!!! pts_bbox_head.transformer.decoder.layers.0.attentions.0.attn.out_proj.weight !!!! pts_bbox_head.transformer.decoder.layers.0.attentions.0.attn.out_proj.bias !!!! pts_bbox_head.transformer.decoder.layers.0.attentions.1.sampling_offsets.weight !!!! pts_bbox_head.transformer.decoder.layers.0.attentions.1.sampling_offsets.bias !!!! pts_bbox_head.transformer.decoder.layers.0.attentions.1.attention_weights.weight !!!! pts_bbox_head.transformer.decoder.layers.0.attentions.1.attention_weights.bias !!!! pts_bbox_head.transformer.decoder.layers.0.attentions.1.value_proj.weight !!!! pts_bbox_head.transformer.decoder.layers.0.attentions.1.value_proj.bias !!!! pts_bbox_head.transformer.decoder.layers.0.attentions.1.output_proj.weight !!!! pts_bbox_head.transformer.decoder.layers.0.attentions.1.output_proj.bias !!!! pts_bbox_head.transformer.decoder.layers.0.ffns.0.layers.0.0.weight !!!! pts_bbox_head.transformer.decoder.layers.0.ffns.0.layers.0.0.bias !!!! pts_bbox_head.transformer.decoder.layers.0.ffns.0.layers.1.weight !!!! pts_bbox_head.transformer.decoder.layers.0.ffns.0.layers.1.bias !!!! pts_bbox_head.transformer.decoder.layers.0.norms.0.weight !!!! pts_bbox_head.transformer.decoder.layers.0.norms.0.bias !!!! pts_bbox_head.transformer.decoder.layers.0.norms.1.weight !!!! pts_bbox_head.transformer.decoder.layers.0.norms.1.bias !!!! pts_bbox_head.transformer.decoder.layers.0.norms.2.weight !!!! pts_bbox_head.transformer.decoder.layers.0.norms.2.bias !!!! 
pts_bbox_head.transformer.decoder.layers.1.attentions.0.attn.in_proj_weight !!!! pts_bbox_head.transformer.decoder.layers.1.attentions.0.attn.in_proj_bias !!!! pts_bbox_head.transformer.decoder.layers.1.attentions.0.attn.out_proj.weight !!!! pts_bbox_head.transformer.decoder.layers.1.attentions.0.attn.out_proj.bias !!!! pts_bbox_head.transformer.decoder.layers.1.attentions.1.sampling_offsets.weight !!!! pts_bbox_head.transformer.decoder.layers.1.attentions.1.sampling_offsets.bias !!!! pts_bbox_head.transformer.decoder.layers.1.attentions.1.attention_weights.weight !!!! pts_bbox_head.transformer.decoder.layers.1.attentions.1.attention_weights.bias !!!! pts_bbox_head.transformer.decoder.layers.1.attentions.1.value_proj.weight !!!! pts_bbox_head.transformer.decoder.layers.1.attentions.1.value_proj.bias !!!! pts_bbox_head.transformer.decoder.layers.1.attentions.1.output_proj.weight !!!! pts_bbox_head.transformer.decoder.layers.1.attentions.1.output_proj.bias !!!! pts_bbox_head.transformer.decoder.layers.1.ffns.0.layers.0.0.weight !!!! pts_bbox_head.transformer.decoder.layers.1.ffns.0.layers.0.0.bias !!!! pts_bbox_head.transformer.decoder.layers.1.ffns.0.layers.1.weight !!!! pts_bbox_head.transformer.decoder.layers.1.ffns.0.layers.1.bias !!!! pts_bbox_head.transformer.decoder.layers.1.norms.0.weight !!!! pts_bbox_head.transformer.decoder.layers.1.norms.0.bias !!!! pts_bbox_head.transformer.decoder.layers.1.norms.1.weight !!!! pts_bbox_head.transformer.decoder.layers.1.norms.1.bias !!!! pts_bbox_head.transformer.decoder.layers.1.norms.2.weight !!!! pts_bbox_head.transformer.decoder.layers.1.norms.2.bias !!!! pts_bbox_head.transformer.decoder.layers.2.attentions.0.attn.in_proj_weight !!!! pts_bbox_head.transformer.decoder.layers.2.attentions.0.attn.in_proj_bias !!!! pts_bbox_head.transformer.decoder.layers.2.attentions.0.attn.out_proj.weight !!!! pts_bbox_head.transformer.decoder.layers.2.attentions.0.attn.out_proj.bias !!!! pts_bbox_head.transformer.decoder.layers.2.attentions.1.sampling_offsets.weight !!!! pts_bbox_head.transformer.decoder.layers.2.attentions.1.sampling_offsets.bias !!!! pts_bbox_head.transformer.decoder.layers.2.attentions.1.attention_weights.weight !!!! pts_bbox_head.transformer.decoder.layers.2.attentions.1.attention_weights.bias !!!! pts_bbox_head.transformer.decoder.layers.2.attentions.1.value_proj.weight !!!! pts_bbox_head.transformer.decoder.layers.2.attentions.1.value_proj.bias !!!! pts_bbox_head.transformer.decoder.layers.2.attentions.1.output_proj.weight !!!! pts_bbox_head.transformer.decoder.layers.2.attentions.1.output_proj.bias !!!! pts_bbox_head.transformer.decoder.layers.2.ffns.0.layers.0.0.weight !!!! pts_bbox_head.transformer.decoder.layers.2.ffns.0.layers.0.0.bias !!!! pts_bbox_head.transformer.decoder.layers.2.ffns.0.layers.1.weight !!!! pts_bbox_head.transformer.decoder.layers.2.ffns.0.layers.1.bias !!!! pts_bbox_head.transformer.decoder.layers.2.norms.0.weight !!!! pts_bbox_head.transformer.decoder.layers.2.norms.0.bias !!!! pts_bbox_head.transformer.decoder.layers.2.norms.1.weight !!!! pts_bbox_head.transformer.decoder.layers.2.norms.1.bias !!!! pts_bbox_head.transformer.decoder.layers.2.norms.2.weight !!!! pts_bbox_head.transformer.decoder.layers.2.norms.2.bias !!!! pts_bbox_head.transformer.decoder.layers.3.attentions.0.attn.in_proj_weight !!!! pts_bbox_head.transformer.decoder.layers.3.attentions.0.attn.in_proj_bias !!!! pts_bbox_head.transformer.decoder.layers.3.attentions.0.attn.out_proj.weight !!!! 
pts_bbox_head.transformer.decoder.layers.3.attentions.0.attn.out_proj.bias !!!! pts_bbox_head.transformer.decoder.layers.3.attentions.1.sampling_offsets.weight !!!! pts_bbox_head.transformer.decoder.layers.3.attentions.1.sampling_offsets.bias !!!! pts_bbox_head.transformer.decoder.layers.3.attentions.1.attention_weights.weight !!!! pts_bbox_head.transformer.decoder.layers.3.attentions.1.attention_weights.bias !!!! pts_bbox_head.transformer.decoder.layers.3.attentions.1.value_proj.weight !!!! pts_bbox_head.transformer.decoder.layers.3.attentions.1.value_proj.bias !!!! pts_bbox_head.transformer.decoder.layers.3.attentions.1.output_proj.weight !!!! pts_bbox_head.transformer.decoder.layers.3.attentions.1.output_proj.bias !!!! pts_bbox_head.transformer.decoder.layers.3.ffns.0.layers.0.0.weight !!!! pts_bbox_head.transformer.decoder.layers.3.ffns.0.layers.0.0.bias !!!! pts_bbox_head.transformer.decoder.layers.3.ffns.0.layers.1.weight !!!! pts_bbox_head.transformer.decoder.layers.3.ffns.0.layers.1.bias !!!! pts_bbox_head.transformer.decoder.layers.3.norms.0.weight !!!! pts_bbox_head.transformer.decoder.layers.3.norms.0.bias !!!! pts_bbox_head.transformer.decoder.layers.3.norms.1.weight !!!! pts_bbox_head.transformer.decoder.layers.3.norms.1.bias !!!! pts_bbox_head.transformer.decoder.layers.3.norms.2.weight !!!! pts_bbox_head.transformer.decoder.layers.3.norms.2.bias !!!! pts_bbox_head.transformer.decoder.layers.4.attentions.0.attn.in_proj_weight !!!! pts_bbox_head.transformer.decoder.layers.4.attentions.0.attn.in_proj_bias !!!! pts_bbox_head.transformer.decoder.layers.4.attentions.0.attn.out_proj.weight !!!! pts_bbox_head.transformer.decoder.layers.4.attentions.0.attn.out_proj.bias !!!! pts_bbox_head.transformer.decoder.layers.4.attentions.1.sampling_offsets.weight !!!! pts_bbox_head.transformer.decoder.layers.4.attentions.1.sampling_offsets.bias !!!! pts_bbox_head.transformer.decoder.layers.4.attentions.1.attention_weights.weight !!!! pts_bbox_head.transformer.decoder.layers.4.attentions.1.attention_weights.bias !!!! pts_bbox_head.transformer.decoder.layers.4.attentions.1.value_proj.weight !!!! pts_bbox_head.transformer.decoder.layers.4.attentions.1.value_proj.bias !!!! pts_bbox_head.transformer.decoder.layers.4.attentions.1.output_proj.weight !!!! pts_bbox_head.transformer.decoder.layers.4.attentions.1.output_proj.bias !!!! pts_bbox_head.transformer.decoder.layers.4.ffns.0.layers.0.0.weight !!!! pts_bbox_head.transformer.decoder.layers.4.ffns.0.layers.0.0.bias !!!! pts_bbox_head.transformer.decoder.layers.4.ffns.0.layers.1.weight !!!! pts_bbox_head.transformer.decoder.layers.4.ffns.0.layers.1.bias !!!! pts_bbox_head.transformer.decoder.layers.4.norms.0.weight !!!! pts_bbox_head.transformer.decoder.layers.4.norms.0.bias !!!! pts_bbox_head.transformer.decoder.layers.4.norms.1.weight !!!! pts_bbox_head.transformer.decoder.layers.4.norms.1.bias !!!! pts_bbox_head.transformer.decoder.layers.4.norms.2.weight !!!! pts_bbox_head.transformer.decoder.layers.4.norms.2.bias !!!! pts_bbox_head.transformer.decoder.layers.5.attentions.0.attn.in_proj_weight !!!! pts_bbox_head.transformer.decoder.layers.5.attentions.0.attn.in_proj_bias !!!! pts_bbox_head.transformer.decoder.layers.5.attentions.0.attn.out_proj.weight !!!! pts_bbox_head.transformer.decoder.layers.5.attentions.0.attn.out_proj.bias !!!! pts_bbox_head.transformer.decoder.layers.5.attentions.1.sampling_offsets.weight !!!! pts_bbox_head.transformer.decoder.layers.5.attentions.1.sampling_offsets.bias !!!! 
pts_bbox_head.transformer.decoder.layers.5.attentions.1.attention_weights.weight !!!! pts_bbox_head.transformer.decoder.layers.5.attentions.1.attention_weights.bias !!!! pts_bbox_head.transformer.decoder.layers.5.attentions.1.value_proj.weight !!!! pts_bbox_head.transformer.decoder.layers.5.attentions.1.value_proj.bias !!!! pts_bbox_head.transformer.decoder.layers.5.attentions.1.output_proj.weight !!!! pts_bbox_head.transformer.decoder.layers.5.attentions.1.output_proj.bias !!!! pts_bbox_head.transformer.decoder.layers.5.ffns.0.layers.0.0.weight !!!! pts_bbox_head.transformer.decoder.layers.5.ffns.0.layers.0.0.bias !!!! pts_bbox_head.transformer.decoder.layers.5.ffns.0.layers.1.weight !!!! pts_bbox_head.transformer.decoder.layers.5.ffns.0.layers.1.bias !!!! pts_bbox_head.transformer.decoder.layers.5.norms.0.weight !!!! pts_bbox_head.transformer.decoder.layers.5.norms.0.bias !!!! pts_bbox_head.transformer.decoder.layers.5.norms.1.weight !!!! pts_bbox_head.transformer.decoder.layers.5.norms.1.bias !!!! pts_bbox_head.transformer.decoder.layers.5.norms.2.weight !!!! pts_bbox_head.transformer.decoder.layers.5.norms.2.bias !!!! pts_bbox_head.transformer.reference_points.weight !!!! pts_bbox_head.transformer.reference_points.bias !!!! pts_bbox_head.transformer.can_bus_mlp.0.weight !!!! pts_bbox_head.transformer.can_bus_mlp.0.bias !!!! pts_bbox_head.transformer.can_bus_mlp.2.weight !!!! pts_bbox_head.transformer.can_bus_mlp.2.bias !!!! pts_bbox_head.transformer.can_bus_mlp.norm.weight !!!! pts_bbox_head.transformer.can_bus_mlp.norm.bias !!!! pts_bbox_head.cls_branches.0.0.weight !!!! pts_bbox_head.cls_branches.0.0.bias !!!! pts_bbox_head.cls_branches.0.1.weight !!!! pts_bbox_head.cls_branches.0.1.bias !!!! pts_bbox_head.cls_branches.0.3.weight !!!! pts_bbox_head.cls_branches.0.3.bias !!!! pts_bbox_head.cls_branches.0.4.weight !!!! pts_bbox_head.cls_branches.0.4.bias !!!! pts_bbox_head.cls_branches.0.6.weight !!!! pts_bbox_head.cls_branches.0.6.bias !!!! pts_bbox_head.cls_branches.1.0.weight !!!! pts_bbox_head.cls_branches.1.0.bias !!!! pts_bbox_head.cls_branches.1.1.weight !!!! pts_bbox_head.cls_branches.1.1.bias !!!! pts_bbox_head.cls_branches.1.3.weight !!!! pts_bbox_head.cls_branches.1.3.bias !!!! pts_bbox_head.cls_branches.1.4.weight !!!! pts_bbox_head.cls_branches.1.4.bias !!!! pts_bbox_head.cls_branches.1.6.weight !!!! pts_bbox_head.cls_branches.1.6.bias !!!! pts_bbox_head.cls_branches.2.0.weight !!!! pts_bbox_head.cls_branches.2.0.bias !!!! pts_bbox_head.cls_branches.2.1.weight !!!! pts_bbox_head.cls_branches.2.1.bias !!!! pts_bbox_head.cls_branches.2.3.weight !!!! pts_bbox_head.cls_branches.2.3.bias !!!! pts_bbox_head.cls_branches.2.4.weight !!!! pts_bbox_head.cls_branches.2.4.bias !!!! pts_bbox_head.cls_branches.2.6.weight !!!! pts_bbox_head.cls_branches.2.6.bias !!!! pts_bbox_head.cls_branches.3.0.weight !!!! pts_bbox_head.cls_branches.3.0.bias !!!! pts_bbox_head.cls_branches.3.1.weight !!!! pts_bbox_head.cls_branches.3.1.bias !!!! pts_bbox_head.cls_branches.3.3.weight !!!! pts_bbox_head.cls_branches.3.3.bias !!!! pts_bbox_head.cls_branches.3.4.weight !!!! pts_bbox_head.cls_branches.3.4.bias !!!! pts_bbox_head.cls_branches.3.6.weight !!!! pts_bbox_head.cls_branches.3.6.bias !!!! pts_bbox_head.cls_branches.4.0.weight !!!! pts_bbox_head.cls_branches.4.0.bias !!!! pts_bbox_head.cls_branches.4.1.weight !!!! pts_bbox_head.cls_branches.4.1.bias !!!! pts_bbox_head.cls_branches.4.3.weight !!!! pts_bbox_head.cls_branches.4.3.bias !!!! pts_bbox_head.cls_branches.4.4.weight !!!! 
pts_bbox_head.cls_branches.4.4.bias !!!! pts_bbox_head.cls_branches.4.6.weight !!!! pts_bbox_head.cls_branches.4.6.bias !!!! pts_bbox_head.cls_branches.5.0.weight !!!! pts_bbox_head.cls_branches.5.0.bias !!!! pts_bbox_head.cls_branches.5.1.weight !!!! pts_bbox_head.cls_branches.5.1.bias !!!! pts_bbox_head.cls_branches.5.3.weight !!!! pts_bbox_head.cls_branches.5.3.bias !!!! pts_bbox_head.cls_branches.5.4.weight !!!! pts_bbox_head.cls_branches.5.4.bias !!!! pts_bbox_head.cls_branches.5.6.weight !!!! pts_bbox_head.cls_branches.5.6.bias !!!! pts_bbox_head.reg_branches.0.0.weight !!!! pts_bbox_head.reg_branches.0.0.bias !!!! pts_bbox_head.reg_branches.0.2.weight !!!! pts_bbox_head.reg_branches.0.2.bias !!!! pts_bbox_head.reg_branches.0.4.weight !!!! pts_bbox_head.reg_branches.0.4.bias !!!! pts_bbox_head.reg_branches.1.0.weight !!!! pts_bbox_head.reg_branches.1.0.bias !!!! pts_bbox_head.reg_branches.1.2.weight !!!! pts_bbox_head.reg_branches.1.2.bias !!!! pts_bbox_head.reg_branches.1.4.weight !!!! pts_bbox_head.reg_branches.1.4.bias !!!! pts_bbox_head.reg_branches.2.0.weight !!!! pts_bbox_head.reg_branches.2.0.bias !!!! pts_bbox_head.reg_branches.2.2.weight !!!! pts_bbox_head.reg_branches.2.2.bias !!!! pts_bbox_head.reg_branches.2.4.weight !!!! pts_bbox_head.reg_branches.2.4.bias !!!! pts_bbox_head.reg_branches.3.0.weight !!!! pts_bbox_head.reg_branches.3.0.bias !!!! pts_bbox_head.reg_branches.3.2.weight !!!! pts_bbox_head.reg_branches.3.2.bias !!!! pts_bbox_head.reg_branches.3.4.weight !!!! pts_bbox_head.reg_branches.3.4.bias !!!! pts_bbox_head.reg_branches.4.0.weight !!!! pts_bbox_head.reg_branches.4.0.bias !!!! pts_bbox_head.reg_branches.4.2.weight !!!! pts_bbox_head.reg_branches.4.2.bias !!!! pts_bbox_head.reg_branches.4.4.weight !!!! pts_bbox_head.reg_branches.4.4.bias !!!! pts_bbox_head.reg_branches.5.0.weight !!!! pts_bbox_head.reg_branches.5.0.bias !!!! pts_bbox_head.reg_branches.5.2.weight !!!! pts_bbox_head.reg_branches.5.2.bias !!!! pts_bbox_head.reg_branches.5.4.weight !!!! pts_bbox_head.reg_branches.5.4.bias !!!! pts_bbox_head.bev_embedding.weight > /home/xinyi/2025/MapTR-V2/DrivingSDK/model_examples/MapTR/mmcv/mmcv/runner/base_module.py(84)init_weights() -> 'init_info'] = f'The value is the same before and ' \ (Pdb) ll 56 def init_weights(self) -> None: 57 """Initialize the weights.""" 58 59 is_top_level_module = False 60 # check if it is top-level module 61 if not hasattr(self, '_params_init_info'): 62 # The `_params_init_info` is used to record the initialization 63 # information of the parameters 64 # the key should be the obj:`nn.Parameter` of model and the value 65 # should be a dict containing 66 # - init_info (str): The string that describes the initialization. 67 # - tmp_mean_value (FloatTensor): The mean of the parameter, 68 # which indicates whether the parameter has been modified. 69 # this attribute would be deleted after all parameters 70 # is initialized. 
71 self._params_init_info: defaultdict = defaultdict(dict) 72 is_top_level_module = True 73 74 # Initialize the `_params_init_info`, 75 # When detecting the `tmp_mean_value` of 76 # the corresponding parameter is changed, update related 77 # initialization information 78 import pdb;pdb.set_trace() 79 for name, param in self.named_parameters(): 80 print("!!!!", name) 81 if name == "pts_bbox_head.bev_embedding.weight": 82 import pdb;pdb.set_trace() 83 self._params_init_info[param][ 84 -> 'init_info'] = f'The value is the same before and ' \ 85 f'after calling `init_weights` ' \ 86 f'of {self.__class__.__name__} ' 87 self._params_init_info[param][ 88 'tmp_mean_value'] = param.data.mean() 89 90 # pass `params_init_info` to all submodules 91 # All submodules share the same `params_init_info`, 92 # so it will be updated when parameters are 93 # modified at any level of the model. 94 for sub_module in self.modules(): 95 sub_module._params_init_info = self._params_init_info 96 97 # Get the initialized logger, if not exist, 98 # create a logger named `mmcv` 99 logger_names = list(logger_initialized.keys()) 100 logger_name = logger_names[0] if logger_names else 'mmcv' 101 102 from ..cnn import initialize 103 from ..cnn.utils.weight_init import update_init_info 104 module_name = self.__class__.__name__ 105 if not self._is_init: 106 if self.init_cfg: 107 print_log( 108 f'initialize {module_name} with init_cfg {self.init_cfg}', 109 logger=logger_name) 110 initialize(self, self.init_cfg) 111 if isinstance(self.init_cfg, dict): 112 # prevent the parameters of 113 # the pre-trained model 114 # from being overwritten by 115 # the `init_weights` 116 if self.init_cfg['type'] == 'Pretrained': 117 return 118 119 for m in self.children(): 120 if hasattr(m, 'init_weights'): 121 m.init_weights() 122 # users may overload the `init_weights` 123 update_init_info( 124 m, 125 init_info=f'Initialized by ' 126 f'user-defined `init_weights`' 127 f' in {m.__class__.__name__} ') 128 129 self._is_init = True 130 else: 131 warnings.warn(f'init_weights of {self.__class__.__name__} has ' 132 f'been called more than once.') 133 134 if is_top_level_module: 135 self._dump_init_info(logger_name) 136 137 for sub_module in self.modules(): 138 del sub_module._params_init_info (Pdb) param.data tensor([[ 0.3346, -0.8828, -0.1136, ..., 0.8826, -1.8123, 0.8151], [-0.5951, -1.4626, 0.8475, ..., 1.4519, 2.5579, -0.2043], [-0.6746, 0.0264, 0.7649, ..., 1.3184, -0.7278, 0.0154], ..., [ 0.5182, 1.1863, 0.4237, ..., 2.0670, 0.0473, 1.8927], [-0.1231, -1.1684, 1.9254, ..., -0.5681, -0.6239, -0.5316], [ 1.1549, 0.5097, -0.6489, ..., -1.0827, 0.0209, -1.4871]]) (Pdb) param.data[0] tensor([ 0.3346, -0.8828, -0.1136, 0.3244, -0.3209, 0.7281, 0.2600, 1.0407, 0.3501, -0.7894, -0.2642, 0.0256, 0.4568, -0.7666, 0.5501, 0.6672, 0.9547, -1.5918, -0.5885, 0.6217, -1.7893, 0.5015, 1.3663, 0.5167, -0.8397, 1.2157, -0.2112, -0.1900, -1.1100, -0.1835, 0.1395, -0.1162, -0.0241, -0.2620, -1.4265, -3.5514, -1.0383, 0.5434, 0.6944, -0.4704, -0.6737, -0.3197, 2.0386, 0.9962, 0.3623, 0.2161, 0.1009, 0.4102, -0.5804, 0.9949, -0.9284, -1.5280, -0.6673, 1.0168, 0.2570, -0.0159, 0.0816, -0.0709, 1.8976, 0.8071, -2.1896, -0.2846, 0.7351, -0.0122, 0.4719, -1.0442, 1.3061, -0.6122, -1.7751, 0.0140, -0.1835, -1.3341, -0.4490, -0.8574, -0.5970, -0.6065, -0.8981, -0.4639, -0.3914, 0.9162, 0.5757, -0.2449, 0.2926, 0.3605, -0.8455, 0.1431, 1.4882, 1.0701, 0.4523, -0.5384, 0.7856, 0.5376, 0.5866, 0.3177, 0.3368, 0.5757, -0.4450, -0.4648, -0.2667, -0.4380, -0.6291, 1.0922, 
0.8174, 1.6219, 0.1084, -1.0030, 1.0718, 0.0233, 0.3976, -0.2483, 0.1274, 1.2697, 1.7519, 0.5526, -0.0835, -0.2765, 1.5941, 0.1815, -0.9194, -0.3283, 0.7386, 1.2244, -1.5840, 0.5081, -0.4533, -0.3685, -0.7840, -2.2048, -1.2149, -0.1261, 0.4236, -0.5035, 0.7673, 3.0645, 1.5516, -1.0794, -0.9831, -0.1259, -0.0342, -0.9783, -0.9835, 0.4927, 0.2750, 0.0925, 0.9347, -1.0328, 0.2586, -0.1690, 1.6564, -1.0191, -0.2295, -0.4965, -1.5436, -0.4743, -0.3224, -0.6014, -1.1098, -0.4344, 0.6232, 0.2970, -0.6383, 0.2032, 1.4624, -0.1955, -0.9351, 0.8184, -1.6618, -0.3535, 0.5543, 0.6959, -0.0045, 0.5455, 0.2708, 0.3808, -0.8948, -1.1259, 0.2557, -0.2562, 1.0773, 0.0700, 0.9273, -0.2175, -0.6563, -0.1454, 1.2647, -0.9731, 0.5140, 0.2004, -1.2946, -0.9143, 0.2443, -0.6664, -1.4068, 1.9337, -1.9506, -0.0645, 1.2034, -0.9366, -1.4137, 0.5090, -1.4954, 0.3777, 0.1271, 1.4837, -2.3799, -0.7307, 0.8735, 1.1964, 0.3695, -0.0382, 0.1699, -1.2928, 0.7482, -0.4217, 0.4145, -0.2290, 0.8246, 0.6280, -0.9238, 0.9698, 2.5129, -0.3167, -2.5102, -0.5279, 1.8864, 0.6001, 0.8151, 1.2361, 0.1404, -0.0159, 0.7300, 0.5005, 0.9352, 2.0131, -1.9003, -0.3568, -1.5779, -0.1438, 0.4502, -0.3725, -0.8348, -1.3513, -2.3635, 1.6701, 0.4200, 1.7095, -0.5624, 0.4635, 0.5312, 0.5178, -0.8639, 1.3752, -0.6619, 0.8826, -1.8123, 0.8151])
(Pdb) param.data[0].max()
tensor(3.0645)
(Pdb) param.data[0].mean()
tensor(-0.0098)
(Pdb) param.data[1].mean()
tensor(-0.0044)
(Pdb) param.data.mean()

2. Software versions:
-- CANN version: CANN 8.1, using the official image
-- Tensorflow/Pytorch/MindSpore version: torch-2.1.0, built with the BiSheng compiler
-- Python version (e.g., Python 3.7.5): Python 3.8, built with the BiSheng compiler
-- OS version (e.g., Ubuntu 18.04): Linux version 5.10.0-60.18.0.50.oe2203.aarch64 (abuild@obs-worker-002) (gcc_old (GCC) 10.3.1, GNU ld (GNU Binutils) 2.37) #1 SMP Wed Mar 30 02:43:08 UTC 2022
[root@fedf3901b7b1 MapTR]# pip list
Package Version Editable project location ----------------------- ----------------------- ----------------------------------------------------------------------- absl-py 2.2.2 addict 2.4.0 argcomplete 3.6.2 asttokens 3.0.0 attrs 25.3.0 auto_tune 0.1.0 av 12.3.0 av2 0.2.1 backcall 0.2.0 black 24.8.0 cachetools 5.5.2 certifi 2025.4.26 charset-normalizer 3.4.1 click 8.1.8 cloudpickle 3.1.1 colorlog 6.9.0 cycler 0.12.1 dataflow 0.0.1 decorator 5.2.1 dependency-groups 1.3.0 descartes 1.1.0 distlib 0.3.9 exceptiongroup 1.2.2 executing 2.2.0 filelock 3.16.1 fire 0.7.0 flake8 7.1.2 fonttools 4.57.0 fsspec 2023.6.0 google-auth 2.39.0 google-auth-oauthlib 1.0.0 grpcio 1.70.0 hccl 0.1.0 hccl_parser 0.1 huggingface-hub 0.30.2 idna 3.10 imageio 2.35.1 importlib_metadata 8.5.0 iniconfig 2.1.0 ipython 8.12.3 jedi 0.19.2 Jinja2 3.1.6 joblib 1.4.2 kiwisolver 1.4.7 kornia 0.7.3 kornia_rs 0.1.8 llm_datadist 0.0.1 llvmlite 0.41.1 lyft-dataset-sdk 0.0.8 Markdown 3.7 markdown-it-py 3.0.0 MarkupSafe 2.1.5 matplotlib 3.5.3 matplotlib-inline 0.1.7 mccabe 0.7.0 mdurl 0.1.2 ml-dtypes 0.2.0 mmcv-full 1.7.2 /home/xinyi/2025/MapTR-V2/DrivingSDK/model_examples/MapTR/mmcv mmdet 2.14.0 mmdet3d 1.0.0rc4 /home/xinyi/2025/MapTR-V2/DrivingSDK/model_examples/MapTR/mmdetection3d mmsegmentation 0.14.1 mock 5.2.0 mpmath 1.3.0 msobjdump 0.1.0 mx_driving 1.0.0+git0ff02cd mypy_extensions 1.1.0 narwhals 1.37.0 networkx 2.2 ninja 1.11.1.4 nox 2025.2.9 numba 0.58.1 numpy 1.23.0 nuscenes-devkit 1.1.11 oauthlib 3.2.2 op_compile_tool 0.1.0 op_gen 0.1 op_test_frame 0.1 opc_tool 0.1.0 opencv-python 4.11.0.86 packaging 25.0 pandas 2.0.3 parso 0.8.4 pathspec 0.12.1 pexpect 4.9.0 pickleshare 0.7.5 pillow 10.4.0 pip 22.3.1 platformdirs 4.3.6 plotly 6.0.1
pluggy 1.5.0 plyfile 1.0.3 polars 1.8.2 prettytable 3.11.0 projects 1.0.9 prompt_toolkit 3.0.51 protobuf 4.21.6 psutil 7.0.0 ptyprocess 0.7.0 pure_eval 0.2.3 pyarrow 17.0.0 pyasn1 0.6.1 pyasn1_modules 0.4.2 pycocotools 2.0.7 pycodestyle 2.12.1 pyflakes 3.2.0 Pygments 2.19.1 pyparsing 3.1.4 pyproj 3.5.0 pyquaternion 0.9.9 pytest 8.3.5 python-dateutil 2.9.0.post0 pytz 2025.2 PyWavelets 1.4.1 PyYAML 6.0.2 requests 2.32.3 requests-oauthlib 2.0.0 rich 14.0.0 rsa 4.9.1 safetensors 0.5.3 schedule_search 0.0.1 scikit-image 0.19.3 scikit-learn 1.3.2 scipy 1.10.1 setuptools 75.1.0 Shapely 1.8.5.post1 show_kernel_debug_data 0.1.0 six 1.17.0 stack-data 0.6.3 sympy 1.13.3 synr 0.5.0 te 0.4.0 tensorboard 2.14.0 tensorboard-data-server 0.7.2 termcolor 2.4.0 terminaltables 3.1.10 threadpoolctl 3.5.0 tifffile 2023.7.10 timm 1.0.15 tomli 2.2.1 torch 2.1.0a0+git7bcf7da torch-npu 2.1.0.post13+gitcce8403 torchaudio 2.1.0 torchvision 0.16.0 tornado 6.4.2 tqdm 4.67.1 traitlets 5.14.3 trimesh 2.35.39 typing_extensions 4.13.2 tzdata 2025.2 universal_pathlib 0.2.6 urllib3 2.2.3 urwid 2.6.16 virtualenv 20.30.0 wcwidth 0.2.13 Werkzeug 3.0.6 wheel 0.45.1 yapf 0.43.0 zipp 3.20.2
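To isolate the hang, here is a minimal diagnostic sketch. It assumes (as the pdb session above suggests, but does not prove) that the stall is the `param.data.mean()` reduction that `BaseModule.init_weights` performs for `pts_bbox_head.bev_embedding.weight`; the `(200 * 100, 256)` shape is inferred from `bev_h_=200`, `bev_w_=100`, `_dim_=256` in the dumped config, and the `timed_mean` helper is hypothetical, not part of the report:

```python
# Hedged repro sketch: times the same mean() reduction that the
# init_weights bookkeeping runs, on CPU and on the NPU device that
# torch_npu registers. Shape matches pts_bbox_head.bev_embedding.weight.
import time

import torch
import torch_npu  # noqa: F401  # registers the "npu" device type


def timed_mean(device: str) -> None:
    # Same shape as the bev_embedding weight: (bev_h_ * bev_w_, _dim_)
    w = torch.randn(200 * 100, 256, device=device)
    t0 = time.time()
    m = w.mean()  # the reduction used for init_weights' tmp_mean_value
    # .item() forces device synchronization, so the timing is honest
    print(f"{device}: mean={m.item():+.6f} in {time.time() - t0:.3f}s")


timed_mean("cpu")
timed_mean("npu:0")  # if this call never returns, the reduction kernel itself hangs
```

If the standalone reduction completes on `npu:0`, the hang more likely sits in the distributed setup or data loading rather than in this kernel.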
一、问题现象(附报错日志上下文): 参考model_sample中的maptr训练步骤训练,程序卡死 日志如下: [root@fedf3901b7b1 MapTR]# ./MapTR/tools/dist_train.sh ./MapTR/projects/configs/maptr/maptr_tiny_r50_24e_bevformer.py 1 Warning: please export TSAN_OPTIONS='ignore_noninstrumented_modules=1' to avoid false positive reports from the OpenMP runtime! /usr/local/python3.8/lib/python3.8/site-packages/torch/distributed/launch.py:181: FutureWarning: The module torch.distributed.launch is deprecated and will be removed in future. Use torchrun. Note that --use-env is set by default in torchrun. If your script expects `--local-rank` argument to be set, please change it to read from `os.environ['LOCAL_RANK']` instead. See https://pytorch.org/docs/stable/distributed.html#launch-utility for further instructions warnings.warn( Warning: please export TSAN_OPTIONS='ignore_noninstrumented_modules=1' to avoid false positive reports from the OpenMP runtime! /home/xinyi/2025/MapTR-V2/DrivingSDK/model_examples/MapTR/mmcv/mmcv/__init__.py:20: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove components related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details. warnings.warn( /home/xinyi/2025/MapTR-V2/DrivingSDK/model_examples/MapTR/mmdetection3d/mmdet3d/core/evaluation/kitti_utils/eval.py:10: NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details. def get_thresholds(scores: np.ndarray, num_gt, num_sample_pts=41): /usr/local/python3.8/lib/python3.8/site-packages/torch_npu/contrib/transfer_to_npu.py:293: ImportWarning: ************************************************************************************************************* The torch.Tensor.cuda and torch.nn.Module.cuda are replaced with torch.Tensor.npu and torch.nn.Module.npu now.. The torch.cuda.DoubleTensor is replaced with torch.npu.FloatTensor cause the double type is not supported now.. The backend in torch.distributed.init_process_group set to hccl now.. The torch.cuda.* and torch.cuda.amp.* are replaced with torch.npu.* and torch.npu.amp.* now.. 
The device parameters have been replaced with npu in the function below: torch.logspace, torch.randint, torch.hann_window, torch.rand, torch.full_like, torch.ones_like, torch.rand_like, torch.randperm, torch.arange, torch.frombuffer, torch.normal, torch._empty_per_channel_affine_quantized, torch.empty_strided, torch.empty_like, torch.scalar_tensor, torch.tril_indices, torch.bartlett_window, torch.ones, torch.sparse_coo_tensor, torch.randn, torch.kaiser_window, torch.tensor, torch.triu_indices, torch.as_tensor, torch.zeros, torch.randint_like, torch.full, torch.eye, torch._sparse_csr_tensor_unsafe, torch.empty, torch._sparse_coo_tensor_unsafe, torch.blackman_window, torch.zeros_like, torch.range, torch.sparse_csr_tensor, torch.randn_like, torch.from_file, torch._cudnn_init_dropout_state, torch._empty_affine_quantized, torch.linspace, torch.hamming_window, torch.empty_quantized, torch._pin_memory, torch.autocast, torch.load, torch.Generator, torch.set_default_device, torch.Tensor.new_empty, torch.Tensor.new_empty_strided, torch.Tensor.new_full, torch.Tensor.new_ones, torch.Tensor.new_tensor, torch.Tensor.new_zeros, torch.Tensor.to, torch.Tensor.pin_memory, torch.nn.Module.to, torch.nn.Module.to_empty
*************************************************************************************************************
warnings.warn(msg, ImportWarning)
/usr/local/python3.8/lib/python3.8/site-packages/torch_npu/contrib/transfer_to_npu.py:250: RuntimeWarning: torch.jit.script and torch.jit.script_method will be disabled by transfer_to_npu, which currently does not support them, if you need to enable them, please do not use transfer_to_npu.
warnings.warn(msg, RuntimeWarning)
projects.mmdet3d_plugin
/home/xinyi/2025/MapTR-V2/DrivingSDK/model_examples/MapTR/MapTR/projects/mmdet3d_plugin/bevformer/modules/custom_base_transformer_layer.py:23: ImportWarning: ``MultiScaleDeformableAttention`` has been moved to ``mmcv.ops.multi_scale_deform_attn``, please change original path ``from mmcv.cnn.bricks.transformer import MultiScaleDeformableAttention`` to ``from mmcv.ops.multi_scale_deform_attn import MultiScaleDeformableAttention``
warnings.warn(
Warning: please export TSAN_OPTIONS='ignore_noninstrumented_modules=1' to avoid false positive reports from the OpenMP runtime!
/home/xinyi/2025/MapTR-V2/DrivingSDK/model_examples/MapTR/mmcv/mmcv/__init__.py:20: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove components related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details.
warnings.warn(
/home/xinyi/2025/MapTR-V2/DrivingSDK/model_examples/MapTR/mmdetection3d/mmdet3d/core/evaluation/kitti_utils/eval.py:10: NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.
def get_thresholds(scores: np.ndarray, num_gt, num_sample_pts=41):
/usr/local/python3.8/lib/python3.8/site-packages/torch_npu/contrib/transfer_to_npu.py:293: ImportWarning:
*************************************************************************************************************
The torch.Tensor.cuda and torch.nn.Module.cuda are replaced with torch.Tensor.npu and torch.nn.Module.npu now..
The torch.cuda.DoubleTensor is replaced with torch.npu.FloatTensor cause the double type is not supported now..
The backend in torch.distributed.init_process_group set to hccl now..
The torch.cuda.* and torch.cuda.amp.* are replaced with torch.npu.* and torch.npu.amp.* now..
The device parameters have been replaced with npu in the function below: torch.logspace, torch.randint, torch.hann_window, torch.rand, torch.full_like, torch.ones_like, torch.rand_like, torch.randperm, torch.arange, torch.frombuffer, torch.normal, torch._empty_per_channel_affine_quantized, torch.empty_strided, torch.empty_like, torch.scalar_tensor, torch.tril_indices, torch.bartlett_window, torch.ones, torch.sparse_coo_tensor, torch.randn, torch.kaiser_window, torch.tensor, torch.triu_indices, torch.as_tensor, torch.zeros, torch.randint_like, torch.full, torch.eye, torch._sparse_csr_tensor_unsafe, torch.empty, torch._sparse_coo_tensor_unsafe, torch.blackman_window, torch.zeros_like, torch.range, torch.sparse_csr_tensor, torch.randn_like, torch.from_file, torch._cudnn_init_dropout_state, torch._empty_affine_quantized, torch.linspace, torch.hamming_window, torch.empty_quantized, torch._pin_memory, torch.autocast, torch.load, torch.Generator, torch.set_default_device, torch.Tensor.new_empty, torch.Tensor.new_empty_strided, torch.Tensor.new_full, torch.Tensor.new_ones, torch.Tensor.new_tensor, torch.Tensor.new_zeros, torch.Tensor.to, torch.Tensor.pin_memory, torch.nn.Module.to, torch.nn.Module.to_empty
*************************************************************************************************************
warnings.warn(msg, ImportWarning)
/usr/local/python3.8/lib/python3.8/site-packages/torch_npu/contrib/transfer_to_npu.py:250: RuntimeWarning: torch.jit.script and torch.jit.script_method will be disabled by transfer_to_npu, which currently does not support them, if you need to enable them, please do not use transfer_to_npu.
warnings.warn(msg, RuntimeWarning) 2025-05-06 02:29:12,801 - mmdet - INFO - Environment info: ------------------------------------------------------------ sys.platform: linux Python: 3.8.17 (default, Apr 29 2025, 07:31:52) [Clang 12.0.1 (openEuler 12.0.1-6.oe2203sp4 f4a7df2c51fa9eb3679555ef2de92c638ba CUDA available: True GPU 0: Ascend910B3 CUDA_HOME: None GCC: clang version 12.0.1 (openEuler 12.0.1-6.oe2203sp4 f4a7df2c51fa9eb3679555ef2de92c638ba9f880) PyTorch: 2.1.0a0+git7bcf7da PyTorch compiling details: PyTorch built with: - GCC 4.2 - C++ Version: 201703 - clang 17.0.6 - OpenMP 202011 - NNPACK is enabled - CPU capability usage: NO AVX - Build settings: BUILD_TYPE=Release, CXX_COMPILER=/home/xinyi/tool/BiShengCompiler-4.1.0-aarch64-linux/bin/clang++, CXX_FLAGS=-flto=thin -fuse-ld=lld -D_GLIBCXX_USE_CXX11_ABI=0 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DLIBKINETO_NOROCTRACER -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=braced-scalar-init -Werror=range-loop-construct -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wvla-extension -Wnewline-eof -Winconsistent-missing-override -Winconsistent-missing-destructor-override -Wno-range-loop-analysis -Wno-pass-failed -Wsuggest-override -Wno-error=pedantic -Wno-error=old-style-cast -Wno-error=inconsistent-missing-override -Wno-error=inconsistent-missing-destructor-override -Wconstant-conversion -Wno-invalid-partial-specialization -Wno-unused-private-field -Wno-missing-braces -Wunused-lambda-capture -Qunused-arguments -fcolor-diagnostics -faligned-new -Wno-unused-but-set-variable -fno-math-errno -fno-trapping-math -Werror=format, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.1.0, USE_CUDA=OFF, USE_CUDNN=OFF, USE_EIGEN_FOR_BLAS=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=OFF, USE_MKLDNN=OFF, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, TorchVision: 0.16.0 OpenCV: 4.11.0 MMCV: 1.7.2 MMCV Compiler: clang 12.0.1 MMCV CUDA Compiler: not available MMDetection: 2.14.0 MMSegmentation: 0.14.1 MMDetection3D: 1.0.0rc4+0ff02cd spconv2.0: False ------------------------------------------------------------ 2025-05-06 02:29:14,649 - mmdet - INFO - Distributed training: True 2025-05-06 02:29:16,207 - mmdet - INFO - Config: point_cloud_range = [-15.0, -30.0, -2.0, 15.0, 30.0, 2.0] class_names = [ 'car', 'truck', 'construction_vehicle', 'bus', 'trailer', 'barrier', 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone' ] dataset_type = 'CustomNuScenesLocalMapDataset' data_root = 'data/nuscenes/' input_modality = dict( use_lidar=False, use_camera=True, use_radar=False, use_map=False, use_external=True) file_client_args = dict(backend='disk') train_pipeline = [ dict(type='LoadMultiViewImageFromFiles', to_float32=True), dict(type='PhotoMetricDistortionMultiViewImage'), dict( type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True, with_attr_label=False), dict( type='ObjectRangeFilter', point_cloud_range=[-15.0, -30.0, -2.0, 15.0, 30.0, 2.0]), dict( type='ObjectNameFilter', classes=[ 'car', 'truck', 'construction_vehicle', 'bus', 'trailer', 'barrier', 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone' ]), dict( type='NormalizeMultiviewImage', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 
57.375], to_rgb=True), dict(type='RandomScaleImageMultiViewImage', scales=[0.5]), dict(type='PadMultiViewImage', size_divisor=32), dict( type='DefaultFormatBundle3D', class_names=[ 'car', 'truck', 'construction_vehicle', 'bus', 'trailer', 'barrier', 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone' ]), dict(type='CustomCollect3D', keys=['gt_bboxes_3d', 'gt_labels_3d', 'img']) ] test_pipeline = [ dict(type='LoadMultiViewImageFromFiles', to_float32=True), dict( type='NormalizeMultiviewImage', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True), dict( type='MultiScaleFlipAug3D', img_scale=(1600, 900), pts_scale_ratio=1, flip=False, transforms=[ dict(type='RandomScaleImageMultiViewImage', scales=[0.5]), dict(type='PadMultiViewImage', size_divisor=32), dict( type='DefaultFormatBundle3D', class_names=[ 'car', 'truck', 'construction_vehicle', 'bus', 'trailer', 'barrier', 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone' ], with_label=False), dict(type='CustomCollect3D', keys=['img']) ]) ] eval_pipeline = [ dict( type='LoadPointsFromFile', coord_type='LIDAR', load_dim=5, use_dim=5, file_client_args=dict(backend='disk')), dict( type='LoadPointsFromMultiSweeps', sweeps_num=10, file_client_args=dict(backend='disk')), dict( type='DefaultFormatBundle3D', class_names=[ 'car', 'truck', 'trailer', 'bus', 'construction_vehicle', 'bicycle', 'motorcycle', 'pedestrian', 'traffic_cone', 'barrier' ], with_label=False), dict(type='Collect3D', keys=['points']) ] data = dict( samples_per_gpu=4, workers_per_gpu=8, train=dict( type='CustomNuScenesLocalMapDataset', data_root='data/nuscenes/', ann_file='data/nuscenes/nuscenes_infos_temporal_train.pkl', pipeline=[ dict(type='LoadMultiViewImageFromFiles', to_float32=True), dict(type='PhotoMetricDistortionMultiViewImage'), dict( type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True, with_attr_label=False), dict( type='ObjectRangeFilter', point_cloud_range=[-15.0, -30.0, -2.0, 15.0, 30.0, 2.0]), dict( type='ObjectNameFilter', classes=[ 'car', 'truck', 'construction_vehicle', 'bus', 'trailer', 'barrier', 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone' ]), dict( type='NormalizeMultiviewImage', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True), dict(type='RandomScaleImageMultiViewImage', scales=[0.5]), dict(type='PadMultiViewImage', size_divisor=32), dict( type='DefaultFormatBundle3D', class_names=[ 'car', 'truck', 'construction_vehicle', 'bus', 'trailer', 'barrier', 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone' ]), dict( type='CustomCollect3D', keys=['gt_bboxes_3d', 'gt_labels_3d', 'img']) ], classes=[ 'car', 'truck', 'construction_vehicle', 'bus', 'trailer', 'barrier', 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone' ], modality=dict( use_lidar=False, use_camera=True, use_radar=False, use_map=False, use_external=True), test_mode=False, box_type_3d='LiDAR', use_valid_flag=True, bev_size=(200, 100), pc_range=[-15.0, -30.0, -2.0, 15.0, 30.0, 2.0], fixed_ptsnum_per_line=20, eval_use_same_gt_sample_num_flag=True, padding_value=-10000, map_classes=['divider', 'ped_crossing', 'boundary'], queue_length=1, gt_shift_pts_pattern='v5'), val=dict( type='CustomNuScenesLocalMapDataset', ann_file='data/nuscenes/nuscenes_infos_temporal_val.pkl', pipeline=[ dict(type='LoadMultiViewImageFromFiles', to_float32=True), dict( type='NormalizeMultiviewImage', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True), dict( type='MultiScaleFlipAug3D', img_scale=(1600, 900), pts_scale_ratio=1, 
flip=False, transforms=[ dict(type='RandomScaleImageMultiViewImage', scales=[0.5]), dict(type='PadMultiViewImage', size_divisor=32), dict( type='DefaultFormatBundle3D', class_names=[ 'car', 'truck', 'construction_vehicle', 'bus', 'trailer', 'barrier', 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone' ], with_label=False), dict(type='CustomCollect3D', keys=['img']) ]) ], classes=[ 'car', 'truck', 'construction_vehicle', 'bus', 'trailer', 'barrier', 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone' ], modality=dict( use_lidar=False, use_camera=True, use_radar=False, use_map=False, use_external=True), test_mode=True, box_type_3d='LiDAR', data_root='data/nuscenes/', map_ann_file='data/nuscenes/nuscenes_map_anns_val.json', bev_size=(200, 100), pc_range=[-15.0, -30.0, -2.0, 15.0, 30.0, 2.0], fixed_ptsnum_per_line=20, eval_use_same_gt_sample_num_flag=True, padding_value=-10000, gt_shift_pts_pattern='v5', map_classes=['divider', 'ped_crossing', 'boundary'], samples_per_gpu=1), test=dict( type='CustomNuScenesLocalMapDataset', data_root='data/nuscenes/', ann_file='data/nuscenes/nuscenes_infos_temporal_val.pkl', pipeline=[ dict(type='LoadMultiViewImageFromFiles', to_float32=True), dict( type='NormalizeMultiviewImage', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True), dict( type='MultiScaleFlipAug3D', img_scale=(1600, 900), pts_scale_ratio=1, flip=False, transforms=[ dict(type='RandomScaleImageMultiViewImage', scales=[0.5]), dict(type='PadMultiViewImage', size_divisor=32), dict( type='DefaultFormatBundle3D', class_names=[ 'car', 'truck', 'construction_vehicle', 'bus', 'trailer', 'barrier', 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone' ], with_label=False), dict(type='CustomCollect3D', keys=['img']) ]) ], classes=[ 'car', 'truck', 'construction_vehicle', 'bus', 'trailer', 'barrier', 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone' ], modality=dict( use_lidar=False, use_camera=True, use_radar=False, use_map=False, use_external=True), test_mode=True, box_type_3d='LiDAR', map_ann_file='data/nuscenes/nuscenes_map_anns_val.json', bev_size=(200, 100), pc_range=[-15.0, -30.0, -2.0, 15.0, 30.0, 2.0], fixed_ptsnum_per_line=20, eval_use_same_gt_sample_num_flag=True, padding_value=-10000, gt_shift_pts_pattern='v5', map_classes=['divider', 'ped_crossing', 'boundary']), pin_memory=True, shuffler_sampler=dict(type='DistributedGroupSampler'), nonshuffler_sampler=dict(type='DistributedSampler')) evaluation = dict( interval=2, pipeline=[ dict(type='LoadMultiViewImageFromFiles', to_float32=True), dict( type='NormalizeMultiviewImage', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True), dict( type='MultiScaleFlipAug3D', img_scale=(1600, 900), pts_scale_ratio=1, flip=False, transforms=[ dict(type='RandomScaleImageMultiViewImage', scales=[0.5]), dict(type='PadMultiViewImage', size_divisor=32), dict( type='DefaultFormatBundle3D', class_names=[ 'car', 'truck', 'construction_vehicle', 'bus', 'trailer', 'barrier', 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone' ], with_label=False), dict(type='CustomCollect3D', keys=['img']) ]) ], metric='chamfer') checkpoint_config = dict(interval=1) log_config = dict( interval=50, hooks=[dict(type='TextLoggerHook'), dict(type='TensorboardLoggerHook')]) dist_params = dict(backend='nccl') log_level = 'INFO' work_dir = './work_dirs/maptr_tiny_r50_24e_bevformer' load_from = None resume_from = None workflow = [('train', 1)] plugin = True plugin_dir = 'projects/mmdet3d_plugin/' voxel_size = [0.15, 0.15, 4] img_norm_cfg = 
dict( mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) map_classes = ['divider', 'ped_crossing', 'boundary'] fixed_ptsnum_per_gt_line = 20 fixed_ptsnum_per_pred_line = 20 eval_use_same_gt_sample_num_flag = True num_map_classes = 3 _dim_ = 256 _pos_dim_ = 128 _ffn_dim_ = 512 _num_levels_ = 1 bev_h_ = 200 bev_w_ = 100 queue_length = 1 model = dict( type='MapTR', use_grid_mask=True, video_test_mode=False, pretrained=dict(img='ckpts/resnet50-19c8e357.pth'), img_backbone=dict( type='ResNet', depth=50, num_stages=4, out_indices=(3, ), frozen_stages=1, norm_cfg=dict(type='BN', requires_grad=False), norm_eval=True, style='pytorch'), img_neck=dict( type='FPN', in_channels=[2048], out_channels=256, start_level=0, add_extra_convs='on_output', num_outs=1, relu_before_extra_convs=True), pts_bbox_head=dict( type='MapTRHead', bev_h=200, bev_w=100, num_query=900, num_vec=50, num_pts_per_vec=20, num_pts_per_gt_vec=20, dir_interval=1, query_embed_type='instance_pts', transform_method='minmax', gt_shift_pts_pattern='v5', num_classes=3, in_channels=256, sync_cls_avg_factor=True, with_box_refine=True, as_two_stage=False, code_size=2, code_weights=[1.0, 1.0, 1.0, 1.0], transformer=dict( type='MapTRPerceptionTransformer', rotate_prev_bev=True, use_shift=True, use_can_bus=True, embed_dims=256, encoder=dict( type='BEVFormerEncoder', num_layers=1, pc_range=[-15.0, -30.0, -2.0, 15.0, 30.0, 2.0], num_points_in_pillar=4, return_intermediate=False, transformerlayers=dict( type='BEVFormerLayer', attn_cfgs=[ dict( type='TemporalSelfAttention', embed_dims=256, num_levels=1), dict( type='SpatialCrossAttention', pc_range=[-15.0, -30.0, -2.0, 15.0, 30.0, 2.0], deformable_attention=dict( type='MSDeformableAttention3D', embed_dims=256, num_points=8, num_levels=1), embed_dims=256) ], feedforward_channels=512, ffn_dropout=0.1, operation_order=('self_attn', 'norm', 'cross_attn', 'norm', 'ffn', 'norm'))), decoder=dict( type='MapTRDecoder', num_layers=6, return_intermediate=True, transformerlayers=dict( type='DetrTransformerDecoderLayer', attn_cfgs=[ dict( type='MultiheadAttention', embed_dims=256, num_heads=8, dropout=0.1), dict( type='CustomMSDeformableAttention', embed_dims=256, num_levels=1) ], feedforward_channels=512, ffn_dropout=0.1, operation_order=('self_attn', 'norm', 'cross_attn', 'norm', 'ffn', 'norm')))), bbox_coder=dict( type='MapTRNMSFreeCoder', post_center_range=[-20, -35, -20, -35, 20, 35, 20, 35], pc_range=[-15.0, -30.0, -2.0, 15.0, 30.0, 2.0], max_num=50, voxel_size=[0.15, 0.15, 4], num_classes=3), positional_encoding=dict( type='LearnedPositionalEncoding', num_feats=128, row_num_embed=200, col_num_embed=100), loss_cls=dict( type='FocalLoss', use_sigmoid=True, gamma=2.0, alpha=0.25, loss_weight=2.0), loss_bbox=dict(type='L1Loss', loss_weight=0.0), loss_iou=dict(type='GIoULoss', loss_weight=0.0), loss_pts=dict(type='PtsL1Loss', loss_weight=5.0), loss_dir=dict(type='PtsDirCosLoss', loss_weight=0.005)), train_cfg=dict( pts=dict( grid_size=[512, 512, 1], voxel_size=[0.15, 0.15, 4], point_cloud_range=[-15.0, -30.0, -2.0, 15.0, 30.0, 2.0], out_size_factor=4, assigner=dict( type='MapTRAssigner', cls_cost=dict(type='FocalLossCost', weight=2.0), reg_cost=dict( type='BBoxL1Cost', weight=0.0, box_format='xywh'), iou_cost=dict(type='IoUCost', iou_mode='giou', weight=0.0), pts_cost=dict(type='OrderedPtsL1Cost', weight=5), pc_range=[-15.0, -30.0, -2.0, 15.0, 30.0, 2.0])))) optimizer = dict( type='AdamW', lr=0.0006, paramwise_cfg=dict(custom_keys=dict(img_backbone=dict(lr_mult=0.1))), weight_decay=0.01) 
optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2)) lr_config = dict( policy='CosineAnnealing', warmup='linear', warmup_iters=500, warmup_ratio=0.3333333333333333, min_lr_ratio=0.001) total_epochs = 24 runner = dict(type='EpochBasedRunner', max_epochs=24) fp16 = dict(loss_scale=512.0) gpu_ids = range(0, 1) 2025-05-06 02:29:16,207 - mmdet - INFO - Set random seed to 0, deterministic: True /home/xinyi/2025/MapTR-V2/DrivingSDK/model_examples/MapTR/MapTR/projects/mmdet3d_plugin/bevformer/modules/custom_base_transformer_layer.py:94: UserWarning: The arguments `feedforward_channels` in BaseTransformerLayer has been deprecated, now you should set `feedforward_channels` and other FFN related arguments to a dict named `ffn_cfgs`. warnings.warn( /home/xinyi/2025/MapTR-V2/DrivingSDK/model_examples/MapTR/MapTR/projects/mmdet3d_plugin/bevformer/modules/custom_base_transformer_layer.py:94: UserWarning: The arguments `ffn_dropout` in BaseTransformerLayer has been deprecated, now you should set `ffn_drop` and other FFN related arguments to a dict named `ffn_cfgs`. warnings.warn( /home/xinyi/2025/MapTR-V2/DrivingSDK/model_examples/MapTR/MapTR/projects/mmdet3d_plugin/bevformer/modules/custom_base_transformer_layer.py:94: UserWarning: The arguments `ffn_num_fcs` in BaseTransformerLayer has been deprecated, now you should set `num_fcs` and other FFN related arguments to a dict named `ffn_cfgs`. warnings.warn( /home/xinyi/2025/MapTR-V2/DrivingSDK/model_examples/MapTR/mmcv/mmcv/cnn/bricks/transformer.py:690: DeprecationWarning: The arguments `feedforward_channels` in BaseTransformerLayer has been deprecated, now you should set `feedforward_channels` and other FFN related arguments to a dict named `ffn_cfgs`. warnings.warn( /home/xinyi/2025/MapTR-V2/DrivingSDK/model_examples/MapTR/mmcv/mmcv/cnn/bricks/transformer.py:690: DeprecationWarning: The arguments `ffn_dropout` in BaseTransformerLayer has been deprecated, now you should set `ffn_drop` and other FFN related arguments to a dict named `ffn_cfgs`. warnings.warn( /home/xinyi/2025/MapTR-V2/DrivingSDK/model_examples/MapTR/mmcv/mmcv/cnn/bricks/transformer.py:690: DeprecationWarning: The arguments `ffn_num_fcs` in BaseTransformerLayer has been deprecated, now you should set `num_fcs` and other FFN related arguments to a dict named `ffn_cfgs`. warnings.warn( /home/xinyi/2025/MapTR-V2/DrivingSDK/model_examples/MapTR/mmcv/mmcv/cnn/bricks/transformer.py:440: DeprecationWarning: The arguments `dropout` in MultiheadAttention has been deprecated, now you can separately set `attn_drop`(float), proj_drop(float), and `dropout_layer`(dict) warnings.warn( /home/xinyi/2025/MapTR-V2/DrivingSDK/model_examples/MapTR/mmdetection3d/mmdet3d/models/detectors/mvx_two_stage.py:88: UserWarning: DeprecationWarning: pretrained is a deprecated key, please consider using init_cfg. warnings.warn('DeprecationWarning: pretrained is a deprecated ' > /home/xinyi/2025/MapTR-V2/DrivingSDK/model_examples/MapTR/mmcv/mmcv/runner/base_module.py(79)init_weights() -> for name, param in self.named_parameters(): (Pdb) c !!!! pts_bbox_head.code_weights !!!! pts_bbox_head.positional_encoding.row_embed.weight !!!! pts_bbox_head.positional_encoding.col_embed.weight !!!! pts_bbox_head.transformer.level_embeds !!!! pts_bbox_head.transformer.cams_embeds !!!! pts_bbox_head.transformer.encoder.layers.0.attentions.0.sampling_offsets.weight !!!! pts_bbox_head.transformer.encoder.layers.0.attentions.0.sampling_offsets.bias !!!! 
pts_bbox_head.transformer.encoder.layers.0.attentions.0.attention_weights.weight !!!! pts_bbox_head.transformer.encoder.layers.0.attentions.0.attention_weights.bias !!!! pts_bbox_head.transformer.encoder.layers.0.attentions.0.value_proj.weight !!!! pts_bbox_head.transformer.encoder.layers.0.attentions.0.value_proj.bias !!!! pts_bbox_head.transformer.encoder.layers.0.attentions.0.output_proj.weight !!!! pts_bbox_head.transformer.encoder.layers.0.attentions.0.output_proj.bias !!!! pts_bbox_head.transformer.encoder.layers.0.attentions.1.deformable_attention.sampling_offsets.weight !!!! pts_bbox_head.transformer.encoder.layers.0.attentions.1.deformable_attention.sampling_offsets.bias !!!! pts_bbox_head.transformer.encoder.layers.0.attentions.1.deformable_attention.attention_weights.weight !!!! pts_bbox_head.transformer.encoder.layers.0.attentions.1.deformable_attention.attention_weights.bias !!!! pts_bbox_head.transformer.encoder.layers.0.attentions.1.deformable_attention.value_proj.weight !!!! pts_bbox_head.transformer.encoder.layers.0.attentions.1.deformable_attention.value_proj.bias !!!! pts_bbox_head.transformer.encoder.layers.0.attentions.1.output_proj.weight !!!! pts_bbox_head.transformer.encoder.layers.0.attentions.1.output_proj.bias !!!! pts_bbox_head.transformer.encoder.layers.0.ffns.0.layers.0.0.weight !!!! pts_bbox_head.transformer.encoder.layers.0.ffns.0.layers.0.0.bias !!!! pts_bbox_head.transformer.encoder.layers.0.ffns.0.layers.1.weight !!!! pts_bbox_head.transformer.encoder.layers.0.ffns.0.layers.1.bias !!!! pts_bbox_head.transformer.encoder.layers.0.norms.0.weight !!!! pts_bbox_head.transformer.encoder.layers.0.norms.0.bias !!!! pts_bbox_head.transformer.encoder.layers.0.norms.1.weight !!!! pts_bbox_head.transformer.encoder.layers.0.norms.1.bias !!!! pts_bbox_head.transformer.encoder.layers.0.norms.2.weight !!!! pts_bbox_head.transformer.encoder.layers.0.norms.2.bias !!!! pts_bbox_head.transformer.decoder.layers.0.attentions.0.attn.in_proj_weight !!!! pts_bbox_head.transformer.decoder.layers.0.attentions.0.attn.in_proj_bias !!!! pts_bbox_head.transformer.decoder.layers.0.attentions.0.attn.out_proj.weight !!!! pts_bbox_head.transformer.decoder.layers.0.attentions.0.attn.out_proj.bias !!!! pts_bbox_head.transformer.decoder.layers.0.attentions.1.sampling_offsets.weight !!!! pts_bbox_head.transformer.decoder.layers.0.attentions.1.sampling_offsets.bias !!!! pts_bbox_head.transformer.decoder.layers.0.attentions.1.attention_weights.weight !!!! pts_bbox_head.transformer.decoder.layers.0.attentions.1.attention_weights.bias !!!! pts_bbox_head.transformer.decoder.layers.0.attentions.1.value_proj.weight !!!! pts_bbox_head.transformer.decoder.layers.0.attentions.1.value_proj.bias !!!! pts_bbox_head.transformer.decoder.layers.0.attentions.1.output_proj.weight !!!! pts_bbox_head.transformer.decoder.layers.0.attentions.1.output_proj.bias !!!! pts_bbox_head.transformer.decoder.layers.0.ffns.0.layers.0.0.weight !!!! pts_bbox_head.transformer.decoder.layers.0.ffns.0.layers.0.0.bias !!!! pts_bbox_head.transformer.decoder.layers.0.ffns.0.layers.1.weight !!!! pts_bbox_head.transformer.decoder.layers.0.ffns.0.layers.1.bias !!!! pts_bbox_head.transformer.decoder.layers.0.norms.0.weight !!!! pts_bbox_head.transformer.decoder.layers.0.norms.0.bias !!!! pts_bbox_head.transformer.decoder.layers.0.norms.1.weight !!!! pts_bbox_head.transformer.decoder.layers.0.norms.1.bias !!!! pts_bbox_head.transformer.decoder.layers.0.norms.2.weight !!!! pts_bbox_head.transformer.decoder.layers.0.norms.2.bias !!!! 
pts_bbox_head.transformer.decoder.layers.1.attentions.0.attn.in_proj_weight !!!! pts_bbox_head.transformer.decoder.layers.1.attentions.0.attn.in_proj_bias !!!! pts_bbox_head.transformer.decoder.layers.1.attentions.0.attn.out_proj.weight !!!! pts_bbox_head.transformer.decoder.layers.1.attentions.0.attn.out_proj.bias !!!! pts_bbox_head.transformer.decoder.layers.1.attentions.1.sampling_offsets.weight !!!! pts_bbox_head.transformer.decoder.layers.1.attentions.1.sampling_offsets.bias !!!! pts_bbox_head.transformer.decoder.layers.1.attentions.1.attention_weights.weight !!!! pts_bbox_head.transformer.decoder.layers.1.attentions.1.attention_weights.bias !!!! pts_bbox_head.transformer.decoder.layers.1.attentions.1.value_proj.weight !!!! pts_bbox_head.transformer.decoder.layers.1.attentions.1.value_proj.bias !!!! pts_bbox_head.transformer.decoder.layers.1.attentions.1.output_proj.weight !!!! pts_bbox_head.transformer.decoder.layers.1.attentions.1.output_proj.bias !!!! pts_bbox_head.transformer.decoder.layers.1.ffns.0.layers.0.0.weight !!!! pts_bbox_head.transformer.decoder.layers.1.ffns.0.layers.0.0.bias !!!! pts_bbox_head.transformer.decoder.layers.1.ffns.0.layers.1.weight !!!! pts_bbox_head.transformer.decoder.layers.1.ffns.0.layers.1.bias !!!! pts_bbox_head.transformer.decoder.layers.1.norms.0.weight !!!! pts_bbox_head.transformer.decoder.layers.1.norms.0.bias !!!! pts_bbox_head.transformer.decoder.layers.1.norms.1.weight !!!! pts_bbox_head.transformer.decoder.layers.1.norms.1.bias !!!! pts_bbox_head.transformer.decoder.layers.1.norms.2.weight !!!! pts_bbox_head.transformer.decoder.layers.1.norms.2.bias !!!! pts_bbox_head.transformer.decoder.layers.2.attentions.0.attn.in_proj_weight !!!! pts_bbox_head.transformer.decoder.layers.2.attentions.0.attn.in_proj_bias !!!! pts_bbox_head.transformer.decoder.layers.2.attentions.0.attn.out_proj.weight !!!! pts_bbox_head.transformer.decoder.layers.2.attentions.0.attn.out_proj.bias !!!! pts_bbox_head.transformer.decoder.layers.2.attentions.1.sampling_offsets.weight !!!! pts_bbox_head.transformer.decoder.layers.2.attentions.1.sampling_offsets.bias !!!! pts_bbox_head.transformer.decoder.layers.2.attentions.1.attention_weights.weight !!!! pts_bbox_head.transformer.decoder.layers.2.attentions.1.attention_weights.bias !!!! pts_bbox_head.transformer.decoder.layers.2.attentions.1.value_proj.weight !!!! pts_bbox_head.transformer.decoder.layers.2.attentions.1.value_proj.bias !!!! pts_bbox_head.transformer.decoder.layers.2.attentions.1.output_proj.weight !!!! pts_bbox_head.transformer.decoder.layers.2.attentions.1.output_proj.bias !!!! pts_bbox_head.transformer.decoder.layers.2.ffns.0.layers.0.0.weight !!!! pts_bbox_head.transformer.decoder.layers.2.ffns.0.layers.0.0.bias !!!! pts_bbox_head.transformer.decoder.layers.2.ffns.0.layers.1.weight !!!! pts_bbox_head.transformer.decoder.layers.2.ffns.0.layers.1.bias !!!! pts_bbox_head.transformer.decoder.layers.2.norms.0.weight !!!! pts_bbox_head.transformer.decoder.layers.2.norms.0.bias !!!! pts_bbox_head.transformer.decoder.layers.2.norms.1.weight !!!! pts_bbox_head.transformer.decoder.layers.2.norms.1.bias !!!! pts_bbox_head.transformer.decoder.layers.2.norms.2.weight !!!! pts_bbox_head.transformer.decoder.layers.2.norms.2.bias !!!! pts_bbox_head.transformer.decoder.layers.3.attentions.0.attn.in_proj_weight !!!! pts_bbox_head.transformer.decoder.layers.3.attentions.0.attn.in_proj_bias !!!! pts_bbox_head.transformer.decoder.layers.3.attentions.0.attn.out_proj.weight !!!! 
pts_bbox_head.transformer.decoder.layers.3.attentions.0.attn.out_proj.bias !!!! pts_bbox_head.transformer.decoder.layers.3.attentions.1.sampling_offsets.weight !!!! pts_bbox_head.transformer.decoder.layers.3.attentions.1.sampling_offsets.bias !!!! pts_bbox_head.transformer.decoder.layers.3.attentions.1.attention_weights.weight !!!! pts_bbox_head.transformer.decoder.layers.3.attentions.1.attention_weights.bias !!!! pts_bbox_head.transformer.decoder.layers.3.attentions.1.value_proj.weight !!!! pts_bbox_head.transformer.decoder.layers.3.attentions.1.value_proj.bias !!!! pts_bbox_head.transformer.decoder.layers.3.attentions.1.output_proj.weight !!!! pts_bbox_head.transformer.decoder.layers.3.attentions.1.output_proj.bias !!!! pts_bbox_head.transformer.decoder.layers.3.ffns.0.layers.0.0.weight !!!! pts_bbox_head.transformer.decoder.layers.3.ffns.0.layers.0.0.bias !!!! pts_bbox_head.transformer.decoder.layers.3.ffns.0.layers.1.weight !!!! pts_bbox_head.transformer.decoder.layers.3.ffns.0.layers.1.bias !!!! pts_bbox_head.transformer.decoder.layers.3.norms.0.weight !!!! pts_bbox_head.transformer.decoder.layers.3.norms.0.bias !!!! pts_bbox_head.transformer.decoder.layers.3.norms.1.weight !!!! pts_bbox_head.transformer.decoder.layers.3.norms.1.bias !!!! pts_bbox_head.transformer.decoder.layers.3.norms.2.weight !!!! pts_bbox_head.transformer.decoder.layers.3.norms.2.bias !!!! pts_bbox_head.transformer.decoder.layers.4.attentions.0.attn.in_proj_weight !!!! pts_bbox_head.transformer.decoder.layers.4.attentions.0.attn.in_proj_bias !!!! pts_bbox_head.transformer.decoder.layers.4.attentions.0.attn.out_proj.weight !!!! pts_bbox_head.transformer.decoder.layers.4.attentions.0.attn.out_proj.bias !!!! pts_bbox_head.transformer.decoder.layers.4.attentions.1.sampling_offsets.weight !!!! pts_bbox_head.transformer.decoder.layers.4.attentions.1.sampling_offsets.bias !!!! pts_bbox_head.transformer.decoder.layers.4.attentions.1.attention_weights.weight !!!! pts_bbox_head.transformer.decoder.layers.4.attentions.1.attention_weights.bias !!!! pts_bbox_head.transformer.decoder.layers.4.attentions.1.value_proj.weight !!!! pts_bbox_head.transformer.decoder.layers.4.attentions.1.value_proj.bias !!!! pts_bbox_head.transformer.decoder.layers.4.attentions.1.output_proj.weight !!!! pts_bbox_head.transformer.decoder.layers.4.attentions.1.output_proj.bias !!!! pts_bbox_head.transformer.decoder.layers.4.ffns.0.layers.0.0.weight !!!! pts_bbox_head.transformer.decoder.layers.4.ffns.0.layers.0.0.bias !!!! pts_bbox_head.transformer.decoder.layers.4.ffns.0.layers.1.weight !!!! pts_bbox_head.transformer.decoder.layers.4.ffns.0.layers.1.bias !!!! pts_bbox_head.transformer.decoder.layers.4.norms.0.weight !!!! pts_bbox_head.transformer.decoder.layers.4.norms.0.bias !!!! pts_bbox_head.transformer.decoder.layers.4.norms.1.weight !!!! pts_bbox_head.transformer.decoder.layers.4.norms.1.bias !!!! pts_bbox_head.transformer.decoder.layers.4.norms.2.weight !!!! pts_bbox_head.transformer.decoder.layers.4.norms.2.bias !!!! pts_bbox_head.transformer.decoder.layers.5.attentions.0.attn.in_proj_weight !!!! pts_bbox_head.transformer.decoder.layers.5.attentions.0.attn.in_proj_bias !!!! pts_bbox_head.transformer.decoder.layers.5.attentions.0.attn.out_proj.weight !!!! pts_bbox_head.transformer.decoder.layers.5.attentions.0.attn.out_proj.bias !!!! pts_bbox_head.transformer.decoder.layers.5.attentions.1.sampling_offsets.weight !!!! pts_bbox_head.transformer.decoder.layers.5.attentions.1.sampling_offsets.bias !!!! 
pts_bbox_head.transformer.decoder.layers.5.attentions.1.attention_weights.weight !!!! pts_bbox_head.transformer.decoder.layers.5.attentions.1.attention_weights.bias !!!! pts_bbox_head.transformer.decoder.layers.5.attentions.1.value_proj.weight !!!! pts_bbox_head.transformer.decoder.layers.5.attentions.1.value_proj.bias !!!! pts_bbox_head.transformer.decoder.layers.5.attentions.1.output_proj.weight !!!! pts_bbox_head.transformer.decoder.layers.5.attentions.1.output_proj.bias !!!! pts_bbox_head.transformer.decoder.layers.5.ffns.0.layers.0.0.weight !!!! pts_bbox_head.transformer.decoder.layers.5.ffns.0.layers.0.0.bias !!!! pts_bbox_head.transformer.decoder.layers.5.ffns.0.layers.1.weight !!!! pts_bbox_head.transformer.decoder.layers.5.ffns.0.layers.1.bias !!!! pts_bbox_head.transformer.decoder.layers.5.norms.0.weight !!!! pts_bbox_head.transformer.decoder.layers.5.norms.0.bias !!!! pts_bbox_head.transformer.decoder.layers.5.norms.1.weight !!!! pts_bbox_head.transformer.decoder.layers.5.norms.1.bias !!!! pts_bbox_head.transformer.decoder.layers.5.norms.2.weight !!!! pts_bbox_head.transformer.decoder.layers.5.norms.2.bias !!!! pts_bbox_head.transformer.reference_points.weight !!!! pts_bbox_head.transformer.reference_points.bias !!!! pts_bbox_head.transformer.can_bus_mlp.0.weight !!!! pts_bbox_head.transformer.can_bus_mlp.0.bias !!!! pts_bbox_head.transformer.can_bus_mlp.2.weight !!!! pts_bbox_head.transformer.can_bus_mlp.2.bias !!!! pts_bbox_head.transformer.can_bus_mlp.norm.weight !!!! pts_bbox_head.transformer.can_bus_mlp.norm.bias !!!! pts_bbox_head.cls_branches.0.0.weight !!!! pts_bbox_head.cls_branches.0.0.bias !!!! pts_bbox_head.cls_branches.0.1.weight !!!! pts_bbox_head.cls_branches.0.1.bias !!!! pts_bbox_head.cls_branches.0.3.weight !!!! pts_bbox_head.cls_branches.0.3.bias !!!! pts_bbox_head.cls_branches.0.4.weight !!!! pts_bbox_head.cls_branches.0.4.bias !!!! pts_bbox_head.cls_branches.0.6.weight !!!! pts_bbox_head.cls_branches.0.6.bias !!!! pts_bbox_head.cls_branches.1.0.weight !!!! pts_bbox_head.cls_branches.1.0.bias !!!! pts_bbox_head.cls_branches.1.1.weight !!!! pts_bbox_head.cls_branches.1.1.bias !!!! pts_bbox_head.cls_branches.1.3.weight !!!! pts_bbox_head.cls_branches.1.3.bias !!!! pts_bbox_head.cls_branches.1.4.weight !!!! pts_bbox_head.cls_branches.1.4.bias !!!! pts_bbox_head.cls_branches.1.6.weight !!!! pts_bbox_head.cls_branches.1.6.bias !!!! pts_bbox_head.cls_branches.2.0.weight !!!! pts_bbox_head.cls_branches.2.0.bias !!!! pts_bbox_head.cls_branches.2.1.weight !!!! pts_bbox_head.cls_branches.2.1.bias !!!! pts_bbox_head.cls_branches.2.3.weight !!!! pts_bbox_head.cls_branches.2.3.bias !!!! pts_bbox_head.cls_branches.2.4.weight !!!! pts_bbox_head.cls_branches.2.4.bias !!!! pts_bbox_head.cls_branches.2.6.weight !!!! pts_bbox_head.cls_branches.2.6.bias !!!! pts_bbox_head.cls_branches.3.0.weight !!!! pts_bbox_head.cls_branches.3.0.bias !!!! pts_bbox_head.cls_branches.3.1.weight !!!! pts_bbox_head.cls_branches.3.1.bias !!!! pts_bbox_head.cls_branches.3.3.weight !!!! pts_bbox_head.cls_branches.3.3.bias !!!! pts_bbox_head.cls_branches.3.4.weight !!!! pts_bbox_head.cls_branches.3.4.bias !!!! pts_bbox_head.cls_branches.3.6.weight !!!! pts_bbox_head.cls_branches.3.6.bias !!!! pts_bbox_head.cls_branches.4.0.weight !!!! pts_bbox_head.cls_branches.4.0.bias !!!! pts_bbox_head.cls_branches.4.1.weight !!!! pts_bbox_head.cls_branches.4.1.bias !!!! pts_bbox_head.cls_branches.4.3.weight !!!! pts_bbox_head.cls_branches.4.3.bias !!!! pts_bbox_head.cls_branches.4.4.weight !!!! 
pts_bbox_head.cls_branches.4.4.bias !!!! pts_bbox_head.cls_branches.4.6.weight !!!! pts_bbox_head.cls_branches.4.6.bias !!!! pts_bbox_head.cls_branches.5.0.weight !!!! pts_bbox_head.cls_branches.5.0.bias !!!! pts_bbox_head.cls_branches.5.1.weight !!!! pts_bbox_head.cls_branches.5.1.bias !!!! pts_bbox_head.cls_branches.5.3.weight !!!! pts_bbox_head.cls_branches.5.3.bias !!!! pts_bbox_head.cls_branches.5.4.weight !!!! pts_bbox_head.cls_branches.5.4.bias !!!! pts_bbox_head.cls_branches.5.6.weight !!!! pts_bbox_head.cls_branches.5.6.bias !!!! pts_bbox_head.reg_branches.0.0.weight !!!! pts_bbox_head.reg_branches.0.0.bias !!!! pts_bbox_head.reg_branches.0.2.weight !!!! pts_bbox_head.reg_branches.0.2.bias !!!! pts_bbox_head.reg_branches.0.4.weight !!!! pts_bbox_head.reg_branches.0.4.bias !!!! pts_bbox_head.reg_branches.1.0.weight !!!! pts_bbox_head.reg_branches.1.0.bias !!!! pts_bbox_head.reg_branches.1.2.weight !!!! pts_bbox_head.reg_branches.1.2.bias !!!! pts_bbox_head.reg_branches.1.4.weight !!!! pts_bbox_head.reg_branches.1.4.bias !!!! pts_bbox_head.reg_branches.2.0.weight !!!! pts_bbox_head.reg_branches.2.0.bias !!!! pts_bbox_head.reg_branches.2.2.weight !!!! pts_bbox_head.reg_branches.2.2.bias !!!! pts_bbox_head.reg_branches.2.4.weight !!!! pts_bbox_head.reg_branches.2.4.bias !!!! pts_bbox_head.reg_branches.3.0.weight !!!! pts_bbox_head.reg_branches.3.0.bias !!!! pts_bbox_head.reg_branches.3.2.weight !!!! pts_bbox_head.reg_branches.3.2.bias !!!! pts_bbox_head.reg_branches.3.4.weight !!!! pts_bbox_head.reg_branches.3.4.bias !!!! pts_bbox_head.reg_branches.4.0.weight !!!! pts_bbox_head.reg_branches.4.0.bias !!!! pts_bbox_head.reg_branches.4.2.weight !!!! pts_bbox_head.reg_branches.4.2.bias !!!! pts_bbox_head.reg_branches.4.4.weight !!!! pts_bbox_head.reg_branches.4.4.bias !!!! pts_bbox_head.reg_branches.5.0.weight !!!! pts_bbox_head.reg_branches.5.0.bias !!!! pts_bbox_head.reg_branches.5.2.weight !!!! pts_bbox_head.reg_branches.5.2.bias !!!! pts_bbox_head.reg_branches.5.4.weight !!!! pts_bbox_head.reg_branches.5.4.bias !!!! pts_bbox_head.bev_embedding.weight
> /home/xinyi/2025/MapTR-V2/DrivingSDK/model_examples/MapTR/mmcv/mmcv/runner/base_module.py(84)init_weights()
-> 'init_info'] = f'The value is the same before and ' \
(Pdb) ll
 56     def init_weights(self) -> None:
 57         """Initialize the weights."""
 58
 59         is_top_level_module = False
 60         # check if it is top-level module
 61         if not hasattr(self, '_params_init_info'):
 62             # The `_params_init_info` is used to record the initialization
 63             # information of the parameters
 64             # the key should be the obj:`nn.Parameter` of model and the value
 65             # should be a dict containing
 66             # - init_info (str): The string that describes the initialization.
 67             # - tmp_mean_value (FloatTensor): The mean of the parameter,
 68             #   which indicates whether the parameter has been modified.
 69             # this attribute would be deleted after all parameters
 70             # is initialized.
 71             self._params_init_info: defaultdict = defaultdict(dict)
 72             is_top_level_module = True
 73
 74             # Initialize the `_params_init_info`,
 75             # When detecting the `tmp_mean_value` of
 76             # the corresponding parameter is changed, update related
 77             # initialization information
 78             import pdb;pdb.set_trace()
 79             for name, param in self.named_parameters():
 80                 print("!!!!", name)
 81                 if name == "pts_bbox_head.bev_embedding.weight":
 82                     import pdb;pdb.set_trace()
 83                 self._params_init_info[param][
 84  ->                 'init_info'] = f'The value is the same before and ' \
 85                                    f'after calling `init_weights` ' \
 86                                    f'of {self.__class__.__name__} '
 87                 self._params_init_info[param][
 88                     'tmp_mean_value'] = param.data.mean()
 89
 90             # pass `params_init_info` to all submodules
 91             # All submodules share the same `params_init_info`,
 92             # so it will be updated when parameters are
 93             # modified at any level of the model.
 94             for sub_module in self.modules():
 95                 sub_module._params_init_info = self._params_init_info
 96
 97         # Get the initialized logger, if not exist,
 98         # create a logger named `mmcv`
 99         logger_names = list(logger_initialized.keys())
100         logger_name = logger_names[0] if logger_names else 'mmcv'
101
102         from ..cnn import initialize
103         from ..cnn.utils.weight_init import update_init_info
104         module_name = self.__class__.__name__
105         if not self._is_init:
106             if self.init_cfg:
107                 print_log(
108                     f'initialize {module_name} with init_cfg {self.init_cfg}',
109                     logger=logger_name)
110                 initialize(self, self.init_cfg)
111                 if isinstance(self.init_cfg, dict):
112                     # prevent the parameters of
113                     # the pre-trained model
114                     # from being overwritten by
115                     # the `init_weights`
116                     if self.init_cfg['type'] == 'Pretrained':
117                         return
118
119             for m in self.children():
120                 if hasattr(m, 'init_weights'):
121                     m.init_weights()
122                     # users may overload the `init_weights`
123                     update_init_info(
124                         m,
125                         init_info=f'Initialized by '
126                         f'user-defined `init_weights`'
127                         f' in {m.__class__.__name__} ')
128
129             self._is_init = True
130         else:
131             warnings.warn(f'init_weights of {self.__class__.__name__} has '
132                           f'been called more than once.')
133
134         if is_top_level_module:
135             self._dump_init_info(logger_name)
136
137             for sub_module in self.modules():
138                 del sub_module._params_init_info
(Pdb) param.data
tensor([[ 0.3346, -0.8828, -0.1136,  ...,  0.8826, -1.8123,  0.8151],
        [-0.5951, -1.4626,  0.8475,  ...,  1.4519,  2.5579, -0.2043],
        [-0.6746,  0.0264,  0.7649,  ...,  1.3184, -0.7278,  0.0154],
        ...,
        [ 0.5182,  1.1863,  0.4237,  ...,  2.0670,  0.0473,  1.8927],
        [-0.1231, -1.1684,  1.9254,  ..., -0.5681, -0.6239, -0.5316],
        [ 1.1549,  0.5097, -0.6489,  ..., -1.0827,  0.0209, -1.4871]])
(Pdb) param.data[0]
tensor([ 0.3346, -0.8828, -0.1136, 0.3244, -0.3209, 0.7281, 0.2600, 1.0407, 0.3501, -0.7894, -0.2642, 0.0256, 0.4568, -0.7666, 0.5501, 0.6672, 0.9547, -1.5918, -0.5885, 0.6217, -1.7893, 0.5015, 1.3663, 0.5167, -0.8397, 1.2157, -0.2112, -0.1900, -1.1100, -0.1835, 0.1395, -0.1162, -0.0241, -0.2620, -1.4265, -3.5514, -1.0383, 0.5434, 0.6944, -0.4704, -0.6737, -0.3197, 2.0386, 0.9962, 0.3623, 0.2161, 0.1009, 0.4102, -0.5804, 0.9949, -0.9284, -1.5280, -0.6673, 1.0168, 0.2570, -0.0159, 0.0816, -0.0709, 1.8976, 0.8071, -2.1896, -0.2846, 0.7351, -0.0122, 0.4719, -1.0442, 1.3061, -0.6122, -1.7751, 0.0140, -0.1835, -1.3341, -0.4490, -0.8574, -0.5970, -0.6065, -0.8981, -0.4639, -0.3914, 0.9162, 0.5757, -0.2449, 0.2926, 0.3605, -0.8455, 0.1431, 1.4882, 1.0701, 0.4523, -0.5384, 0.7856, 0.5376, 0.5866, 0.3177, 0.3368, 0.5757, -0.4450, -0.4648, -0.2667, -0.4380, -0.6291, 1.0922,
0.8174, 1.6219, 0.1084, -1.0030, 1.0718, 0.0233, 0.3976, -0.2483, 0.1274, 1.2697, 1.7519, 0.5526, -0.0835, -0.2765, 1.5941, 0.1815, -0.9194, -0.3283, 0.7386, 1.2244, -1.5840, 0.5081, -0.4533, -0.3685, -0.7840, -2.2048, -1.2149, -0.1261, 0.4236, -0.5035, 0.7673, 3.0645, 1.5516, -1.0794, -0.9831, -0.1259, -0.0342, -0.9783, -0.9835, 0.4927, 0.2750, 0.0925, 0.9347, -1.0328, 0.2586, -0.1690, 1.6564, -1.0191, -0.2295, -0.4965, -1.5436, -0.4743, -0.3224, -0.6014, -1.1098, -0.4344, 0.6232, 0.2970, -0.6383, 0.2032, 1.4624, -0.1955, -0.9351, 0.8184, -1.6618, -0.3535, 0.5543, 0.6959, -0.0045, 0.5455, 0.2708, 0.3808, -0.8948, -1.1259, 0.2557, -0.2562, 1.0773, 0.0700, 0.9273, -0.2175, -0.6563, -0.1454, 1.2647, -0.9731, 0.5140, 0.2004, -1.2946, -0.9143, 0.2443, -0.6664, -1.4068, 1.9337, -1.9506, -0.0645, 1.2034, -0.9366, -1.4137, 0.5090, -1.4954, 0.3777, 0.1271, 1.4837, -2.3799, -0.7307, 0.8735, 1.1964, 0.3695, -0.0382, 0.1699, -1.2928, 0.7482, -0.4217, 0.4145, -0.2290, 0.8246, 0.6280, -0.9238, 0.9698, 2.5129, -0.3167, -2.5102, -0.5279, 1.8864, 0.6001, 0.8151, 1.2361, 0.1404, -0.0159, 0.7300, 0.5005, 0.9352, 2.0131, -1.9003, -0.3568, -1.5779, -0.1438, 0.4502, -0.3725, -0.8348, -1.3513, -2.3635, 1.6701, 0.4200, 1.7095, -0.5624, 0.4635, 0.5312, 0.5178, -0.8639, 1.3752, -0.6619, 0.8826, -1.8123, 0.8151])
(Pdb) param.data[0].max()
tensor(3.0645)
(Pdb) param.data[0].mean()
tensor(-0.0098)
(Pdb) param.data[1].mean()
tensor(-0.0044)
(Pdb) param.data.mean()

2. Software versions:
-- CANN version: CANN 8.1, using the official image
-- Tensorflow/Pytorch/MindSpore version: torch-2.1.0, built with the BiSheng compiler
-- Python version (e.g., Python 3.7.5): Python 3.8, built with the BiSheng compiler
-- OS version (e.g., Ubuntu 18.04): Linux version 5.10.0-60.18.0.50.oe2203.aarch64 (abuild@obs-worker-002) (gcc_old (GCC) 10.3.1, GNU ld (GNU Binutils) 2.37) #1 SMP Wed Mar 30 02:43:08 UTC 2022

[root@fedf3901b7b1 MapTR]# pip list
Package Version Editable project location ----------------------- ----------------------- ----------------------------------------------------------------------- absl-py 2.2.2 addict 2.4.0 argcomplete 3.6.2 asttokens 3.0.0 attrs 25.3.0 auto_tune 0.1.0 av 12.3.0 av2 0.2.1 backcall 0.2.0 black 24.8.0 cachetools 5.5.2 certifi 2025.4.26 charset-normalizer 3.4.1 click 8.1.8 cloudpickle 3.1.1 colorlog 6.9.0 cycler 0.12.1 dataflow 0.0.1 decorator 5.2.1 dependency-groups 1.3.0 descartes 1.1.0 distlib 0.3.9 exceptiongroup 1.2.2 executing 2.2.0 filelock 3.16.1 fire 0.7.0 flake8 7.1.2 fonttools 4.57.0 fsspec 2023.6.0 google-auth 2.39.0 google-auth-oauthlib 1.0.0 grpcio 1.70.0 hccl 0.1.0 hccl_parser 0.1 huggingface-hub 0.30.2 idna 3.10 imageio 2.35.1 importlib_metadata 8.5.0 iniconfig 2.1.0 ipython 8.12.3 jedi 0.19.2 Jinja2 3.1.6 joblib 1.4.2 kiwisolver 1.4.7 kornia 0.7.3 kornia_rs 0.1.8 llm_datadist 0.0.1 llvmlite 0.41.1 lyft-dataset-sdk 0.0.8 Markdown 3.7 markdown-it-py 3.0.0 MarkupSafe 2.1.5 matplotlib 3.5.3 matplotlib-inline 0.1.7 mccabe 0.7.0 mdurl 0.1.2 ml-dtypes 0.2.0 mmcv-full 1.7.2 /home/xinyi/2025/MapTR-V2/DrivingSDK/model_examples/MapTR/mmcv mmdet 2.14.0 mmdet3d 1.0.0rc4 /home/xinyi/2025/MapTR-V2/DrivingSDK/model_examples/MapTR/mmdetection3d mmsegmentation 0.14.1 mock 5.2.0 mpmath 1.3.0 msobjdump 0.1.0 mx_driving 1.0.0+git0ff02cd mypy_extensions 1.1.0 narwhals 1.37.0 networkx 2.2 ninja 1.11.1.4 nox 2025.2.9 numba 0.58.1 numpy 1.23.0 nuscenes-devkit 1.1.11 oauthlib 3.2.2 op_compile_tool 0.1.0 op_gen 0.1 op_test_frame 0.1 opc_tool 0.1.0 opencv-python 4.11.0.86 packaging 25.0 pandas 2.0.3 parso 0.8.4 pathspec 0.12.1 pexpect 4.9.0 pickleshare 0.7.5 pillow 10.4.0 pip 22.3.1 platformdirs 4.3.6 plotly 6.0.1
pluggy 1.5.0 plyfile 1.0.3 polars 1.8.2 prettytable 3.11.0 projects 1.0.9 prompt_toolkit 3.0.51 protobuf 4.21.6 psutil 7.0.0 ptyprocess 0.7.0 pure_eval 0.2.3 pyarrow 17.0.0 pyasn1 0.6.1 pyasn1_modules 0.4.2 pycocotools 2.0.7 pycodestyle 2.12.1 pyflakes 3.2.0 Pygments 2.19.1 pyparsing 3.1.4 pyproj 3.5.0 pyquaternion 0.9.9 pytest 8.3.5 python-dateutil 2.9.0.post0 pytz 2025.2 PyWavelets 1.4.1 PyYAML 6.0.2 requests 2.32.3 requests-oauthlib 2.0.0 rich 14.0.0 rsa 4.9.1 safetensors 0.5.3 schedule_search 0.0.1 scikit-image 0.19.3 scikit-learn 1.3.2 scipy 1.10.1 setuptools 75.1.0 Shapely 1.8.5.post1 show_kernel_debug_data 0.1.0 six 1.17.0 stack-data 0.6.3 sympy 1.13.3 synr 0.5.0 te 0.4.0 tensorboard 2.14.0 tensorboard-data-server 0.7.2 termcolor 2.4.0 terminaltables 3.1.10 threadpoolctl 3.5.0 tifffile 2023.7.10 timm 1.0.15 tomli 2.2.1 torch 2.1.0a0+git7bcf7da torch-npu 2.1.0.post13+gitcce8403 torchaudio 2.1.0 torchvision 0.16.0 tornado 6.4.2 tqdm 4.67.1 traitlets 5.14.3 trimesh 2.35.39 typing_extensions 4.13.2 tzdata 2025.2 universal_pathlib 0.2.6 urllib3 2.2.3 urwid 2.6.16 virtualenv 20.30.0 wcwidth 0.2.13 Werkzeug 3.0.6 wheel 0.45.1 yapf 0.43.0 zipp 3.20.2
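
A note on reading the pdb session above: the trace was planted in mmcv's BaseModule.init_weights, and the last command shown, param.data.mean() on pts_bbox_head.bev_embedding.weight (an nn.Embedding of bev_h_ * bev_w_ = 200 * 100 = 20000 rows by _dim_ = 256), never prints a result, so the hang appears to occur inside that mean() call on the NPU rather than in the data pipeline. For anyone reproducing this, a minimal way to confirm where a stuck process is blocked, without editing mmcv, is the standard-library faulthandler; this is a debugging sketch, not part of the MapTR/DrivingSDK code:

    # Hypothetical helper: add near the top of the training entry point
    # (e.g. tools/train.py) before training starts.
    import sys
    import faulthandler

    # Every 120 s, dump the Python stack of all threads to stderr.
    # If the run hangs, the repeated dumps pinpoint the blocking frame
    # (here it would show base_module.py / init_weights / param.data.mean()).
    faulthandler.dump_traceback_later(120, repeat=True, file=sys.stderr)

Alternatively, py-spy dump --pid <PID> reports the same stacks from outside the process if installing py-spy is an option.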
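For context on the large banner warnings at the top of the log: they are emitted by torch_npu's automatic CUDA-to-NPU migration shim, which is enabled by a single import. A minimal sketch of the documented usage (nothing here is MapTR-specific):

    import torch
    import torch_npu  # registers the Ascend NPU device with PyTorch
    # This import monkey-patches torch.cuda.* and Tensor.cuda() to their
    # torch.npu equivalents and forces the backend of
    # torch.distributed.init_process_group to hccl; the ImportWarning and
    # RuntimeWarning banners in the log are printed at this point.
    from torch_npu.contrib import transfer_to_npu  # noqa: F401

    x = torch.randn(4, 4).cuda()  # lands on the NPU after the patch
    print(x.device)               # e.g. npu:0

This also explains why dist_params = dict(backend='nccl') in the config still works here: the backend is rewritten to hccl at runtime, as the banner states.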
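Unrelated to the hang, the FutureWarning from torch/distributed/launch.py asks scripts to stop relying on the --local-rank argument. A sketch of the torchrun-style handling it suggests, assuming torch_npu provides the torch.npu namespace (the exact integration into the DrivingSDK scripts is not shown here):

    import os
    import torch
    import torch_npu  # noqa: F401  (provides torch.npu)

    # torchrun exports LOCAL_RANK (plus RANK/WORLD_SIZE/MASTER_ADDR) for
    # every worker; fall back to 0 for plain single-process runs.
    local_rank = int(os.environ.get('LOCAL_RANK', 0))
    torch.npu.set_device(local_rank)
    torch.distributed.init_process_group(backend='hccl')

Launched as, for example: torchrun --nproc_per_node=1 tools/train.py <config>.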