1. Problem description (with error log context):
When launching single-node multi-card training, the job fails to start.
Model creation failed: InnerRun:torch_npu/csrc/framework/OpParamMaker.cpp:197 NPU error, error code is 207001
[ERROR] 2024-12-24-11:59:10 (PID:307680, Device:4, RankID:4) ERR01100 OPS call acl api failed
[Error]: Failed to apply for memory.
Check the remaining storage space in the hardware environment.
EE9999: Inner Error!
pool is nullptr or id less than zero, now id=-1.[FUNC:GetItemById][FILE:pool.cc][LINE:251]
EE9999 load args failed, retCode=0x711000e.[FUNC:Load][FILE:arg_loader.cc][LINE:317]
TraceBack (most recent call last):
Failed to load args and l2ctrl, stream_id=15, retCode=0x711000e.[FUNC:LaunchKernel][FILE:context.cc][LINE:700]
rtKernelLaunchWithFlagV2 execute failed, reason=[alloc memory error][FUNC:FuncErrorReason][FILE:error_message_manage.cc][LINE:50]
Call rtKernelLaunchWithFlagV2(stub_func_, block_dim_, &args_ex_, sm_desc, stream, 0U, &cfg_) fail, ret: 0x32899[FUNC:DoLaunchKernelWithArgsEx][FILE:op_task.cc][LINE:791]
Call static_cast<rtError_t>(DoLaunchKernelWithArgsEx(stream)) fail, ret: 0x32899[FUNC:DoLaunchKernel][FILE:op_task.cc][LINE:781]
invoke rtKernelLaunch failed, ret = 207001, task = 0/1_-1_0_trans_TransData_1_tvmbin[FUNC:LaunchKernel][FILE:op_task.cc][LINE:394]
[Exec][Op]Execute op failed. op type = Identity, ge result = 207001[FUNC:ReportCallError][FILE:log_inner.cpp][LINE:161]
2. Software versions:
-- CANN version: 7.0.0.beta1
-- PyTorch version: 2.1.0
-- Python version: Python 3.9
-- OS version: Ubuntu 20.04 LTS
3. Test steps:
torchrun --nproc-per-node=8 main_train.py
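For context, a minimal sketch of the distributed setup that a torchrun-launched NPU script of this kind typically performs; the actual structure of main_train.py is an assumption, and only the HCCL backend and the transfer_to_npu shim are confirmed by the logs below:

```python
# Minimal sketch, not the actual main_train.py: only HCCL and transfer_to_npu
# usage are visible in the logs; everything else here is illustrative.
import os

import torch
import torch.distributed as dist
import torch_npu  # noqa: F401
from torch_npu.contrib import transfer_to_npu  # noqa: F401  # patches torch.cuda.* -> torch.npu.*

local_rank = int(os.environ["LOCAL_RANK"])   # set per worker by torchrun
torch.npu.set_device(local_rank)             # bind this process to one NPU card
dist.init_process_group(backend="hccl")      # HCCL is the collective backend on NPU

device = torch.device(f"npu:{local_rank}")
# model = MyModel().to(device)               # hypothetical model; the 207001 error below fires at this step
```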
4. Log output:
before: torch_npu.distributed.is_hccl_available(): True
/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/site-packages/torch_npu/contrib/transfer_to_npu.py:209: ImportWarning:
*************************************************************************************************************
The torch.Tensor.cuda and torch.nn.Module.cuda are replaced with torch.Tensor.npu and torch.nn.Module.npu now..
The torch.cuda.DoubleTensor is replaced with torch.npu.FloatTensor cause the double type is not supported now..
The backend in torch.distributed.init_process_group set to hccl now..
The torch.cuda.* and torch.cuda.amp.* are replaced with torch.npu.* and torch.npu.amp.* now..
The device parameters have been replaced with npu in the function below:
torch.logspace, torch.randint, torch.hann_window, torch.rand, torch.full_like, torch.ones_like, torch.rand_like, torch.randperm, torch.arange, torch.frombuffer, torch.normal, torch._empty_per_channel_affine_quantized, torch.empty_strided, torch.empty_like, torch.scalar_tensor, torch.tril_indices, torch.bartlett_window, torch.ones, torch.sparse_coo_tensor, torch.randn, torch.kaiser_window, torch.tensor, torch.triu_indices, torch.as_tensor, torch.zeros, torch.randint_like, torch.full, torch.eye, torch._sparse_csr_tensor_unsafe, torch.empty, torch._sparse_coo_tensor_unsafe, torch.blackman_window, torch.zeros_like, torch.range, torch.sparse_csr_tensor, torch.randn_like, torch.from_file, torch._cudnn_init_dropout_state, torch._empty_affine_quantized, torch.linspace, torch.hamming_window, torch.empty_quantized, torch._pin_memory, torch.autocast, torch.load, torch.Generator, torch.Tensor.new_empty, torch.Tensor.new_empty_strided, torch.Tensor.new_full, torch.Tensor.new_ones, torch.Tensor.new_tensor, torch.Tensor.new_zeros, torch.Tensor.to, torch.nn.Module.to, torch.nn.Module.to_empty
*************************************************************************************************************
main_train: 2024-12-24 11:59:09.291022
learning_rate: 0.0001
optim: lamb
weight_decay: 0
clamp_value: 10
use_clip_grad_norm: True
max_norm: 1.0
use_l1: False
BATCH_SIZE: 640
num_workers: 32
pred_dim: 1
years: 2
ASCEND_LAUNCH_BLOCKING = 1
OMP_NUM_THREADS = 1
HCCL_CONNECT_TIMEOUT = 300
NPU memory status:
Allocated: 0.00 MB
Cached: 0.00 MB
Max memory: 0.00 MB
after: torch_npu.distributed.is_hccl_available(): True
is_distributed: True, local_rank: 6, world_size: 8, device: npu:6, device_count: 8
NPU memory status:
Allocated: 0.00 MB
Cached: 0.00 MB
Max memory: 0.00 MB
after: torch_npu.distributed.is_hccl_available(): True
is_distributed: True, local_rank: 2, world_size: 8, device: npu:2, device_count: 8
NPU memory status:
Allocated: 0.00 MB
Cached: 0.00 MB
Max memory: 0.00 MB
NPU memory status:
Allocated: 265.96 MB
Cached: 274.00 MB
Max memory: 265.96 MB
NPU memory status:
Allocated: 265.96 MB
Cached: 274.00 MB
Max memory: 265.96 MB
NPU memory status:
Allocated: 265.96 MB
Cached: 274.00 MB
Max memory: 265.96 MB
Model creation failed: InnerRun:torch_npu/csrc/framework/OpParamMaker.cpp:197 NPU error, error code is 207001
[ERROR] 2024-12-24-11:59:10 (PID:307680, Device:4, RankID:4) ERR01100 OPS call acl api failed
[Error]: Failed to apply for memory.
Check the remaining storage space in the hardware environment.
EE9999: Inner Error!
pool is nullptr or id less than zero, now id=-1.[FUNC:GetItemById][FILE:pool.cc][LINE:251]
EE9999 load args failed, retCode=0x711000e.[FUNC:Load][FILE:arg_loader.cc][LINE:317]
TraceBack (most recent call last):
Failed to load args and l2ctrl, stream_id=15, retCode=0x711000e.[FUNC:LaunchKernel][FILE:context.cc][LINE:700]
rtKernelLaunchWithFlagV2 execute failed, reason=[alloc memory error][FUNC:FuncErrorReason][FILE:error_message_manage.cc][LINE:50]
Call rtKernelLaunchWithFlagV2(stub_func_, block_dim_, &args_ex_, sm_desc, stream, 0U, &cfg_) fail, ret: 0x32899[FUNC:DoLaunchKernelWithArgsEx][FILE:op_task.cc][LINE:791]
Call static_cast<rtError_t>(DoLaunchKernelWithArgsEx(stream)) fail, ret: 0x32899[FUNC:DoLaunchKernel][FILE:op_task.cc][LINE:781]
invoke rtKernelLaunch failed, ret = 207001, task = 0/1_-1_0_trans_TransData_1_tvmbin[FUNC:LaunchKernel][FILE:op_task.cc][LINE:394]
[Exec][Op]Execute op failed. op type = Identity, ge result = 207001[FUNC:ReportCallError][FILE:log_inner.cpp][LINE:161]
NPU memory status:
Allocated: 265.96 MB
Cached: 274.00 MB
Max memory: 265.96 MB
NPU memory status:
Allocated: 265.96 MB
Cached: 274.00 MB
Max memory: 265.96 MB
NPU memory status:
Allocated: 265.96 MB
Cached: 274.00 MB
Max memory: 265.96 MB
NPU memory status:
Allocated: 265.96 MB
Cached: 274.00 MB
Max memory: 265.96 MB
[2024-12-24 11:59:20,846] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 307676 closing signal SIGTERM
[2024-12-24 11:59:20,846] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 307677 closing signal SIGTERM
[2024-12-24 11:59:20,846] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 307678 closing signal SIGTERM
[2024-12-24 11:59:20,847] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 307679 closing signal SIGTERM
[2024-12-24 11:59:20,847] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 307681 closing signal SIGTERM
[2024-12-24 11:59:20,847] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 307682 closing signal SIGTERM
[2024-12-24 11:59:20,847] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 307683 closing signal SIGTERM
[2024-12-24 11:59:50,847] torch.distributed.elastic.multiprocessing.api: [WARNING] Unable to shutdown process 307676 via Signals.SIGTERM, forcefully exiting via Signals.SIGKILL
[2024-12-24 11:59:51,044] torch.distributed.elastic.multiprocessing.api: [WARNING] Unable to shutdown process 307677 via Signals.SIGTERM, forcefully exiting via Signals.SIGKILL
Process ForkServerPoolWorker-9:
Process ForkServerPoolWorker-4:
Process ForkServerPoolWorker-8:
Traceback (most recent call last):
Traceback (most recent call last):
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/pool.py", line 131, in worker
put((job, i, result))
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/queues.py", line 377, in put
self._writer.send_bytes(obj)
Process ForkServerPoolWorker-7:
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/connection.py", line 200, in send_bytes
self._send_bytes(m[offset:offset + size])
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/pool.py", line 131, in worker
put((job, i, result))
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/connection.py", line 411, in _send_bytes
self._send(header + buf)
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/queues.py", line 377, in put
self._writer.send_bytes(obj)
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/connection.py", line 368, in _send
n = write(self._handle, buf)
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/connection.py", line 200, in send_bytes
self._send_bytes(m[offset:offset + size])
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/connection.py", line 411, in _send_bytes
self._send(header + buf)
BrokenPipeError: [Errno 32] Broken pipe
Process ForkServerPoolWorker-2:
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/connection.py", line 368, in _send
n = write(self._handle, buf)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
BrokenPipeError: [Errno 32] Broken pipe
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/pool.py", line 136, in worker
put((job, i, (False, wrapped)))
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/queues.py", line 377, in put
self._writer.send_bytes(obj)
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/connection.py", line 200, in send_bytes
self._send_bytes(m[offset:offset + size])
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/connection.py", line 411, in _send_bytes
self._send(header + buf)
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/connection.py", line 368, in _send
n = write(self._handle, buf)
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/pool.py", line 136, in worker
put((job, i, (False, wrapped)))
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/queues.py", line 377, in put
self._writer.send_bytes(obj)
BrokenPipeError: [Errno 32] Broken pipe
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/connection.py", line 200, in send_bytes
self._send_bytes(m[offset:offset + size])
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/connection.py", line 411, in _send_bytes
self._send(header + buf)
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/connection.py", line 368, in _send
n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe
Traceback (most recent call last):
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/pool.py", line 131, in worker
put((job, i, result))
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/queues.py", line 377, in put
self._writer.send_bytes(obj)
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/connection.py", line 200, in send_bytes
self._send_bytes(m[offset:offset + size])
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/connection.py", line 411, in _send_bytes
self._send(header + buf)
Traceback (most recent call last):
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/connection.py", line 368, in _send
n = write(self._handle, buf)
Traceback (most recent call last):
BrokenPipeError: [Errno 32] Broken pipe
During handling of the above exception, another exception occurred:
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/pool.py", line 131, in worker
put((job, i, result))
Traceback (most recent call last):
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/queues.py", line 377, in put
self._writer.send_bytes(obj)
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/pool.py", line 131, in worker
put((job, i, result))
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/connection.py", line 200, in send_bytes
self._send_bytes(m[offset:offset + size])
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/queues.py", line 377, in put
self._writer.send_bytes(obj)
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/connection.py", line 411, in _send_bytes
self._send(header + buf)
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/connection.py", line 200, in send_bytes
self._send_bytes(m[offset:offset + size])
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/connection.py", line 368, in _send
n = write(self._handle, buf)
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/connection.py", line 411, in _send_bytes
self._send(header + buf)
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/pool.py", line 136, in worker
put((job, i, (False, wrapped)))
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/connection.py", line 368, in _send
n = write(self._handle, buf)
...
The error indicates that an operator failed when calling the underlying ACL interface; please provide the relevant information as requested in the pinned issue.
The training is launched on AICC without root permission, so the event logs cannot be retrieved.
With a small amount of data the training launches normally; after increasing the amount of training data, it fails to launch even when batch_size is set very small.
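Before the larger-data rerun below, one low-effort check is to log per-rank device memory immediately before the failing step, so the 207001 "Failed to apply for memory" error can be correlated with actual NPU memory pressure rather than host-side data loading. A minimal sketch, assuming the script's "NPU memory status" prints already come from the standard torch_npu memory queries (the helper name log_npu_memory is illustrative):

```python
# Minimal sketch (assumption: these are the same torch_npu queries behind the
# script's "NPU memory status" prints; log_npu_memory is a made-up helper name).
import torch
import torch_npu  # noqa: F401

def log_npu_memory(tag: str, device: int) -> None:
    mb = 1024 ** 2
    allocated = torch.npu.memory_allocated(device) / mb   # memory held by live tensors
    reserved = torch.npu.memory_reserved(device) / mb     # memory held by the caching allocator
    peak = torch.npu.max_memory_allocated(device) / mb    # high-water mark since start/reset
    print(f"[{tag}] npu:{device} allocated={allocated:.2f} MB "
          f"reserved={reserved:.2f} MB peak={peak:.2f} MB")

# e.g. call right before the step that raises error 207001:
# log_npu_memory("before model.to(device)", local_rank)
```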
[2024-12-27 16:03:54,815] torch.distributed.run: [WARNING] master_addr is only used for static rdzv_backend and when rdzv_endpoint is not specified.
/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/site-packages/torch_npu/contrib/transfer_to_npu.py:162: ImportWarning:
*************************************************************************************************************
The torch.Tensor.cuda and torch.nn.Module.cuda are replaced with torch.Tensor.npu and torch.nn.Module.npu now..
The torch.cuda.DoubleTensor is replaced with torch.npu.FloatTensor cause the double type is not supported now..
The backend in torch.distributed.init_process_group set to hccl now..
The torch.cuda.* and torch.cuda.amp.* are replaced with torch.npu.* and torch.npu.amp.* now..
The device parameters have been replaced with npu in the function below:
torch.logspace, torch.randint, torch.hann_window, torch.rand, torch.full_like, torch.ones_like, torch.rand_like, torch.randperm, torch.arange, torch.frombuffer, torch.normal, torch._empty_per_channel_affine_quantized, torch.empty_strided, torch.empty_like, torch.scalar_tensor, torch.tril_indices, torch.bartlett_window, torch.ones, torch.sparse_coo_tensor, torch.randn, torch.kaiser_window, torch.tensor, torch.triu_indices, torch.as_tensor, torch.zeros, torch.randint_like, torch.full, torch.eye, torch._sparse_csr_tensor_unsafe, torch.empty, torch._sparse_coo_tensor_unsafe, torch.blackman_window, torch.zeros_like, torch.range, torch.sparse_csr_tensor, torch.randn_like, torch.from_file, torch._cudnn_init_dropout_state, torch._empty_affine_quantized, torch.linspace, torch.hamming_window, torch.empty_quantized, torch._pin_memory, torch.Tensor.new_empty, torch.Tensor.new_empty_strided, torch.Tensor.new_full, torch.Tensor.new_ones, torch.Tensor.new_tensor, torch.Tensor.new_zeros, torch.Tensor.to, torch.nn.Module.to, torch.nn.Module.to_empty
*************************************************************************************************************
main_train: 2024-12-27 16:04:38.251957
learning_rate: 0.0001
optim: lamb
weight_decay: 0
clamp_value: 10
use_clip_grad_norm: True
max_norm: 1.0
use_l1: False
BATCH_SIZE: 608
num_workers: 32
pred_dim: 1
years: 10
world_size: 8, local_rank: 2, rank: 2, device: npu:2
world_size: 8, local_rank: 1, rank: 1, device: npu:1
Traceback (most recent call last):
File "/home/ma-user/work/mywork/stock/source-torch-npu/main_train.py", line 137, in
model = model.to(device)
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/site-packages/torch_npu/contrib/transfer_to_npu.py", line 50, in decorated
return fn(*args, **kwargs)
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/site-packages/torch_npu/utils/module.py", line 60, in to
self.cast_weight(device)
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/site-packages/torch_npu/utils/module.py", line 126, in cast_weight
sub_module.cast_weight(device)
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/site-packages/torch_npu/utils/module.py", line 126, in cast_weight
sub_module.cast_weight(device)
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/site-packages/torch_npu/utils/module.py", line 126, in cast_weight
sub_module.cast_weight(device)
[Previous line repeated 4 more times]
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/site-packages/torch_npu/utils/module.py", line 119, in cast_weight
format_cast(self, current_class)
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/site-packages/torch_npu/utils/module.py", line 91, in format_cast
module.weight.data = torch_npu.npu_format_cast(module.weight.data, 3) # ACL_FORMAT_NC1HWC0
File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/site-packages/torch/ops.py", line 692, in call
return self.op(*args, **kwargs or {})
RuntimeError: InnerRun:build/CMakeFiles/torch_npu.dir/compiler_depend.ts:208 NPU error, error code is 207001
[Error]: Failed to apply for memory.
Check the remaining storage space in the hardware environment.
EE9999: Inner Error!
pool is nullptr or id less than zero, now id=-1.[FUNC:GetItemById][FILE:pool.cc][LINE:251]
EE9999 load args failed, retCode=0x711000e.[FUNC:Load][FILE:arg_loader.cc][LINE:317]
TraceBack (most recent call last):
Failed to load args and l2ctrl, stream_id=0, retCode=0x711000e.[FUNC:LaunchKernel][FILE:context.cc][LINE:700]
rtKernelLaunchWithFlagV2 execute failed, reason=[alloc memory error][FUNC:FuncErrorReason][FILE:error_message_manage.cc][LINE:50]
Call rtKernelLaunchWithFlagV2(stub_func_, block_dim_, &args_ex_, sm_desc, stream, 0U, &cfg_) fail, ret: 0x32899[FUNC:DoLaunchKernelWithArgsEx][FILE:op_task.cc][LINE:791]
Call static_cast<rtError_t>(DoLaunchKernelWithArgsEx(stream)) fail, ret: 0x32899[FUNC:DoLaunchKernel][FILE:op_task.cc][LINE:781]
invoke rtKernelLaunch failed, ret = 207001, task = 0/1_-1_0_trans_TransData_1_tvmbin[FUNC:LaunchKernel][FILE:op_task.cc][LINE:394]
[Exec][Op]Execute op failed. op type = Identity, ge result = 207001[FUNC:ReportCallError][FILE:log_inner.cpp][LINE:161]
/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 41 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
The plog logs are usually under /<username>/ascend/log; if you are not the root user, check under the corresponding user's directory.
In addition, a log dump path can be configured; see https://www.hiascend.com/document/detail/zh/canncommercial/80RC3/apiref/envvar/envref_07_0121.html
Alternatively, you can print the logs to the screen with export ASCEND_SLOG_PRINT_TO_STDOUT=1 and redirect the output to a file of your choice.
Also, when an operator fails, an exception-l0 file is usually generated in the current directory, which records information related to the operator error.
Finally, if there are many files, you can upload them separately to a gitee repository for easier review.
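As a concrete illustration of the log-to-stdout route mentioned above, a minimal sketch that sets the variable in-process; the assumption is that it must be in place before the NPU runtime initializes (i.e. before torch_npu is imported), and exporting it in the launch shell before torchrun with the output redirected via "> train.log 2>&1" achieves the same thing:

```python
# Minimal sketch: route CANN device logs to stdout, as suggested in the reply above.
# Assumption: ASCEND_SLOG_PRINT_TO_STDOUT has to be set before the NPU runtime
# initializes; setting "export ASCEND_SLOG_PRINT_TO_STDOUT=1" in the launch shell
# before torchrun is equivalent.
import os

os.environ["ASCEND_SLOG_PRINT_TO_STDOUT"] = "1"

import torch       # noqa: E402
import torch_npu   # noqa: E402  # NPU runtime log settings take effect from here on
```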