[Bugfix][Qwen] Fix the weight dtype in qwen3_next: it is actually bfloat16 (#27030)

Signed-off-by: Tao He <linzhu.ht@alibaba-inc.com>
This commit is contained in:
Tao He
2025-10-17 11:37:52 +08:00
committed by GitHub
parent 08405609cc
commit bde9e2272a


@@ -325,7 +325,6 @@ class Qwen3NextGatedDeltaNet(nn.Module, MambaBase):
         self.A_log = nn.Parameter(
             torch.empty(
                 divide(self.num_v_heads, self.tp_size),
-                dtype=torch.float32,
             )
         )
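A minimal sketch of the mechanism behind this fix (not vLLM's actual loading code): when `torch.empty` is called without an explicit `dtype=`, it follows the process-wide default dtype, so a parameter created while the model is built under a bfloat16 default ends up in bfloat16 instead of being pinned to float32.

```python
import torch
from torch import nn

# With an explicit dtype, the parameter is pinned to float32
# regardless of the model's configured dtype.
pinned = nn.Parameter(torch.empty(4, dtype=torch.float32))

# Without it, torch.empty follows the current default dtype,
# so the parameter matches the rest of a bfloat16 model.
torch.set_default_dtype(torch.bfloat16)
inherited = nn.Parameter(torch.empty(4))
torch.set_default_dtype(torch.float32)  # restore the default

print(pinned.dtype)     # torch.float32
print(inherited.dtype)  # torch.bfloat16
```

Dropping the hard-coded `dtype=torch.float32` lets the `A_log` parameter be created in whatever dtype the model is configured with, so the checkpoint's bfloat16 weights load without an implicit dtype mismatch.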