Commit f7edb8e
fix gpt2 quantize error
fix gpt2 quantize error
ZX-ModelCloud committed Dec 19, 2024
1 parent ea79496 commit f7edb8e
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions gptqmodel/quantization/gptq.py
@@ -41,10 +41,10 @@ def _clone_layer(self):
         else:
             clone = self.layer.weight.data.clone()

-        if isinstance(clone, nn.Conv2d):
+        if isinstance(self.layer, nn.Conv2d):
             clone = clone.flatten(1)

-        if isinstance(clone, transformers.pytorch_utils.Conv1D):
+        if isinstance(self.layer, transformers.pytorch_utils.Conv1D):
            clone = clone.t()

        return clone.to(device=self.device, dtype=torch.float)
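The bug: after `self.layer.weight.data.clone()`, `clone` is a plain tensor, so `isinstance(clone, nn.Conv2d)` and `isinstance(clone, transformers.pytorch_utils.Conv1D)` were always False and the reshaping branches never ran. GPT-2 uses `Conv1D` layers (which store their weight transposed relative to `nn.Linear`), so the missing transpose broke quantization for that model. A minimal, dependency-free sketch of the failure mode, using a stand-in class instead of the real torch/transformers types:

```python
# Stand-in for transformers.pytorch_utils.Conv1D (hypothetical, for illustration only).
class Conv1D:
    def __init__(self, weight):
        self.weight = weight

layer = Conv1D(weight=[[1, 2], [3, 4]])

# Stands in for `self.layer.weight.data.clone()`: the clone is raw weight
# data, not a module, so it carries no information about the layer's class.
clone = [row[:] for row in layer.weight]

# Buggy check: the cloned data is never an instance of the layer class.
assert not isinstance(clone, Conv1D)

# Fixed check: inspect the owning module itself, which IS a Conv1D.
assert isinstance(layer, Conv1D)
```

The fix simply moves the type test from the cloned weight data to `self.layer`, the module whose class actually determines whether a flatten or transpose is needed.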
