add substitution for functional linear #1266

Open

itai-berman wants to merge 1 commit into base: main
Conversation

itai-berman (Collaborator)

Pull Request Description:

Checklist before requesting a review:

  • I set the appropriate labels on the pull request.
  • I have added/updated the release note draft (if necessary).
  • I have updated the documentation to reflect my changes (if necessary).
  • All functions and files are well documented.
  • All functions and classes have type hints.
  • There is a license in all files.
  • The function and variable names are informative.
  • I have checked for code duplications.
  • I have added new unit tests (if necessary).

@elad-c (Collaborator) left a comment:

what about torch.nn.functional.conv2d and its friends?
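(For illustration only, not code from this PR: a minimal hypothetical module showing the functional-conv pattern the question refers to; an analogous substitution would be needed to fold such calls back into nn.Conv2d.)

import torch
from torch import nn
import torch.nn.functional as F


class FunctionalConvNet(nn.Module):  # hypothetical example
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)

    def forward(self, x):
        # Same computation as self.conv(x), expressed through the functional API.
        return F.conv2d(x, self.conv.weight, self.conv.bias, padding=1)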

from model_compression_toolkit.logger import Logger


class FunctionalLinear(common.BaseSubstitution):

Collaborator:

just import BaseSubstitution from common


class FunctionalLinear(common.BaseSubstitution):
"""
Replace functional layer_norm with LayerNorm.

Collaborator:

layer_norm?

Logger.critical(f'Weight input missing for node {func_node.name}.') # pragma: no cover

weight = func_node.weights[1]
bias = func_node.weights.get(2, None)

Collaborator:

make sure the bias is saved as positional weight index 2 even when linear is called with bias as a kwarg
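(Illustrative, standalone PyTorch check of the point above, not code from this PR: F.linear accepts the bias positionally or as a keyword and both calls are equivalent, so the substitution should end up with the bias at positional weight index 2 in both cases.)

import torch
import torch.nn.functional as F

x = torch.randn(1, 8)
w = torch.randn(4, 8)
b = torch.randn(4)

# Equivalent calls; only the first passes the bias positionally, so a tracer
# may record the second call's bias as a kwarg rather than as weight index 2.
y_pos = F.linear(x, w, b)
y_kw = F.linear(x, weight=w, bias=b)
assert torch.equal(y_pos, y_kw)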

name=func_node.name,
framework_attr=framework_attr,
input_shape=func_node.input_shape[0],
output_shape=tuple(func_node.output_shape[0]),

Collaborator:

why not just output_shape=func_node.output_shape?


def forward(self, x):
x = F.linear(x, self.fc1.weight, self.fc1.bias)
y = F.linear(x, self.fc2.weight, self.fc2.bias)

Collaborator:

change to:
y = F.linear(x, bias=self.fc2.bias, weight=self.fc2.weight)
to make sure everything still works
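(A self-contained sketch of what such a test model could look like; the class name and layer sizes below are placeholders, not taken from the PR.)

import torch
from torch import nn
import torch.nn.functional as F


class TwoLinearNet(nn.Module):  # hypothetical test module
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(8, 16)
        self.fc2 = nn.Linear(16, 4)

    def forward(self, x):
        x = F.linear(x, self.fc1.weight, self.fc1.bias)              # bias passed positionally
        y = F.linear(x, bias=self.fc2.bias, weight=self.fc2.weight)  # bias as kwarg, per the comment
        return y


# Quick sanity check of the module itself.
out = TwoLinearNet()(torch.randn(2, 8))
assert out.shape == (2, 4)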
