Use loguru and appdirs to replace logging and hard coded path #82

Merged
16 commits merged on Oct 16, 2023
2 changes: 0 additions & 2 deletions .cmake-format.py
@@ -1,8 +1,6 @@
# ------------------------------------------------
# Options affecting comment reflow and formatting.
# ------------------------------------------------
from __future__ import annotations

with section("markup"):
# enable comment markup parsing and reflow
enable_markup = False
14 changes: 5 additions & 9 deletions .pre-commit-config.yaml
@@ -10,12 +10,8 @@ repos:
- id: end-of-file-fixer
- id: mixed-line-ending
- id: trailing-whitespace
- repo: https://github.com/pycqa/isort
rev: 5.12.0
hooks:
- id: isort
- repo: https://github.com/psf/black
Collaborator:

Can ruff-format replace black at this point?

Using Ruff's formatter (unstable)
https://github.com/astral-sh/ruff-pre-commit#using-ruffs-formatter-unstable

It is still marked "unstable" there. Also, would we need to add

...
  hooks:
    - id: ruff-format

here?

Contributor Author:

I'm not sure about this part either, so I restored the autoflake configuration.

Collaborator:

I suggest keeping the black configuration for now; it can be replaced later if necessary.

Collaborator:

"Also, would we need to add ... here?"

Yes, it is needed. ruff and ruff-format are two different hooks: ruff only covers linting, not formatting (https://github.com/astral-sh/ruff-pre-commit/blob/42f98979dbdfcd148dff424477552b8816a7cf01/.pre-commit-hooks.yaml#L4).

Let's keep using black for formatting; ruff's formatter is still too early. 0.0.291 is only the first user-visible release, and it needs a lot more polish before it is ready for real use.

That said, I believe isort can already be replaced. ruff plans to release 0.1.0 in the next couple of days (astral-sh/ruff#7931), and I intend to replace isort with ruff's isort implementation on the Paddle side then.
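To make the lint/format split concrete, here is a hypothetical snippet (not from this repo) showing what each tool would touch under the rule sets configured in pyproject.toml ("F", "UP", "I"); the rule codes in the comments are the usual ones for these cases:

# Hypothetical example only, illustrating the division of labour discussed above.
import os  # unused import: flagged and removed by `ruff` (rule F401)

name = "world"
greeting = "hello {}".format(name)  # `ruff` rule UP032 rewrites this to f"hello {name}"

config = {  'a':1,'b' : 2 }  # ruff's linter leaves this spacing alone;
                             # a formatter (black, or later ruff-format) normalises it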

rev: 23.1.0
- repo: https://github.com/psf/black-pre-commit-mirror
rev: 23.9.1
Comment on lines +13 to +14
Collaborator:

Brought black back, updated it to the latest version, and switched to the mirror repository (which installs from PyPI instead of cloning and building from source). The mirror's advantage over the source version is that it is compiled with mypyc and therefore faster; it is also the currently recommended setup. See the discussion in psf/black#3405 and the docs at https://black.readthedocs.io/en/stable/integrations/source_version_control.html.

hooks:
- id: black
- repo: https://github.com/PyCQA/autoflake
@@ -31,10 +27,10 @@ repos:
# - "--remove-unused-variables"
- "--remove-all-unused-imports"
- repo: https://github.com/charliermarsh/ruff-pre-commit
rev: v0.0.254
rev: v0.0.292
hooks:
- id: ruff
args: [--fix, --exit-non-zero-on-fix, --no-cache]
- id: ruff
args: [--fix, --exit-non-zero-on-fix, --no-cache]
- repo: https://github.com/asottile/yesqa
rev: v1.4.0
hooks:
3 changes: 3 additions & 0 deletions examples/native_interpreter/pyproject.toml
@@ -0,0 +1,3 @@
[tool.ruff.isort]
lines-between-types = 1
known-first-party = ["build"]
7 changes: 3 additions & 4 deletions examples/native_interpreter/use_interpreter.py
@@ -2,10 +2,9 @@

import paddle

from build import myinterpreter

import paddlefx

from build import myinterpreter
from paddlefx import symbolic_trace


@@ -22,8 +21,8 @@ def net(a, b):
traced_layer.graph.print_tabular()

# the very simple IR we want to lower fx graph to
# each instruction is a list of string of: operation, left_operand, right_operand, result
# only two op supported: add, mul
# each instruction is a list of string of: operation, left_operand,
# right_operand, result only two op supported: add, mul
input_names = []
instructions = []

2 changes: 1 addition & 1 deletion examples/resnet_dynamo.py
@@ -12,7 +12,7 @@
from paddlefx.compiler.tvm import TVMCompiler

paddle.seed(1234)
# logging.getLogger().setLevel(logging.DEBUG)


compiler = TVMCompiler(
full_graph=True,
2 changes: 1 addition & 1 deletion examples/simple_compiler.py
@@ -11,7 +11,7 @@

paddle.seed(1234)

# logging.getLogger().setLevel(logging.DEBUG)



def inner_func(x, y):
3 changes: 0 additions & 3 deletions examples/simple_dynamo.py
@@ -1,7 +1,5 @@
from __future__ import annotations

import logging

import numpy as np
import paddle
import paddle.nn
@@ -10,7 +8,6 @@

from paddlefx.compiler import DummyCompiler, TVMCompiler

logging.getLogger().setLevel(logging.DEBUG)
static_compier = DummyCompiler(full_graph=True, print_tabular_mode="rich")
compiler = TVMCompiler(full_graph=True, print_tabular_mode="rich")

2 changes: 0 additions & 2 deletions examples/targets/target_0_add.py
@@ -11,8 +11,6 @@

import paddlefx

logging.basicConfig(level=logging.DEBUG, format="%(message)s")


def my_compiler(gl: paddlefx.GraphLayer, example_inputs: list[paddle.Tensor] = None):
print("my_compiler() called with FX graph:")
2 changes: 0 additions & 2 deletions examples/targets/target_1_print.py
@@ -11,8 +11,6 @@

import paddlefx

logging.basicConfig(level=logging.DEBUG, format="%(message)s")

# TODO: support graph break for `print`


9 changes: 0 additions & 9 deletions examples/targets/target_2_func.py
@@ -1,20 +1,11 @@
from __future__ import annotations

import logging

# ignore DeprecationWarning from `pkg_resources`
logging.captureWarnings(True)


import paddle
import paddle.nn

import paddlefx
import paddlefx.utils

# logging.basicConfig(level=logging.DEBUG, format="%(message)s")
logging.basicConfig(level=logging.INFO, format="%(message)s")


def my_compiler(gl: paddlefx.GraphLayer, example_inputs: list[paddle.Tensor] = None):
print("my_compiler() called with FX graph:")
10 changes: 0 additions & 10 deletions examples/targets/target_3_add_paddle.py
@@ -1,22 +1,12 @@
from __future__ import annotations

import logging

# ignore DeprecationWarning from `pkg_resources`
logging.captureWarnings(True)

import paddle
import paddle._C_ops

import paddlefx

from paddlefx.compiler import TVMCompiler

logging.basicConfig(level=logging.DEBUG, format="%(message)s")
# logging.basicConfig(level=logging.INFO, format="%(message)s")

paddle.seed(1234)


def func(x, y):
z = paddle.add(x, y)
24 changes: 15 additions & 9 deletions pyproject.toml
@@ -14,19 +14,25 @@ skip-string-normalization = true
# default value may too big to run
workers = 4

# https://pycqa.github.io/isort/docs/configuration/options.html
[tool.isort]
profile = "black"
lines_between_types = 1
known_first_party = ["paddlefx"]
add_imports = ["from __future__ import annotations"]

# https://beta.ruff.rs/docs/configuration/
[tool.ruff]
select = ["UP"]
ignore = ["UP015"]
exclude = [".cmake-format.py"]
select = [
"UP",
"F",
"I"
]
ignore = [
"UP015",
"F405"
]
target-version = "py38"

[tool.ruff.isort]
lines-between-types = 1
known-first-party = ["paddlefx"]
required-imports = ["from __future__ import annotations"]

[tool.pytest.ini_options]
minversion = "7.0.0"
pythonpath = "tests"
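For reference, a short sketch of the import layout the [tool.ruff.isort] settings above produce (using the paddle and paddlefx packages already shown in this diff): required-imports inserts the __future__ import, known-first-party gives paddlefx its own section, and lines-between-types = 1 separates plain import statements from from-imports:

from __future__ import annotations  # inserted automatically (required-imports)

import paddle        # third-party section
import paddle.nn

import paddlefx      # first-party section (known-first-party = ["paddlefx"])

from paddlefx.compiler import TVMCompiler  # blank line above comes from lines-between-types = 1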
2 changes: 2 additions & 0 deletions requirements.txt
@@ -1,2 +1,4 @@
paddlepaddle>=2.4.0
pydot
loguru
appdirs
12 changes: 7 additions & 5 deletions src/paddlefx/cache_manager.py
@@ -5,6 +5,8 @@

from typing import TYPE_CHECKING, Callable

from loguru import logger

if TYPE_CHECKING:
GuardFunction = Callable[[types.FrameType], bool]
GuardedCodes = list["GuardedCode"]
@@ -28,7 +30,7 @@ def add_cache(cls, code: types.CodeType, guarded_code: GuardedCode):
def get_cache(cls, frame: types.FrameType) -> GuardedCode | None:
code: types.CodeType = frame.f_code
if code not in cls.cache_dict:
print(f"Firstly call {code}\n")
logger.success(f"Firstly call {code}\n")
return None
return cls.lookup(frame, cls.cache_dict[code])

@@ -44,17 +46,17 @@ def lookup(
try:
guard_fn = guarded_code.guard_fn
if guard_fn(frame):
print(
logger.success(
f"[Cache]: Cache hit, GuardFunction is {guard_fn}\n",
)
return guarded_code
else:
print(
logger.info(
f"[Cache]: Cache miss, GuardFunction is {guard_fn}\n",
)
except Exception as e:
print(f"[Cache]: GuardFunction function error: {e}\n")
logger.exception(f"[Cache]: GuardFunction function error: {e}\n")
continue

print("[Cache]: all guards missed\n")
logger.success("[Cache]: all guards missed\n")
return None
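As a rough sketch (not part of this PR) of how the loguru calls above differ from the stdlib logging they replace: loguru needs no basicConfig, ships a stderr sink at DEBUG level by default, and adds a SUCCESS level between INFO and WARNING. The sink reconfiguration below is an assumption for illustration, not paddlefx code:

import sys

from loguru import logger

# The default sink already prints DEBUG and above; to mimic the old
# `logging.getLogger().setLevel(logging.INFO)` behaviour, swap the sink:
logger.remove()
logger.add(sys.stderr, level="INFO")

logger.debug("hidden at INFO level")
logger.info("[Cache]: Cache miss ...")
logger.success("[Cache]: Cache hit ...")  # SUCCESS sits between INFO and WARNING

try:
    raise ValueError("guard failed")
except Exception:
    logger.exception("[Cache]: GuardFunction function error")  # logs at ERROR with traceback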
18 changes: 11 additions & 7 deletions src/paddlefx/compiler/tvm.py
@@ -6,6 +6,8 @@
import paddle.device
import tvm

from appdirs import user_cache_dir
from loguru import logger
from tvm import auto_scheduler, relay

import paddlefx
@@ -36,14 +38,16 @@ def __init__(
self.tune_mode = tune_mode

def compile(self, gl: paddlefx.GraphLayer, example_inputs: list) -> Callable:
cache_path = user_cache_dir('paddlefx')

shape_dict = {}
for node in gl.graph.nodes:
if node.op == "placeholder":
shape_dict[node.name] = example_inputs[self.input_index].shape
self.input_index += 1
static_func = paddle.jit.to_static(gl.forward)
static_func(*example_inputs)
model_path = f"~/.cache/paddlefx/model_{id(gl)}"
model_path = f"{cache_path}/paddle_static_model/{id(gl)}"
paddle.jit.save(static_func, model_path)
translated_layer = paddle.jit.load(model_path)
mod, params = relay.frontend.from_paddle(translated_layer)
@@ -53,21 +57,21 @@ def compile(self, gl: paddlefx.GraphLayer, example_inputs: list) -> Callable:
)

for idx, task in enumerate(tasks):
print(
"========== Task %d (workload key: %s) =========="
% (idx, task.workload_key)
logger.info(
f"========== Task {idx} (workload key: {task.workload_key}) =========="
)
print(task.compute_dag)
logger.info(task.compute_dag)

tuner = auto_scheduler.TaskScheduler(tasks, task_weights)
log_file_path = f"{cache_path}/paddle_static_model/{id(gl)}"
tune_option = auto_scheduler.TuningOptions(
num_measure_trials=200,
early_stopping=10,
measure_callbacks=[auto_scheduler.RecordToFile("log_file")],
measure_callbacks=[auto_scheduler.RecordToFile(log_file_path)],
)

tuner.tune(tune_option)
with auto_scheduler.ApplyHistoryBest("log_file"):
with auto_scheduler.ApplyHistoryBest(log_file_path):
with tvm.transform.PassContext(
opt_level=3, config={"relay.backend.use_auto_scheduler": True}
):
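For context, a minimal sketch of what user_cache_dir('paddlefx') resolves to; the exact directory is platform-dependent (and honours variables such as XDG_CACHE_HOME on Linux), so the paths in the comments are indicative rather than guaranteed:

from appdirs import user_cache_dir

cache_path = user_cache_dir("paddlefx")
# Typical results:
#   Linux:   ~/.cache/paddlefx   (or $XDG_CACHE_HOME/paddlefx)
#   macOS:   ~/Library/Caches/paddlefx
#   Windows: a per-user Local AppData cache directory

# The PR then builds model/log paths under this directory instead of the
# previous hard-coded "~/.cache/paddlefx":
model_path = f"{cache_path}/paddle_static_model/1234"  # "1234" stands in for id(gl)
print(model_path)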
9 changes: 5 additions & 4 deletions src/paddlefx/convert_frame.py
@@ -1,10 +1,11 @@
from __future__ import annotations

import logging
import types

from typing import TYPE_CHECKING, Callable

from loguru import logger

from .bytecode_transformation import Instruction, transform_code_object
from .cache_manager import CodeCacheManager, GuardedCode
from .paddle_utils import Tensor, skip_paddle_filename, skip_paddle_frame
@@ -31,7 +32,7 @@ def skip_frame(frame: types.FrameType) -> bool:

def convert_frame(frame: types.FrameType, compiler_fn: Callable) -> GuardedCode | None:
if skip_frame(frame):
logging.debug(f"skip_frame: {frame}")
logger.debug(f"skip_frame: {frame}")
return None
# TODO: guard_fn is not declared in this scope
guard_fn = None
@@ -45,12 +46,12 @@ def transform(instructions: list[Instruction], code_options: dict):
code_options.update(tracer.output.code_options)
instructions[:] = tracer.output.instructions

logging.info(f"convert_frame: {frame}")
logger.info(f"convert_frame: {frame}")
code = frame.f_code
log_code(code, "ORIGINAL_BYTECODE")

if (cached_code := CodeCacheManager.get_cache(frame)) is not None:
logging.info(f"cached_code: {cached_code}")
logger.info(f"cached_code: {cached_code}")
return cached_code

# TODO: rm torch code dependency
5 changes: 3 additions & 2 deletions src/paddlefx/eval_frame.py
@@ -1,11 +1,12 @@
from __future__ import annotations

import functools
import logging
import types

from typing import Callable

from loguru import logger

from ._eval_frame import set_eval_frame
from .compiler import DummyCompiler
from .convert_frame import convert_frame
@@ -54,7 +55,7 @@ def __fn(frame: types.FrameType):
guarded_code = convert_frame(frame, backend)
return guarded_code
except NotImplementedError as e:
logging.debug(f"!! NotImplementedError: {e}")
logger.debug(f"!! NotImplementedError: {e}")
except Exception:
raise
return None
9 changes: 5 additions & 4 deletions src/paddlefx/legacy_module/translator.py
@@ -1,6 +1,5 @@
from __future__ import annotations

import logging
import operator
import types

@@ -9,6 +8,8 @@
import paddle
import paddle.nn

from loguru import logger

from ..bytecode_transformation import Instruction, create_instruction
from ..output_graph import OutputGraph
from ..proxy import Attribute, Proxy
@@ -127,7 +128,7 @@ def call_function(self, fn, args, kwargs):
res = self.output.create_node('call_function', fn, args, kwargs)
self.stack.push(res)
elif is_custom_call:
raise NotImplementedError(f"custom_call is not supported")
raise NotImplementedError("custom_call is not supported")
else:
raise NotImplementedError(f"call function {fn} is not supported")

@@ -236,7 +237,7 @@ def STORE_SUBSCR(self, inst):
self.output.create_node('call_method', "__setitem__", [root, idx, value], {})

def POP_TOP(self, inst: Instruction):
value = self.stack.pop()
self.stack.pop()

def STORE_FAST(self, inst: Instruction):
self.f_locals[inst.argval] = self.stack.pop()
@@ -323,7 +324,7 @@ def step(self, inst: Instruction):
if not hasattr(self, inst.opname):
raise NotImplementedError(f"missing: {inst.opname}")

logging.debug(f"TRACE {inst.opname} {inst.argval} {self.stack}")
logger.debug(f"TRACE {inst.opname} {inst.argval} {self.stack}")
getattr(self, inst.opname)(inst)

def run(self):