[WIP] add mluop cholesky #1146

Open · wants to merge 45 commits into master
Conversation

@dglr (Collaborator) commented Nov 13, 2024

Thanks for your contribution and we appreciate it a lot. 🚀🚀

1. Motivation

Please describe your motivation and the goal you want to achieve through this pull request.

2. Modification

Please briefly describe what modification is made in this pull request, and indicate where to make the modification.

Are new test cases added? If so, please post the corresponding generator-PR link here.

3. Test Report

For details on how to do operator testing, see GTest-User-Guide-zh.

3.1 Modification Details

3.1.1 Accuracy Acceptance Standard

For static threshold standard details, see: MLU-OPS™ Accuracy Acceptance Standard.

  • static threshold
    • diff1
      • float32 mlu diff1 <= 1e-5
      • float32 mlu diff1 <= 3e-3
      • float16 mlu diff1 <= 3e-3
    • diff2
      • float32 mlu diff2 <= 1e-5
      • float32 mlu diff2 <= 3e-3
      • float16 mlu diff2 <= 3e-3
    • diff3
      • mlu diff3 == 0
      • mlu diff3_1 == 0
      • mlu diff3_2 == 0
  • dynamic threshold (a minimal check sketch follows this list)
    • diff1: mlu diff1 <= max(baseline diff1 * 10, static threshold)
    • diff2: mlu diff2 <= max(baseline diff2 * 10, static threshold)
    • diff3: mlu diff3 <= max(baseline diff3 * 10, static threshold)
      • float32, threshold = 1e-5
      • float16, threshold = 1e-3
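
As a reading aid for the dynamic-threshold rule above, here is a minimal sketch of the check it describes; the helper name and arguments are hypothetical and not part of the mlu-ops GTest framework.

```python
def passes_dynamic_threshold(mlu_diff: float, baseline_diff: float,
                             dtype: str = "float32") -> bool:
    # Static fallback threshold from the list above: 1e-5 for float32, 1e-3 for float16.
    static_threshold = 1e-5 if dtype == "float32" else 1e-3
    # The MLU diff passes if it is within 10x the baseline (GPU/CPU) diff,
    # or under the static threshold, whichever bound is larger.
    return mlu_diff <= max(baseline_diff * 10, static_threshold)
```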

3.1.2 Operator Scheme checklist

  • Supported hardware
    • MLU370
    • MLU590
  • Job types
    • BLOCK
    • UNION1
    • UNION2
    • UNION4
    • The operator will dynamically select the most suitable task type, for example, UNION8

3.2 Accuracy Test

3.2.1 Accuracy Test

If you have checked the following items, please tick the relevant box.

  • Data type test (e.g. float32/int8)
  • Multi-dimensional tensor test
  • Layout test
  • Different size/integer remainder end segment/alignment misalignment test
  • Zero dimensional tensor test/zero element test
  • Stability test
  • Multiple platform test
  • Gen_case module test, see: Gencase-User-Guide-zh
  • Nan/INF tests
  • Bug fix tests
  • For memory leak check details, see: GTest-User-Guide-zh
  • For code coverage check details, see: GTest-User-Guide-zh
  • For I/O calculation efficiency check details, see: MLU-OPS™-Performance-Acceptance-Standard

3.2.2 Parameter Check

Test Point-1: When a new operator is submitted, the test points are given and the test results are stated. Acceptance Standard: Normal error.

Please fill in your test results (error message) here, ...

Test Point-2: Whether illegal parameters are passed. Acceptance Standard: Normal error.

Test results...

3.3 Performance Test

See MLU-OPS™ Performance Acceptance Standard for details.

Platform: MLU370

# The test results should contain Op name, Shape, Data type,  
#   MLU Hardware Time(us), MLU Interface Time(us), MLU IO Efficiency, 
#   MLU Compute Efficiency, and Mlu Workspace Size(Bytes)
# 
# for example:
#
# ----------- case0 -----------
# case0
# [Op name                ]: abs
# [Shape                  ]: input.shape=[1024,1024,3,4], output.shape=[1024,1024,3,4]
# [Data type              ]: float32
# [MLU Hardware Time      ]: 15728 (us)
# [MLU Interface Time     ]: 369.008 (us)
# [MLU IO Efficiency      ]: 0.23275
# [MLU Compute Efficiency ]: 0.5
# [Mlu Workspace Size     ]: -1 (Bytes)
# 
# ----------- case1 -----------
# ...

Platform: MLU590

# ----------- case0 -----------
# ----------- case1 -----------
# ...

3.4 Summary Analysis

Please give a brief overview here, if you want to note and summarize the content.

}

__mlu_global__ void inverse_kernel(int batch, float* d_input, int ld_input,
int stride_input, float* d_output,
Collaborator:

Common issues in the .mlu files:
1. Key steps are missing the necessary comments.
2. sync_cluster is overused; please add a comment to each sync and sync_cluster explaining what is being synchronized and why.
3. Do not put logic that calls cnnl into the .mlu files; a .mlu file is essentially device-side code, and calling cnnl interfaces is a host-side action.
4. Do not create your own cnrtQueue; always use the handle->queue passed in from outside.
5. Wrap everything involving cnrtDim in the policyFunc function.
6. Variable names are unclear; avoid abbreviations to improve readability.

const int lda, int width, float* sram_buffer,
float* dst) {
int id = taskId % 4;
int span = CPOTF_NB;
@ArtIntAI (Collaborator) commented Nov 22, 2024:

(Same list of common .mlu issues as in the comment above.)

Collaborator (Author):

Fixed.

*************************************************************************/

#include "cholesky.h"
#define COMPLEX_OFFSET(A, off) (((float*)A) + (2 * (off)))
Collaborator:

What is the purpose of this? Please add a comment.

Collaborator (Author):

Fixed.

#define CLUSTER_NUM 1
#define M (TASK_NUM * POTF_NB)
#define ZERO 0.0
#define SHARED_MEM_SIZE (((M * POTF_NB / TASK_NUM * 4) + (POTF_NB * POTF_NB)))
Collaborator:

Please add a comment explaining how this memory is used.

Collaborator (Author):

Fixed.

#define M (TASK_NUM * POTF_NB)
#define ZERO 0.0
#define SHARED_MEM_SIZE (((M * POTF_NB / TASK_NUM * 4) + (POTF_NB * POTF_NB)))
#define OFFSET_ROW(A, i, j) A + ((i) * (lda) + (j))
Collaborator:

Please add a comment explaining the purpose of these offset macros.

Collaborator (Author):

Fixed.

@ArtIntAI (Collaborator):

Please also update the test report at the end. In the performance part of the test report, it is recommended to measure the performance of float/complex with both upper=false and upper=true.

@ArtIntAI (Collaborator) commented Nov 25, 2024:

During random testing, quite a few cases show accuracy problems and core dumps. Please use the script below to generate cases and run self-tests.

```python
import sys
import os
import numpy as np
def dShape(shapes):
    shape_val = '"shape":['
    for i in range(len(shapes)-1):
        shape_val += str(shapes[i])+','
    shape_val += str(shapes[len(shapes)-1]) + ']'
    return  shape_val

def dType(data_type):
    return '"dtype":"' + data_type + '"'

def dRandomDistribution(start, end):
    return '"random_distribution":{"uniform":[' + str(start) + ',' + str(end) + ']}'

def dlayout(data_layout):
    return '"layout":"' + data_layout + '"'

def dNanInf():
    naninf_bool_list = ['true', 'false']
    has_nan = np.random.choice(naninf_bool_list)
    has_inf = np.random.choice(naninf_bool_list)
    result_str = f'"contain_nan": {has_nan}, "contain_inf": {has_inf}'
    return result_str

def dInplace():
    inplace_bool_list = ['true', 'false']
    is_inplace = np.random.choice(inplace_bool_list)
    result_str = f'"inplace": {is_inplace}'
    return result_str

def dUpper():
    upper_bool_list = ['true', 'false']
    is_upper = np.random.choice(upper_bool_list)
    result_str = f'"upper": {is_upper}'
    return result_str

def genSingleCase(dtype='float32', params_list=[1,1,False]):
    B = params_list[0]
    N = params_list[1]
    nan_inf = params_list[2]

    input_shape = [B, N, N]
    output_shape = [B, N, N]

    inputs = '     {\n       "inputs":['
    if nan_inf:
      input1 = '{' + dShape(input_shape) + ',' + dType(dtype) + ',' + dRandomDistribution(0,100) + ","+ dNanInf() + "," + dlayout("ARRAY") + '}'
    else :
      input1 = '{' + dShape(input_shape) + ',' + dType(dtype) + ',' + dRandomDistribution(0,100) + "," + dlayout("ARRAY") + '}'

    outputs = '       "outputs":['
    output1 = '{' + dShape(output_shape) + ',' + dType(dtype) + ',' + dlayout("ARRAY") + '}'


    inputs += input1 + '],\n'
    outputs += output1  + '],\n'

    op_param = '       "op_params":{' + dInplace() + ","+ dUpper() + '},\n'

    proto_param = '       "proto_params":{"write_data":true}'

    cur_res = inputs + outputs + op_param + proto_param + '\n     }'
    return cur_res

def genCase(dtype = "float32", nan_inf=False):
    count = 1
    cur_res = '     "manual_data":[\n'
    B = np.random.randint(1,32)
    N = np.random.randint(1,128)
    param = [B, N, nan_inf]
    cur_res += genSingleCase(dtype = dtype, params_list=param)

    for i in range(5):
        if i % 2 == 0:
            count += 1
            B = np.random.randint(1,32)
            N = np.random.randint(128,256)
            param = [B, N, nan_inf]
            cur_res += ',\n' + genSingleCase(dtype = dtype, params_list=param)

        if i % 3 == 0:
            count += 1
            B = np.random.randint(1,32)
            N = np.random.randint(256,512)
            param = [B, N, nan_inf]
            cur_res += ',\n' + genSingleCase(dtype = dtype, params_list=param)

        if i % 5 == 0:
            count += 1
            B = np.random.randint(1,32)
            N = np.random.randint(512,1024)
            param = [B, N, nan_inf]
            cur_res += ',\n' + genSingleCase(dtype = dtype, params_list=param)
    cur_res += '\n     ]\n}'
    print("the count of cases:", count)
    return cur_res

if __name__ == "__main__":
    res = '{\n\
    "op_name":"cholesky",\n\
    "device":"gpu",\n\
    "require_value":true,\n\
    "evaluation_criterion":["diff1","diff2", "diff4"],\n\
    "threshold_rate":[10,10,1],\n\
    "if_dynamic_threshold": true,\n\
    "supported_mlu_platform":["370", "590"],\n'
    dtype = "float32"
    res_fp32 = res + genCase(dtype)
    file = open("./cholesky_random_float32.json",'w')
    file.write(res_fp32)
    res_fp32_nan_inf = res + genCase(dtype, True)
    file = open("./cholesky_random_float32_nan_and_inf.json",'w')
    file.write(res_fp32_nan_inf)
    dtype = "complex_float"
    res_complex_fp32 = res + genCase(dtype)
    file = open("./cholesky_random_complex_fp32.json",'w')
    file.write(res_complex_fp32)
    res_complex_fp32_nan_inf = res + genCase(dtype, True)
    file = open("./cholesky_random_complex_fp32_nan_and_inf.json",'w')
    file.write(res_complex_fp32_nan_inf)
    file.close()
```

JSON configs for the failing scenarios:

[cholesky_random_float32.json](https://github.com/user-attachments/files/17898254/cholesky_random_float32.json)
[cholesky_random_float32_nan_and_inf.json](https://github.com/user-attachments/files/17898256/cholesky_random_float32_nan_and_inf.json)
[cholesky_random_complex_fp32.json](https://github.com/user-attachments/files/17898257/cholesky_random_complex_fp32.json)
[cholesky_random_complex_fp32_nan_and_inf.json](https://github.com/user-attachments/files/17898258/cholesky_random_complex_fp32_nan_and_inf.json)

@ArtIntAI (Collaborator):

Also, for the tensor properties that users can observe, listed below, please test the ones that are supported; for the unsupported ones, refer to other operators and add parameter checks to reject them:
1. Large tensor (a single dimension exceeds 2G elements, or the product of all dimensions exceeds 2G elements)
2. Inplace: the input and output tensors share the same address
3. Stride: if unsupported, add a parameter check that reports an error
4. Broadcast: if unsupported, add a parameter check that reports an error
5. Whether accuracy matches the GPU when the input and output contain nan/inf
6. Zero-element input tensor, i.e. some dimension is 0

@ArtIntAI (Collaborator) commented Nov 25, 2024:

There is also a problem in the test generator code; it produces the error below, please fix it as well:
ERROR:root [06:50:42.801] [builder.py:249] got exception when running case {'inputs': [{'require_value': True, 'shape': [23, 872, 872], 'random_distribution': {'uniform': [0, 100]}, 'layout': 'ARRAY', 'dtype': 'complex_float', 'position': None, 'scale': None, 'offset': None, 'onchip_dtype': 'unset', 'contain_nan': None, 'contain_inf': None}], 'outputs': [{'require_value': True, 'shape': [23, 872, 872], 'layout': 'ARRAY', 'dtype': 'complex_float', 'position': None, 'scale': None, 'offset': None, 'onchip_dtype': 'unset', 'handle_param': HandleParam(round_mode=<QuantizeRoundMode.ROUND_OFF_ZERO: 2>)}], 'handle_param': HandleParam(round_mode=<QuantizeRoundMode.ROUND_OFF_ZERO: 2>), 'src_schema': 'json', 'cast_mode': None}, reason: linalg.cholesky: (Batch element 0): The factorization could not be completed because the input is not positive-definite (the leading minor of order 779 is not positive-definite).
Traceback (most recent call last):
File "/ict/mlu-ops-generator/framework/builder.py", line 247, in run
is_success = runner(case)
File "/ict/mlu-ops-generator/framework/builder.py", line 231, in
runner = lambda case: case.run()
File "/ict/mlu-ops-generator/framework/builder.py", line 349, in run
op_test.run()
File "/ict/mlu-ops-generator/nonmlu_ops/base/optest.py", line 59, in run
outputs_baseline = self.compute()
File "/ict/mlu-ops-generator/nonmlu_ops/cholesky/compute.py", line 273, in compute
result_L_complex64 = torch.linalg.cholesky(A_complex64,upper=upper)
torch._C._LinAlgError: linalg.cholesky: (Batch element 0): The factorization could not be completed because the input is not positive-definite (the leading minor of order 779 is not positive-definite).
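
The failure above is expected: uniformly distributed random matrices are generally not positive-definite, so torch.linalg.cholesky rejects them. Below is a minimal sketch of one common way to construct a Hermitian positive-definite batch before factorizing; the helper is illustrative only and is not the actual compute.py code.

```python
import torch

def make_positive_definite(batch: int, n: int, dtype=torch.complex64) -> torch.Tensor:
    # A @ A^H is Hermitian positive semi-definite; adding n * I pushes the
    # eigenvalues away from zero so the factorization cannot fail numerically.
    a = torch.randn(batch, n, n, dtype=dtype)
    return a @ a.mH + n * torch.eye(n, dtype=dtype)

spd = make_positive_definite(23, 872)
L = torch.linalg.cholesky(spd, upper=False)  # succeeds for every batch element
```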

@dglr (Collaborator, Author) commented Nov 25, 2024:

> During random testing, quite a few cases show accuracy problems and core dumps. Please use the script below to generate cases and run self-tests.
> (quoted case-generation script and JSON configs from the comment above)

At which specific sizes do the accuracy problems or core dumps occur? Please give some examples so I can reproduce and fix them first.

@ArtIntAI (Collaborator) commented Nov 26, 2024:

Failing cases:

14 * 679 * 679, upper
18 * 925 * 925, upper

if (batch == 1) {
func_type = CNRT_FUNC_TYPE_UNION1;
} else if (batch == 2) {
func_type = CNRT_FUNC_TYPE_UNION2;
Collaborator:

The board does not necessarily have this func type; it is recommended to set it by referring to:

*k_type = mluop::runtime::getJobLimitCapabilityCnrtFuncType(handle);

int task_type = mluop::runtime::getJobLimitCapability(handle);

Collaborator (Author):

Changed to the U1 type.

type_size * size_a * lda * ((uint64_t)batch_size - 16),
CNRT_MEM_TRANS_DIR_DEV2DEV));
} else {
CNRT_CHECK(cnrtMemcpy(d_output, workspace,
Collaborator:

Using cnrtMemcpy, cnrtMemset, and cnrtQueueSync is not recommended; they cause problems when upper layers use mlu_graph.
It is recommended to replace cnrtMemcpy with the on-chip __memcpy,
to replace cnrtMemset with setting the data on chip,
and to drop cnrtQueueSync: for the same queue, kernel launches within the queue (via <<<>>>) are executed serially.

Collaborator (Author):

Fixed.

func_type = CNRT_FUNC_TYPE_UNION4;
carry_batch = 4;
} else {
func_type = CNRT_FUNC_TYPE_UNION8;
Collaborator:

This should be based on the board's actual maximum number of clusters. U8 is hardcoded here, and some boards do not have the U8 type. You can refer to this usage:

int task_type = mluop::runtime::getJobLimitCapability(handle);

Collaborator:

Please also fix the other places where U8 is hardcoded in the same way.

Collaborator (Author):

Fixed.




if (result_mul) {
Collaborator:

For result acceptance, refer to svd: the output factor L or U, the reconstruction L@L^T, and whether the output is lower or upper triangular all need to be verified. The current handling only covers the first of these; tests for the other two need to be added.
On comparing results: result_mul currently defaults to false, so only the upper/lower-triangular part of the result is tested.
The same case also needs to be tested with result_mul set to true, to check that the result reconstructs the input.
In addition, add a test that the result is strictly upper or lower triangular; for reference see https://github.com/pytorch/pytorch/blob/main/test/test_linalg.py#L622 (a sketch of these checks follows below).
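
A minimal sketch of the three checks described above, written against PyTorch as the baseline; the helper name and tolerances are illustrative only, and the actual GTest-side comparison in mlu-ops will differ:

```python
import torch

def check_cholesky_result(a: torch.Tensor, result: torch.Tensor, upper: bool,
                          rtol: float = 1e-4, atol: float = 1e-5) -> None:
    # 1) The factor itself should match the baseline factorization (L or U).
    expected = torch.linalg.cholesky(a, upper=upper)
    torch.testing.assert_close(result, expected, rtol=rtol, atol=atol)
    # 2) Reconstruction: U^H @ U (or L @ L^H) should give back the input.
    reconstructed = result.mH @ result if upper else result @ result.mH
    torch.testing.assert_close(reconstructed, a, rtol=rtol, atol=atol)
    # 3) Triangularity: the result must be exactly upper/lower triangular.
    tri = torch.triu(result) if upper else torch.tril(result)
    torch.testing.assert_close(result, tri, rtol=0, atol=0)
```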

Collaborator:

Please also update the generator logic according to the comments above.

void cpu_compute(float* cpu_c, int n_, int ldda_, bool upper_, bool trans_,
mluOpDataType_t type_) {
if (trans_) {
for (int64_t i = 0; i < n_; i++) {
Collaborator:

Please add comments on the key computation steps of the CPU compute here, to make it easier to maintain and read later. (A reference sketch of the recurrence follows.)
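
For reference, the key steps to document are the standard unblocked Cholesky recurrence; below is a minimal NumPy sketch of the real, lower-triangular (upper=False) case, illustrating the math only and not the actual cpu_compute implementation:

```python
import numpy as np

def cholesky_lower_reference(a: np.ndarray) -> np.ndarray:
    # Unblocked Cholesky: for each column j,
    #   L[j, j] = sqrt(A[j, j] - sum_k L[j, k]^2)
    #   L[i, j] = (A[i, j] - sum_k L[i, k] * L[j, k]) / L[j, j]   for i > j
    n = a.shape[0]
    L = np.zeros_like(a)
    for j in range(n):
        L[j, j] = np.sqrt(a[j, j] - np.dot(L[j, :j], L[j, :j]))
        for i in range(j + 1, n):
            L[i, j] = (a[i, j] - np.dot(L[i, :j], L[j, :j])) / L[j, j]
    return L
```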

if (parser_->device() != CPU) {
if (result_mul) {
for (int i = 0; i < batch_size_; i++) {
if (type_ == MLUOP_DTYPE_FLOAT) {
Collaborator:

Please add comments explaining the intent of the trans, fill_zero, and set-to-1 operations done here, to make the code easier to understand and maintain.

Labels: None yet · Projects: None yet · 3 participants