
Release v0.2.2

Released by @LiSu on 05 Feb 06:35

We're excited to announce the release of GraphLearn for PyTorch v0.2.2. This update brings numerous fixes and feature enhancements that improve the framework's functionality, performance, and user experience. We extend our gratitude to all contributors who made this release possible.

What's Changed

  • [Fix] ensure consistency between the seeds added to and retrieved from the multiprocessing queue using the put and get methods by @Zhanghyi in #65
  • [Fix] skip sampling empty inputs by @LiSu in #67
  • [Fix] try to fix tensor.nbr by @husimplicity in #71
  • Fix the IGBH example by using proper parameters by @LiSu in #70
  • [Feat] put input data on the server and allow n-to-n connections between servers and clients by @Zhanghyi in #59
  • [Build] adjust setup.py and create an ext_module util function by @Zhanghyi in #73
  • [Feat] add "trim_to_layer" support to igbh example by @kaixuanliu in #74
  • [Feat] Add edge weight sampling for CPU by @husimplicity in #72
  • [Fix] Skip mem-sharing the feature tensor in CPU mode by @LiSu in #75
  • [Fix] fix torch.cat calls on empty out_cols by @husimplicity in #78
  • [Fix] enable garbage collection during evaluation by @husimplicity in #81
  • [Feat] support node split for both Dataset and DistDataset by @Zhanghyi in #82
  • [Feat] load dataset from vineyard by @Zhanghyi in #80
  • [Feat] Refactor RPC connection in server-client mode by @Jia-zb in #83
  • [Feat] Add fields parsing for GraphScope side by @Jia-zb in #84
  • [Build] split building of glt and glt_v6d by @Zhanghyi in #85
  • [Feat] Multithreaded partitioning by @Jia-zb in #88
  • [Fix] fix retrieval of GRAPHSCOPE_HOME from os.environ by @Zhanghyi in #89
  • [CI] add CI for glt_v6d by @Zhanghyi in #90
  • [Feat] add trim_to_layer support to igbh distributed training by @kaixuanliu in #87
  • [Fix] fix test: check that the partition directory exists before testing by @LiSu in #91
  • [Feat] Update IGBH example by @LiSu in #92
  • [Fix] Fix the build failure on macOS and the compile-flag settings in CMakeLists.txt by @sighingnow in #93
  • [Feat] enable continued downloading for large datasets by @kaixuanliu in #94
  • [Feat] support two-stage partitioning by @LiSu in #95
  • [Fix] update the label index range for igbh-full dataset by @LiSu in #96
  • IGBH: synchronize after evaluation completes by @LiSu in #97
  • IGBH updates by @LiSu in #98
  • IGBH: persist feature when using FP16 by @LiSu in #99
  • FP16 support by @kaixuanliu in #100
  • [Fix] fix GPU allocation while splitting training and sampling in distributed training by @LiSu in #101
  • [Fix] Large file processing by @kaixuanliu in #103
  • [Feat] Refine IGBH preprocessing by @LiSu in #105
  • [Feat] Expose random seed configuration for single-node and distributed training (see the seeding sketch after this list) by @LiSu in #106
  • [Fix] Minor fixes for the MLPerf code freeze by @LiSu in #108
  • [Fix] Fix include-path resolution on macOS by @sighingnow in #109
  • [Fix] Use a lock to protect the critical path of sampler initialization in neighbor sampler by @LiSu in #110
  • [Fix] adjust the lock location by @kaixuanliu in #111
  • [Fix] add a channel-size argument by @kaixuanliu in #113
  • [Feat] IGBH: add MLPerf logging and control of evaluation frequency by @LiSu in #114
  • [Feat] Add GPT example by @husimplicity in #115
  • [Feat] IGBH: support specifying the fraction of validation seeds by @LiSu in #117
  • [Fix] delete unused code by @kaixuanliu in #121
  • [Feat] Separate training batch size and validation batch size in IGBH by @LiSu in #122
  • [Feat] Add a save/load checkpoint mechanism (see the sketch after this list) by @LiSu in #123
  • [Fix] add a random seed parameter for the link and subgraph loaders by @LiSu in #124
  • [Fix] properly handle drop_last in the distributed sampler by @LiSu in #125
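
Several of the changes above touch reproducibility, most directly the random seed configuration from #106. As a rough illustration of what seeding a run typically involves, the sketch below covers the usual RNG sources in a PyTorch training script; `set_random_seed` is a hypothetical helper for illustration, not GLT's actual API.

```python
import random

import numpy as np
import torch

def set_random_seed(seed: int) -> None:
    # Hypothetical helper: seed every RNG a sampling/training run may touch.
    random.seed(seed)                 # Python's built-in RNG
    np.random.seed(seed)              # NumPy, commonly used by samplers
    torch.manual_seed(seed)           # PyTorch CPU RNG
    torch.cuda.manual_seed_all(seed)  # all visible GPUs

set_random_seed(42)  # call once per process, before building loaders
```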

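For the checkpointing added in #123, the snippet below is a minimal sketch of the standard PyTorch save/load pattern; the `save_checkpoint`/`load_checkpoint` helpers are assumptions for illustration, not GLT's exact interface.

```python
import torch

def save_checkpoint(path: str, model, optimizer, epoch: int) -> None:
    # Persist everything needed to resume: weights, optimizer state, progress.
    torch.save(
        {
            "model": model.state_dict(),
            "optimizer": optimizer.state_dict(),
            "epoch": epoch,
        },
        path,
    )

def load_checkpoint(path: str, model, optimizer) -> int:
    # Restore onto CPU first; callers move the model to their device.
    state = torch.load(path, map_location="cpu")
    model.load_state_dict(state["model"])
    optimizer.load_state_dict(state["optimizer"])
    return state["epoch"]  # the epoch to resume from
```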

Full Changelog: v0.2.1...v0.2.2