In GNN training, we only care whether the graph topology is the same, which means you should end up with exactly the same set of edges (the same number of a->b edges for every combination of nodes a, b) as the reference implementation graph.
However, the order of the edges does not matter: they will be rearranged in the COO -> CSC conversion process anyway, so you could also use torch.cat([cites_edge[0, :], cites_edge[1, :]]) for the source and torch.cat([cites_edge[1, :], cites_edge[0, :]]) for the destination.
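To make the "order does not matter" point concrete, here is a minimal sketch with a made-up toy edge tensor (not the real IGBH data, and not the reference code) that emulates the effect of a COO -> CSC conversion by sorting edges by destination. Both concatenation orders collapse to the same structure:

```python
import torch

# Hypothetical 2 x E edge tensor: directed edges 0->1, 1->2, 2->0.
cites_edge = torch.tensor([[0, 1, 2],
                           [1, 2, 0]])
num_nodes = 3

def csc_canonical(src, dst):
    # Emulate COO -> CSC: CSC groups edges by destination (ties broken
    # here by source), so the original input order is erased.
    order = torch.argsort(dst * num_nodes + src)
    return torch.stack([src[order], dst[order]])

# One ordering: reversed edges first, then the originals.
a = csc_canonical(torch.cat([cites_edge[1, :], cites_edge[0, :]]),
                  torch.cat([cites_edge[0, :], cites_edge[1, :]]))
# The "other way around": originals first, then the reversed edges.
b = csc_canonical(torch.cat([cites_edge[0, :], cites_edge[1, :]]),
                  torch.cat([cites_edge[1, :], cites_edge[0, :]]))

assert torch.equal(a, b)  # same edge set -> same structure after conversion
```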
Let me use an example. Take how the reference implementation builds the paper-cites-paper edges:

training/graph_neural_network/dataset.py, lines 137 to 139 in cdd928d

which gives us this:

(torch.cat([cites_edge[1, :], cites_edge[0, :]]), torch.cat([cites_edge[0, :], cites_edge[1, :]]))

Since we must exactly follow the MLPerf reference, we should have exactly this; we cannot have it the other way around, like this:

(torch.cat([cites_edge[0, :], cites_edge[1, :]]), torch.cat([cites_edge[1, :], cites_edge[0, :]]))
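As a sanity check on this example, here is a quick snippet (again with a made-up toy tensor standing in for the real cites_edge) confirming the two tuples contain exactly the same multiset of directed edges, just concatenated in opposite order:

```python
import torch
from collections import Counter

# Hypothetical toy edge tensor standing in for the real cites_edge.
cites_edge = torch.tensor([[0, 1, 2],
                           [1, 2, 0]])

# Tuple in the reference order (reversed edges first).
ref = (torch.cat([cites_edge[1, :], cites_edge[0, :]]),
       torch.cat([cites_edge[0, :], cites_edge[1, :]]))
# The "other way around" tuple.
alt = (torch.cat([cites_edge[0, :], cites_edge[1, :]]),
       torch.cat([cites_edge[1, :], cites_edge[0, :]]))

def edge_counts(src, dst):
    # Count every directed edge a->b, matching the "exact number of
    # edges for any combination of a, b" criterion above.
    return Counter(zip(src.tolist(), dst.tolist()))

assert edge_counts(*ref) == edge_counts(*alt)  # same edges, different order
```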
Am I understanding this correctly? Does the order matter here? Thank you!