Creates a criterion that measures the loss given input tensors x_1, x_2 and a Tensor label y with values 1 or -1. The DimeNet++ from the "Fast and Uncertainty-Aware Directional Message Passing for Non-Equilibrium Molecules" paper. The Efficient Graph Convolution from the "Adaptive Filters and Aggregator Fusion for Efficient Graph Convolutions" paper. Applies layer normalization over each individual example in a batch of features as described in the "Layer Normalization" paper (e.g., the j-th channel of the i-th sample in the batched input is a 3D tensor input[i, j]).
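The pair-plus-label criterion described above matches the shape of a margin-based embedding loss such as torch.nn.CosineEmbeddingLoss (an assumption about which criterion the sentence refers to). A minimal pure-Python sketch of that behavior, where y = 1 pulls the pair together and y = -1 pushes it apart:

```python
import math

def cosine_embedding_loss(x1, x2, y, margin=0.0):
    """Margin-based criterion over a pair of vectors and a label y in {1, -1}.

    Sketch of the CosineEmbeddingLoss-style rule (an assumption, not the
    library implementation):
      y = +1 -> loss = 1 - cos(x1, x2)
      y = -1 -> loss = max(0, cos(x1, x2) - margin)
    """
    dot = sum(a * b for a, b in zip(x1, x2))
    norm1 = math.sqrt(sum(a * a for a in x1))
    norm2 = math.sqrt(sum(b * b for b in x2))
    cos = dot / (norm1 * norm2)
    if y == 1:
        return 1.0 - cos
    return max(0.0, cos - margin)
```

With identical vectors and y = 1 the loss is zero; with orthogonal vectors and y = -1 (and the default margin of 0) it is also zero, since the pair is already "far apart".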
The Adversarially Regularized Graph Auto-Encoder model from the "Adversarially Regularized Graph Autoencoder for Graph Embedding" paper. Notably, all aggregations share the same set of forward arguments, as described in detail in the torch_geometric.
The path integral based convolutional operator from the "Path Integral Based Convolution and Pooling for Graph Neural Networks" paper. The Graph Neural Network from the "Semi-supervised Classification with Graph Convolutional Networks" paper, using the GCNConv operator for message passing.
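The GCNConv propagation rule from the "Semi-supervised Classification with Graph Convolutional Networks" paper is X' = D̂^{-1/2} Â D̂^{-1/2} X W, where Â = A + I adds self-loops and D̂ is the degree matrix of Â. A simplified dense sketch of that rule in plain Python (not the library's sparse implementation):

```python
import math

def gcn_layer(x, edge_index, weight):
    """One GCN propagation step: X' = D^{-1/2} (A + I) D^{-1/2} X W.

    x: node features as a list of vectors; edge_index: undirected edges as
    (src, dst) pairs; weight: the linear transform W as a list of rows.
    """
    n = len(x)
    # Build the adjacency matrix with self-loops (A + I).
    adj = [[0.0] * n for _ in range(n)]
    for i in range(n):
        adj[i][i] = 1.0
    for s, d in edge_index:
        adj[s][d] = 1.0
        adj[d][s] = 1.0
    deg = [sum(row) for row in adj]
    # Symmetric normalization: a_ij / sqrt(d_i * d_j).
    norm = [[adj[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
            for i in range(n)]
    # Aggregate normalized neighbor features.
    agg = [[sum(norm[i][k] * x[k][f] for k in range(n))
            for f in range(len(x[0]))] for i in range(n)]
    # Apply the linear transform W.
    return [[sum(agg[i][f] * weight[f][o] for f in range(len(weight)))
             for o in range(len(weight[0]))] for i in range(n)]
```

On a two-node graph with a single edge and identical features, the normalized aggregation leaves the features unchanged, which makes the normalization easy to sanity-check.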
The label embedding and masking layer from the "Masked Label Prediction: Unified Message Passing Model for Semi-Supervised Classification" paper. The powermean aggregation operator based on a power term, as described in the "DeeperGCN: All You Need to Train Deeper GCNs" paper. The Gini coefficient from the "Improving Molecular Graph Neural Network Explainability with Orthonormalization and Induced Sparsity" paper.
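The power-mean aggregation from the DeeperGCN paper computes PM_p(x) = ((1/n) Σ_i x_i^p)^{1/p}; as the power term p varies it interpolates between familiar aggregators (p = 1 is the mean, p → ∞ approaches the max), and in the paper p is learnable. A scalar sketch for non-negative inputs (the library version operates on tensors grouped by index):

```python
def powermean_aggr(values, p=1.0):
    """Power-mean aggregation: PM_p(x) = ((1/n) * sum_i x_i ** p) ** (1/p).

    Sketch for a flat list of non-negative values; p=1 recovers the
    arithmetic mean, and large p approaches the max.
    """
    n = len(values)
    return (sum(v ** p for v in values) / n) ** (1.0 / p)
```

For a constant input the power mean equals that constant for every p, which is a quick invariant to test against.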
Applies the Softplus function Softplus(x) = (1/β) * log(1 + exp(β * x)) element-wise. The approximate personalized propagation of neural predictions layer from the "Predict then Propagate: Graph Neural Networks meet Personalized PageRank" paper. Creates a criterion that optimizes a two-class classification logistic loss between input tensor x and target tensor y (containing 1 or -1). Performs Deep Sets aggregation in which the elements to aggregate are first transformed by a Multi-Layer Perceptron (MLP) \(\phi_{\mathbf{\Theta}}\), summed, and then transformed by another MLP \(\rho_{\mathbf{\Theta}}\), as suggested in the "Graph Neural Networks with Adaptive Readouts" paper. The Light Graph Convolution (LGC) operator from the "LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation" paper. First, aggregations can be resolved from pure strings via a lookup table, following the design principles of the class-resolver library (e.g., resolving the string "mean" to MeanAggregation).
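The Softplus formula above is a smooth approximation to ReLU. A minimal element-wise sketch in plain Python, including the linear fallback for large inputs that torch.nn.Softplus documents for numerical stability (the threshold value of 20 follows that default):

```python
import math

def softplus(x, beta=1.0, threshold=20.0):
    """Softplus(x) = (1/beta) * log(1 + exp(beta * x)).

    For numerical stability the function reverts to the identity when
    beta * x exceeds the threshold, since exp(beta * x) would overflow
    and log(1 + exp(beta * x)) ~= beta * x there anyway.
    """
    if beta * x > threshold:
        return x
    return (1.0 / beta) * math.log1p(math.exp(beta * x))
```

Softplus is strictly positive everywhere (e.g., softplus(0) = log 2 ≈ 0.693), unlike ReLU, which is exactly zero for negative inputs.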