easygraph.nn.convs.hypergraphs package
Submodules
easygraph.nn.convs.hypergraphs.dhcf_conv module
- class easygraph.nn.convs.hypergraphs.dhcf_conv.JHConv(in_channels: int, out_channels: int, bias: bool = True, use_bn: bool = False, drop_rate: float = 0.5, is_last: bool = False)[source]
Bases: Module
The Jump Hypergraph Convolution layer proposed in the Dual Channel Hypergraph Collaborative Filtering paper (KDD 2020).
Matrix Format:
\[\mathbf{X}^{\prime} = \sigma \left( \mathbf{D}_v^{-\frac{1}{2}} \mathbf{H} \mathbf{W}_e \mathbf{D}_e^{-1} \mathbf{H}^\top \mathbf{D}_v^{-\frac{1}{2}} \mathbf{X} \mathbf{\Theta} + \mathbf{X} \right).\]
- Parameters
- in_channels (int) – \(C_{in}\) is the number of input channels.
- out_channels (int) – \(C_{out}\) is the number of output channels.
- bias (bool) – If set to False, the layer will not learn the bias parameter. Defaults to True.
- use_bn (bool) – If set to True, the layer will use batch normalization. Defaults to False.
- drop_rate (float) – If set to a positive number, the layer will use dropout. Defaults to 0.5.
- is_last (bool) – If set to True, the layer will not apply the final activation and dropout functions. Defaults to False.
Methods
add_module(name, module) – Adds a child module to the current module.
apply(fn) – Applies fn recursively to every submodule (as returned by .children()) as well as self.
bfloat16() – Casts all floating point parameters and buffers to bfloat16 datatype.
buffers([recurse]) – Returns an iterator over module buffers.
children() – Returns an iterator over immediate children modules.
cpu() – Moves all model parameters and buffers to the CPU.
cuda([device]) – Moves all model parameters and buffers to the GPU.
double() – Casts all floating point parameters and buffers to double datatype.
eval() – Sets the module in evaluation mode.
extra_repr() – Sets the extra representation of the module.
float() – Casts all floating point parameters and buffers to float datatype.
forward(X, hg) – The forward function.
get_buffer(target) – Returns the buffer given by target if it exists, otherwise throws an error.
get_extra_state() – Returns any extra state to include in the module's state_dict.
get_parameter(target) – Returns the parameter given by target if it exists, otherwise throws an error.
get_submodule(target) – Returns the submodule given by target if it exists, otherwise throws an error.
half() – Casts all floating point parameters and buffers to half datatype.
ipu([device]) – Moves all model parameters and buffers to the IPU.
load_state_dict(state_dict[, strict]) – Copies parameters and buffers from state_dict into this module and its descendants.
modules() – Returns an iterator over all modules in the network.
named_buffers([prefix, recurse, ...]) – Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.
named_children() – Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.
named_modules([memo, prefix, remove_duplicate]) – Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.
named_parameters([prefix, recurse, ...]) – Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.
parameters([recurse]) – Returns an iterator over module parameters.
register_backward_hook(hook) – Registers a backward hook on the module.
register_buffer(name, tensor[, persistent]) – Adds a buffer to the module.
register_forward_hook(hook, *[, prepend, ...]) – Registers a forward hook on the module.
register_forward_pre_hook(hook, *[, ...]) – Registers a forward pre-hook on the module.
register_full_backward_hook(hook[, prepend]) – Registers a backward hook on the module.
register_full_backward_pre_hook(hook[, prepend]) – Registers a backward pre-hook on the module.
register_load_state_dict_post_hook(hook) – Registers a post hook to be run after the module's load_state_dict is called.
register_module(name, module) – Alias for add_module().
register_parameter(name, param) – Adds a parameter to the module.
register_state_dict_pre_hook(hook) – These hooks will be called with arguments self, prefix, and keep_vars before calling state_dict on self.
requires_grad_([requires_grad]) – Changes if autograd should record operations on parameters in this module.
set_extra_state(state) – This function is called from load_state_dict() to handle any extra state found within the state_dict.
share_memory() – See torch.Tensor.share_memory_().
state_dict(*args[, destination, prefix, ...]) – Returns a dictionary containing references to the whole state of the module.
to(*args, **kwargs) – Moves and/or casts the parameters and buffers.
to_empty(*, device) – Moves the parameters and buffers to the specified device without copying storage.
train([mode]) – Sets the module in training mode.
type(dst_type) – Casts all parameters and buffers to dst_type.
xpu([device]) – Moves all model parameters and buffers to the XPU.
zero_grad([set_to_none]) – Sets gradients of all model parameters to zero.
__call__
- forward(X: Tensor, hg: Hypergraph) Tensor [source]
The forward function.
- Parameters
- X (torch.Tensor) – Input vertex feature matrix. Size \((N, C_{in})\).
- hg (dhg.Hypergraph) – The hypergraph structure that contains \(N\) vertices.
- training: bool
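A minimal usage sketch follows. The Hypergraph constructor signature (num_v, e_list) and its import path are assumptions here, not confirmed by this page; the layer itself is called exactly as documented above. Matching in/out channel counts keep the residual term \(+ \mathbf{X}\) in the matrix format well-defined.

```python
import torch
from easygraph import Hypergraph  # import path assumed
from easygraph.nn.convs.hypergraphs.dhcf_conv import JHConv

# Toy hypergraph: 5 vertices, 3 hyperedges (constructor signature assumed).
hg = Hypergraph(num_v=5, e_list=[[0, 1, 2], [2, 3], [1, 3, 4]])
X = torch.randn(5, 16)  # (N, C_in) vertex feature matrix

conv = JHConv(in_channels=16, out_channels=16)
X_out = conv(X, hg)     # (N, C_out); includes the jump connection + X
print(X_out.shape)      # torch.Size([5, 16])
```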
easygraph.nn.convs.hypergraphs.hgnn_conv module
- class easygraph.nn.convs.hypergraphs.hgnn_conv.HGNNConv(in_channels: int, out_channels: int, bias: bool = True, use_bn: bool = False, drop_rate: float = 0.5, is_last: bool = False)[source]
Bases: Module
The HGNN convolution layer proposed in the Hypergraph Neural Networks paper (AAAI 2019).
Matrix Format:
\[\mathbf{X}^{\prime} = \sigma \left( \mathbf{D}_v^{-\frac{1}{2}} \mathbf{H} \mathbf{W}_e \mathbf{D}_e^{-1} \mathbf{H}^\top \mathbf{D}_v^{-\frac{1}{2}} \mathbf{X} \mathbf{\Theta} \right).\]
where \(\mathbf{X}\) is the input vertex feature matrix, \(\mathbf{H}\) is the hypergraph incidence matrix, \(\mathbf{W}_e\) is a diagonal hyperedge weight matrix, \(\mathbf{D}_v\) is a diagonal vertex degree matrix, \(\mathbf{D}_e\) is a diagonal hyperedge degree matrix, and \(\mathbf{\Theta}\) are the learnable parameters.
- Parameters
- in_channels (int) – \(C_{in}\) is the number of input channels.
- out_channels (int) – \(C_{out}\) is the number of output channels.
- bias (bool) – If set to False, the layer will not learn the bias parameter. Defaults to True.
- use_bn (bool) – If set to True, the layer will use batch normalization. Defaults to False.
- drop_rate (float) – If set to a positive number, the layer will use dropout. Defaults to 0.5.
- is_last (bool) – If set to True, the layer will not apply the final activation and dropout functions. Defaults to False.
Methods
forward(X, hg) – The forward function.
All other entries are the standard torch.nn.Module methods; see the method summary under JHConv above.
- forward(X: Tensor, hg: Hypergraph) Tensor [source]
The forward function.
- Parameters
- X (torch.Tensor) – Input vertex feature matrix. Size \((N, C_{in})\).
- hg (dhg.Hypergraph) – The hypergraph structure that contains \(N\) vertices.
- training: bool
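When stacking several of these layers, is_last=True is the documented way to drop the trailing activation and dropout on the output layer. A sketch (the hidden and output sizes are arbitrary choices):

```python
import torch.nn as nn
from easygraph.nn.convs.hypergraphs.hgnn_conv import HGNNConv

# Two HGNNConv layers: the hidden layer keeps activation + dropout,
# the output layer disables both via is_last=True.
layers = nn.ModuleList([
    HGNNConv(in_channels=32, out_channels=16, drop_rate=0.5),
    HGNNConv(in_channels=16, out_channels=4, is_last=True),
])

def encode(X, hg):
    # X: (N, 32) features; hg: a Hypergraph with N vertices
    for layer in layers:
        X = layer(X, hg)
    return X  # (N, 4) outputs, no final activation/dropout
```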
easygraph.nn.convs.hypergraphs.hgnnp_conv module
- class easygraph.nn.convs.hypergraphs.hgnnp_conv.HGNNPConv(in_channels: int, out_channels: int, bias: bool = True, use_bn: bool = False, drop_rate: float = 0.5, is_last: bool = False)[source]
Bases: Module
The HGNN+ convolution layer proposed in the HGNN+: General Hypergraph Neural Networks paper (IEEE T-PAMI 2022).
Sparse Format:
\[\begin{split}\left\{ \begin{aligned} m_{\beta}^{t} &=\sum_{\alpha \in \mathcal{N}_{v}(\beta)} M_{v}^{t}\left(x_{\alpha}^{t}\right) \\ y_{\beta}^{t} &=U_{e}^{t}\left(w_{\beta}, m_{\beta}^{t}\right) \\ m_{\alpha}^{t+1} &=\sum_{\beta \in \mathcal{N}_{e}(\alpha)} M_{e}^{t}\left(x_{\alpha}^{t}, y_{\beta}^{t}\right) \\ x_{\alpha}^{t+1} &=U_{v}^{t}\left(x_{\alpha}^{t}, m_{\alpha}^{t+1}\right) \\ \end{aligned} \right.\end{split}\]
Matrix Format:
\[\mathbf{X}^{\prime} = \sigma \left( \mathbf{D}_v^{-1} \mathbf{H} \mathbf{W}_e \mathbf{D}_e^{-1} \mathbf{H}^\top \mathbf{X} \mathbf{\Theta} \right).\]
- Parameters
- in_channels (int) – \(C_{in}\) is the number of input channels.
- out_channels (int) – \(C_{out}\) is the number of output channels.
- bias (bool) – If set to False, the layer will not learn the bias parameter. Defaults to True.
- use_bn (bool) – If set to True, the layer will use batch normalization. Defaults to False.
- drop_rate (float) – If set to a positive number, the layer will use dropout. Defaults to 0.5.
- is_last (bool) – If set to True, the layer will not apply the final activation and dropout functions. Defaults to False.
Methods
forward(X, hg) – The forward function.
All other entries are the standard torch.nn.Module methods; see the method summary under JHConv above.
- forward(X: Tensor, hg: Hypergraph) Tensor [source]
The forward function.
- Parameters
- X (torch.Tensor) – Input vertex feature matrix. Size \((|\mathcal{V}|, C_{in})\).
- hg (dhg.Hypergraph) – The hypergraph structure that contains \(|\mathcal{V}|\) vertices.
- training: bool
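The matrix format above can be checked with plain dense tensors. The sketch below only illustrates the formula (taking \(\sigma = \mathrm{ReLU}\) and unit hyperedge weights); it is not the library's sparse implementation:

```python
import torch

# X' = sigma(D_v^{-1} H W_e D_e^{-1} H^T X Theta), dense toy case.
H = torch.tensor([[1., 0.],
                  [1., 1.],
                  [0., 1.]])                  # 3 vertices, 2 hyperedges
W_e = torch.eye(2)                            # unit hyperedge weights
D_v_inv = torch.diag(1.0 / H.sum(dim=1))      # inverse vertex degrees
D_e_inv = torch.diag(1.0 / H.sum(dim=0))      # inverse hyperedge degrees
X = torch.randn(3, 8)
Theta = torch.randn(8, 4)

X_out = torch.relu(D_v_inv @ H @ W_e @ D_e_inv @ H.T @ X @ Theta)
print(X_out.shape)                            # torch.Size([3, 4])
```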
easygraph.nn.convs.hypergraphs.hnhn_conv module
- class easygraph.nn.convs.hypergraphs.hnhn_conv.HNHNConv(in_channels: int, out_channels: int, bias: bool = True, use_bn: bool = False, drop_rate: float = 0.5, is_last: bool = False)[source]
Bases: Module
The HNHN convolution layer proposed in the HNHN: Hypergraph Networks with Hyperedge Neurons paper (ICML 2020).
- Parameters
- in_channels (int) – \(C_{in}\) is the number of input channels.
- out_channels (int) – \(C_{out}\) is the number of output channels.
- bias (bool) – If set to False, the layer will not learn the bias parameter. Defaults to True.
- use_bn (bool) – If set to True, the layer will use batch normalization. Defaults to False.
- drop_rate (float) – If set to a positive number, the layer will use dropout. Defaults to 0.5.
- is_last (bool) – If set to True, the layer will not apply the final activation and dropout functions. Defaults to False.
Methods
forward(X, hg) – The forward function.
All other entries are the standard torch.nn.Module methods; see the method summary under JHConv above.
- forward(X: Tensor, hg: Hypergraph) Tensor [source]
The forward function.
- Parameters
- X (torch.Tensor) – Input vertex feature matrix. Size \((|\mathcal{V}|, C_{in})\).
- hg (dhg.Hypergraph) – The hypergraph structure that contains \(|\mathcal{V}|\) vertices.
- training: bool
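HNHNConv shares the constructor and forward(X, hg) interface of the other layers on this page. One practical note, sketched below: with use_bn=True and drop_rate > 0, call eval() before inference so batch normalization uses its running statistics and dropout is disabled.

```python
from easygraph.nn.convs.hypergraphs.hnhn_conv import HNHNConv

conv = HNHNConv(in_channels=8, out_channels=8, use_bn=True, drop_rate=0.5)
conv.eval()  # inference mode: fixed BN statistics, dropout off
# X_out = conv(X, hg)  # X: (|V|, 8); hg built as in the JHConv example
```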
easygraph.nn.convs.hypergraphs.hypergcn_conv module
- class easygraph.nn.convs.hypergraphs.hypergcn_conv.HyperGCNConv(in_channels: int, out_channels: int, use_mediator: bool = False, bias: bool = True, use_bn: bool = False, drop_rate: float = 0.5, is_last: bool = False)[source]
Bases: Module
The HyperGCN convolution layer proposed in the HyperGCN: A New Method of Training Graph Convolutional Networks on Hypergraphs paper (NeurIPS 2019).
- Parameters
- in_channels (int) – \(C_{in}\) is the number of input channels.
- out_channels (int) – \(C_{out}\) is the number of output channels.
- use_mediator (bool) – Whether to use a mediator to transform the hyperedges to edges in the graph. Defaults to False.
- bias (bool) – If set to False, the layer will not learn the bias parameter. Defaults to True.
- use_bn (bool) – If set to True, the layer will use batch normalization. Defaults to False.
- drop_rate (float) – If set to a positive number, the layer will use dropout. Defaults to 0.5.
- is_last (bool) – If set to True, the layer will not apply the final activation and dropout functions. Defaults to False.
Methods
forward(X, hg[, cached_g]) – The forward function.
All other entries are the standard torch.nn.Module methods; see the method summary under JHConv above.
- forward(X: Tensor, hg: Hypergraph, cached_g: Optional[Graph] = None) Tensor [source]
The forward function.
- Parameters
- X (torch.Tensor) – Input vertex feature matrix. Size \((N, C_{in})\).
- hg (dhg.Hypergraph) – The hypergraph structure that contains \(N\) vertices.
- cached_g (dhg.Graph) – The pre-transformed graph structure from the hypergraph structure that contains \(N\) vertices. If not provided, the graph structure will be transformed on each forward pass. Defaults to None.
- training: bool
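Because the hyperedge-to-edge transform otherwise runs on every forward pass, supplying cached_g is worthwhile when the hypergraph is static across epochs. A sketch; the graph-construction helper named below is an assumption (DHG exposes Graph.from_hypergraph_hypergcn for this purpose), not something this page documents:

```python
from easygraph.nn.convs.hypergraphs.hypergcn_conv import HyperGCNConv

conv = HyperGCNConv(in_channels=16, out_channels=16, use_mediator=False)

# Uncached: the hypergraph is re-transformed into a graph on every call.
# X_out = conv(X, hg)

# Cached: build the graph once, reuse it across epochs (helper assumed).
# g = Graph.from_hypergraph_hypergcn(hg, X, with_mediator=False)
# for epoch in range(num_epochs):
#     X_out = conv(X, hg, cached_g=g)
```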
easygraph.nn.convs.hypergraphs.unignn_conv module
- class easygraph.nn.convs.hypergraphs.unignn_conv.UniGATConv(in_channels: int, out_channels: int, bias: bool = True, use_bn: bool = False, drop_rate: float = 0.5, atten_neg_slope: float = 0.2, is_last: bool = False)[source]
Bases: Module
The UniGAT convolution layer proposed in the UniGNN: a Unified Framework for Graph and Hypergraph Neural Networks paper (IJCAI 2021).
Sparse Format:
\[\begin{split}\left\{ \begin{aligned} \alpha_{i e} &=\sigma\left(a^{T}\left[W h_{\{i\}} ; W h_{e}\right]\right) \\ \tilde{\alpha}_{i e} &=\frac{\exp \left(\alpha_{i e}\right)}{\sum_{e^{\prime} \in \tilde{E}_{i}} \exp \left(\alpha_{i e^{\prime}}\right)} \\ \tilde{x}_{i} &=\sum_{e \in \tilde{E}_{i}} \tilde{\alpha}_{i e} W h_{e} \end{aligned} \right. .\end{split}\]
- Parameters
- in_channels (int) – \(C_{in}\) is the number of input channels.
- out_channels (int) – \(C_{out}\) is the number of output channels.
- bias (bool) – If set to False, the layer will not learn the bias parameter. Defaults to True.
- use_bn (bool) – If set to True, the layer will use batch normalization. Defaults to False.
- drop_rate (float) – The dropout probability. If drop_rate <= 0, the layer will not drop values. Defaults to 0.5.
- atten_neg_slope (float) – Negative slope of the LeakyReLU activation used in edge attention. Defaults to 0.2.
- is_last (bool) – If set to True, the layer will not apply the final activation and dropout functions. Defaults to False.
Methods
forward(X, hg) – The forward function.
All other entries are the standard torch.nn.Module methods; see the method summary under JHConv above.
- forward(X: Tensor, hg: Hypergraph) Tensor [source]
The forward function.
- Parameters
- X (torch.Tensor) – Input vertex feature matrix. Size \((|\mathcal{V}|, C_{in})\).
- hg (dhg.Hypergraph) – The hypergraph structure that contains \(|\mathcal{V}|\) vertices.
- training: bool
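A short sketch of the two attention-specific knobs: atten_neg_slope sets the LeakyReLU negative slope inside the attention scores, and drop_rate <= 0 disables dropout entirely, per the parameter descriptions above.

```python
from easygraph.nn.convs.hypergraphs.unignn_conv import UniGATConv

conv = UniGATConv(
    in_channels=16,
    out_channels=8,
    drop_rate=0.0,        # <= 0: no values are dropped
    atten_neg_slope=0.2,  # LeakyReLU slope for the attention scores
)
# X_out = conv(X, hg)  # X: (|V|, 16); hg: Hypergraph with |V| vertices
```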
- class easygraph.nn.convs.hypergraphs.unignn_conv.UniGCNConv(in_channels: int, out_channels: int, bias: bool = True, use_bn: bool = False, drop_rate: float = 0.5, is_last: bool = False)[source]
Bases: Module
The UniGCN convolution layer proposed in the UniGNN: a Unified Framework for Graph and Hypergraph Neural Networks paper (IJCAI 2021).
Sparse Format:
\[\begin{split}\left\{ \begin{aligned} h_{e} &= \frac{1}{|e|} \sum_{j \in e} x_{j} \\ \tilde{x}_{i} &= \frac{1}{\sqrt{d_{i}}} \sum_{e \in \tilde{E}_{i}} \frac{1}{\sqrt{\tilde{d}_{e}}} W h_{e} \end{aligned} \right. .\end{split}\]
where \(\tilde{d}_{e} = \frac{1}{|e|} \sum_{i \in e} d_{i}\).
Matrix Format:
\[\mathbf{X}^{\prime} = \sigma \left( \mathbf{D}_v^{-\frac{1}{2}} \mathbf{H} \tilde{\mathbf{D}}_e^{-\frac{1}{2}} \cdot \mathbf{D}_e^{-1} \mathbf{H}^\top \mathbf{X} \mathbf{\Theta} \right) .\]
- Parameters
- in_channels (int) – \(C_{in}\) is the number of input channels.
- out_channels (int) – \(C_{out}\) is the number of output channels.
- bias (bool) – If set to False, the layer will not learn the bias parameter. Defaults to True.
- use_bn (bool) – If set to True, the layer will use batch normalization. Defaults to False.
- drop_rate (float) – If set to a positive number, the layer will use dropout. Defaults to 0.5.
- is_last (bool) – If set to True, the layer will not apply the final activation and dropout functions. Defaults to False.
Methods
forward(X, hg) – The forward function.
All other entries are the standard torch.nn.Module methods; see the method summary under JHConv above.
- forward(X: Tensor, hg: Hypergraph) Tensor [source]
The forward function.
- Parameters
- X (torch.Tensor) – Input vertex feature matrix. Size \((|\mathcal{V}|, C_{in})\).
- hg (dhg.Hypergraph) – The hypergraph structure that contains \(|\mathcal{V}|\) vertices.
- training: bool
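The averaged hyperedge degree \(\tilde{d}_e\) defined above is easy to compute from an incidence matrix. The dense sketch below illustrates that definition only; it is independent of the layer itself:

```python
import torch

# ~d_e = (1/|e|) * sum_{i in e} d_i for each hyperedge e.
H = torch.tensor([[1., 0.],
                  [1., 1.],
                  [0., 1.]])       # 3 vertices, 2 hyperedges
d_v = H.sum(dim=1)                 # vertex degrees d_i: [1, 2, 1]
e_size = H.sum(dim=0)              # hyperedge sizes |e|: [2, 2]
d_e_tilde = (H.T @ d_v) / e_size   # averaged hyperedge degrees
print(d_e_tilde)                   # tensor([1.5000, 1.5000])
```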
- class easygraph.nn.convs.hypergraphs.unignn_conv.UniGINConv(in_channels: int, out_channels: int, eps: float = 0.0, train_eps: bool = False, bias: bool = True, use_bn: bool = False, drop_rate: float = 0.5, is_last: bool = False)[source]
Bases: Module
The UniGIN convolution layer proposed in the UniGNN: a Unified Framework for Graph and Hypergraph Neural Networks paper (IJCAI 2021).
Sparse Format:
\[\begin{split}\left\{ \begin{aligned} h_{e} &= \frac{1}{|e|} \sum_{j \in e} x_{j} \\ \tilde{x}_{i} &= W\left((1+\varepsilon) x_{i}+\sum_{e \in E_{i}} h_{e}\right) \end{aligned} \right. .\end{split}\]
Matrix Format:
\[\mathbf{X}^{\prime} = \sigma \left( \left( \left( 1 + \varepsilon \right) \mathbf{I} + \mathbf{H} \mathbf{D}_e^{-1} \mathbf{H}^\top \right) \mathbf{X} \mathbf{\Theta} \right) .\]
- Parameters
- in_channels (int) – \(C_{in}\) is the number of input channels.
- out_channels (int) – \(C_{out}\) is the number of output channels.
- eps (float) – \(\varepsilon\) is the learnable parameter. Defaults to 0.0.
- train_eps (bool) – If set to True, the layer will learn the \(\varepsilon\) parameter. Defaults to False.
- bias (bool) – If set to False, the layer will not learn the bias parameter. Defaults to True.
- use_bn (bool) – If set to True, the layer will use batch normalization. Defaults to False.
- drop_rate (float) – If set to a positive number, the layer will use dropout. Defaults to 0.5.
- is_last (bool) – If set to True, the layer will not apply the final activation and dropout functions. Defaults to False.
Methods
forward(X, hg) – The forward function.
All other entries are the standard torch.nn.Module methods; see the method summary under JHConv above.
- forward(X: Tensor, hg: Hypergraph) Tensor [source]
The forward function.
- Parameters
- X (torch.Tensor) – Input vertex feature matrix. Size \((|\mathcal{V}|, C_{in})\).
- hg (dhg.Hypergraph) – The hypergraph structure that contains \(|\mathcal{V}|\) vertices.
- training: bool
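With train_eps=True, \(\varepsilon\) is registered as a learnable parameter and optimized with the rest of the layer. A sketch (the parameter's name as reported by named_parameters() is an implementation detail, not documented here):

```python
from easygraph.nn.convs.hypergraphs.unignn_conv import UniGINConv

conv = UniGINConv(in_channels=8, out_channels=8, eps=0.1, train_eps=True)

# The learnable epsilon should appear alongside the weight/bias parameters.
for name, p in conv.named_parameters():
    print(name, tuple(p.shape), p.requires_grad)
```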
- class easygraph.nn.convs.hypergraphs.unignn_conv.UniSAGEConv(in_channels: int, out_channels: int, bias: bool = True, use_bn: bool = False, drop_rate: float = 0.5, is_last: bool = False)[source]
Bases: Module
The UniSAGE convolution layer proposed in the UniGNN: a Unified Framework for Graph and Hypergraph Neural Networks paper (IJCAI 2021).
Sparse Format:
\[\begin{split}\left\{ \begin{aligned} h_{e} &= \frac{1}{|e|} \sum_{j \in e} x_{j} \\ \tilde{x}_{i} &= W\left(x_{i}+\text{AGGREGATE}\left(\left\{x_{j}\right\}_{j \in \mathcal{N}_{i}}\right)\right) \end{aligned} \right. .\end{split}\]
Matrix Format:
\[\mathbf{X}^{\prime} = \sigma \left( \left( \mathbf{I} + \mathbf{H} \mathbf{D}_e^{-1} \mathbf{H}^\top \right) \mathbf{X} \mathbf{\Theta} \right) .\]
- Parameters
- in_channels (int) – \(C_{in}\) is the number of input channels.
- out_channels (int) – \(C_{out}\) is the number of output channels.
- bias (bool) – If set to False, the layer will not learn the bias parameter. Defaults to True.
- use_bn (bool) – If set to True, the layer will use batch normalization. Defaults to False.
- drop_rate (float) – If set to a positive number, the layer will use dropout. Defaults to 0.5.
- is_last (bool) – If set to True, the layer will not apply the final activation and dropout functions. Defaults to False.
Methods
forward(X, hg) – The forward function.
All other entries are the standard torch.nn.Module methods; see the method summary under JHConv above.
- forward(X: Tensor, hg: Hypergraph) Tensor [source]
The forward function.
- Parameters
- X (torch.Tensor) – Input vertex feature matrix. Size \((|\mathcal{V}|, C_{in})\).
- hg (dhg.Hypergraph) – The hypergraph structure that contains \(|\mathcal{V}|\) vertices.
- training: bool
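As with HGNN+, the matrix format can be checked densely. The sketch below illustrates \(\mathbf{X}^{\prime} = \sigma \left( \left( \mathbf{I} + \mathbf{H} \mathbf{D}_e^{-1} \mathbf{H}^\top \right) \mathbf{X} \mathbf{\Theta} \right)\) with \(\sigma = \mathrm{ReLU}\); it is an illustration of the formula, not the library's implementation:

```python
import torch

# X' = sigma((I + H D_e^{-1} H^T) X Theta), dense toy case.
H = torch.tensor([[1., 0.],
                  [1., 1.],
                  [0., 1.]])                 # 3 vertices, 2 hyperedges
D_e_inv = torch.diag(1.0 / H.sum(dim=0))     # inverse hyperedge degrees
X = torch.randn(3, 8)
Theta = torch.randn(8, 4)

X_out = torch.relu((torch.eye(3) + H @ D_e_inv @ H.T) @ X @ Theta)
print(X_out.shape)                           # torch.Size([3, 4])
```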