Graph Structure of Neural Networks (Kaiming He's team)

This article explores how graph theory can help us understand neural networks. By analyzing the graph structures of models such as LeNet and AlexNet, it shows how complex connectivity patterns affect performance. The study finds that, among randomly wired neural networks, there is a region of best-performing graph structures, with an average path length of roughly 2.5 and a clustering coefficient of roughly 0.4. This finding carries implications for future network architecture design.
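The two statistics that define that sweet spot are standard graph measures, so they are easy to compute for any candidate wiring. Below is a minimal sketch using networkx; the library and the Watts–Strogatz generator with these parameters are my own illustrative choices, not the paper's tooling:

```python
import networkx as nx

# Build a small-world graph and compute the two statistics the study
# uses to locate the sweet spot: average shortest path length (L) and
# clustering coefficient (C). Parameters here are illustrative.
g = nx.connected_watts_strogatz_graph(n=64, k=8, p=0.3, seed=0)

L = nx.average_shortest_path_length(g)
C = nx.average_clustering(g)
print(f"average path length L = {L:.2f}")
print(f"clustering coefficient C = {C:.2f}")
# The summary above reports best performance near L ~ 2.5, C ~ 0.4.
```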


This post explores the implications of graphs and how tools from graph theory can help us learn better representations. Before jumping into deep learning, it is worth noting that graphs are everywhere in the real world.

First, let's review some classic neural networks from a graph-structure perspective.

The first example is the original LeNet, where the S2 and C3 layers are not fully connected: LeNet uses a handcrafted connectivity pattern, shown as the adjacency matrix on the right. According to the paper, the motivation is that different feature maps are forced to extract different (hopefully complementary) features because they receive different sets of inputs; however, there's no…
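To make the idea concrete, such a handcrafted pattern can be expressed as a binary adjacency (mask) matrix over input and output feature maps. The sketch below is hypothetical: the mask is random rather than LeNet's actual S2→C3 table, and `MaskedConv2d` is my own illustrative class, but it shows how a fixed pattern restricts which input maps each output map sees.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedConv2d(nn.Conv2d):
    """Convolution whose input-map -> output-map connectivity is
    restricted by a fixed binary mask, as in LeNet's partial S2->C3 wiring."""

    def __init__(self, in_maps, out_maps, kernel_size, mask):
        super().__init__(in_maps, out_maps, kernel_size)
        # mask: (out_maps, in_maps); 1 keeps a connection, 0 removes it.
        self.register_buffer("mask", mask.float()[:, :, None, None])

    def forward(self, x):
        # Zero the weights of disallowed connections on every forward pass.
        return F.conv2d(x, self.weight * self.mask, self.bias)

# Illustrative random mask -- NOT LeNet's actual connection table.
mask = (torch.rand(16, 6) < 0.5).int()
conv = MaskedConv2d(in_maps=6, out_maps=16, kernel_size=5, mask=mask)
out = conv(torch.randn(1, 6, 14, 14))  # S2-sized input
print(out.shape)  # torch.Size([1, 16, 10, 10])
```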

### Graph Transformer Networks Overview

Graph Transformer Networks (GTNs) combine the strengths of graph neural networks with those of transformer models, enabling more sophisticated processing of graph-structured data. In traditional graph convolutional networks (GCNs), operations are often limited to fixed or predefined graphs. GTNs introduce learnable transformations that allow the graph structure to change dynamically during training[^1].

#### Architecture Components

The core components of a typical Graph Transformer Network include:

- **Graph Convolution Layers**: These layers perform convolutions over nodes within a neighborhood defined by the edges of the input graph.
- **Transformer Blocks**: Each block contains multi-head self-attention mechanisms, which capture long-range dependencies between different parts of the graph without being constrained by the adjacency matrix.
- **Edge Feature Encoding**: To incorporate edge information into node representations effectively, specific encoding schemes can be applied before features are fed through the attention heads.

For implementing such architectures efficiently and at scale, PyTorch Geometric provides useful tools such as the `MessagePassing` base class and specialized modules for irregularly structured inputs, including point clouds and molecular compounds whose chemical bonds form complex topologies beyond simple Euclidean spaces.

```python
import torch
from torch_geometric.nn import MessagePassing
from torch_geometric.utils import add_self_loops


class EdgeConv(MessagePassing):
    def __init__(self, in_channels, out_channels):
        super().__init__(aggr='max')  # Max aggregation over incoming messages.
        self.lin = torch.nn.Linear(in_channels * 2, out_channels)

    def forward(self, x, edge_index):
        # Add self-loops so every node also receives its own features.
        edge_index, _ = add_self_loops(edge_index, num_nodes=x.size(0))
        return self.propagate(edge_index, size=(x.size(0), x.size(0)), x=x)

    def message(self, x_i, x_j):
        # For each edge, combine the target embedding x_i with the
        # relative embedding (x_j - x_i), then project.
        tmp = torch.cat([x_i, x_j - x_i], dim=-1)
        return self.lin(tmp)
```
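A quick, hypothetical usage example for the block above (graph size and feature dimensions are made up for illustration; `torch` and `EdgeConv` are as defined there):

```python
# Toy graph: 4 nodes with 3-dimensional features and 3 directed edges.
x = torch.randn(4, 3)
edge_index = torch.tensor([[0, 1, 2],
                           [1, 2, 3]])  # edges 0->1, 1->2, 2->3

conv = EdgeConv(in_channels=3, out_channels=8)
out = conv(x, edge_index)
print(out.shape)  # torch.Size([4, 8])
```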