

""" layer_name = 'relu_6' for ( name, module ) in net. append ( fea_out ) return None net = TestForHook () """ DGL supports two modes: sequentially apply. relu6 ( relu ) layers_in = ( x, linear_1, linear_2 ) layers_out = ( linear_1, linear_2, relu ) return relu_6, layers_in, layers_out features_in_hook = features_out_hook = def hook ( module, fea_in, fea_out ): features_in_hook. A sequential container for stacking graph neural network modules. initialize () def forward ( self, x ): linear_1 = self. Linear ( in_features = 2, out_features = 1 ) self. Linear ( in_features = 2, out_features = 2 ) self. issubset (): raise ValueError ( "return_layers are not present in model" ) orig_return_layers = return_layers return_layers = ) out = new_m ( torch. """ def _init_ ( self, model, return_layers ): if not set ( return_layers ). Of the returned activation (which the user can specify). The key of the dict, and the value of the dict is the name Of the modules for which the activations will be returned as Return_layers (Dict): a dict containing the names nn / a Go to file Go to file T Go to line L Copy path Copy This commit does not belong to any branch on this repository, and may belong to a. Model (nn.Module): model on which we will extract the features So if `model` is passed, `model.feature1` canīe returned, but not `2`.

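Returning to the hook example: one forward pass through the hooked net fills both lists, and the captured output is exactly the tensor the forward returns. A quick check, with an arbitrary batch of two-feature inputs (the sizes here are illustrative):

out, layers_in, layers_out = net(torch.randn(1, 2))
print(len(features_in_hook), len(features_out_hook))  # 1 1
# The hook saw the same tensor that forward returned for relu_6:
assert torch.equal(features_out_hook[0], out)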
When several intermediate layers are needed at once, torchvision ships a small utility for this, IntermediateLayerGetter (in torchvision.models._utils). It is a module wrapper that returns intermediate layers from a model:

from collections import OrderedDict

import torch
from torch import nn

class IntermediateLayerGetter(nn.ModuleDict):
    """
    Module wrapper that returns intermediate layers from a model.

    It has a strong assumption that the modules have been registered
    into the model in the same order as they are used. This means that
    one should **not** reuse the same nn.Module twice in the forward
    if you want this to work.

    Additionally, it is only able to query submodules that are directly
    assigned to the model. So if `model` is passed, `model.feature1` can
    be returned, but not `model.feature1.layer2`.

    Args:
        model (nn.Module): model on which we will extract the features
        return_layers (Dict[name, new_name]): a dict containing the names
            of the modules for which the activations will be returned as
            the key of the dict, and the value of the dict is the name
            of the returned activation (which the user can specify).
    """

    def __init__(self, model, return_layers):
        if not set(return_layers).issubset([name for name, _ in model.named_children()]):
            raise ValueError("return_layers are not present in model")
        orig_return_layers = return_layers
        return_layers = {str(k): str(v) for k, v in return_layers.items()}
        layers = OrderedDict()
        # Keep children, in registration order, up to the last requested layer.
        for name, module in model.named_children():
            layers[name] = module
            if name in return_layers:
                del return_layers[name]
            if not return_layers:
                break
        super().__init__(layers)
        self.return_layers = orig_return_layers

    def forward(self, x):
        out = OrderedDict()
        for name, module in self.items():
            x = module(x)
            if name in self.return_layers:
                out_name = self.return_layers[name]
                out[out_name] = x
        return out

With a torchvision model, usage looks like this:

import torchvision

m = torchvision.models.resnet18(pretrained=True)
# extract layer1 and layer3, giving them the names `feat1` and `feat2`
new_m = IntermediateLayerGetter(m, {'layer1': 'feat1', 'layer3': 'feat2'})
out = new_m(torch.rand(1, 3, 224, 224))
print([(k, v.shape) for k, v in out.items()])
# [('feat1', torch.Size([1, 64, 56, 56])), ('feat2', torch.Size([1, 256, 14, 14]))]
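Since IntermediateLayerGetter only requires that the submodules be registered in the order the forward uses them, it also works on the TestForHook net from above; a small demo under that assumption:

getter = IntermediateLayerGetter(net, return_layers={'linear_1': 'feat1', 'relu': 'feat2'})
out = getter(torch.rand(4, 2))
print({k: v.shape for k, v in out.items()})
# {'feat1': torch.Size([4, 2]), 'feat2': torch.Size([4, 1])}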
