Today we are going to learn **how to initialize weights in PyTorch** **in Python**. Below are the main methods, each with an example.

Let's get started.


## How to initialize weights in PyTorch?

**Short answer:** use a function from `torch.nn.init` on the layer's weight tensor, or modify the parameters directly by writing to `conv1.weight.data` (which is a `torch.Tensor`). Both approaches are detailed below.

## Method 1

To initialize the weights of a single layer, use a function from `torch.nn.init`. For instance:

```python
conv1 = torch.nn.Conv2d(...)
torch.nn.init.xavier_uniform_(conv1.weight)
```

(Note the trailing underscore: the in-place `xavier_uniform_` is the current API; plain `xavier_uniform` is deprecated.)

Alternatively, you can modify the parameters by writing to `conv1.weight.data` (which is a `torch.Tensor`). Example:

```python
conv1.weight.data.fill_(0.01)
```

The same applies to biases:

```python
conv1.bias.data.fill_(0.01)
```
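Putting the pieces above together, here is a minimal runnable sketch. The `Conv2d` arguments are made up for illustration; substitute your own layer sizes.

```python
import torch

# Hypothetical layer sizes, just for illustration
conv1 = torch.nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3)

# Xavier/Glorot uniform init for the weights (in-place variant)
torch.nn.init.xavier_uniform_(conv1.weight)

# Constant init for the bias
conv1.bias.data.fill_(0.01)
```

After this, every bias entry holds the constant 0.01 and the weights are drawn from the Xavier-uniform range for the layer's fan-in and fan-out.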

### For `nn.Sequential` or a custom `nn.Module`

Pass an initialization function to `torch.nn.Module.apply`. It will initialize the weights of the entire `nn.Module` recursively.

`apply(fn)`: applies `fn` recursively to every submodule (as returned by `.children()`) as well as `self`. Typical use includes initializing the parameters of a model (see also `torch.nn.init`).

Example:

```python
def init_weights(m):
    if isinstance(m, nn.Linear):
        torch.nn.init.xavier_uniform_(m.weight)
        m.bias.data.fill_(0.01)

net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
net.apply(init_weights)
```
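Because `apply` recurses through `.children()`, it also reaches modules buried inside nested containers. A sketch (the module layout here is invented for illustration):

```python
import torch
from torch import nn

def init_weights(m):
    # Only Linear layers carry the weights we want to touch here
    if isinstance(m, nn.Linear):
        nn.init.xavier_uniform_(m.weight)
        m.bias.data.fill_(0.01)

# A nested layout, invented for illustration
net = nn.Sequential(
    nn.Linear(4, 8),
    nn.Sequential(nn.Linear(8, 8), nn.ReLU()),
    nn.Linear(8, 2),
)
net.apply(init_weights)

# Every Linear, including the one inside the inner Sequential,
# now has a constant 0.01 bias
biases_ok = all(
    torch.allclose(m.bias.data, torch.full_like(m.bias.data, 0.01))
    for m in net.modules() if isinstance(m, nn.Linear)
)
```

Note that `init_weights` checks the module type, so non-parametric modules such as `nn.ReLU` are simply skipped.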

## Method 2

To initialize layers you typically don't need to do anything: PyTorch does it for you with sensible defaults. This makes sense; why initialize layers by hand when PyTorch already follows well-established schemes?

Take the `Linear` layer, for instance. Its `__init__` method calls `reset_parameters`, which uses the Kaiming (He) init function:

```python
def reset_parameters(self):
    init.kaiming_uniform_(self.weight, a=math.sqrt(5))
    if self.bias is not None:
        fan_in, _ = init._calculate_fan_in_and_fan_out(self.weight)
        bound = 1 / math.sqrt(fan_in)
        init.uniform_(self.bias, -bound, bound)
```

Other layer types work similarly; see, for instance, the `reset_parameters` method of `Conv2d` in the PyTorch source.

Note: the gain from proper initialization is faster training. If your problem calls for a special initialization, you can still apply it afterwards.
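You can verify the default yourself. With `a = sqrt(5)`, the Kaiming-uniform bound for the weights works out to `1 / sqrt(fan_in)`, and the snippet above uses the same bound for the bias. A quick sketch (the layer sizes are arbitrary):

```python
import math
import torch
from torch import nn

# Arbitrary sizes, just for illustration
fan_in = 64
layer = nn.Linear(fan_in, 32)  # initialized inside __init__; nothing more to do

# Default Kaiming-uniform bound for nn.Linear with a=sqrt(5)
bound = 1 / math.sqrt(fan_in)
weights_in_range = bool(layer.weight.abs().max().item() <= bound)
bias_in_range = bool(layer.bias.abs().max().item() <= bound)
```

Both checks come out true for a freshly constructed layer, confirming that no manual initialization is needed.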

**Summary**

That's all for this issue. Hopefully one of these methods helped you. Comment below with your thoughts and questions, and let us know which method worked for you. Thank you.
