Dimensions are matched layer by layer according to the nesting of the brackets, from the outermost bracket inward.
The following normalize example shows which indices correspond to the rows and columns of a matrix:
import torch
b = torch.randn((2, 3))
print(b)
a = torch.nn.functional.normalize(b, dim=1)  # unit L2 norm along dim=1 (each row)
print(a)
print(a[0, 0]**2 + a[0, 1]**2 + a[0, 2]**2)  # sum of squares along a row -> 1
print(a[0, 0]**2 + a[1, 0]**2)               # sum of squares down a column -> not 1
# result
tensor([[-1.9235,  0.0256, -0.1232],
        [ 0.0669,  0.1589,  0.2435]])
tensor([[-0.9979,  0.0133, -0.0639],
        [ 0.2241,  0.5325,  0.8162]])
tensor(1.0000)
tensor(1.0460)
With dim=1, normalization runs over the elements inside the innermost brackets, i.e. each group of 3 numbers (each row); that is why the row-wise sum of squares is 1.0000 while the column-wise sum is 1.0460.
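For contrast, here is a minimal sketch (not from the original) that normalizes along dim=0 instead, so each column gets unit norm:
import torch
b = torch.randn(2, 3)
c = torch.nn.functional.normalize(b, dim=0)  # unit L2 norm down each column
print(c[0, 0]**2 + c[1, 0]**2)               # ~1.0: now the columns have unit norm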
The squeeze() and unsqueeze() functions in PyTorch
A 4-D tensor:
X.shape = torch.Size([6, 12, 18, 18])
X[:, 0, :, :].shape = torch.Size([6, 18, 18])
X[:, 0, :, :].unsqueeze(-3).shape = torch.Size([6, 1, 18, 18])
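A runnable sketch of the shapes above (the tensor is random; only the shapes matter):
import torch
X = torch.randn(6, 12, 18, 18)
s = X[:, 0, :, :]                        # integer indexing drops that dimension
print(s.shape)                           # torch.Size([6, 18, 18])
print(s.unsqueeze(-3).shape)             # torch.Size([6, 1, 18, 18]): insert a size-1 dim at position -3
print(s.unsqueeze(-3).squeeze(1).shape)  # torch.Size([6, 18, 18]): squeeze removes the size-1 dim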
view: the total number of elements before and after the reshape must be the same, e.g. a.view(2, 3).
Difference between the view() method and the resize() method: view() does not change the data, it only reinterprets the shape.
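A small sketch showing both points: the element count must match, and view() shares the underlying data:
import torch
a = torch.arange(6)
b = a.view(2, 3)   # OK: 2 * 3 == 6; a.view(2, 4) would raise a RuntimeError
b[0, 0] = 99       # the view shares storage with a
print(a[0])        # tensor(99): no data was copied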
There is also the einops library (it works across frameworks).
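A minimal einops sketch, assuming the einops package is installed; rearrange is its core reshaping function, and the pattern string here is only illustrative:
import torch
from einops import rearrange
x = torch.randn(1, 12, 3, 3)
y = rearrange(x, 'b c h w -> b (c h w)')  # flatten everything but the batch dim
print(y.shape)  # torch.Size([1, 108])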
x = torch.randn(1, 12, 3, 3)
y = torch.randn(3, 3)
Multiplying x @ y broadcasts y over the leading dimensions: it is equivalent to (36, 3) @ (3, 3), reshaped back to (1, 12, 3, 3).
x = torch.tensor([[[1, 2, 3], [1, 1, 0]], [[1, 1, 0], [1, 1, 0]]])  # torch.Size([2, 2, 3])
y = torch.tensor([[1], [1], [0]])  # torch.Size([3, 1])
res = torch.matmul(x, y)           # torch.Size([2, 2, 1])
# result
tensor([[[3], [2]],
        [[2], [2]]])
Matrix multiplication: torch.matmul() can be thought of as multiplying its two arguments over their last two dimensions; all other dimensions are treated as batch dimensions.
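A quick check (an added sketch) that the broadcasted x @ y above really equals the flattened 2-D product:
import torch
x = torch.randn(1, 12, 3, 3)
y = torch.randn(3, 3)
out = x @ y                                        # y is broadcast over the batch dims
ref = (x.reshape(36, 3) @ y).reshape(1, 12, 3, 3)  # explicit flatten-multiply-restore
print(out.shape)                 # torch.Size([1, 12, 3, 3])
print(torch.allclose(out, ref))  # True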
einsum: a domain-specific language that can concisely express dot products, outer products, transposition, matrix-vector multiplication, matrix-matrix multiplication, and other operations.
einsum: the Einstein summation convention
Matrix multiplication:
torch.einsum('ik,kj->ij', [a, b])
Batch matrix multiplication:
torch.einsum('ijk,ikl->ijl', [a, b])
Outer product:
torch.einsum('i,j->ij', [a, b])
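The three patterns above, instantiated with small tensors (a sketch; the shapes are chosen only for illustration):
import torch
a, b = torch.randn(2, 3), torch.randn(3, 4)
print(torch.einsum('ik,kj->ij', [a, b]).shape)     # torch.Size([2, 4]): matrix multiplication
a, b = torch.randn(5, 2, 3), torch.randn(5, 3, 4)
print(torch.einsum('ijk,ikl->ijl', [a, b]).shape)  # torch.Size([5, 2, 4]): batch matmul
a, b = torch.randn(2), torch.randn(3)
print(torch.einsum('i,j->ij', [a, b]).shape)       # torch.Size([2, 3]): outer product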
# Einstein summation convention
import torch
a = torch.arange(6).reshape(2, 3, 1)
covs = torch.einsum("wib,wjb->wij", a, a)  # pairwise products, summed over the last dim
means = torch.einsum("wib->wi", a)         # sums over the last dimension; the outputs below show b=1 and b=2
print(a)
print(covs)
print(means)
# result for b=1
tensor([[[0],
         [1],
         [2]],

        [[3],
         [4],
         [5]]])
tensor([[[ 0,  0,  0],
         [ 0,  1,  2],
         [ 0,  2,  4]],

        [[ 9, 12, 15],
         [12, 16, 20],
         [15, 20, 25]]])
tensor([[0, 1, 2],
        [3, 4, 5]])
# with a = torch.arange(12).reshape(2, 3, 2) instead (b=2), the outputs become:
tensor([[[ 0,  1],
         [ 2,  3],
         [ 4,  5]],

        [[ 6,  7],
         [ 8,  9],
         [10, 11]]])
tensor([[[  1,   3,   5],
         [  3,  13,  23],
         [  5,  23,  41]],

        [[ 85, 111, 137],
         [111, 145, 179],
         [137, 179, 221]]])
tensor([[ 1,  5,  9],
        [13, 17, 21]])
$\sum_b A_{w,i,b} B_{w,j,b} = C_{w,i,j}$: take row $i$ of $A$ and row $j$ of $B$, multiply them elementwise and sum, and put the result at position $(i, j)$ of $C$ (for every batch index $w$).
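As a check (an added sketch), this einsum is the same contraction as a batched matmul over the last dimension:
import torch
a = torch.arange(6).reshape(2, 3, 1)
covs = torch.einsum("wib,wjb->wij", a, a)
ref = a @ a.transpose(1, 2)    # (w, 3, 1) @ (w, 1, 3) -> (w, 3, 3)
print(torch.equal(covs, ref))  # True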
data = torch.ones(9).view(-1, 9).cuda()  # shape (1, 9), on the GPU (fixed: removed stray ".a")
let = torch.randn(3, 3)                  # on the CPU
k = 0
for i in range(let.shape[0]):
    for j in range(i):
        let[i, j] = -1 * data[0, k]  # fill a skew-symmetric pattern below/above the diagonal
        let[j, i] = data[0, k]
        k = k + 1
For the tensors to be multiplied without error, they must live on the same device; mixing the CUDA tensor with the CPU tensor raises:
Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_mm)
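The usual fix (a sketch, not from the original snippet) is to move both operands to the same device before multiplying:
import torch
if torch.cuda.is_available():
    data = torch.ones(9).view(3, 3).cuda()
    let = torch.randn(3, 3).cuda()    # put let on the GPU as well
    print(torch.mm(data, let).shape)  # torch.Size([3, 3]): both operands on cuda:0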