From f92fd9d8e85dc12f51a1c286113f47c24bad41e6 Mon Sep 17 00:00:00 2001 From: ziqguo Date: Mon, 14 Feb 2022 09:29:01 +0800 Subject: [PATCH] pytorch basic knowledge --- Day81-90/code/PyTorch 预备知识.ipynb | 2045 ++++++++++++++++++++++ 1 file changed, 2045 insertions(+) create mode 100644 Day81-90/code/PyTorch 预备知识.ipynb diff --git a/Day81-90/code/PyTorch 预备知识.ipynb b/Day81-90/code/PyTorch 预备知识.ipynb new file mode 100644 index 0000000..12e26d1 --- /dev/null +++ b/Day81-90/code/PyTorch 预备知识.ipynb @@ -0,0 +1,2045 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# PyTorch 基本数据操作\n", + "\n", + "PyTorch 是一个基于 Python 的科学计算包,充分发挥 GPU 能力而设计的 NumPy 的 GPU 版本替代方案,提供更大灵活性和速度的深度学习研究平台。\n", + "\n", + "在深度学习中,我们通常会频繁地对数据进行操作。\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "---\n", + "\n", + "## 创建 Tensor \n", + "\n", + "在 PyTorch 中,`Tensor` 是一个类,也是存储和变换数据的主要工具。为了简洁,人们常将 `Tensor` 实例直接称作 `Tensor`。如果你之前用过 NumPy,你会发现 `Tensor` 和 NumPy 的多维数组非常类似。然而,`Tensor` 提供 GPU 计算和自动求梯度等更多功能,这些使 `Tensor` 更加适合深度学习。\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "我们先介绍 `Tensors` 的最基本功能。\n", + "\n", + "- 张量初始化" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "'1.10.2'" + ] + }, + "execution_count": 1, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "import torch # 加载 torch 库 \n", + "torch.__version__ # 查看版本" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "我们首先用一个非常实用的 `arange` 函数创建一个行向量。`arange` 函数用于生成一定范围内等间隔的一维数组。参数有三个,分别是范围的起始值、范围的结束值和步长。\n", + "\n", + "(`Shift + TAB` 快捷键可以在 Jupyter 中的代码块中快速查看帮助)" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\n" + ] + }, + { + "data": { + "text/plain": [ + "tensor([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])" + ] + }, + "execution_count": 2, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "x = torch.arange(12)\n", + "print(type(x))\n", + "x" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "这时返回了一个 `Tensor` 实例,其中包含了从 0 开始的 12 个连续整数。可以打印 `x` 显示出属性 ``。\n", + "\n", + "我们可以通过 `shape` 属性来获取 `Tensor` 实例的形状。" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "torch.Size([12])" + ] + }, + "execution_count": 3, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "x.shape" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "我们也能够通过 `size` 函数得到 `Tensor` 实例中元素(element)的总数。\n", + "\n", + "> 注意区别:NumPy 中是调用 NDArray 的 `size` 属性。" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "torch.Size([12])" + ] + }, + "execution_count": 4, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "x.size()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "`torch.Size` 本质上是一个 `tuple`,通过上面的例子也可以看出,它支持元组的操作。" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "下面使用 `reshape` 函数把行向量 `x` 的形状改为 (3, 4),也就是一个 3 行 4 列的矩阵,并记作 `X`。除了形状改变之外,`X` 中的元素保持不变。" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "torch.Size([3, 4])\n" + ] + }, + { + "data": { + "text/plain": [ 
+ "tensor([[ 0, 1, 2, 3],\n", + " [ 4, 5, 6, 7],\n", + " [ 8, 9, 10, 11]])" + ] + }, + "execution_count": 5, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "X = x.reshape((-1, 4))\n", + "print(X.shape)\n", + "X" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "注意X属性中的形状发生了变化。上面 `x.reshape((3, 4))` 也可写成 `x.reshape((-1, 4))` 或 `x.reshape((3, -1))`。由于 `x` 的元素个数是已知的,这里的 `-1` 是能够通过元素个数和其他维度的大小推断出来的。\n", + "\n", + "接下来,我们创建一个各元素为 0,形状为 (2, 3, 4) 的张量。实际上,之前创建的向量和矩阵都是一种特殊的张量。" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "tensor([[[0., 0., 0., 0.],\n", + " [0., 0., 0., 0.],\n", + " [0., 0., 0., 0.]],\n", + "\n", + " [[0., 0., 0., 0.],\n", + " [0., 0., 0., 0.],\n", + " [0., 0., 0., 0.]]])" + ] + }, + "execution_count": 6, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "torch.zeros((2, 3, 4))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "类似地,我们可以创建各元素为 1 的张量。" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "tensor([[1., 1., 1., 1.],\n", + " [1., 1., 1., 1.],\n", + " [1., 1., 1., 1.]])" + ] + }, + "execution_count": 7, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "torch.ones((3, 4))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "还可以构建一个未初始化的 (5, 3) 的空矩阵(张量):" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "tensor([[ 0.0000e+00, 3.6893e+19, -7.7206e+29],\n", + " [ 8.5920e+09, 5.4526e-26, 4.5894e-41],\n", + " [ 5.4873e-26, 4.5894e-41, 5.4912e-26],\n", + " [ 4.5894e-41, 5.4909e-26, 4.5894e-41],\n", + " [ 5.4908e-26, 4.5894e-41, 5.4945e-26]])" + ] + }, + "execution_count": 8, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "torch.empty(5, 3)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "注意,对于未初始化的张量,它的取值是不固定的,取决于它创建时分配的那块内存的取值。" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "- 可以通过 `dtype` 属性来指定 `tensor` 的数据类型。\n", + "\n", + "这里我们再次构建一个使用 0 填充的 `tensor`,将 `dtype` 属性设置为长整型,并打印结果的数据类型,注意观察区别" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "torch.float32\n", + "torch.int64\n" + ] + } + ], + "source": [ + "print(torch.zeros(2, 3).dtype)\n", + "tensor_zeros_int = torch.zeros(2, 3, dtype=torch.long)\n", + "print(tensor_zeros_int.dtype)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "注意,对于未初始化的张量,它的取值是不固定的,取决于它创建时分配的那块内存的取值。" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "- 随机初始化\n", + "\n", + "和 NumPy 类似,除了常见的 0/1 取值的初始化,我们还可以进行随机初始化,或者直接用现有数据进行张量的初始化。\n", + "\n", + "有些情况下,我们需要随机生成 `tensor` 中每个元素的值。下面我们创建一个形状为 (3, 4) 的 `tensor`。它的每个元素都随机采样于均值为 0、标准差为 1 的正态分布。" + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "tensor([[-0.7534, -1.2917, -0.1485, -0.9819],\n", + " [-0.7884, -0.0641, -1.5173, -1.4442],\n", + " [-1.8179, 0.0217, -0.1954, -0.6445]])" + ] + }, + "execution_count": 10, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "torch.randn(3,4)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "生成服从区间 `[0,1)` 
均匀分布的随机张量:" + ] + }, + { + "cell_type": "code", + "execution_count": 11, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "tensor([[0.0797, 0.3468, 0.4761],\n", + " [0.7740, 0.7935, 0.4432],\n", + " [0.4779, 0.8248, 0.4850],\n", + " [0.4818, 0.5336, 0.8103],\n", + " [0.5565, 0.8913, 0.9144]])" + ] + }, + "execution_count": 11, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "torch.rand(5, 3)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "- 直接用现有数据进行张量的初始化\n", + "\n", + "我们也可以通过 Python 的列表(list)指定需要创建的 `Tensor` 中每个元素的值。" + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "tensor([[2, 1, 4, 3],\n", + " [1, 2, 3, 4],\n", + " [4, 3, 2, 1]])" + ] + }, + "execution_count": 12, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "Y = torch.tensor([[2, 1, 4, 3], [1, 2, 3, 4], [4, 3, 2, 1]])\n", + "Y" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "也可以基于已有的 `tensor` 来创建新的 `tensor`,通常是为了复用已有 `tensor` 的一些属性,包括 `shape` 和 `dtype`。观察下面的示例:" + ] + }, + { + "cell_type": "code", + "execution_count": 13, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "torch.float64\n", + "tensor([[1., 1., 1.],\n", + " [1., 1., 1.],\n", + " [1., 1., 1.],\n", + " [1., 1., 1.],\n", + " [1., 1., 1.]], dtype=torch.float64)\n", + "tensor([[-0.5130, -1.9519, -1.3216],\n", + " [ 0.2640, 0.9898, 0.9462],\n", + " [-0.8379, 0.4547, -0.5359],\n", + " [ 0.5690, -2.2511, 0.8680],\n", + " [-0.6998, -0.4122, -0.5754]])\n" + ] + } + ], + "source": [ + "x = torch.tensor([5.5, 3], dtype=torch.double)\n", + "print(x.dtype)\n", + "x = x.new_ones(5, 3) # new_* methods take in sizes\n", + "print(x)\n", + "x = torch.randn_like(x, dtype=torch.float) # override dtype!\n", + "print(x)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "可以看到,`new_ones` 函数复用了 `x` 的 `dtype` 属性,`randn_like` 函数复用了 `x` 的 `shape` 同时通过手动指定数据类型覆盖了原有的 `dtype` 属性." 
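+    "\n",
+    "\n",
+    "作为补充,其他 `*_like` 函数(如 `torch.zeros_like`、`torch.rand_like`)和 `new_*` 方法(如 `new_tensor`、`new_zeros`)也遵循同样的属性复用规则。下面是一个简单的示意(其中的取值仅作演示):\n",
+    "\n",
+    "```python\n",
+    "base = torch.ones(2, 3, dtype=torch.double)\n",
+    "print(torch.zeros_like(base).dtype)                     # torch.float64,复用 shape 和 dtype\n",
+    "print(base.new_tensor([1, 2, 3]).dtype)                 # torch.float64,复用 dtype\n",
+    "print(torch.rand_like(base, dtype=torch.float).dtype)   # torch.float32,显式覆盖 dtype\n",
+    "```"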
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "---\n", + "\n", + "## Tensor 运算\n", + "\n", + "`Tensor` 支持大量的运算符operator,涉及的语法和函数很多,但大多数都是相通的,下面我们列举一些常用操作及其用法示例。\n", + "\n", + "- 四则运算\n", + "\n", + "下面来看下张量间的元素级的四则运算,即加减乘除的用法。\n", + "\n", + "例如,我们可以对之前创建的两个形状为 (3, 4) 的 `Tensor` 做按元素加法。所得结果形状不变。" + ] + }, + { + "cell_type": "code", + "execution_count": 14, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "tensor([[ 2, 2, 6, 6],\n", + " [ 5, 7, 9, 11],\n", + " [12, 12, 12, 12]])" + ] + }, + "execution_count": 14, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# 按元素加法如下:\n", + "X + Y" + ] + }, + { + "cell_type": "code", + "execution_count": 15, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "tensor([[ 2, 2, 6, 6],\n", + " [ 5, 7, 9, 11],\n", + " [12, 12, 12, 12]])" + ] + }, + "execution_count": 15, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "torch.add(X, Y)" + ] + }, + { + "cell_type": "code", + "execution_count": 16, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "tensor([[ 2, 2, 6, 6],\n", + " [ 5, 7, 9, 11],\n", + " [12, 12, 12, 12]])" + ] + }, + "execution_count": 16, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# alpha 参数\n", + "torch.add(X, Y, alpha = 1)" + ] + }, + { + "cell_type": "code", + "execution_count": 17, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "tensor([[ 2., 2., 6., 6.],\n", + " [ 5., 7., 9., 11.],\n", + " [12., 12., 12., 12.]])" + ] + }, + "execution_count": 17, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# out 参数\n", + "Z = torch.empty(3, 4)\n", + "torch.add(X, Y, out=Z)\n", + "Z" + ] + }, + { + "cell_type": "code", + "execution_count": 18, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "tensor([[ 2., 3., 8., 9.],\n", + " [ 9., 12., 15., 18.],\n", + " [20., 21., 22., 23.]])" + ] + }, + "execution_count": 18, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# 通过 in-place 操作直接将计算结果覆盖到 Y 上 \n", + "Z.add_(X)\n", + "Z" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "注意:在 PyTorch 中,我们约定凡是会覆盖函数调用主体的 in-place 操作,都以后缀 `_` 结束,例如:`x.copy_(y)`,`x.t_()` 等等,都会改变 `x` 的取值。\n", + "\n", + "张量之间的减法、点乘和点除的用法是类似的:" + ] + }, + { + "cell_type": "code", + "execution_count": 19, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "tensor([[True, True, True, True],\n", + " [True, True, True, True],\n", + " [True, True, True, True]])" + ] + }, + "execution_count": 19, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# 按元素减法如下:\n", + "X - Y == torch.sub(X, Y)" + ] + }, + { + "cell_type": "code", + "execution_count": 20, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "tensor([[True, True, True, True],\n", + " [True, True, True, True],\n", + " [True, True, True, True]])" + ] + }, + "execution_count": 20, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# 按元素乘法如下:\n", + "X * Y == torch.mul(X, Y)" + ] + }, + { + "cell_type": "code", + "execution_count": 21, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "tensor([[True, True, True, True],\n", + " [True, True, True, True],\n", + " [True, True, True, True]])" + ] + }, + "execution_count": 21, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# 按元素除法如下:\n", + "X / Y == torch.div(X, Y)" + ] + 
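+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "与 `add_` 类似,按元素的减法、乘法和除法也有对应的 in-place 版本 `sub_`、`mul_`、`div_`。下面给出一个简单的示意(示例中的张量 `p`、`q` 仅作演示用途):"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "p = torch.ones(2, 2)\n",
+    "q = torch.full((2, 2), 2.0)\n",
+    "p.sub_(q)   # 等价于 p = p - q,但结果直接写回 p\n",
+    "p.mul_(q)   # 等价于 p = p * q\n",
+    "p.div_(q)   # 等价于 p = p / q\n",
+    "p"
+   ]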
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "当然,张量和常数间的基本运算也是支持的。"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "除了按元素计算外,我们还可以使用 `torch.mm` 函数做矩阵乘法计算。\n",
+    "\n",
+    "下面将 X 与 Y 的转置做矩阵乘法。由于 X 是 3 行 4 列的矩阵,Y 的转置(`Y.T`)为 4 行 3 列的矩阵,因此两个矩阵相乘得到 3 行 3 列的矩阵。"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 22,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "tensor([[ 18,  20,  10],\n",
+       "        [ 58,  60,  50],\n",
+       "        [ 98, 100,  90]])"
+      ]
+     },
+     "execution_count": 22,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "torch.mm(X, Y.T)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "下面我们来看看其他的一些基础操作:\n",
+    "\n",
+    "- `torch.abs` 函数用于按元素计算绝对值;\n",
+    "- `torch.exp` 函数用于按元素计算以 e 为底的指数;\n",
+    "- `torch.pow` 函数用于按元素进行求幂运算。\n",
+    "\n",
+    "(`Ctrl + /` 或 `Command + /` 快捷键可以快速注释掉某行)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 23,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "tensor([[-0.6457, -0.7714, -0.0353],\n",
+      "        [-0.0553, -0.0271, -0.1796]])\n",
+      "tensor([[0.4169, 0.5950, 0.0012],\n",
+      "        [0.0031, 0.0007, 0.0323]])\n"
+     ]
+    }
+   ],
+   "source": [
+    "a = torch.randn(2, 3)\n",
+    "print(a)\n",
+    "# b = torch.abs(a)\n",
+    "# b = torch.exp(a)\n",
+    "b = torch.pow(a, 2)\n",
+    "print(b)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "我们也可以将多个 `Tensor` 连结(concatenate)。下面分别在行上(维度 0,即形状中的最左边元素)和列上(维度 1,即形状中左起第二个元素)连结两个矩阵。可以看到,输出的第一个 `Tensor` 在维度 0 的长度(6)为两个输入矩阵在维度 0 的长度之和(3+3),而输出的第二个 `Tensor` 在维度 1 的长度(8)为两个输入矩阵在维度 1 的长度之和(4+4)。"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 24,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "tensor([[ 0,  1,  2,  3],\n",
+       "        [ 4,  5,  6,  7],\n",
+       "        [ 8,  9, 10, 11],\n",
+       "        [ 0,  1,  2,  3],\n",
+       "        [ 4,  5,  6,  7],\n",
+       "        [ 8,  9, 10, 11]])"
+      ]
+     },
+     "execution_count": 24,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "torch.cat((X, X))"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 25,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "tensor([[ 0,  1,  2,  3,  2,  1,  4,  3],\n",
+       "        [ 4,  5,  6,  7,  1,  2,  3,  4],\n",
+       "        [ 8,  9, 10, 11,  4,  3,  2,  1]])"
+      ]
+     },
+     "execution_count": 25,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "# torch.cat((X, X), dim=1)\n",
+    "torch.cat((X, Y), dim=1)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "使用条件判断式可以得到元素为 `True` 或 `False` 的新的 `Tensor`。以 `X == Y` 为例,如果 X 和 Y 在相同位置的条件判断为真(值相等),那么新的 `Tensor` 在相同位置的值为 `True`;反之为 `False`。"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 26,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "tensor([[False,  True, False,  True],\n",
+       "        [False, False, False, False],\n",
+       "        [False, False, False, False]])"
+      ]
+     },
+     "execution_count": 26,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "X == Y"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "如果想对 `Tensor` 进行类似 resize/reshape 的操作,可以使用 `Tensor` 的 `view` 方法:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 27,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "torch.Size([4, 4]) torch.Size([16]) torch.Size([2, 8])\n"
+     ]
+    }
+   ],
+   "source": [
+    "x = torch.randn(4, 4)\n",
+    "y = x.view(16)\n",
+    "z = x.view(-1, 8)  # 使用 -1 时 PyTorch 将会自动根据其他维度进行推导\n",
+    "print(x.size(), y.size(), 
z.size()) " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "对 `Tensor` 中的所有元素求和得到只有一个元素的 `Tensor`:" + ] + }, + { + "cell_type": "code", + "execution_count": 28, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "tensor(66)" + ] + }, + "execution_count": 28, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "X.sum()" + ] + }, + { + "cell_type": "code", + "execution_count": 29, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "66" + ] + }, + "execution_count": 29, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "X.sum().item()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "我们可以通过 `.item()` 函数将结果变换为 Python 中的标量。下面例子中 Z 的 L2 范数结果同上例一样是单元素 `Tensor`,但最后结果变换成了 Python 中的标量。" + ] + }, + { + "cell_type": "code", + "execution_count": 30, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "tensor(52.7826)" + ] + }, + "execution_count": 30, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# norm only supports floating-point dtypes\n", + "Z.norm()" + ] + }, + { + "cell_type": "code", + "execution_count": 31, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "52.78257369995117" + ] + }, + "execution_count": 31, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "Z.norm().item()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "我们也可以把 `Y.exp()`、`X.sum()`、`X.norm()` 等分别改写为 `torch.exp(Y)`、`torch.sum(X)`、`torch.norm(X)` 等。" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "通常我们在需要控制张量的取值范围不越界时,需要用到 `torch.clamp` 函数,它可以对输入参数按照自定义的范围进行裁剪,最后将参数裁剪的结果作为输出。输入参数一共有三个,分别是需要进行裁剪的 `Tensor` 变量、裁剪的下边界和裁剪的上边界。" + ] + }, + { + "cell_type": "code", + "execution_count": 32, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "tensor([[ 0.6062, -0.0078, -0.2199],\n", + " [-0.3511, 0.4959, 0.6582]])\n", + "tensor([[ 0.5000, -0.0078, -0.2199],\n", + " [-0.3511, 0.4959, 0.5000]])\n" + ] + } + ], + "source": [ + "a =torch.randn(2,3)\n", + "print(a)\n", + "b =torch.clamp(a, -0.5, 0.5)\n", + "print(b)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "---\n", + "\n", + "\n", + "## 广播机制\n", + "\n", + "前面我们看到如何对两个形状相同的 `Tensor` 做按元素运算。当对两个形状不同的 `Tensor` 按元素运算时,可能会触发**广播(broadcasting)机制**:先适当复制元素使这两个 `Tensor` 形状相同后再按元素运算。\n", + "\n", + "先定义两个 `Tensor`。" + ] + }, + { + "cell_type": "code", + "execution_count": 33, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "(tensor([[0],\n", + " [1],\n", + " [2]]),\n", + " tensor([[0, 1]]))" + ] + }, + "execution_count": 33, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "A = torch.arange(3).reshape((3, 1))\n", + "B = torch.arange(2).reshape((1, 2))\n", + "A, B" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "由于A和B分别是 3 行 1 列和 1 行 2 列的矩阵,如果要计算 A + B,那么 A 中第一列的 3 个元素被广播(复制)到了第二列,而 B 中第一行的 2 个元素被广播(复制)到了第二行和第三行。如此,就可以对 2 个 3 行 2 列的矩阵按元素相加。" + ] + }, + { + "cell_type": "raw", + "metadata": {}, + "source": [ + "A + B" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "简单地说,对两个矩阵进行元素级操作时,PyTorch/NumPy 逐元素地比较它们的形状。只有两种情况下 NumPy/PyTorch 会认为两个矩阵内的两个对应维度是兼容的:\n", + "\n", + ">1. 它们相等;\n", + ">2. 
其中一个是 1 维的。\n", + "\n", + "举个牛逼的例子:\n", + "\n", + "```text\n", + "A (4d array): 8 x 1 x 6 x 1\n", + "B (3d array): 7 x 1 x 5\n", + "Result (4d array): 8 x 7 x 6 x 5\n", + "```\n", + "\n", + "当任何一个维度是 1,那么另一个不为 1 的维度将被用作最终结果的维度。也就是说,尺寸为 1 的维度将延展或“逐个复制”到与另一个维度匹配。" + ] + }, + { + "cell_type": "code", + "execution_count": 35, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "torch.Size([8, 7, 6, 5])" + ] + }, + "execution_count": 35, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "A = torch.arange(48).reshape((8, 1, 6, 1))\n", + "B = torch.arange(35).reshape((7, 1, 5))\n", + "(A + B).size()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "--- \n", + "## 索引\n", + "\n", + "在 `Tensor` 中,可以使用标准的 **NumPy-like** 的索引(index)代表了元素的位置。`Tensor` 的索引从 0 开始逐一递增。例如,一个 3 行 2 列的矩阵的行索引分别为 0、1 和 2,列索引分别为 0 和 1。\n", + "\n", + "在下面的例子中,我们指定了 `Tensor` 的行索引截取范围 `[1:3]`。依据**左闭右开**指定范围的惯例,它截取了矩阵 X 中行索引为 1 和 2 的两行。" + ] + }, + { + "cell_type": "code", + "execution_count": 36, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "tensor([[ 4, 5, 6, 7],\n", + " [ 8, 9, 10, 11]])" + ] + }, + "execution_count": 36, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "X[1:3]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "我们可以指定 `Tensor` 中需要访问的单个元素的位置,如矩阵中行和列的索引,并为该元素重新赋值。" + ] + }, + { + "cell_type": "code", + "execution_count": 37, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "tensor([[ 0, 1, 2, 3],\n", + " [ 4, 5, 9, 7],\n", + " [ 8, 9, 10, 11]])" + ] + }, + "execution_count": 37, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "X[1, 2] = 9\n", + "X" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "当然,我们也可以截取一部分元素,并为它们重新赋值。在下面的例子中,我们为行索引为 1 的每一列元素重新赋值。" + ] + }, + { + "cell_type": "code", + "execution_count": 38, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "tensor([[ 0, 1, 2, 3],\n", + " [12, 12, 12, 12],\n", + " [ 8, 9, 10, 11]])" + ] + }, + "execution_count": 38, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "X[1:2, :] = 12\n", + "X" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "---\n", + "\n", + "## 运算的内存开销" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "在前面的例子里我们对每个操作新开内存来存储运算结果。举个例子,即使像 `Y = X + Y` 这样的运算,我们也会新开内存,然后将 Y 指向新内存。为了演示这一点,我们可以使用Python 自带的 `id` 函数:如果两个实例的ID一致,那么它们所对应的内存地址相同;反之则不同。" + ] + }, + { + "cell_type": "code", + "execution_count": 39, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "False" + ] + }, + "execution_count": 39, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "before = id(Y)\n", + "Y = Y + X\n", + "id(Y) == before" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "如果想指定结果到特定内存,我们可以使用前面介绍的索引来进行替换操作。在下面的例子中,我们先通过 `torch.zeros_like` 创建和 Y 形状相同且元素为 0 的 `Tensor`,记为 Z。接下来,我们把 X + Y 的结果通过 `[:]` 写进 Z 对应的内存中。" + ] + }, + { + "cell_type": "code", + "execution_count": 40, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "True" + ] + }, + "execution_count": 40, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "Z = torch.zeros_like(Y)\n", + "before = id(Z)\n", + "Z[:] = X + Y\n", + "id(Z) == before" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "实际上,上例中我们还是为 X + Y 开了临时内存来存储计算结果,再复制到 Z 对应的内存。如果想避免这个临时内存开销,我们可以使用运算符全名函数中的 
out 参数。" + ] + }, + { + "cell_type": "code", + "execution_count": 41, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "True" + ] + }, + "execution_count": 41, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "torch.add(X, Y, out=Z)\n", + "id(Z) == before" + ] + }, + { + "cell_type": "code", + "execution_count": 42, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "False" + ] + }, + "execution_count": 42, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# 与上例对比着看\n", + "Z = torch.add(X, Y)\n", + "id(Z) == before" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "如果 X 的值在之后的程序中不会复用,我们也可以用 `X[:] = X + Y` 或者 `X += Y` 来减少运算的内存开销。" + ] + }, + { + "cell_type": "code", + "execution_count": 43, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "True" + ] + }, + "execution_count": 43, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "before = id(X)\n", + "X += Y\n", + "id(X) == before" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "---\n", + "\n", + "## `Tensor` 和 NumPy 相互变换" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "PyTorch 中可以很方便的将 Torch 的 `Tensor` 同 NumPy 的 `ndarray` 进行互相转换,相当于在 NumPy 和 PyTorch 间建立了一座沟通的桥梁,这将会让我们的想法实现起来变得非常方便。\n", + "\n", + ">注意:Torch Tensor 和 NumPy ndarray 底层是分享内存空间的,也就是说改变其中之一会同时改变另一个(前提是你是在 CPU 上使用 Torch Tensor)。\n", + "\n", + "我们可以通过 `torch.from_numpy` 函数和 `.numpy()` 函数令数据在 `Tensor` 和 NumPy 格式之间相互变换。下面将 NumPy 实例变换成 `Tensor` 实例。" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "将一个 Torch Tensor 转换为 Numpy Array:" + ] + }, + { + "cell_type": "code", + "execution_count": 44, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "tensor([1., 1., 1., 1., 1.]) \n", + "[1. 1. 1. 1. 1.] \n" + ] + } + ], + "source": [ + "a = torch.ones(5)\n", + "print(a, type(a))\n", + "b = a.numpy()\n", + "print(b, type(b))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "我们来验证下它们的取值是如何互相影响的:" + ] + }, + { + "cell_type": "code", + "execution_count": 45, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "tensor([2., 2., 2., 2., 2.])\n", + "[2. 2. 2. 2. 2.]\n" + ] + } + ], + "source": [ + "a.add_(1)\n", + "print(a)\n", + "print(b)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "将一个 NumPy Array 转换为 Torch Tensor:" + ] + }, + { + "cell_type": "code", + "execution_count": 46, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[2. 2. 2. 2. 
2.]\n", + "tensor([2., 2., 2., 2., 2.], dtype=torch.float64)\n" + ] + } + ], + "source": [ + "import numpy as np\n", + "\n", + "a = np.ones(5)\n", + "b = torch.from_numpy(a)\n", + "np.add(a, 1, out=a)\n", + "print(a)\n", + "print(b) " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "注:所有 CPU 上的 Tensors,除了 CharTensor 均支持与 NumPy 的互相转换" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "--- \n", + "\n", + "## CUDA Tensors\n", + "\n", + "可以通过以下代码进行验证,是否安装 GPU 版本的 PyTorch。" + ] + }, + { + "cell_type": "code", + "execution_count": 47, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "False\n" + ] + } + ], + "source": [ + "print(torch.cuda.is_available())" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "如果输出 `True`,代表你安装了 GPU 版本的 PyTorch。\n", + "\n", + "`Tensor` 可以通过 `.to` 函数移动到任何我们定义的设备 device 上,观察如下代码:" + ] + }, + { + "cell_type": "code", + "execution_count": 48, + "metadata": {}, + "outputs": [], + "source": [ + "# let us run this cell only if CUDA is available\n", + "# We will use ``torch.device`` objects to move tensors in and out of GPU\n", + "if torch.cuda.is_available():\n", + " device = torch.device(\"cuda\") # a CUDA device object\n", + " y = torch.ones_like(x, device=device) # directly create a tensor on GPU\n", + " x = x.to(device) # or just use strings ``.to(\"cuda\")``\n", + " z = x + y\n", + " print(z)\n", + " print(z.to(\"cpu\", torch.double)) # ``.to`` can also change dtype together! " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "---\n", + "\n", + "想要学习更多?这里的[官方教程](https://pytorch.org/docs/stable/torch.html)有更多关于 `tensor` 操作的介绍,介绍了 100 多个Tensor运算,包括转置,索引,切片,数学运算,线性代数,随机数等。\n", + "\n", + "---" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# PyTorch 自动求梯度\n", + "\n", + "\n", + "在深度学习中,我们经常需要对函数求梯度(gradient)。PyTorch 提供的 `autograd` 包能够根据输入和前向传播过程自动构建计算图,并执行反向传播。本节将介绍如何使用 `autograd` 包来进行自动求梯度的有关操作。\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "---\n", + "\n", + "## 基本概念\n", + "\n", + "`Tensor` 是 PyTorch 实现多维数组计算和自动微分的关键数据结构。一方面,它类似于 NumPy 的 NDArray,用户可以对 `Tensor` 进行各种数学运算;另一方面,当设置 `.requires_grad = True` 之后,在其上进行的各种操作就会被记录下来,它将开始追踪在其上的所有操作,从而利用链式法则进行梯度传播。完成计算后,可以调用 `.backward()` 来完成所有梯度计算。此 `Tensor` 的梯度将累积到 `.grad` 属性中。\n", + "\n", + "如果不想要被继续追踪,可以调用 `.detach()` 将其从追踪记录中分离出来,可以防止将来的计算被追踪,这样梯度就传不过去了。此外,还可以用 `with torch.no_grad()` 将不想被追踪的操作代码块包裹起来,这种方法在评估模型的时候很常用,因为在评估模型时,我们并不需要计算可训练参数(`requires_grad=True`)的梯度。\n", + "\n", + "\n", + "我们先看一个简单例子:对函数 $y=2x^⊤x$ 求关于列向量 $x$ 的梯度。\n", + "\n", + "我们先创建变量 x,并赋初值。" + ] + }, + { + "cell_type": "code", + "execution_count": 49, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "tensor([[0.],\n", + " [1.],\n", + " [2.],\n", + " [3.]])" + ] + }, + "execution_count": 49, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "x = torch.arange(4, dtype=torch.float).reshape((4, 1))\n", + "x" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "设定 `requires_grad` 为 `True`,因为需要计算梯度:" + ] + }, + { + "cell_type": "code", + "execution_count": 50, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "tensor([[0.],\n", + " [1.],\n", + " [2.],\n", + " [3.]], requires_grad=True)" + ] + }, + "execution_count": 50, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# only Tensors of floating point dtype can require gradients\n", + "x.requires_grad 
= True \n",
+    "x"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "下面定义有关变量 x 的函数 $y=2x^⊤x$。"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 51,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "tensor([[28.]], grad_fn=)"
+      ]
+     },
+     "execution_count": 51,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "y = 2 * torch.mm(x.T, x)\n",
+    "y"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "由于 x 的形状为 (4, 1),y 是一个只含单个元素的张量(可视为标量)。接下来我们可以通过调用 `backward` 函数自动求梯度。"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 52,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# y.backward()\n",
+    "y.backward(retain_graph=True)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "函数 $y=2x^⊤x$ 关于 x 的梯度应为 $4x$。现在我们来验证一下求出来的梯度是正确的。"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 53,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "tensor([[ 0.],\n",
+       "        [ 4.],\n",
+       "        [ 8.],\n",
+       "        [12.]])"
+      ]
+     },
+     "execution_count": 53,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "# assert (x.grad - 4 * x).norm().item() == 0\n",
+    "x.grad  # 注意,x.grad 是和 x 同形的张量。"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "从代码中可以发现,对 y 求导使用的是 `y.backward()` 方法,它是 `Tensor` 类的一个实例方法,函数体只有一行代码,就是调用 `torch.autograd.backward()`:\n",
+    "\n",
+    "```python\n",
+    "def backward(self, gradient=None, retain_graph=None, create_graph=False):\n",
+    "    torch.autograd.backward(self, gradient, retain_graph, create_graph)\n",
+    "```\n",
+    "\n",
+    "从代码调试中也可以知道,张量的 `backward()` 方法实际直接调用了 `torch.autograd` 中的 `backward()`。`backward()` 中有一个 `retain_graph` 参数,它用来控制是否保留计算图:如果之后还想再执行一次反向传播,必须设置 `retain_graph=True`,否则代码会报错。因为在默认情况下,每进行一次 `backward` 之后计算图都会被释放,无法再进行一次 `backward()` 操作。"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "再来看一个例子:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 54,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "tensor([5.])\n"
+     ]
+    }
+   ],
+   "source": [
+    "torch.manual_seed(10)  # 设置随机数种子\n",
+    "\n",
+    "w = torch.tensor([1.], requires_grad=True)  # 创建叶子张量,并设定 requires_grad=True,因为需要计算梯度\n",
+    "x = torch.tensor([2.], requires_grad=True)  # 创建叶子张量,并设定 requires_grad=True,因为需要计算梯度\n",
+    "\n",
+    "a = torch.add(w, x)  # 执行运算并搭建动态计算图\n",
+    "b = torch.add(w, 1)\n",
+    "y = torch.mul(a, b)\n",
+    "\n",
+    "y.backward(retain_graph=True)\n",
+    "print(w.grad)  # 输出为 tensor([5.])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "**我们不允许张量对张量求导,只允许标量对张量求导,求导结果是和自变量同形的张量**。所以必要时我们要通过把张量的所有元素加权求和的方式将其转换为标量。举个例子,假设 `y` 由自变量 `x` 计算而来,`w` 是和 `y` 同形的张量,则 `y.backward(w)` 的含义是:先计算 `l = torch.sum(y * w)`,`l` 是个标量,然后求 `l` 对自变量 `x` 的导数。\n",
+    "\n",
+    "我们举个例子看看:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 55,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "tensor([[2., 4.],\n",
+      "        [6., 8.]], grad_fn=)\n"
+     ]
+    }
+   ],
+   "source": [
+    "x = torch.tensor([1.0, 2.0, 3.0, 4.0], requires_grad=True)\n",
+    "y = 2 * x\n",
+    "z = y.view(2, 2)\n",
+    "print(z)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "现在 `z` 不是一个标量,所以在调用 `backward` 时需要传入一个和 `z` 同形的权重张量,用来做加权求和得到一个标量。"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 56,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# z.backward() # 会报错"
+   ]
+  },
+  {
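+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "直接调用 `z.backward()` 会报错,因为 PyTorch 只会为标量输出隐式地构造初始梯度。一个便于理解的等价关系如下(仅为示意写法):\n",
+    "\n",
+    "```python\n",
+    "# 对非标量 z,传入与 z 同形的权重 v 后,下面两种写法等价:\n",
+    "# z.backward(v)\n",
+    "# torch.sum(z * v).backward()\n",
+    "```\n",
+    "\n",
+    "下面传入权重张量 `v` 验证这一点:"
+   ]
+  },
+  {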
"cell_type": "code", + "execution_count": 57, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "tensor([2.0000, 0.2000, 0.0200, 0.0020])\n" + ] + } + ], + "source": [ + "v = torch.tensor([[1.0, 0.1], [0.01, 0.001]], dtype=torch.float)\n", + "z.backward(v, retain_graph=True)\n", + "print(x.grad)\n", + "# 注意,x.grad 是和 x 同形的张量。" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "---\n", + "\n", + "## 训练模式和测试模式\n", + "\n", + "在有些情况下,同一个模型在训练模式和预测模式下的行为并不相同。与训练模型不同的是,由于不需要计算梯度,所以测试网络的代码通常使用 `.detach()` 函数或者在 `torch.no_grad()` 下完成。\n", + "\n", + "来看看,我们可以中断梯度追踪的例子:" + ] + }, + { + "cell_type": "code", + "execution_count": 58, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "True\n", + "tensor(1., grad_fn=) True\n", + "tensor(1.) False\n", + "tensor(2., grad_fn=) True\n" + ] + } + ], + "source": [ + "x = torch.tensor(1.0, requires_grad=True)\n", + "y1 = x ** 2 \n", + "\n", + "# Way 1\n", + "with torch.no_grad():\n", + " y2 = x ** 3\n", + "################\n", + "\n", + "# # Way 2\n", + "# y2 = x ** 3\n", + "# y2 = y2.detach()\n", + "################\n", + "\n", + "y3 = y1 + y2\n", + " \n", + "print(x.requires_grad)\n", + "print(y1, y1.requires_grad) # True\n", + "print(y2, y2.requires_grad) # False\n", + "print(y3, y3.requires_grad) # True" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "可以看到,上面的 `y2` 是没有 `grad_fn` 而且 `y2.requires_grad=False` 的,而 `y3` 是有 `grad_fn` 的。如果我们将 `y3` 对 `x` 求梯度的话会是多少呢?" + ] + }, + { + "cell_type": "code", + "execution_count": 59, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "tensor(2.)\n" + ] + } + ], + "source": [ + "y3.backward()\n", + "print(x.grad)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "为啥是 2 呢?$y_3 = y_1 + y_2 = x^2 + x^3$,当初始 $x=1$ 时,$y_3$ 对 $x$ 的梯度 $\\frac{dy_3}{dx}=2x+3x^2$ 不该是 5 嘛?\n", + "\n", + "事实上,由于 y2 的定义是被 `torch.no_grad():` 所包裹的,所以与 y2 有关的梯度是不会回传的,只有与 $y_1$ 有关的梯度才会回传。" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + " 此外,如果我们想要修改 `tensor` 的数值,但是又不希望被 `autograd` 记录(即不会影响反向传播),那么我么可以对 `tensor.data` 进行操作。" + ] + }, + { + "cell_type": "code", + "execution_count": 60, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "tensor([1.])\n", + "False\n", + "tensor([100.], requires_grad=True)\n", + "tensor([2.])\n" + ] + } + ], + "source": [ + "x = torch.ones(1,requires_grad=True)\n", + "\n", + "print(x.data) # 还是一个tensor\n", + "print(x.data.requires_grad) # 但是已经是独立于计算图之外\n", + "\n", + "y = 2 * x\n", + "x.data *= 100 # 只改变了值,不会记录在计算图,所以不会影响梯度传播\n", + "\n", + "y.backward()\n", + "print(x) # 更改data的值也会影响tensor的值\n", + "print(x.grad)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "---\n", + "\n", + "## 对 Python 控制流求梯度\n", + "\n", + "使用 Pytorch 的一个便利之处是,即使函数的计算图包含了 Python 的控制流(如条件和循环控制),我们也有可能对变量求梯度。\n", + "\n", + "考虑下面程序,其中包含 Python 的条件和循环控制。需要强调的是,这里循环(while循环)迭代的次数和条件判断(if 语句)的执行都取决于输入 `a` 的值。" + ] + }, + { + "cell_type": "code", + "execution_count": 61, + "metadata": {}, + "outputs": [], + "source": [ + "def f(a):\n", + " b = a * 2\n", + " while b.norm().item() < 1000:\n", + " b = b * 2\n", + " if b.sum().item() > 0:\n", + " c = b\n", + " else:\n", + " c = 100 * b\n", + " return c" + ] + }, + { + "cell_type": "markdown", + 
"metadata": {}, + "source": [ + "我们像之前一样使用 `requires_grad` 函数记录计算梯度,并调用 `backward` 函数求梯度。" + ] + }, + { + "cell_type": "code", + "execution_count": 62, + "metadata": {}, + "outputs": [], + "source": [ + "a = torch.randn(1, requires_grad=True)\n", + "c = f(a)\n", + "c.backward()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "我们来分析一下上面定义的 `f` 函数。事实上,给定任意输入 `a`,其输出必然是 `f(a) = Para * a` 的形式,其中标量系数 `Para` 的值取决于输入 `a`。由于 `c = f(a)` 有关 `a` 的梯度为 `Para`,且值为 `c / a`,我们可以像下面这样验证对本例中控制流求梯度的结果的正确性。" + ] + }, + { + "cell_type": "code", + "execution_count": 63, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "tensor([True])" + ] + }, + "execution_count": 63, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "a.grad == c / a" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "---\n", + "\n", + "> 注意事项:\n", + ">1. 梯度不自动清零,如果不清零梯度会累加,所以需要在每次梯度后人为清零。\n", + "2. 依赖于叶子结点的结点,`requires_grad` 默认为 `True`。\n", + "3. 叶子结点不可执行 in-place,因为其他节点在计算梯度时需要用到叶子节点,所以叶子地址中的值不得改变否则会是其他节点求梯度时出错。所以叶子节点不能进行原位计算。\n", + "3. 注意在 `y.backward()` 时,如果 y 是标量,则不需要为 `backward()` 传⼊任何参数;否则,需要传⼊一个与 y 同形的 `Tensor`。" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "pytorch", + "language": "python", + "name": "pytorch" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.8.12" + }, + "toc-autonumbering": true, + "toc-showcode": false, + "toc-showmarkdowntxt": false + }, + "nbformat": 4, + "nbformat_minor": 4 +}