From bdf8894bfb04ab9804fa785902fc7606b10cca3e Mon Sep 17 00:00:00 2001 From: wyy4github Date: Tue, 13 Aug 2019 20:44:09 +0800 Subject: [PATCH] ergre r --- Content.md | 2 +- SeventhSection/ReinforcementLearning.md | 385 ++++++++++++ SeventhSection/notation/B.gif | Bin 0 -> 155 bytes SeventhSection/notation/Q.gif | Bin 0 -> 169 bytes SeventhSection/notation/Q_.gif | Bin 0 -> 196 bytes SeventhSection/notation/Q_s_left.gif | Bin 0 -> 451 bytes SeventhSection/notation/Q_s_right.gif | Bin 0 -> 495 bytes SeventhSection/notation/Q_sa.gif | Bin 0 -> 390 bytes SeventhSection/notation/R_t0.gif | Bin 0 -> 209 bytes SeventhSection/notation/V()=.gif | Bin 0 -> 841 bytes SeventhSection/notation/V(s)=0.gif | Bin 0 -> 346 bytes SeventhSection/notation/V_st+1.gif | Bin 0 -> 337 bytes SeventhSection/notation/delta.gif | Bin 0 -> 131 bytes SeventhSection/notation/delta_.gif | Bin 0 -> 155 bytes SeventhSection/notation/gamma.gif | Bin 0 -> 124 bytes SeventhSection/notation/公式1.PNG | Bin 0 -> 1738 bytes SeventhSection/notation/公式2.PNG | Bin 0 -> 1913 bytes SeventhSection/notation/公式3.PNG | Bin 0 -> 1985 bytes SeventhSection/notation/公式4.PNG | Bin 0 -> 2378 bytes SeventhSection/notation/公式5.PNG | Bin 0 -> 2586 bytes SeventhSection/notation/公式6.PNG | Bin 0 -> 6089 bytes SixthSection/Dcgan.md | 747 +++++++++++++++++++++++ SixthSection/notation/D(x).gif | Bin 0 -> 278 bytes SixthSection/notation/D.gif | Bin 0 -> 149 bytes SixthSection/notation/D_G(z).gif | Bin 0 -> 404 bytes SixthSection/notation/G(z).gif | Bin 0 -> 270 bytes SixthSection/notation/G.gif | Bin 0 -> 148 bytes SixthSection/notation/P_g=P_data.gif | Bin 0 -> 352 bytes SixthSection/notation/log(1-).gif | Bin 0 -> 633 bytes SixthSection/notation/log(1-D(G(x))).gif | Bin 0 -> 638 bytes SixthSection/notation/log(1-x).gif | Bin 0 -> 383 bytes SixthSection/notation/log(x).gif | Bin 0 -> 330 bytes SixthSection/notation/log+log(1-).gif | Bin 0 -> 961 bytes SixthSection/notation/log+log_DGz.gif | Bin 0 -> 915 bytes SixthSection/notation/logD(x).gif | Bin 0 -> 400 bytes SixthSection/notation/log_DGz.gif | Bin 0 -> 578 bytes SixthSection/notation/p_data.gif | Bin 0 -> 239 bytes SixthSection/notation/p_g.gif | Bin 0 -> 175 bytes SixthSection/notation/x.gif | Bin 0 -> 124 bytes SixthSection/notation/z.gif | Bin 0 -> 117 bytes SixthSection/notation/公式1.PNG | Bin 0 -> 4759 bytes SixthSection/notation/公式2.PNG | Bin 0 -> 3601 bytes 42 files changed, 1133 insertions(+), 1 deletion(-) create mode 100644 SeventhSection/ReinforcementLearning.md create mode 100644 SeventhSection/notation/B.gif create mode 100644 SeventhSection/notation/Q.gif create mode 100644 SeventhSection/notation/Q_.gif create mode 100644 SeventhSection/notation/Q_s_left.gif create mode 100644 SeventhSection/notation/Q_s_right.gif create mode 100644 SeventhSection/notation/Q_sa.gif create mode 100644 SeventhSection/notation/R_t0.gif create mode 100644 SeventhSection/notation/V()=.gif create mode 100644 SeventhSection/notation/V(s)=0.gif create mode 100644 SeventhSection/notation/V_st+1.gif create mode 100644 SeventhSection/notation/delta.gif create mode 100644 SeventhSection/notation/delta_.gif create mode 100644 SeventhSection/notation/gamma.gif create mode 100644 SeventhSection/notation/公式1.PNG create mode 100644 SeventhSection/notation/公式2.PNG create mode 100644 SeventhSection/notation/公式3.PNG create mode 100644 SeventhSection/notation/公式4.PNG create mode 100644 SeventhSection/notation/公式5.PNG create mode 100644 SeventhSection/notation/公式6.PNG create mode 100644 SixthSection/Dcgan.md create mode 
100644 SixthSection/notation/D(x).gif create mode 100644 SixthSection/notation/D.gif create mode 100644 SixthSection/notation/D_G(z).gif create mode 100644 SixthSection/notation/G(z).gif create mode 100644 SixthSection/notation/G.gif create mode 100644 SixthSection/notation/P_g=P_data.gif create mode 100644 SixthSection/notation/log(1-).gif create mode 100644 SixthSection/notation/log(1-D(G(x))).gif create mode 100644 SixthSection/notation/log(1-x).gif create mode 100644 SixthSection/notation/log(x).gif create mode 100644 SixthSection/notation/log+log(1-).gif create mode 100644 SixthSection/notation/log+log_DGz.gif create mode 100644 SixthSection/notation/logD(x).gif create mode 100644 SixthSection/notation/log_DGz.gif create mode 100644 SixthSection/notation/p_data.gif create mode 100644 SixthSection/notation/p_g.gif create mode 100644 SixthSection/notation/x.gif create mode 100644 SixthSection/notation/z.gif create mode 100644 SixthSection/notation/公式1.PNG create mode 100644 SixthSection/notation/公式2.PNG diff --git a/Content.md b/Content.md index 5c992a9..982b8d4 100644 --- a/Content.md +++ b/Content.md @@ -33,5 +33,5 @@ ### 4.深度学习NLP ### 5.用序列翻译网络和注意的顺序 -## 第六章:PyTorch之生成对抗网络 +## 第六章:PyTorch之深度卷积对抗生成网络 ## 第七章:PyTorch之强化学习 \ No newline at end of file diff --git a/SeventhSection/ReinforcementLearning.md b/SeventhSection/ReinforcementLearning.md new file mode 100644 index 0000000..636f117 --- /dev/null +++ b/SeventhSection/ReinforcementLearning.md @@ -0,0 +1,385 @@ +# 强化学习(DQN)教程 +本教程介绍如何使用PyTorch从[OpenAI Gym](https://gym.openai.com/)中的 CartPole-v0 任务上训练一个Deep Q Learning (DQN) 代理。 + +## 1.任务 + +代理人必须在两个动作之间做出决定 - 向左或向右移动推车 - 以使连接到它的杆保持直立。您可以在[Gym](https://gym.openai.com/envs/CartPole-v0) +网站上找到官方排行榜,里面包含各种算法以及可视化。 + +![](https://pytorch.org/tutorials/_images/cartpole1.gif) + +当代理观察环境的当前状态并选择动作时,环境转换到新状态,并且还返回指示动作的后果的奖励。在此任务中,每增加一个时间步长的 +奖励为+1,如果杆落得太远或者推车距离中心超过2.4个单位,则环境终止。这意味着更好的表现场景将持续更长的时间,以及积累更大的回报。 + +CartPole任务的设计使得代理的输入是4个实际值,表示环境状态(位置,速度等)。然而,神经网络可以纯粹通过观察场景来解决任务, +因此我们将使用以cart为中心的屏幕补丁作为输入。也因为如此,我们的结果与官方排行榜的结果无法直接比较 - 因为我们的任务 +要困难得多。而且不幸的是,这确实减慢了训练速度,因为我们必须渲染所有帧。 + +严格地说,我们将状态显示为当前屏幕补丁与前一个补丁之间的差异。这将允许代理从一个图像中考虑杆的速度。 + +## 2.需要的包 + +首先,让我们导入所需的包。首先,我们需要[gym](https://gym.openai.com/docs)来得到环境(使用`pip install gym`)。我们还将 +使用PyTorch中的以下内容: + +* 神经网络(`torch.nn`)
+* 优化(`torch.optim`)
+* 自动微分(`torch.autograd`)
+* 视觉任务的实用程序([`torchvision`](https://github.com/pytorch/vision))- 一个单独的包
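
在进入正文代码之前,可以先用下面这个小片段确认 gym 安装正常,并看一眼 CartPole 的状态空间(4 个实数)与动作空间(左/右两个离散动作)。这只是一个可选的检查示例(假设使用与本教程相同的旧版 gym API),与正文代码相互独立:

```buildoutcfg
# 可选:检查 gym 环境是否可用,并查看 CartPole 的状态/动作空间
import gym

env = gym.make('CartPole-v0')
print(env.observation_space)   # 4 维连续状态:小车位置、小车速度、杆角度、杆角速度
print(env.action_space)        # Discrete(2):0 表示向左推车,1 表示向右推车

obs = env.reset()              # 旧版 gym API:reset() 直接返回初始状态向量
print(obs)
env.close()
```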
+ +```buildoutcfg +import gym +import math +import random +import numpy as np +import matplotlib +import matplotlib.pyplot as plt +from collections import namedtuple +from itertools import count +from PIL import Image + +import torch +import torch.nn as nn +import torch.optim as optim +import torch.nn.functional as F +import torchvision.transforms as T + + +env = gym.make('CartPole-v0').unwrapped + +# set up matplotlib +is_ipython = 'inline' in matplotlib.get_backend() +if is_ipython: + from IPython import display + +plt.ion() + +# if gpu is to be used +device = torch.device("cuda" if torch.cuda.is_available() else "cpu") +``` + +## 3. 复现记忆(Replay Memory) +我们将使用经验重播记忆来训练我们的DQN。它存储代理观察到的转换,允许我们之后重用此数据。通过随机抽样,转换构建相关的一个批次。 +已经表明经验重播记忆极大地稳定并改善了DQN训练程序。 + +为此,我们需要两个阶段: +* `Transition`:一个命名元组,表示我们环境中的单个转换。它实际上将(状态,动作)对映射到它们的(next_state,reward)结果, +状态是屏幕差异图像,如稍后所述
+* `ReplayMemory`:有界大小的循环缓冲区,用于保存最近观察到的过渡。它还实现了一个`.sample()`方法,用于为训练选择随机batch +的转换。 + +```buildoutcfg +Transition = namedtuple('Transition', + ('state', 'action', 'next_state', 'reward')) + + +class ReplayMemory(object): + + def __init__(self, capacity): + self.capacity = capacity + self.memory = [] + self.position = 0 + + def push(self, *args): + """Saves a transition.""" + if len(self.memory) < self.capacity: + self.memory.append(None) + self.memory[self.position] = Transition(*args) + self.position = (self.position + 1) % self.capacity + + def sample(self, batch_size): + return random.sample(self.memory, batch_size) + + def __len__(self): + return len(self.memory) +``` + +现在,让我们定义我们的模型。但首先,让我们快速回顾一下DQN是什么。 + +## 4. DQN 算法 +我们的环境是确定性的,因此为了简单起见,这里给出的所有方程式也是确定性的。在强化学习文献中,它们还包含对环境中随机转变的 +期望。 + +我们的目标是训练出一种政策,试图最大化折现累积奖励![](notation/公式1.PNG),其中![](notation/R_t0.gif)也称为回报。折扣![](notation/gamma.gif) +应该是介于0和1之间的常数,以确保总和收敛。对于我们的代理来说,对比不确定的远期未来,它更看重它们相当有信心的不久的将来。 + +Q-learning背后的主要思想是,如果我们有一个函数![](notation/公式2.PNG),它可以告诉我们的回报是什么,如果我们要在给定状态下 +采取行动,那么我们可以轻松地构建最大化我们奖励的政策: + +![](notation/公式3.PNG) + +但是,我们不了解世界的一切,因此我们无法访问![](notation/Q_.gif)。但是,由于神经网络是通用函数逼近器,我们可以简单地创建 +一个并训练从而使得它类似于![](notation/Q_.gif)。 + +对于我们的训练更新规则,我们将使用一个事实,即某些策略的每个![](notation/Q.gif)函数都服从 Bellman 方程: + +![](notation/公式4.PNG) + +平等的两边之间的差异被称为时间差异误差,![](notation/delta.gif): + +![](notation/公式5.PNG) + +为了最大限度地降低此错误,我们将使用[Huber损失](https://en.wikipedia.org/wiki/Huber_loss)。当误差很小时,Huber损失就像均 +方误差一样,但是当误差很大时,就像平均绝对误差一样 - 当![](notation/Q.gif)的估计噪声很多时,这使得它对异常值更加鲁棒。 +我们通过从重放内存中采样的一批转换![](notation/B.gif)来计算: + +![](notation/公式6.PNG) + +## 5. Q_网络(Q_network) +我们的模型将是一个卷积神经网络,它接收当前和之前的屏幕补丁之间的差异。它有两个输出,分别代表![](notation/Q_s_left.gif)和 +![](notation/Q_s_right.gif)(其中s是网络的输入)。实际上,网络正在尝试预测在给定当前输入的情况下采取每个动作的预期回报。 + +```buildoutcfg +class DQN(nn.Module): + + def __init__(self, h, w, outputs): + super(DQN, self).__init__() + self.conv1 = nn.Conv2d(3, 16, kernel_size=5, stride=2) + self.bn1 = nn.BatchNorm2d(16) + self.conv2 = nn.Conv2d(16, 32, kernel_size=5, stride=2) + self.bn2 = nn.BatchNorm2d(32) + self.conv3 = nn.Conv2d(32, 32, kernel_size=5, stride=2) + self.bn3 = nn.BatchNorm2d(32) + + # 线性输入连接的数量取决于conv2d层的输出,因此取决于输入图像的大小,因此请对其进行计算。 + def conv2d_size_out(size, kernel_size = 5, stride = 2): + return (size - (kernel_size - 1) - 1) // stride + 1 + convw = conv2d_size_out(conv2d_size_out(conv2d_size_out(w))) + convh = conv2d_size_out(conv2d_size_out(conv2d_size_out(h))) + linear_input_size = convw * convh * 32 + self.head = nn.Linear(linear_input_size, outputs) + + # 使用一个元素调用以确定下一个操作,或在优化期间调用batch。返回tensor([[left0exp,right0exp]...]). + def forward(self, x): + x = F.relu(self.bn1(self.conv1(x))) + x = F.relu(self.bn2(self.conv2(x))) + x = F.relu(self.bn3(self.conv3(x))) + return self.head(x.view(x.size(0), -1)) +``` + +## 6. 
输入提取 +下面的代码是用于从环境中提取和处理渲染图像的实用程序。它使用了`torchvision`软件包,可以轻松构成图像变换。运行单元后,它将 +显示一个提取的示例补丁。 + +```buildoutcfg +resize = T.Compose([T.ToPILImage(), + T.Resize(40, interpolation=Image.CUBIC), + T.ToTensor()]) + + +def get_cart_location(screen_width): + world_width = env.x_threshold * 2 + scale = screen_width / world_width + return int(env.state[0] * scale + screen_width / 2.0) # MIDDLE OF CART + +def get_screen(): + # gym要求的返回屏幕是400x600x3,但有时更大,如800x1200x3。 将其转换为torch order(CHW)。 + screen = env.render(mode='rgb_array').transpose((2, 0, 1)) + # cart位于下半部分,因此不包括屏幕的顶部和底部 + _, screen_height, screen_width = screen.shape + screen = screen[:, int(screen_height*0.4):int(screen_height * 0.8)] + view_width = int(screen_width * 0.6) + cart_location = get_cart_location(screen_width) + if cart_location < view_width // 2: + slice_range = slice(view_width) + elif cart_location > (screen_width - view_width // 2): + slice_range = slice(-view_width, None) + else: + slice_range = slice(cart_location - view_width // 2, + cart_location + view_width // 2) + # 去掉边缘,使得我们有一个以cart为中心的方形图像 + screen = screen[:, :, slice_range] + # 转换为float类型,重新缩放,转换为torch张量 + # (this doesn't require a copy) + screen = np.ascontiguousarray(screen, dtype=np.float32) / 255 + screen = torch.from_numpy(screen) + # 调整大小并添加batch维度(BCHW) + return resize(screen).unsqueeze(0).to(device) + + +env.reset() +plt.figure() +plt.imshow(get_screen().cpu().squeeze(0).permute(1, 2, 0).numpy(), + interpolation='none') +plt.title('Example extracted screen') +plt.show() +``` + +## 7. 训练 +#### 7.1 超参数和实用程序 +这个单元实例化我们的模型及其优化器,并定义了一些实用程序: +* `select_action`:将根据epsilon贪婪政策选择一项行动。简而言之,我们有时会使用我们的模型来选择动作,有时我们只会统一采样。 +选择随机操作的概率将从`EPS_START`开始,并将以指数方式向`EPS_END`衰减。`EPS_DECAY`控制衰减的速度
+* `plot_durations`:帮助绘制每个episode的持续时间,以及最近100个episode的平均值(官方评估中使用的度量)。该图将位于包含主要训练循环的单元下方,并在每个episode结束后更新。
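
上面 `select_action` 条目里提到的 epsilon 贪婪衰减,可以用下面的小例子直观感受一下:随着 `steps_done` 增大,随机探索的概率从 `EPS_START` 按指数规律衰减到 `EPS_END`。这只是一个示意计算,公式与下方超参数代码中 `select_action` 的实现一致:

```buildoutcfg
# 示意:epsilon 阈值随步数的指数衰减(与下方 select_action 中的公式一致)
import math

EPS_START, EPS_END, EPS_DECAY = 0.9, 0.05, 200

def eps_threshold(steps_done):
    return EPS_END + (EPS_START - EPS_END) * math.exp(-1. * steps_done / EPS_DECAY)

for steps in [0, 100, 200, 500, 1000]:
    print(steps, round(eps_threshold(steps), 3))
# 输出大致为:0→0.9,100→0.566,200→0.363,500→0.12,1000→0.056
```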
+ +```buildoutcfg +BATCH_SIZE = 128 +GAMMA = 0.999 +EPS_START = 0.9 +EPS_END = 0.05 +EPS_DECAY = 200 +TARGET_UPDATE = 10 + +# 获取屏幕大小,以便我们可以根据AI gym返回的形状正确初始化图层。 +# 此时的典型尺寸接近3x40x90 +# 这是get_screen()中的限幅和缩小渲染缓冲区的结果 +init_screen = get_screen() +_, _, screen_height, screen_width = init_screen.shape + +# 从gym行动空间中获取行动数量 +n_actions = env.action_space.n + +policy_net = DQN(screen_height, screen_width, n_actions).to(device) +target_net = DQN(screen_height, screen_width, n_actions).to(device) +target_net.load_state_dict(policy_net.state_dict()) +target_net.eval() + +optimizer = optim.RMSprop(policy_net.parameters()) +memory = ReplayMemory(10000) + + +steps_done = 0 + + +def select_action(state): + global steps_done + sample = random.random() + eps_threshold = EPS_END + (EPS_START - EPS_END) * \ + math.exp(-1. * steps_done / EPS_DECAY) + steps_done += 1 + if sample > eps_threshold: + with torch.no_grad(): + # t.max(1)将返回每行的最大列值。 + # 最大结果的第二列是找到最大元素的索引,因此我们选择具有较大预期奖励的行动。 + return policy_net(state).max(1)[1].view(1, 1) + else: + return torch.tensor([[random.randrange(n_actions)]], device=device, dtype=torch.long) + + +episode_durations = [] + + +def plot_durations(): + plt.figure(2) + plt.clf() + durations_t = torch.tensor(episode_durations, dtype=torch.float) + plt.title('Training...') + plt.xlabel('Episode') + plt.ylabel('Duration') + plt.plot(durations_t.numpy()) + # 取100个episode的平均值并绘制它们 + if len(durations_t) >= 100: + means = durations_t.unfold(0, 100, 1).mean(1).view(-1) + means = torch.cat((torch.zeros(99), means)) + plt.plot(means.numpy()) + + plt.pause(0.001) # 暂停一下,以便更新图表 + if is_ipython: + display.clear_output(wait=True) + display.display(plt.gcf()) +``` + +## 8. 训练循环 +在这里,您可以找到执行优化的单个步骤的`optimize_model`函数。它首先对一个batch进行采样,将所有张量连接成一个整体,计算![](notation/Q_sa.gif) +和![](notation/V()=.gif),并将它们组合成我们的损失。通过定义,如果s是终端状态,则设置![](notation/V(s)=0.gif)。我们还使 +用目标网络来计算![](notation/V_st+1.gif)以增加稳定性。目标网络的权重在大多数时间保持冻结状态,但每隔一段时间就会更新策略 +网络的权重。这通常是一系列步骤,但为了简单起见,我们将使用episodes。 + +```buildoutcfg +def optimize_model(): + if len(memory) < BATCH_SIZE: + return + transitions = memory.sample(BATCH_SIZE) + # 转置batch(有关详细说明,请参阅https://stackoverflow.com/a/19343/3343043)。 + # 这会将过渡的batch数组转换为batch数组的过渡。 + batch = Transition(*zip(*transitions)) + + # 计算非最终状态的掩码并连接batch元素(最终状态将是模拟结束后的状态) + non_final_mask = torch.tensor(tuple(map(lambda s: s is not None, + batch.next_state)), device=device, dtype=torch.uint8) + non_final_next_states = torch.cat([s for s in batch.next_state + if s is not None]) + state_batch = torch.cat(batch.state) + action_batch = torch.cat(batch.action) + reward_batch = torch.cat(batch.reward) + + # 计算Q(s_t,a) - 模型计算Q(s_t),然后我们选择所采取的动作列。 + # 这些是根据policy_net对每个batch状态采取的操作 + state_action_values = policy_net(state_batch).gather(1, action_batch) + + # 计算所有下一个状态的V(s_{t+1}) + # non_final_next_states的操作的预期值是基于“较旧的”target_net计算的; + # 用max(1)[0]选择最佳奖励。这是基于掩码合并的,这样我们就可以得到预期的状态值,或者在状态是最终的情况下为0。 + next_state_values = torch.zeros(BATCH_SIZE, device=device) + next_state_values[non_final_mask] = target_net(non_final_next_states).max(1)[0].detach() + # 计算预期的Q值 + expected_state_action_values = (next_state_values * GAMMA) + reward_batch + + # 计算Huber损失 + loss = F.smooth_l1_loss(state_action_values, expected_state_action_values.unsqueeze(1)) + + # 优化模型 + optimizer.zero_grad() + loss.backward() + for param in policy_net.parameters(): + param.grad.data.clamp_(-1, 1) + optimizer.step() +``` + +下面,您可以找到主要的训练循环。在开始时,我们重置环境并初始`state`张量。然后,我们采样一个动作并执行它,观察下一个屏幕和 +奖励(总是1),并优化我们的模型一次。当episode结束时(我们的模型失败),我们重新开始循环。 + 
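
作为补充,这里把第4节中以图片形式给出的更新目标用 LaTeX 重写一遍(记号假设与原英文教程一致,仅作阅读参考),`optimize_model` 每一步最小化的正是这个 Huber 损失:

$$\delta = Q(s, a) - \left(r + \gamma \max_{a'} Q(s', a')\right)$$

$$\mathcal{L} = \frac{1}{|B|} \sum_{(s,a,s',r)\in B} \mathcal{L}(\delta), \qquad \mathcal{L}(\delta) = \begin{cases} \frac{1}{2}\delta^{2} & |\delta| \le 1 \\ |\delta| - \frac{1}{2} & \text{其他情况} \end{cases}$$

其中 $\max_{a'} Q(s', a')$ 由“较旧的”target_net 给出,对应代码里的 `next_state_values * GAMMA + reward_batch` 与 `F.smooth_l1_loss`。
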
+下面,*num_episodes*设置为小数值。您应该下载笔记本并运行更多的epsiodes,例如300+以进行有意义的持续时间改进。 + +```buildoutcfg +num_episodes = 50 +for i_episode in range(num_episodes): + # 初始化环境和状态 + env.reset() + last_screen = get_screen() + current_screen = get_screen() + state = current_screen - last_screen + for t in count(): + # 选择动作并执行 + action = select_action(state) + _, reward, done, _ = env.step(action.item()) + reward = torch.tensor([reward], device=device) + + # 观察新的状态 + last_screen = current_screen + current_screen = get_screen() + if not done: + next_state = current_screen - last_screen + else: + next_state = None + + # 在记忆中存储过渡 + memory.push(state, action, next_state, reward) + + # 移动到下一个状态 + state = next_state + + # 执行优化的一个步骤(在目标网络上) + optimize_model() + if done: + episode_durations.append(t + 1) + plot_durations() + break + # 更新目标网络,复制DQN中的所有权重和偏差 + if i_episode % TARGET_UPDATE == 0: + target_net.load_state_dict(policy_net.state_dict()) + +print('Complete') +env.render() +env.close() +plt.ioff() +plt.show() +``` + +以下是说明整体结果数据流的图表。 + +![](https://pytorch.org/tutorials/_images/reinforcement_learning_diagram.jpg) + +可以随机选择或根据策略选择操作,从gym环境中获取下一步样本。我们将结果记录在重放内存中,并在每次迭代时运行优化步骤。优化从 +重放内存中选择一个随机batch来进行新策略的训练。“较旧的”target_net也用于优化以计算预期的Q值; 它会偶尔更新以保持最新状态。 \ No newline at end of file diff --git a/SeventhSection/notation/B.gif b/SeventhSection/notation/B.gif new file mode 100644 index 0000000000000000000000000000000000000000..cb7c2865c8c94f006d44493dc262bf9c8b3f3b6b GIT binary patch literal 155 zcmZ?wbhEHbX4kG=@7}%Z=;$yq zG7=CFm@#8UNJz+?J9iX+vM>TQFzA2?kQodt9s(W;ZVFBllAh_bl(TfOv&zzWffha;4Vr={d|6sk9Uew*uxWA#IHcn%wc`1sNs1OS(GDyO F)&S&0Hxd8< literal 0 HcmV?d00001 diff --git a/SeventhSection/notation/Q.gif b/SeventhSection/notation/Q.gif new file mode 100644 index 0000000000000000000000000000000000000000..8178189d8ad2d25b42ddb9b6fbb35fccdcea94f7 GIT binary patch literal 169 zcmZ?wbhEHbTQFzA2?kQodtVFDhSY??KJvcQ)eAL^IpS~fGdohQ1 V?*!fpLdTjuwuuWkda^KB0|0D5JnjGh literal 0 HcmV?d00001 diff --git a/SeventhSection/notation/Q_.gif b/SeventhSection/notation/Q_.gif new file mode 100644 index 0000000000000000000000000000000000000000..544e080687493df15d086603b9bc852c9fae730a GIT binary patch literal 196 zcmZ?wbhEHb6k!l#*v!E2|Nnmm28KI#?v$04b#!#hm@y+QEiEJ@L_k2`%$YMTE-p$+ zN=8OTyLRn*_wL=QRjU+#vM>TQFzA2?kQodtg#sr$wF{0i8G0zT8SwI5w7F6cz%EMFNNfAesBp-ZaGV)b^u)~5){J;i9o<9u!t$x7@Sxnl!A&)r(b}-ITov=|xLaNjE wlct@-(JhBvwMFt?p8S}Q-Z80CluL-ur%{C4)uUa6hnuInCx;_4Q;@+L0Cbi`UH||9 literal 0 HcmV?d00001 diff --git a/SeventhSection/notation/Q_s_left.gif b/SeventhSection/notation/Q_s_left.gif new file mode 100644 index 0000000000000000000000000000000000000000..516b585847d274a4a2b3da5c7cfab22c2fcb98b8 GIT binary patch literal 451 zcmV;!0X+UkNk%w1VM_oK0J8u9|Ns90007+F+;(<$h=_=on3!f}W>i#E5D*Z|%*;eY zL?R+0GBPr{y1MS}?y9P)A^8LW00000EC2ui080Q8000F35XedEbj9k;doPaH8Db|8 zW$KWcFc1Y{2o;f}+JI!`Dp!J=q=Bjo4io`GC}H3>3tCC>c(5i4UbCXOP6&%hLHoQQ zRDqBsAe9gv6NA&zNJ|v$vgCm*J;D|VXA}v2bqyMILW30#V+DB{2V{=}6b=Im30K$O*0SZhT1ilCckIewpiL4XT00FrY2XpD{6MXhuJk^mD4a3BIQ7FYC zMu-Otnz5rYAs+=c4g^$d)@+=HS`KcllHg#8wlQ&D?&MLRCIJfp2hgeG!QfVba0jkY zSa3sTp$kXGFp$!0jK4=(1jQSr4b!;_3pCyvmFLX{1spXxfa*uj!zc8lg?L3_KsXNC t%7kb@wM2t>6GCm$5K;pHbSZXlQqh(|0R#dsp`{SE%qW>2Kum}L06SQ6r5OMK literal 0 HcmV?d00001 diff --git a/SeventhSection/notation/Q_s_right.gif b/SeventhSection/notation/Q_s_right.gif new file mode 100644 index 0000000000000000000000000000000000000000..48c635fbe144fc00c8d86a6e7e0d88d3fabbe540 GIT binary patch literal 495 zcmVi#E5D*Z|%*;eY zL?R+0GBPr{y1MS}?y9P)A^8LW00000EC2ui08{`H000F35XedEbj9k;yWfTLamHw# 
zK(jd@2jvYcA4W0Y(9TaEs?kXpk+L3Sn$KvAFu_l6!$-6+im{4XymhspEHXr-ngb%{ zI24Y9pTa05$oxMDIo5w55A?%BN?G}JB@oBB%8yG<0Vp?*_a#3T4Nskvn8r)tJRY|Su}sLys!Y+CmmP@bR*POdAl&W${D4sWXSfvxmM^&4n7 z4663&u^(m=EVWm^*w&4=zIcPyMg>7F1$=Xw-tfU1{z`qV@%E=(Pa-$1jv{lBoP~F1 z(qaYp>GWroAm`9paM0R!4hBCg*RQMoq+3ufS9U8&oFL~e{`UO_C-_UztJipd?%r!% zl$Chfpi|=M%a_7#Z&+ZJmBL!i=d6}3dgcpym&+=_Hg$PD&}U3uI-?_X*Zmxw1WI+^ zJBkDX{xP1G9=_ckC*e$WV2`pL%Je}scHhO(&z!FQ;p3uxjEHeUl|3->PwKUI~ee{GH*Ho*;-oo=szHhHQ*&Cxb7Awpxqk71i+SKSeMzwiicl?Vbr+#@w+2OEw+hfW|hzr zW(&w4=$?4{Ywf6C>CQWVEBn7QJ}srW_6WY!z0knmo< z(vA!zRs6Gfa?{SK6fWa_H#OJbQus?(GfdoRrJpPo?f`oLY% z#r1Y{Q5iaO<1Wm(nCGF1DrD=59&yZgm{%)+jx4c*HD6bABY{qRQ^1~I$P!rlc2V-(Ex2 zX{iYi&1s93JZ>}j&2A2j zZQA*^$VVx5JMvx#M9t`vj&A~L8g>PZeZFr>x9n6ouq(FK0_|Ou2pgzFeahL6LOiv7 zZSl!WA$F6295SZ+nj>L<&dCMFo@o69_`7{M{}Slv$CJJB#AdoUI5=EQJ4K=3Zc_k*YR~l_THMgJ=`hE0e`n#jl*D{xQu_|D3*SpfM~w= zq)0zzcq74pPVj3ZqO{%5cI}HJ92OEiufZd6hGBa8vJWCm_#Rz%>{#1qIgUiI^YEKU z>pq@Z%#dfmY7Oh;M7yxQ;v@O8NuWA*PRk6FGHfVxSb6OSiCc4i=V~dc_eVDniR`EUrwphahVECH%B%9= zU5tIp3)Gf%ukUnzOR2vsZ0E6^x&nHIfNNvhBvL0bdv;_q)HRi9z*iK%NpmQK*Ovyd zbVW_r5nr{Dr?2f!wGCZkTGW%Jug&ITxeLae*FJn(H?cm#+xBR7mVfnQ-^SHjqB$Gx za)10e_t!6eBf4f3F2{jIqo22FKm~45ErDxZAerANN8KLBxVO#_(LeKA5zH>pvE}%h zeDmW|J~yL2PE0V%3CRs1i6irgdEY^O)DvVG^5XDgdC=Xbu!nMF#1#|pf_%+4B+{*9DXC2D;Hh4|&Ae=nxt4NKun z%+2>(iTYa+AA1-Uo3~=Nb63tN@fo1WfPEcYeCMPN-wIp0QxWk{#B-m7c2n}eXgzKy zCzq`wD{BZymx6PsQVo*}j;&VMDG51&k!;65>NLEsy4wRE8&pfz4BP)|ikVrvZjcEf zFSZ)`L`X;0DFBt?o; z9X!{DQnRKTsM!S>g&T@>$k9}}C(2CDaxkvlJx@jx7wrDGX^2!yzHBo+_AjP#4KvO- z_G~b+G}muo(It*^bLL^nR=({=6Bn>YEE ze~(Z;L?6mW)k>g;eqFnxX7S@BN#cOTR|De42b&@MEN7}Mm6J6i7Y%q=VrF$kSNu#c z@NFKKE4Iotw{;=duu#E*yQIrynaB9(U^DiN}r)?hMDPU z_U|{t+NB{iNAW`F6J=R1u>OX8p9VAH2n%h}}-qMd|OQy9q zUa+P{0_V=xlPF4YH7B&5t~v{L{8# img_align_celeba + -> 188242.jpg + -> 173822.jpg + -> 284702.jpg + -> 537394.jpg +``` +这是一个重要的步骤,因为我们将使用`ImageFolder`数据集类,它要求在数据集的根文件夹中有子目录。现在,我们可以创建数据集,创 +建数据加载器,设置要运行的设备,以及最后可视化一些训练数据。 + +```buildoutcfg +# 我们可以按照设置的方式使用图像文件夹数据集。 +# 创建数据集 +dataset = dset.ImageFolder(root=dataroot, + transform=transforms.Compose([ + transforms.Resize(image_size), + transforms.CenterCrop(image_size), + transforms.ToTensor(), + transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)), + ])) +# 创建加载器 +dataloader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, + shuffle=True, num_workers=workers) + +# 选择我们运行在上面的设备 +device = torch.device("cuda:0" if (torch.cuda.is_available() and ngpu > 0) else "cpu") + +# 绘制部分我们的输入图像 +real_batch = next(iter(dataloader)) +plt.figure(figsize=(8,8)) +plt.axis("off") +plt.title("Training Images") +plt.imshow(np.transpose(vutils.make_grid(real_batch[0].to(device)[:64], padding=2, normalize=True).cpu(),(1,2,0))) +``` + +![](https://pytorch.org/tutorials/_images/sphx_glr_dcgan_faces_tutorial_001.png) + +## 3.3 实现 +通过设置输入参数和准备好的数据集,我们现在可以进入真正的实现步骤。我们将从权重初始化策略开始,然后详细讨论生成器,鉴别器, +损失函数和训练循环。 + +#### 3.3.1 权重初始化 +在`DCGAN`论文中,作者指出所有模型权重应从正态分布中随机初始化,mean = 0,stdev = 0.02。`weights_init`函数将初始化模型作为 +输入,并重新初始化所有卷积,卷积转置和batch标准化层以满足此标准。初始化后立即将此函数应用于模型。 + +```buildoutcfg +# custom weights initialization called on netG and netD +def weights_init(m): + classname = m.__class__.__name__ + if classname.find('Conv') != -1: + nn.init.normal_(m.weight.data, 0.0, 0.02) + elif classname.find('BatchNorm') != -1: + nn.init.normal_(m.weight.data, 1.0, 0.02) + nn.init.constant_(m.bias.data, 0) +``` + +#### 3.3.2 生成器 +生成器![](notation/G.gif)用于将潜在空间矢量(![](notation/z.gif))映射到数据空间。由于我们的数据是图像,因此将![](notation/z.gif) +转换为数据空间意味着最终创建与训练图像具有相同大小的RGB图像(即3x64x64)。实际上,这是通过一系列跨步的二维卷积转置层实现的, 
+每个转换层与二维批量标准层和relu activation进行配对。生成器的输出通过`tanh`函数输入,使其返回到[-1,1]范围的输入数据。值得 +注意的是在转换层之后存在批量范数函数,因为这是DCGAN论文的关键贡献。这些层有助于训练期间的梯度流动。DCGAN论文中的生成器中 +的图像如下所示: + +![](https://pytorch.org/tutorials/_images/dcgan_generator.png) + +> 请注意,我们对输入怎么设置(*nz*,*ngf*和*nc*)会影响代码中的生成器体系结构。*nz* 是![](notation/z.gif)输入向量的长度, +*ngf*与通过生成器传播的特征图的大小有关,*nc*是输出图像中的通道数(对于RGB图像,设置为3)。下面是生成器的代码。 + +* 生成器代码 +```buildoutcfg +# 生成器代码 +class Generator(nn.Module): + def __init__(self, ngpu): + super(Generator, self).__init__() + self.ngpu = ngpu + self.main = nn.Sequential( + # 输入是Z,进入卷积 + nn.ConvTranspose2d( nz, ngf * 8, 4, 1, 0, bias=False), + nn.BatchNorm2d(ngf * 8), + nn.ReLU(True), + # state size. (ngf*8) x 4 x 4 + nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False), + nn.BatchNorm2d(ngf * 4), + nn.ReLU(True), + # state size. (ngf*4) x 8 x 8 + nn.ConvTranspose2d( ngf * 4, ngf * 2, 4, 2, 1, bias=False), + nn.BatchNorm2d(ngf * 2), + nn.ReLU(True), + # state size. (ngf*2) x 16 x 16 + nn.ConvTranspose2d( ngf * 2, ngf, 4, 2, 1, bias=False), + nn.BatchNorm2d(ngf), + nn.ReLU(True), + # state size. (ngf) x 32 x 32 + nn.ConvTranspose2d( ngf, nc, 4, 2, 1, bias=False), + nn.Tanh() + # state size. (nc) x 64 x 64 + ) + + def forward(self, input): + return self.main(input) +``` + 现在,我们可以实例化生成器并应用`weights_init`函数。查看打印的模型以查看生成器对象的结构。 + + ```buildoutcfg +# 创建生成器 +netG = Generator(ngpu).to(device) + +# 如果需要,管理multi-gpu +if (device.type == 'cuda') and (ngpu > 1): + netG = nn.DataParallel(netG, list(range(ngpu))) + +# 应用weights_init函数随机初始化所有权重,mean= 0,stdev = 0.2。 +netG.apply(weights_init) + +# 打印模型 +print(netG) +``` + +* 输出结果: + +```buildoutcfg +Generator( + (main): Sequential( + (0): ConvTranspose2d(100, 512, kernel_size=(4, 4), stride=(1, 1), bias=False) + (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (2): ReLU(inplace=True) + (3): ConvTranspose2d(512, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False) + (4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (5): ReLU(inplace=True) + (6): ConvTranspose2d(256, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False) + (7): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (8): ReLU(inplace=True) + (9): ConvTranspose2d(128, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False) + (10): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (11): ReLU(inplace=True) + (12): ConvTranspose2d(64, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False) + (13): Tanh() + ) +) +``` + +#### 3.3.3 判别器 +如上所述,判别器![](notation/D.gif)是二进制分类网络,它将图像作为输入并输出输入图像是真实的标量概率(与假的相反)。这里, +![](notation/D.gif)采用 3x64x64 的输入图像,通过一系列Conv2d,BatchNorm2d和LeakyReLU层处理它,并通过Sigmoid激活函数输出 +最终概率。如果问题需要,可以使用更多层扩展此体系结构,但使用strided convolution(跨步卷积),BatchNorm和LeakyReLU具有重要 +意义。DCGAN论文提到使用跨步卷积而不是池化到降低采样是一种很好的做法,因为它可以让网络学习自己的池化功能。批量标准和 leaky +relu函数也促进良好的梯度流,这对于[](notation/G.gif)和[](notation/D.gif)的学习过程都是至关重要的。 + +* 判别器代码 + +```buildoutcfg +class Discriminator(nn.Module): + def __init__(self, ngpu): + super(Discriminator, self).__init__() + self.ngpu = ngpu + self.main = nn.Sequential( + # input is (nc) x 64 x 64 + nn.Conv2d(nc, ndf, 4, 2, 1, bias=False), + nn.LeakyReLU(0.2, inplace=True), + # state size. (ndf) x 32 x 32 + nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False), + nn.BatchNorm2d(ndf * 2), + nn.LeakyReLU(0.2, inplace=True), + # state size. 
(ndf*2) x 16 x 16 + nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False), + nn.BatchNorm2d(ndf * 4), + nn.LeakyReLU(0.2, inplace=True), + # state size. (ndf*4) x 8 x 8 + nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False), + nn.BatchNorm2d(ndf * 8), + nn.LeakyReLU(0.2, inplace=True), + # state size. (ndf*8) x 4 x 4 + nn.Conv2d(ndf * 8, 1, 4, 1, 0, bias=False), + nn.Sigmoid() + ) + + def forward(self, input): + return self.main(input) +``` + +现在,与生成器一样,我们可以创建判别器,应用`weights_init`函数,并打印模型的结构。 + +```buildoutcfg +# 创建判别器 +netD = Discriminator(ngpu).to(device) + +# Handle multi-gpu if desired +if (device.type == 'cuda') and (ngpu > 1): + netD = nn.DataParallel(netD, list(range(ngpu))) + +# 应用weights_init函数随机初始化所有权重,mean= 0,stdev = 0.2 +netD.apply(weights_init) + +# 打印模型 +print(netD) +``` + +* 输出结果: + +```buildoutcfg +Discriminator( + (main): Sequential( + (0): Conv2d(3, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False) + (1): LeakyReLU(negative_slope=0.2, inplace=True) + (2): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False) + (3): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (4): LeakyReLU(negative_slope=0.2, inplace=True) + (5): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False) + (6): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (7): LeakyReLU(negative_slope=0.2, inplace=True) + (8): Conv2d(256, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False) + (9): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (10): LeakyReLU(negative_slope=0.2, inplace=True) + (11): Conv2d(512, 1, kernel_size=(4, 4), stride=(1, 1), bias=False) + (12): Sigmoid() + ) +) +``` + +#### 3.3.4 损失函数和优化器 +通过![](notation/D.gif)和![](notation/G.gif)设置,我们可以指定他们如何通过损失函数和优化器学习。我们将使用PyTorch中定义的 +二进制交叉熵损失([BCELoss](https://pytorch.org/docs/stable/nn.html#torch.nn.BCELoss))函数: +![](notation/公式2.PNG) + +注意该函数如何计算目标函数中的两个对数分量(即![](notation/logD(x).gif)和![](notation/log(1-).gif)。我们可以指定用于输入y +的BCE方程的哪个部分。这是在即将出现的训练循环中完成的,但重要的是要了解我们如何通过改变y(即GT标签)来选择我们希望计算的组件。 + +接下来,我们将真实标签定义为1,将假标签定义为0。这些标签将在计算![](notation/D.gif)和![](notation/G.gif)的损失时使用,这 +也是原始 GAN 论文中使用的惯例。最后,我们设置了两个单独的优化器,一个用于![](notation/D.gif),一个用于![](notation/G.gif)。 +如 DCGAN 论文中所述,两者都是Adam优化器,学习率为0.0002,Beta1 = 0.5。 为了跟踪生成器的学习进度,我们将生成一组固定的潜在 +向量,这些向量是从高斯分布(即`fixed_noise`)中提取的。在训练循环中,我们将周期性地将此`fixed_noise`输入到![](notation/G.gif) +中,并且在迭代中我们将看到图像形成于噪声之外。 + +```buildoutcfg +# 初始化BCELoss函数 +criterion = nn.BCELoss() + +# 创建一批潜在的向量,我们将用它来可视化生成器的进程 +fixed_noise = torch.randn(64, nz, 1, 1, device=device) + +# 在训练期间建立真假标签的惯例 +real_label = 1 +fake_label = 0 + +# 为 G 和 D 设置 Adam 优化器 +optimizerD = optim.Adam(netD.parameters(), lr=lr, betas=(beta1, 0.999)) +optimizerG = optim.Adam(netG.parameters(), lr=lr, betas=(beta1, 0.999)) +``` + +#### 3.3.4 训练 +最后,既然已经定义了 GAN 框架的所有部分,我们就可以对其进行训练了。请注意,训练GAN在某种程度上是一种艺术形式,因为不正确 +的超参数设置会导致对错误的解释很少的模式崩溃,在这里,我们将密切关注Goodfellow的论文中的算法1,同时遵守[ganhacks](https://github.com/soumith/ganhacks) +中展示的一些最佳实践。也就是说,我们将“为真实和虚假”图像构建不同的 mini-batches ,并且还调整![](notation/G.gif)的目标函 +数以最大化![](notation/log_DGz.gif)。训练分为两个主要部分,第1部分更新判别器,第2部分更新生成器。 + +#### * 第一部分:训练判别器 +回想一下,训练判别器的目的是最大化将给定输入正确分类为真实或假的概率。就Goodfellow而言,我们希望“通过提升其随机梯度来更 +新判别器”。实际上,我们希望最大化![](notation/log+log(1-).gif)。由于ganhacks的独立 mini-batch 建议,我们将分两步计算。首 +先,我们将从训练集构建一批实际样本,向前通过![](notation/D.gif),计算损失![](notation/logD(x).gif),然后计算向后传递的梯 +度。其次,我们将用当前生成器构造一批假样本,通过![](notation/D.gif)向前传递该 batch,计算损失![](notation/log(1-).gif), 
+并通过反向传递累积梯度。现在,随着从全实时和全实时批量累积的梯度,我们称之为Discriminator优化器的一步。 + +#### * 第一部分:训练判别器 +正如原始论文所述,我们希望通过最小化![](notation/log(1-).gif)来训练生成器,以便产生更好的伪样本。如上所述,Goodfellow 表 +明这不会提供足够的梯度,尤其是在学习过程的早期阶段。作为修复,我们希望最大化![](notation/log_DGz.gif)。在代码中,我们通过 +以下方式实现此目的:使用判别器对第1部分的生成器中的输出进行分类,使用真实标签: GT 标签计算![](notation/G.gif)的损失, +在向后传递中计算![](notation/G.gif)的梯度,最后使用优化器步骤更新![](notation/G.gif)的参数。使用真实标签作为损失函数的GT +标签似乎是违反直觉的,但是这允许我们使用 BCELoss的![](notation/log(x).gif)部分(而不是![](notation/log(1-x).gif)部分), +这正是我们想要。 + +最后,我们将进行一些统计报告,在每个epoch结束时,我们将通过生成器推送我们的fixed_noise batch,以直观地跟踪![](notation/G.gif) +训练的进度。训练的统计数据是: + +* **Loss_D**:判别器损失计算为所有实际批次和所有假批次的损失总和(![](notation/log+log_DGz.gif)) +* **Loss_G**:计算生成器损失(![](notation/log_DGz.gif)) +* **D(x)**:所有实际批次的判别器的平均输出(整批)。当![](notation/G.gif)变好时这应该从接近1开始,然后理论上收敛到0.5。 +想想为什么会这样。 +* **D(G(z))**:所有假批次的平均判别器输出。第一个数字是在![](notation/D.gif)更新之前,第二个数字是在![](notation/D.gif) +更新之后。当G变好时,这些数字应该从0开始并收敛到0.5。想想为什么会这样。 + +> 此步骤可能需要一段时间,具体取决于您运行的epoch数以及是否从数据集中删除了一些数据。 + +```buildoutcfg +# Training Loop + +# Lists to keep track of progress +img_list = [] +G_losses = [] +D_losses = [] +iters = 0 + +print("Starting Training Loop...") +# For each epoch +for epoch in range(num_epochs): + # 对于数据加载器中的每个batch + for i, data in enumerate(dataloader, 0): + + ############################ + # (1) Update D network: maximize log(D(x)) + log(1 - D(G(z))) + ########################### + ## Train with all-real batch + netD.zero_grad() + # Format batch + real_cpu = data[0].to(device) + b_size = real_cpu.size(0) + label = torch.full((b_size,), real_label, device=device) + # Forward pass real batch through D + output = netD(real_cpu).view(-1) + # Calculate loss on all-real batch + errD_real = criterion(output, label) + # Calculate gradients for D in backward pass + errD_real.backward() + D_x = output.mean().item() + + ## Train with all-fake batch + # Generate batch of latent vectors + noise = torch.randn(b_size, nz, 1, 1, device=device) + # Generate fake image batch with G + fake = netG(noise) + label.fill_(fake_label) + # Classify all fake batch with D + output = netD(fake.detach()).view(-1) + # Calculate D's loss on the all-fake batch + errD_fake = criterion(output, label) + # Calculate the gradients for this batch + errD_fake.backward() + D_G_z1 = output.mean().item() + # Add the gradients from the all-real and all-fake batches + errD = errD_real + errD_fake + # Update D + optimizerD.step() + + ############################ + # (2) Update G network: maximize log(D(G(z))) + ########################### + netG.zero_grad() + label.fill_(real_label) # fake labels are real for generator cost + # Since we just updated D, perform another forward pass of all-fake batch through D + output = netD(fake).view(-1) + # Calculate G's loss based on this output + errG = criterion(output, label) + # Calculate gradients for G + errG.backward() + D_G_z2 = output.mean().item() + # Update G + optimizerG.step() + + # Output training stats + if i % 50 == 0: + print('[%d/%d][%d/%d]\tLoss_D: %.4f\tLoss_G: %.4f\tD(x): %.4f\tD(G(z)): %.4f / %.4f' + % (epoch, num_epochs, i, len(dataloader), + errD.item(), errG.item(), D_x, D_G_z1, D_G_z2)) + + # Save Losses for plotting later + G_losses.append(errG.item()) + D_losses.append(errD.item()) + + # Check how the generator is doing by saving G's output on fixed_noise + if (iters % 500 == 0) or ((epoch == num_epochs-1) and (i == len(dataloader)-1)): + with torch.no_grad(): + fake = netG(fixed_noise).detach().cpu() + img_list.append(vutils.make_grid(fake, padding=2, normalize=True)) + + iters += 
1 +``` + +* 输出结果: + +```buildoutcfg +Starting Training Loop... +[0/5][0/1583] Loss_D: 2.0937 Loss_G: 5.2059 D(x): 0.5704 D(G(z)): 0.6680 / 0.0090 +[0/5][50/1583] Loss_D: 0.3567 Loss_G: 12.2064 D(x): 0.9364 D(G(z)): 0.1409 / 0.0000 +[0/5][100/1583] Loss_D: 0.3519 Loss_G: 8.8873 D(x): 0.8714 D(G(z)): 0.0327 / 0.0004 +[0/5][150/1583] Loss_D: 0.5300 Loss_G: 6.6410 D(x): 0.8918 D(G(z)): 0.2776 / 0.0030 +[0/5][200/1583] Loss_D: 0.2543 Loss_G: 4.3581 D(x): 0.8662 D(G(z)): 0.0844 / 0.0218 +[0/5][250/1583] Loss_D: 0.7170 Loss_G: 4.2652 D(x): 0.8285 D(G(z)): 0.3227 / 0.0370 +[0/5][300/1583] Loss_D: 0.5739 Loss_G: 4.2060 D(x): 0.8329 D(G(z)): 0.2577 / 0.0305 +[0/5][350/1583] Loss_D: 0.8139 Loss_G: 6.5680 D(x): 0.9163 D(G(z)): 0.3844 / 0.0062 +[0/5][400/1583] Loss_D: 0.4089 Loss_G: 5.0794 D(x): 0.8580 D(G(z)): 0.1221 / 0.0243 +[0/5][450/1583] Loss_D: 0.4785 Loss_G: 4.1612 D(x): 0.7154 D(G(z)): 0.0514 / 0.0258 +[0/5][500/1583] Loss_D: 0.3748 Loss_G: 4.2888 D(x): 0.8135 D(G(z)): 0.0955 / 0.0264 +[0/5][550/1583] Loss_D: 0.5247 Loss_G: 5.9952 D(x): 0.8347 D(G(z)): 0.1580 / 0.0075 +[0/5][600/1583] Loss_D: 0.7765 Loss_G: 2.2662 D(x): 0.5977 D(G(z)): 0.0408 / 0.1708 +[0/5][650/1583] Loss_D: 0.6914 Loss_G: 4.4091 D(x): 0.6502 D(G(z)): 0.0266 / 0.0238 +[0/5][700/1583] Loss_D: 0.5679 Loss_G: 5.3386 D(x): 0.8476 D(G(z)): 0.2810 / 0.0098 +[0/5][750/1583] Loss_D: 0.3717 Loss_G: 5.1295 D(x): 0.9221 D(G(z)): 0.2207 / 0.0106 +[0/5][800/1583] Loss_D: 0.4423 Loss_G: 3.1339 D(x): 0.8418 D(G(z)): 0.1655 / 0.0820 +[0/5][850/1583] Loss_D: 0.3391 Loss_G: 4.8393 D(x): 0.7920 D(G(z)): 0.0315 / 0.0169 +[0/5][900/1583] Loss_D: 0.4346 Loss_G: 4.3887 D(x): 0.8883 D(G(z)): 0.2270 / 0.0202 +[0/5][950/1583] Loss_D: 0.5315 Loss_G: 4.6233 D(x): 0.8393 D(G(z)): 0.2490 / 0.0188 +[0/5][1000/1583] Loss_D: 0.5281 Loss_G: 6.1465 D(x): 0.9643 D(G(z)): 0.3270 / 0.0049 +[0/5][1050/1583] Loss_D: 0.5515 Loss_G: 6.4457 D(x): 0.9262 D(G(z)): 0.3361 / 0.0033 +[0/5][1100/1583] Loss_D: 0.4430 Loss_G: 4.7469 D(x): 0.7306 D(G(z)): 0.0184 / 0.0202 +[0/5][1150/1583] Loss_D: 0.7336 Loss_G: 2.6978 D(x): 0.6552 D(G(z)): 0.1293 / 0.1059 +[0/5][1200/1583] Loss_D: 0.2927 Loss_G: 4.7480 D(x): 0.8858 D(G(z)): 0.1329 / 0.0173 +[0/5][1250/1583] Loss_D: 2.0790 Loss_G: 5.1077 D(x): 0.2722 D(G(z)): 0.0036 / 0.0172 +[0/5][1300/1583] Loss_D: 0.2431 Loss_G: 5.0027 D(x): 0.8812 D(G(z)): 0.0816 / 0.0169 +[0/5][1350/1583] Loss_D: 0.2969 Loss_G: 4.6160 D(x): 0.9126 D(G(z)): 0.1609 / 0.0183 +[0/5][1400/1583] Loss_D: 0.7158 Loss_G: 2.9825 D(x): 0.6117 D(G(z)): 0.0292 / 0.0900 +[0/5][1450/1583] Loss_D: 0.7513 Loss_G: 1.9396 D(x): 0.6186 D(G(z)): 0.0559 / 0.2414 +[0/5][1500/1583] Loss_D: 0.4366 Loss_G: 3.9122 D(x): 0.8736 D(G(z)): 0.2231 / 0.0325 +[0/5][1550/1583] Loss_D: 0.3204 Loss_G: 4.2434 D(x): 0.8395 D(G(z)): 0.0929 / 0.0271 +[1/5][0/1583] Loss_D: 0.5077 Loss_G: 4.8872 D(x): 0.9331 D(G(z)): 0.3082 / 0.0122 +[1/5][50/1583] Loss_D: 0.5637 Loss_G: 3.6652 D(x): 0.8525 D(G(z)): 0.2684 / 0.0414 +[1/5][100/1583] Loss_D: 0.4047 Loss_G: 3.6624 D(x): 0.8323 D(G(z)): 0.1508 / 0.0473 +[1/5][150/1583] Loss_D: 0.3858 Loss_G: 3.3070 D(x): 0.7873 D(G(z)): 0.0826 / 0.0583 +[1/5][200/1583] Loss_D: 0.4348 Loss_G: 3.6292 D(x): 0.8390 D(G(z)): 0.1908 / 0.0417 +[1/5][250/1583] Loss_D: 0.5953 Loss_G: 2.1992 D(x): 0.6572 D(G(z)): 0.0649 / 0.1540 +[1/5][300/1583] Loss_D: 0.4062 Loss_G: 3.8770 D(x): 0.8655 D(G(z)): 0.2012 / 0.0310 +[1/5][350/1583] Loss_D: 0.9472 Loss_G: 1.4837 D(x): 0.4979 D(G(z)): 0.0322 / 0.2947 +[1/5][400/1583] Loss_D: 0.5269 Loss_G: 2.6842 D(x): 0.9150 D(G(z)): 0.2922 / 0.1248 
+[1/5][450/1583] Loss_D: 0.6091 Loss_G: 3.8100 D(x): 0.8194 D(G(z)): 0.2720 / 0.0360 +[1/5][500/1583] Loss_D: 0.5674 Loss_G: 3.2716 D(x): 0.8279 D(G(z)): 0.2452 / 0.0610 +[1/5][550/1583] Loss_D: 0.8366 Loss_G: 5.5266 D(x): 0.9263 D(G(z)): 0.4840 / 0.0076 +[1/5][600/1583] Loss_D: 0.6098 Loss_G: 2.2626 D(x): 0.6424 D(G(z)): 0.0640 / 0.1451 +[1/5][650/1583] Loss_D: 0.3970 Loss_G: 3.4130 D(x): 0.8347 D(G(z)): 0.1613 / 0.0491 +[1/5][700/1583] Loss_D: 0.5422 Loss_G: 3.1208 D(x): 0.7889 D(G(z)): 0.1972 / 0.0699 +[1/5][750/1583] Loss_D: 0.9114 Loss_G: 1.3789 D(x): 0.5066 D(G(z)): 0.0350 / 0.3440 +[1/5][800/1583] Loss_D: 1.1917 Loss_G: 5.6081 D(x): 0.9548 D(G(z)): 0.6084 / 0.0064 +[1/5][850/1583] Loss_D: 0.4852 Loss_G: 1.9158 D(x): 0.7103 D(G(z)): 0.0636 / 0.1943 +[1/5][900/1583] Loss_D: 0.5322 Loss_G: 2.8350 D(x): 0.7762 D(G(z)): 0.1994 / 0.0868 +[1/5][950/1583] Loss_D: 0.7765 Loss_G: 1.7411 D(x): 0.5553 D(G(z)): 0.0732 / 0.2260 +[1/5][1000/1583] Loss_D: 0.5518 Loss_G: 4.5488 D(x): 0.9244 D(G(z)): 0.3354 / 0.0161 +[1/5][1050/1583] Loss_D: 0.4237 Loss_G: 3.2012 D(x): 0.8118 D(G(z)): 0.1651 / 0.0583 +[1/5][1100/1583] Loss_D: 1.1245 Loss_G: 5.5327 D(x): 0.9483 D(G(z)): 0.5854 / 0.0090 +[1/5][1150/1583] Loss_D: 0.5543 Loss_G: 1.9609 D(x): 0.6777 D(G(z)): 0.0933 / 0.1936 +[1/5][1200/1583] Loss_D: 0.4945 Loss_G: 2.0234 D(x): 0.7580 D(G(z)): 0.1329 / 0.1742 +[1/5][1250/1583] Loss_D: 0.5637 Loss_G: 2.9421 D(x): 0.7701 D(G(z)): 0.2123 / 0.0780 +[1/5][1300/1583] Loss_D: 0.6178 Loss_G: 2.5512 D(x): 0.7828 D(G(z)): 0.2531 / 0.1068 +[1/5][1350/1583] Loss_D: 0.4302 Loss_G: 2.5266 D(x): 0.8525 D(G(z)): 0.2053 / 0.1141 +[1/5][1400/1583] Loss_D: 1.5730 Loss_G: 1.4042 D(x): 0.2854 D(G(z)): 0.0183 / 0.3325 +[1/5][1450/1583] Loss_D: 0.6962 Loss_G: 3.3562 D(x): 0.8652 D(G(z)): 0.3732 / 0.0534 +[1/5][1500/1583] Loss_D: 0.7635 Loss_G: 1.4343 D(x): 0.5765 D(G(z)): 0.0807 / 0.3056 +[1/5][1550/1583] Loss_D: 0.4228 Loss_G: 3.3460 D(x): 0.8169 D(G(z)): 0.1671 / 0.0522 +[2/5][0/1583] Loss_D: 0.8332 Loss_G: 1.5990 D(x): 0.6355 D(G(z)): 0.2409 / 0.2433 +[2/5][50/1583] Loss_D: 0.4681 Loss_G: 2.0920 D(x): 0.7295 D(G(z)): 0.0978 / 0.1626 +[2/5][100/1583] Loss_D: 0.7995 Loss_G: 2.8227 D(x): 0.7766 D(G(z)): 0.3675 / 0.0828 +[2/5][150/1583] Loss_D: 0.3804 Loss_G: 2.6037 D(x): 0.8523 D(G(z)): 0.1729 / 0.1016 +[2/5][200/1583] Loss_D: 0.9238 Loss_G: 0.8758 D(x): 0.5284 D(G(z)): 0.1343 / 0.4542 +[2/5][250/1583] Loss_D: 0.5205 Loss_G: 2.6795 D(x): 0.7778 D(G(z)): 0.1875 / 0.0934 +[2/5][300/1583] Loss_D: 0.7720 Loss_G: 3.8033 D(x): 0.9307 D(G(z)): 0.4405 / 0.0384 +[2/5][350/1583] Loss_D: 0.5825 Loss_G: 3.3677 D(x): 0.9309 D(G(z)): 0.3609 / 0.0470 +[2/5][400/1583] Loss_D: 0.4290 Loss_G: 2.5963 D(x): 0.7495 D(G(z)): 0.1047 / 0.0976 +[2/5][450/1583] Loss_D: 0.7161 Loss_G: 4.0053 D(x): 0.8270 D(G(z)): 0.3655 / 0.0252 +[2/5][500/1583] Loss_D: 0.5238 Loss_G: 2.3543 D(x): 0.8084 D(G(z)): 0.2320 / 0.1330 +[2/5][550/1583] Loss_D: 0.7724 Loss_G: 2.2096 D(x): 0.6645 D(G(z)): 0.2238 / 0.1417 +[2/5][600/1583] Loss_D: 0.4897 Loss_G: 2.8286 D(x): 0.7776 D(G(z)): 0.1738 / 0.0832 +[2/5][650/1583] Loss_D: 1.2680 Loss_G: 4.7502 D(x): 0.8977 D(G(z)): 0.6179 / 0.0149 +[2/5][700/1583] Loss_D: 0.7054 Loss_G: 3.3908 D(x): 0.8692 D(G(z)): 0.3753 / 0.0490 +[2/5][750/1583] Loss_D: 0.4933 Loss_G: 3.6839 D(x): 0.8933 D(G(z)): 0.2845 / 0.0368 +[2/5][800/1583] Loss_D: 0.6246 Loss_G: 2.7728 D(x): 0.8081 D(G(z)): 0.2968 / 0.0821 +[2/5][850/1583] Loss_D: 1.2216 Loss_G: 1.1784 D(x): 0.3819 D(G(z)): 0.0446 / 0.3623 +[2/5][900/1583] Loss_D: 0.6578 Loss_G: 1.7445 D(x): 0.6494 
D(G(z)): 0.1271 / 0.2173 +[2/5][950/1583] Loss_D: 0.8333 Loss_G: 1.2805 D(x): 0.5193 D(G(z)): 0.0543 / 0.3210 +[2/5][1000/1583] Loss_D: 0.7348 Loss_G: 0.7953 D(x): 0.5920 D(G(z)): 0.1265 / 0.4815 +[2/5][1050/1583] Loss_D: 0.6809 Loss_G: 3.7259 D(x): 0.8793 D(G(z)): 0.3686 / 0.0401 +[2/5][1100/1583] Loss_D: 0.7728 Loss_G: 2.1345 D(x): 0.5886 D(G(z)): 0.1234 / 0.1626 +[2/5][1150/1583] Loss_D: 0.9383 Loss_G: 3.7146 D(x): 0.8942 D(G(z)): 0.5075 / 0.0355 +[2/5][1200/1583] Loss_D: 0.4951 Loss_G: 2.8725 D(x): 0.8084 D(G(z)): 0.2163 / 0.0764 +[2/5][1250/1583] Loss_D: 0.6952 Loss_G: 2.1559 D(x): 0.6769 D(G(z)): 0.2063 / 0.1561 +[2/5][1300/1583] Loss_D: 0.4560 Loss_G: 2.6873 D(x): 0.7993 D(G(z)): 0.1710 / 0.0908 +[2/5][1350/1583] Loss_D: 0.9185 Loss_G: 3.9262 D(x): 0.8631 D(G(z)): 0.4938 / 0.0276 +[2/5][1400/1583] Loss_D: 0.5935 Loss_G: 1.2768 D(x): 0.6625 D(G(z)): 0.1064 / 0.3214 +[2/5][1450/1583] Loss_D: 0.8836 Loss_G: 4.0820 D(x): 0.9368 D(G(z)): 0.5101 / 0.0251 +[2/5][1500/1583] Loss_D: 0.5268 Loss_G: 2.1486 D(x): 0.7462 D(G(z)): 0.1701 / 0.1450 +[2/5][1550/1583] Loss_D: 0.5581 Loss_G: 3.0543 D(x): 0.8082 D(G(z)): 0.2489 / 0.0644 +[3/5][0/1583] Loss_D: 0.6875 Loss_G: 2.3447 D(x): 0.7796 D(G(z)): 0.3180 / 0.1182 +[3/5][50/1583] Loss_D: 0.7772 Loss_G: 1.2497 D(x): 0.5569 D(G(z)): 0.0763 / 0.3372 +[3/5][100/1583] Loss_D: 1.8087 Loss_G: 0.8440 D(x): 0.2190 D(G(z)): 0.0213 / 0.4701 +[3/5][150/1583] Loss_D: 0.6292 Loss_G: 2.8794 D(x): 0.8807 D(G(z)): 0.3623 / 0.0741 +[3/5][200/1583] Loss_D: 0.5880 Loss_G: 2.2299 D(x): 0.8279 D(G(z)): 0.3026 / 0.1316 +[3/5][250/1583] Loss_D: 0.7737 Loss_G: 1.2797 D(x): 0.5589 D(G(z)): 0.0836 / 0.3363 +[3/5][300/1583] Loss_D: 0.5120 Loss_G: 1.5623 D(x): 0.7216 D(G(z)): 0.1406 / 0.2430 +[3/5][350/1583] Loss_D: 0.5651 Loss_G: 3.2310 D(x): 0.8586 D(G(z)): 0.3048 / 0.0518 +[3/5][400/1583] Loss_D: 1.3554 Loss_G: 5.0320 D(x): 0.9375 D(G(z)): 0.6663 / 0.0112 +[3/5][450/1583] Loss_D: 0.5939 Loss_G: 1.9385 D(x): 0.6931 D(G(z)): 0.1538 / 0.1785 +[3/5][500/1583] Loss_D: 1.5698 Loss_G: 5.0469 D(x): 0.9289 D(G(z)): 0.7124 / 0.0106 +[3/5][550/1583] Loss_D: 0.5496 Loss_G: 1.7024 D(x): 0.6891 D(G(z)): 0.1171 / 0.2172 +[3/5][600/1583] Loss_D: 2.0152 Loss_G: 6.4814 D(x): 0.9824 D(G(z)): 0.8069 / 0.0031 +[3/5][650/1583] Loss_D: 0.6249 Loss_G: 2.9602 D(x): 0.8547 D(G(z)): 0.3216 / 0.0707 +[3/5][700/1583] Loss_D: 0.4448 Loss_G: 2.3997 D(x): 0.8289 D(G(z)): 0.2034 / 0.1153 +[3/5][750/1583] Loss_D: 0.5768 Loss_G: 2.5956 D(x): 0.8094 D(G(z)): 0.2721 / 0.1032 +[3/5][800/1583] Loss_D: 0.5314 Loss_G: 2.9121 D(x): 0.8603 D(G(z)): 0.2838 / 0.0724 +[3/5][850/1583] Loss_D: 0.9673 Loss_G: 4.2585 D(x): 0.9067 D(G(z)): 0.5233 / 0.0206 +[3/5][900/1583] Loss_D: 0.7076 Loss_G: 2.7892 D(x): 0.7294 D(G(z)): 0.2625 / 0.0909 +[3/5][950/1583] Loss_D: 0.4336 Loss_G: 2.8206 D(x): 0.8736 D(G(z)): 0.2363 / 0.0770 +[3/5][1000/1583] Loss_D: 0.6914 Loss_G: 1.9334 D(x): 0.6811 D(G(z)): 0.2143 / 0.1734 +[3/5][1050/1583] Loss_D: 0.6618 Loss_G: 1.8457 D(x): 0.6486 D(G(z)): 0.1421 / 0.2036 +[3/5][1100/1583] Loss_D: 0.6517 Loss_G: 3.2499 D(x): 0.8540 D(G(z)): 0.3491 / 0.0532 +[3/5][1150/1583] Loss_D: 0.6688 Loss_G: 3.9172 D(x): 0.9389 D(G(z)): 0.4170 / 0.0269 +[3/5][1200/1583] Loss_D: 0.9467 Loss_G: 0.8899 D(x): 0.4853 D(G(z)): 0.1028 / 0.4567 +[3/5][1250/1583] Loss_D: 0.6048 Loss_G: 3.3952 D(x): 0.8353 D(G(z)): 0.3150 / 0.0425 +[3/5][1300/1583] Loss_D: 0.4915 Loss_G: 2.5383 D(x): 0.7663 D(G(z)): 0.1622 / 0.1071 +[3/5][1350/1583] Loss_D: 0.7804 Loss_G: 1.5018 D(x): 0.5405 D(G(z)): 0.0719 / 0.2701 +[3/5][1400/1583] Loss_D: 
0.6432 Loss_G: 1.5893 D(x): 0.6069 D(G(z)): 0.0576 / 0.2577
+[3/5][1450/1583] Loss_D: 0.7720 Loss_G: 3.8510 D(x): 0.9291 D(G(z)): 0.4558 / 0.0299
+[3/5][1500/1583] Loss_D: 0.9340 Loss_G: 4.6210 D(x): 0.9556 D(G(z)): 0.5341 / 0.0141
+[3/5][1550/1583] Loss_D: 0.7278 Loss_G: 4.0992 D(x): 0.9071 D(G(z)): 0.4276 / 0.0231
+[4/5][0/1583] Loss_D: 0.4672 Loss_G: 1.9660 D(x): 0.7085 D(G(z)): 0.0815 / 0.1749
+[4/5][50/1583] Loss_D: 0.5710 Loss_G: 2.3229 D(x): 0.6559 D(G(z)): 0.0654 / 0.1285
+[4/5][100/1583] Loss_D: 0.8091 Loss_G: 0.8053 D(x): 0.5301 D(G(z)): 0.0609 / 0.4987
+[4/5][150/1583] Loss_D: 0.5661 Loss_G: 1.4238 D(x): 0.6836 D(G(z)): 0.1228 / 0.2842
+[4/5][200/1583] Loss_D: 0.6187 Loss_G: 1.6628 D(x): 0.6178 D(G(z)): 0.0744 / 0.2292
+[4/5][250/1583] Loss_D: 0.9808 Loss_G: 2.0649 D(x): 0.5769 D(G(z)): 0.2623 / 0.1706
+[4/5][300/1583] Loss_D: 0.6530 Loss_G: 2.7874 D(x): 0.8024 D(G(z)): 0.3063 / 0.0804
+[4/5][350/1583] Loss_D: 0.5535 Loss_G: 2.5154 D(x): 0.7744 D(G(z)): 0.2165 / 0.1023
+[4/5][400/1583] Loss_D: 0.5277 Loss_G: 2.1542 D(x): 0.6766 D(G(z)): 0.0801 / 0.1474
+[4/5][450/1583] Loss_D: 0.5995 Loss_G: 2.6477 D(x): 0.7890 D(G(z)): 0.2694 / 0.0902
+[4/5][500/1583] Loss_D: 0.7183 Loss_G: 1.2993 D(x): 0.5748 D(G(z)): 0.1000 / 0.3213
+[4/5][550/1583] Loss_D: 0.4708 Loss_G: 2.0671 D(x): 0.7286 D(G(z)): 0.1094 / 0.1526
+[4/5][600/1583] Loss_D: 0.5865 Loss_G: 1.9083 D(x): 0.7084 D(G(z)): 0.1745 / 0.1867
+[4/5][650/1583] Loss_D: 1.5298 Loss_G: 4.2918 D(x): 0.9623 D(G(z)): 0.7240 / 0.0197
+[4/5][700/1583] Loss_D: 0.9155 Loss_G: 0.9452 D(x): 0.4729 D(G(z)): 0.0575 / 0.4395
+[4/5][750/1583] Loss_D: 0.7500 Loss_G: 1.7498 D(x): 0.5582 D(G(z)): 0.0772 / 0.2095
+[4/5][800/1583] Loss_D: 0.5993 Loss_G: 2.5779 D(x): 0.7108 D(G(z)): 0.1829 / 0.1063
+[4/5][850/1583] Loss_D: 0.6787 Loss_G: 3.6855 D(x): 0.9201 D(G(z)): 0.4084 / 0.0347
+[4/5][900/1583] Loss_D: 1.2792 Loss_G: 2.2909 D(x): 0.6365 D(G(z)): 0.4471 / 0.1575
+[4/5][950/1583] Loss_D: 0.6995 Loss_G: 3.3548 D(x): 0.9201 D(G(z)): 0.4188 / 0.0488
+[4/5][1000/1583] Loss_D: 0.6913 Loss_G: 3.9969 D(x): 0.8630 D(G(z)): 0.3771 / 0.0242
+[4/5][1050/1583] Loss_D: 0.7620 Loss_G: 1.7744 D(x): 0.6668 D(G(z)): 0.2290 / 0.2204
+[4/5][1100/1583] Loss_D: 0.6901 Loss_G: 3.1660 D(x): 0.8472 D(G(z)): 0.3595 / 0.0593
+[4/5][1150/1583] Loss_D: 0.5866 Loss_G: 2.4580 D(x): 0.7962 D(G(z)): 0.2695 / 0.1049
+[4/5][1200/1583] Loss_D: 0.8830 Loss_G: 3.9824 D(x): 0.9264 D(G(z)): 0.5007 / 0.0264
+[4/5][1250/1583] Loss_D: 0.4750 Loss_G: 2.1389 D(x): 0.8004 D(G(z)): 0.1933 / 0.1464
+[4/5][1300/1583] Loss_D: 0.4972 Loss_G: 2.3561 D(x): 0.8266 D(G(z)): 0.2325 / 0.1285
+[4/5][1350/1583] Loss_D: 0.6721 Loss_G: 1.1904 D(x): 0.6042 D(G(z)): 0.0839 / 0.3486
+[4/5][1400/1583] Loss_D: 0.4447 Loss_G: 2.7106 D(x): 0.8540 D(G(z)): 0.2219 / 0.0852
+[4/5][1450/1583] Loss_D: 0.4864 Loss_G: 2.5237 D(x): 0.7153 D(G(z)): 0.1017 / 0.1036
+[4/5][1500/1583] Loss_D: 0.7662 Loss_G: 1.1344 D(x): 0.5429 D(G(z)): 0.0600 / 0.3805
+[4/5][1550/1583] Loss_D: 0.4294 Loss_G: 2.9664 D(x): 0.8335 D(G(z)): 0.1943 / 0.0689
+```
+
+
+#### 3.3.5 Results
+Finally, let's check out how we did. Here we will look at three different results. First, we will see how the losses of ![](notation/D.gif)
+and ![](notation/G.gif) changed during training. Second, we will visualize the output of ![](notation/G.gif) on the fixed_noise batch at every epoch.
+Third, we will look at a batch of real data next to a batch of fake data produced by ![](notation/G.gif).
+
+#### Loss versus training iteration
+Below is a plot of D's and G's losses versus training iterations.
+
+```buildoutcfg
+plt.figure(figsize=(10,5))
+plt.title("Generator and Discriminator Loss During Training")
+plt.plot(G_losses,label="G")
+plt.plot(D_losses,label="D")
+plt.xlabel("iterations")
+plt.ylabel("Loss")
+plt.legend()
+plt.show()
+```
+
+![](https://pytorch.org/tutorials/_images/sphx_glr_dcgan_faces_tutorial_002.png)
+
+#### Visualization of G's progression
+Remember how we saved the generator's output on the fixed_noise batch after every training epoch. Now we can visualize G's training
+progression with an animation. Press the play button to start the animation.
+
+```buildoutcfg
+#%%capture
+fig = plt.figure(figsize=(8,8))
+plt.axis("off")
+ims = [[plt.imshow(np.transpose(i,(1,2,0)), animated=True)] for i in img_list]
+ani = animation.ArtistAnimation(fig, ims, interval=1000, repeat_delay=1000, blit=True)
+
+HTML(ani.to_jshtml())
+```
+
+![](https://pytorch.org/tutorials/_images/sphx_glr_dcgan_faces_tutorial_003.png)
+
+#### Real images vs. fake images
+Finally, let's take a look at some real images and fake images side by side.
+
+```buildoutcfg
+# Grab a batch of real images from the dataloader
+real_batch = next(iter(dataloader))
+
+# Plot the real images
+plt.figure(figsize=(15,15))
+plt.subplot(1,2,1)
+plt.axis("off")
+plt.title("Real Images")
+plt.imshow(np.transpose(vutils.make_grid(real_batch[0].to(device)[:64], padding=5, normalize=True).cpu(),(1,2,0)))
+
+# Plot the fake images from the last epoch
+plt.subplot(1,2,2)
+plt.axis("off")
+plt.title("Fake Images")
+plt.imshow(np.transpose(img_list[-1],(1,2,0)))
+plt.show()
+```
+
+![](https://pytorch.org/tutorials/_images/sphx_glr_dcgan_faces_tutorial_004.png)
+
+## Further work
+We have reached the end of this tutorial, but there are several directions you could take it from here. You could:
+
+* Train for longer to see how good the results get
+* Modify this model to take a different dataset, and possibly change the size of the images and the model architecture (a minimal sketch of this is given after this list)
+* Check out some other cool GAN projects [here](https://github.com/nashory/gans-awesome-applications)
+* Create GANs that generate [music](https://github.com/nashory/gans-awesome-applications)
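+
+To make the second suggestion above a bit more concrete, here is a minimal sketch of how you might point the training code at a different dataset. CIFAR-10 is only an illustrative choice (it is not part of this tutorial), and the snippet redefines `image_size`, `batch_size`, and `workers` so it stands alone; treat it as a starting point under those assumptions rather than a definitive recipe.
+
+```buildoutcfg
+# Minimal sketch (illustrative, not from the tutorial above): load CIFAR-10
+# instead of the face dataset while keeping the 3x64x64 DCGAN input format.
+import torch.utils.data
+import torchvision.datasets as dset
+import torchvision.transforms as transforms
+
+image_size = 64   # G and D above expect 3x64x64 images
+batch_size = 128
+workers = 2
+
+dataset = dset.CIFAR10(root="./data", download=True,
+                       transform=transforms.Compose([
+                           transforms.Resize(image_size),
+                           transforms.CenterCrop(image_size),
+                           transforms.ToTensor(),
+                           transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
+                       ]))
+
+dataloader = torch.utils.data.DataLoader(dataset, batch_size=batch_size,
+                                         shuffle=True, num_workers=workers)
+
+# The training loop can then be reused as-is, because CIFAR-10 images are RGB
+# (nc=3). For other resolutions you would add or remove conv / conv-transpose
+# layers in the Generator and Discriminator accordingly.
+```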