#### GANs PyTorch

{
"cells": [
{
"cell_type": "markdown",
"id": "oDUanjQCdtoV"
},
"source": [
"### What is a GAN?\n",
"\n",
"In 2014, [Goodfellow et al.](https://arxiv.org/abs/1406.2661) presented a method for training generative models called Generative Adversarial Networks (GANs for short). In a GAN, we build two different neural networks. Our first network is a traditional classification network, called the **discriminator**. We will train the discriminator to take images, and classify them as being real (belonging to the training set) or fake (not present in the training set). Our other network, called the **generator**, will take random noise as input and transform it using a neural network to produce images. The goal of the generator is to fool the discriminator into thinking the images it produced are real.\n",
"\n",
"We can think of this back and forth process of the generator ($G$) trying to fool the discriminator ($D$), and the discriminator trying to correctly classify real vs. fake as a minimax game:\n",
"$$\\underset{G}{\\text{minimize}}\\; \\underset{D}{\\text{maximize}}\\; \\mathbb{E}_{x \\sim p_\\text{data}}\\left[\\log D(x)\\right] + \\mathbb{E}_{z \\sim p(z)}\\left[\\log \\left(1-D(G(z))\\right)\\right]$$\n",
"where $z \\sim p(z)$ are the random noise samples, $G(z)$ are the generated images using the neural network generator $G$, and $D$ is the output of the discriminator, specifying the probability of an input being real. In [Goodfellow et al.](https://arxiv.org/abs/1406.2661), they analyze this minimax game and show how it relates to minimizing the Jensen-Shannon divergence between the training data distribution and the generated samples from $G$.\n",
"\n",
"To optimize this minimax game, we will alternate between taking gradient *descent* steps on the objective for $G$, and gradient *ascent* steps on the objective for $D$:\n",
"1. Update the **generator** ($G$) to minimize the probability of the __discriminator making the correct choice__. \n",
"2. Update the **discriminator** ($D$) to maximize the probability of the __discriminator making the correct choice__.\n",
"\n",
"While these updates are useful for analysis, they do not perform well in practice. Instead, we will use a different objective when we update the generator: maximize the probability of the **discriminator making the incorrect choice**. This small change helps to alleviate problems with the generator gradient vanishing when the discriminator is confident. This is the standard update used in most GAN papers, and was used in the original paper from [Goodfellow et al.](https://arxiv.org/abs/1406.2661). \n",
"\n",
"In this assignment, we will alternate the following updates:\n",
"1. Update the generator ($G$) to maximize the probability of the discriminator making the incorrect choice on generated data:\n",
"$$\\underset{G}{\\text{maximize}}\\; \\mathbb{E}_{z \\sim p(z)}\\left[\\log D(G(z))\\right]$$\n",
"2. Update the discriminator ($D$), to maximize the probability of the discriminator making the correct choice on real and generated data:\n",
"$$\\underset{D}{\\text{maximize}}\\; \\mathbb{E}_{x \\sim p_\\text{data}}\\left[\\log D(x)\\right] + \\mathbb{E}_{z \\sim p(z)}\\left[\\log \\left(1-D(G(z))\\right)\\right]$$\n",
"\n",
"### What else is there in this notebook?\n",
"![caption](gan_outputs_pytorch.png)"
]
},
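As a concrete illustration of these alternating updates, here is a minimal, entirely hypothetical 1-D sketch: the "generator" just shifts Gaussian noise by a scalar `g`, the "discriminator" is logistic regression, and the gradients of both objectives are written out by hand in NumPy (a stand-in for the PyTorch training loop later in this notebook).

```python
import numpy as np

# Toy 1-D stand-in for the alternating GAN updates (all names hypothetical):
# the "generator" shifts noise by a scalar g, the "discriminator" is logistic
# regression D(x) = sigmoid(w*x + b), with hand-derived gradients.
rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-np.clip(u, -50, 50)))

g = 0.0          # generator parameter: fake samples are z + g
w, b = 0.0, 0.0  # discriminator parameters
lr = 0.1

for _ in range(500):
    x_real = 2.0 + 0.1 * rng.standard_normal(64)  # "data": N(2, 0.1^2)
    x_fake = rng.standard_normal(64) + g

    # Gradient *ascent* step on the discriminator objective
    # E[log D(x)] + E[log(1 - D(G(z)))].
    d_real, d_fake = sigmoid(w * x_real + b), sigmoid(w * x_fake + b)
    w += lr * np.mean((1 - d_real) * x_real - d_fake * x_fake)
    b += lr * np.mean((1 - d_real) - d_fake)

    # Gradient *ascent* step on the non-saturating generator objective
    # E[log D(G(z))].
    z = rng.standard_normal(64)
    d_gen = sigmoid(w * (z + g) + b)
    g += lr * np.mean((1 - d_gen) * w)
```

Run long enough, the shift `g` should drift toward the data mean of 2: exactly the "fool the discriminator" behavior the minimax game is meant to produce.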
{
"cell_type": "markdown",
"id": "OgrXJSMmdtoW"
},
"source": [
"## Setup"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "CYVwNTuFdtoX"
},
"outputs": [],
"source": [
"import torch\n",
"import torch.nn as nn\n",
"from torch.nn import init\n",
"import torchvision\n",
"import torchvision.transforms as T\n",
"import torch.optim as optim\n",
"from torch.utils.data import sampler\n",
"import torchvision.datasets as dset\n",
"\n",
"import numpy as np\n",
"\n",
"import matplotlib.pyplot as plt\n",
"import matplotlib.gridspec as gridspec\n",
"\n",
"%matplotlib inline\n",
"plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\n",
"plt.rcParams['image.interpolation'] = 'nearest'\n",
"plt.rcParams['image.cmap'] = 'gray'\n",
"\n",
"def show_images(images):\n",
"    images = np.reshape(images, [images.shape[0], -1])  # images reshape to (batch_size, D)\n",
"    sqrtn = int(np.ceil(np.sqrt(images.shape[0])))\n",
"    sqrtimg = int(np.ceil(np.sqrt(images.shape[1])))\n",
"\n",
"    fig = plt.figure(figsize=(sqrtn, sqrtn))\n",
"    gs = gridspec.GridSpec(sqrtn, sqrtn)\n",
"    gs.update(wspace=0.05, hspace=0.05)\n",
"\n",
"    for i, img in enumerate(images):\n",
"        ax = plt.subplot(gs[i])\n",
"        plt.axis('off')\n",
"        ax.set_xticklabels([])\n",
"        ax.set_yticklabels([])\n",
"        ax.set_aspect('equal')\n",
"        plt.imshow(img.reshape([sqrtimg,sqrtimg]))\n",
"    return \n",
"\n",
"def preprocess_img(x):\n",
"    return 2 * x - 1.0\n",
"\n",
"def deprocess_img(x):\n",
"    return (x + 1.0) / 2.0\n",
"\n",
"def rel_error(x,y):\n",
"    return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n",
"\n",
"def count_params(model):\n",
"    \"\"\"Count the number of parameters in the given PyTorch model.\"\"\"\n",
"    param_count = np.sum([np.prod(p.size()) for p in model.parameters()])\n",
"    return param_count\n",
"\n"
]
},
{
"cell_type": "markdown",
"id": "BC8RGqopdtob"
},
"source": [
"## Dataset"
]
},
{
"cell_type": "code",
"execution_count": 3,
"colab": {
"base_uri": "https://localhost:8080/",
"height": 1000,
"referenced_widgets": [
"3de4b11b43e04731bb23455e0f368565",
"42d3d6f4bc874ef5bbc98b4632e29e38",
"a6be8502a9844ec494d322042e214d77",
"17caedca349a47d1a992e8bbb60b5642",
"ee639b5a0e294d9883e43314ea8d6702",
"c6e99d483fa24dfea0772971de3316fe",
"9224ea3208fe494b915250904bfa3eb6",
"bd1077ce31694e2a90080b6d51b484cc",
"255e59e324e2406ba342673af8f2a105",
"b7d43cde0bc6456ab63d6ab2f8422bec",
"ca345e79a4384c5489e1a642afae551f",
"261c9a1cb1f4421292fc632c50b78e20",
"be079b45d9f04ff38d9eba6c9e289c11",
"7ba6e77f6cbb4ca785ef608df8b5ef5c",
"8aae89186587409e96bfa45fe48b0885",
"d0bc3366acda44029c562ec8899651e8",
"f6c446977fc841f6b8615582c048740d",
"c0e0136b54044de498bf141dac1db574",
"8e9b9e1b5e464cbc9d35c27671ed9959",
"86504c47c60948639ed318e4386cbbd2",
"489b59fc12cd4dd99e66f1f80caecfea",
"51e877e40f8141b4a1aa7c9fd63ed03d",
"7f2fb3876b0f442999db456fea411d37",
"b7dbfcbf7c9c489b8975edf939632b16",
"51d715eb014e4163a1bb57b15e5b4bc4",
"78f6e9e772434dc891521d57062bf1b2",
"0149a792bcdb441eb23471e3d733878e",
"58d3a6ea0d6c49b68a22693d026ecc7d"
]
},
"id": "cxkhjwB6dtob",
"outputId": "4ed97823-c3fa-4c58-e9d6-a7df2380ee49",
"scrolled": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "3de4b11b43e04731bb23455e0f368565",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"HBox(children=(FloatProgress(value=1.0, bar_style='info', max=1.0), HTML(value='')))"
]
},
"tags": []
},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Extracting ./utils/datasets/MNIST_data/MNIST/raw/train-images-idx3-ubyte.gz to ./utils/datasets/MNIST_data/MNIST/raw\n",
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "9224ea3208fe494b915250904bfa3eb6",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"HBox(children=(FloatProgress(value=1.0, bar_style='info', max=1.0), HTML(value='')))"
]
},
"tags": []
},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Extracting ./utils/datasets/MNIST_data/MNIST/raw/train-labels-idx1-ubyte.gz to ./utils/datasets/MNIST_data/MNIST/raw\n",
"\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "8aae89186587409e96bfa45fe48b0885",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"HBox(children=(FloatProgress(value=1.0, bar_style='info', max=1.0), HTML(value='')))"
]
},
"tags": []
},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Extracting ./utils/datasets/MNIST_data/MNIST/raw/t10k-images-idx3-ubyte.gz to ./utils/datasets/MNIST_data/MNIST/raw\n",
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "51e877e40f8141b4a1aa7c9fd63ed03d",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"HBox(children=(FloatProgress(value=1.0, bar_style='info', max=1.0), HTML(value='')))"
]
},
"tags": []
},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Extracting ./utils/datasets/MNIST_data/MNIST/raw/t10k-labels-idx1-ubyte.gz to ./utils/datasets/MNIST_data/MNIST/raw\n",
"Processing...\n",
"Done!\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"/usr/local/lib/python3.6/dist-packages/torchvision/datasets/mnist.py:469: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at  /pytorch/torch/csrc/utils/tensor_numpy.cpp:141.)\n",
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\n"
]
},
{
"data": {
"text/plain": [
"<Figure size 864x864 with 128 Axes>"
]
},
"needs_background": "light",
"tags": []
},
"output_type": "display_data"
}
],
"source": [
"class ChunkSampler(sampler.Sampler):\n",
"    \"\"\"Samples elements sequentially from some offset. \n",
"    Arguments:\n",
"        num_samples: # of desired datapoints\n",
"        start: offset where we should start selecting from\n",
"    \"\"\"\n",
"    def __init__(self, num_samples, start=0):\n",
"        self.num_samples = num_samples\n",
"        self.start = start\n",
"\n",
"    def __iter__(self):\n",
"        return iter(range(self.start, self.start + self.num_samples))\n",
"\n",
"    def __len__(self):\n",
"        return self.num_samples\n",
"\n",
"NUM_TRAIN = 50000\n",
"NUM_VAL = 5000\n",
"\n",
"NOISE_DIM = 96\n",
"batch_size = 128\n",
"\n",
"mnist_train = dset.MNIST('./utils/datasets/MNIST_data', train=True, download=True,\n",
"                         transform=T.ToTensor())\n",
"loader_train = torch.utils.data.DataLoader(mnist_train, batch_size=batch_size,\n",
"                                           sampler=ChunkSampler(NUM_TRAIN, 0))\n",
"\n",
"mnist_val = dset.MNIST('./utils/datasets/MNIST_data', train=True, download=True,\n",
"                       transform=T.ToTensor())\n",
"loader_val = torch.utils.data.DataLoader(mnist_val, batch_size=batch_size,\n",
"                                         sampler=ChunkSampler(NUM_VAL, NUM_TRAIN))\n",
"\n",
"imgs = next(iter(loader_train))[0].view(batch_size, 784).numpy().squeeze()\n",
"show_images(imgs)"
]
},
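The train/val split above relies only on `ChunkSampler` yielding a contiguous range of indices. A torch-free copy of its logic (renamed here so it doesn't clash with the real class) makes the index bookkeeping easy to see:

```python
# A torch-free copy of ChunkSampler's logic, just to show the index
# bookkeeping: train and val draw disjoint, contiguous index ranges.
class PlainChunkSampler:
    def __init__(self, num_samples, start=0):
        self.num_samples = num_samples
        self.start = start

    def __iter__(self):
        # Yield indices start, start+1, ..., start+num_samples-1.
        return iter(range(self.start, self.start + self.num_samples))

    def __len__(self):
        return self.num_samples

train_idx = list(PlainChunkSampler(5, 0))  # indices 0..4
val_idx = list(PlainChunkSampler(3, 5))    # indices 5..7
```

With `NUM_TRAIN = 50000` and `NUM_VAL = 5000` as above, the validation sampler starts exactly where the training sampler stops, so the two loaders never share an example.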
{
"cell_type": "markdown",
"id": "oXmeqMF_dtoe"
},
"source": [
"## Random Noise\n",
"Generate uniform noise from -1 to 1 with shape [batch_size, dim]."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "mTSBJLnDdtoe"
},
"outputs": [],
"source": [
"def sample_noise(batch_size, dim):\n",
"    \"\"\"\n",
"    Generate a PyTorch Tensor of uniform random noise.\n",
"\n",
"    Input:\n",
"    - batch_size: Integer giving the batch size of noise to generate.\n",
"    - dim: Integer giving the dimension of noise to generate.\n",
"    \n",
"    Output:\n",
"    - A PyTorch Tensor of shape (batch_size, dim) containing uniform\n",
"      random noise in the range (-1, 1).\n",
"    \"\"\"\n",
"    # torch.rand samples uniformly from [0, 1); rescale to (-1, 1).\n",
"    return 2 * torch.rand(batch_size, dim) - 1"
]
},
{
"cell_type": "markdown",
"id": "pyZvX4kYdtoh"
},
"source": [
"Check noise is the correct shape and type:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"outputId": "36d3917f-d5cd-43ef-8bb2-ac3ae6896d84"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"All tests passed!\n"
]
}
],
"source": [
"def test_sample_noise():\n",
"    batch_size = 3\n",
"    dim = 4\n",
"    torch.manual_seed(231)\n",
"    z = sample_noise(batch_size, dim)\n",
"    np_z = z.cpu().numpy()\n",
"    assert np_z.shape == (batch_size, dim)\n",
"    assert torch.is_tensor(z)\n",
"    assert np.all(np_z >= -1.0) and np.all(np_z <= 1.0)\n",
"    assert np.any(np_z < 0.0) and np.any(np_z > 0.0)\n",
"    print('All tests passed!')\n",
"    \n",
"test_sample_noise()"
]
},
{
"cell_type": "markdown",
"id": "SS_F4WRHdtok"
},
"source": [
"## Flatten"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "AliSaexBdtok"
},
"outputs": [],
"source": [
"class Flatten(nn.Module):\n",
"    def forward(self, x):\n",
"        N, C, H, W = x.size() # read in N, C, H, W\n",
"        return x.view(N, -1)  # \"flatten\" the C * H * W values into a single vector per image\n",
"    \n",
"class Unflatten(nn.Module):\n",
"    \"\"\"\n",
"    An Unflatten module receives an input of shape (N, C*H*W) and reshapes it\n",
"    to produce an output of shape (N, C, H, W).\n",
"    \"\"\"\n",
"    def __init__(self, N=-1, C=128, H=7, W=7):\n",
"        super(Unflatten, self).__init__()\n",
"        self.N = N\n",
"        self.C = C\n",
"        self.H = H\n",
"        self.W = W\n",
"    def forward(self, x):\n",
"        return x.view(self.N, self.C, self.H, self.W)\n",
"\n",
"def initialize_weights(m):\n",
"    if isinstance(m, nn.Linear) or isinstance(m, nn.ConvTranspose2d):\n",
"        init.xavier_uniform_(m.weight.data)"
]
},
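The shape arithmetic behind `Flatten` and `Unflatten` can be checked with a quick NumPy stand-in (`reshape` plays the role of `x.view`):

```python
import numpy as np

# Flatten: (N, C, H, W) -> (N, C*H*W); Unflatten inverts it.
x = np.arange(128 * 1 * 28 * 28, dtype=np.float32).reshape(128, 1, 28, 28)
flat = x.reshape(x.shape[0], -1)    # what Flatten does
back = flat.reshape(-1, 1, 28, 28)  # what Unflatten(-1, 1, 28, 28) does
```

For MNIST, `C*H*W = 1*28*28 = 784`, which is why the discriminator's first `Linear` layer below takes 784 inputs.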
{
"cell_type": "markdown",
"id": "cjQipV5idton"
},
"source": [
"## CPU / GPU"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "Ss-M5fZwdton"
},
"outputs": [],
"source": [
"dtype = torch.FloatTensor\n",
"dtype = torch.cuda.FloatTensor # COMMENT THIS LINE IF YOU'RE ON A CPU!"
]
},
{
"cell_type": "markdown",
"id": "OcpYcDNLdtoq"
},
"source": [
"# Discriminator"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "gT4rloGkdtor"
},
"outputs": [],
"source": [
"def discriminator():\n",
"    \"\"\"\n",
"    Build and return a PyTorch model implementing the architecture.\n",
"    \"\"\"\n",
"    model = nn.Sequential( Flatten(),\n",
"                           nn.Linear(784, 256),\n",
"                           nn.LeakyReLU(inplace=True),\n",
"                           nn.Linear(256,256),\n",
"                           nn.LeakyReLU(inplace=True),\n",
"                           nn.Linear(256,1)\n",
"                         )\n",
"    return model"
]
},
{
"cell_type": "markdown",
"id": "0MbstME3dtot"
},
"source": [
"Test to make sure the number of parameters in the discriminator is correct:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"id": "APf0Nevndtot",
"outputId": "01740507-7878-47be-f656-be677d6079a7"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Correct number of parameters in discriminator.\n"
]
}
],
"source": [
"def test_discriminator(true_count=267009):\n",
"    model = discriminator()\n",
"    cur_count = count_params(model)\n",
"    if cur_count != true_count:\n",
"        print('Incorrect number of parameters in discriminator. Check your architecture.')\n",
"    else:\n",
"        print('Correct number of parameters in discriminator.')     \n",
"\n",
"test_discriminator()"
]
},
{
"cell_type": "markdown",
"id": "K03pVpqqdtow"
},
"source": [
"# Generator"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "4GdEZ7grdtow"
},
"outputs": [],
"source": [
"def generator(noise_dim=NOISE_DIM):\n",
"    \"\"\"\n",
"    Build and return a PyTorch model implementing the architecture.\n",
"    \"\"\"\n",
"    model = nn.Sequential( nn.Linear(noise_dim,1024),\n",
"                           nn.ReLU(inplace=True),\n",
"                           nn.Linear(1024,1024),\n",
"                           nn.ReLU(inplace=True),\n",
"                           nn.Linear(1024,784),\n",
"                           nn.Tanh()\n",
"                         )\n",
"    return model"
]
},
{
"cell_type": "markdown",
"id": "cBIjphBKdtoz"
},
"source": [
"Test to make sure the number of parameters in the generator is correct:"
]
},
{
"cell_type": "code",
"execution_count": 12,
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"id": "Lfc_zIWJdtoz",
"outputId": "0c1c13cb-7307-4a91-abc2-3460b3886cc8"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Correct number of parameters in generator.\n"
]
}
],
"source": [
"def test_generator(true_count=1858320):\n",
"    model = generator(4)\n",
"    cur_count = count_params(model)\n",
"    if cur_count != true_count:\n",
"        print('Incorrect number of parameters in generator. Check your architecture.')\n",
"    else:\n",
"        print('Correct number of parameters in generator.')\n",
"\n",
"test_generator()"
]
},
{
"cell_type": "markdown",
"id": "xnMGNozNdto2"
},
"source": [
"# GAN Loss\n",
"\n",
"Compute the generator and discriminator loss. The generator loss is:\n",
"$$\\ell_G = -\\mathbb{E}_{z \\sim p(z)}\\left[\\log D(G(z))\\right]$$\n",
"and the discriminator loss is:\n",
"$$\\ell_D = -\\mathbb{E}_{x \\sim p_\\text{data}}\\left[\\log D(x)\\right] - \\mathbb{E}_{z \\sim p(z)}\\left[\\log \\left(1-D(G(z))\\right)\\right]$$"
]
},
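To make the two losses concrete, here is a small hand-worked example (NumPy, with made-up discriminator outputs) evaluating $\ell_G$ and $\ell_D$ directly on probabilities:

```python
import numpy as np

# Made-up discriminator outputs (probabilities of "real") for illustration.
D_real = np.array([0.9, 0.8])  # D(x) on two real images
D_fake = np.array([0.1, 0.3])  # D(G(z)) on two generated images

l_G = -np.mean(np.log(D_fake))                                # generator loss
l_D = -np.mean(np.log(D_real)) - np.mean(np.log(1 - D_fake))  # discriminator loss
```

A confident, correct discriminator makes `l_D` small, while `l_G` shrinks only as `D_fake` rises, i.e. as the fakes become convincing.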
{
"cell_type": "code",
"execution_count": 13,
"id": "9yu9yAO6dto2"
},
"outputs": [],
"source": [
"def bce_loss(input, target):\n",
"    \"\"\"Compute a numerically stable version of the binary cross-entropy loss.\n",
"    Inputs:\n",
"    - input: PyTorch Tensor of shape (N, ) giving scores.\n",
"    - target: PyTorch Tensor of shape (N,) containing 0 and 1 giving targets.\n",
"\n",
"    Returns:\n",
"    - A PyTorch Tensor containing the mean BCE loss over the minibatch of input data.\n",
"    \"\"\"\n",
"    neg_abs = - input.abs()\n",
"    loss = input.clamp(min=0) - input * target + (1 + neg_abs.exp()).log()\n",
"    return loss.mean()"
]
},
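The expression in `bce_loss` is the standard numerically stable form of binary cross-entropy on logits. A quick NumPy check (with made-up scores) confirms it agrees with the naive sigmoid-then-log formula where the latter is safe to evaluate:

```python
import numpy as np

s = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])  # made-up logits
y = np.array([0.0, 1.0, 1.0, 0.0, 1.0])    # targets

# Stable form used by bce_loss: max(s, 0) - s*y + log(1 + exp(-|s|)).
stable = np.maximum(s, 0) - s * y + np.log1p(np.exp(-np.abs(s)))

# Naive form: -y*log(sigmoid(s)) - (1-y)*log(1 - sigmoid(s)).
p = 1.0 / (1.0 + np.exp(-s))
naive = -y * np.log(p) - (1 - y) * np.log(1 - p)
```

The stable form never exponentiates a large positive number, so it avoids the overflow (and the `log(0)`) that the naive form hits for large `|s|`.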
{
"cell_type": "code",
"execution_count": 14,
"id": "AOCyZALXdto5"
},
"outputs": [],
"source": [
"def discriminator_loss(logits_real, logits_fake):\n",
"    \"\"\"\n",
"    Computes the discriminator loss described above.\n",
"    \n",
"    Inputs:\n",
"    - logits_real: PyTorch Tensor of shape (N,) giving scores for the real data.\n",
"    - logits_fake: PyTorch Tensor of shape (N,) giving scores for the fake data.\n",
"    \n",
"    Returns:\n",
"    - loss: PyTorch Tensor containing (scalar) the loss for the discriminator.\n",
"    \"\"\"\n",
"    true_labels = torch.ones_like(logits_real)\n",
"    loss = bce_loss(logits_real, true_labels) + bce_loss(logits_fake, torch.zeros_like(logits_fake))\n",
"    return loss\n",
"\n",
"def generator_loss(logits_fake):\n",
"    \"\"\"\n",
"    Computes the generator loss described above.\n",
"\n",
"    Inputs:\n",
"    - logits_fake: PyTorch Tensor of shape (N,) giving scores for the fake data.\n",
"    \n",
"    Returns:\n",
"    - loss: PyTorch Tensor containing the (scalar) loss for the generator.\n",
"    \"\"\"\n",
"    loss = bce_loss(logits_fake, torch.ones_like(logits_fake))\n",
"    return loss"
]
},
{
"cell_type": "markdown",
},
"source": [
"Check generator and discriminator loss. We should see errors < 1e-7."
]
},
{
"cell_type": "code",
"execution_count": 15,
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"id": "9qVTG21-dto7",
"outputId": "fd6dbf37-e87d-4e0a-9781-58de0fc98eea"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Maximum error in d_loss: 2.83811e-08\n"
]
}
],
"source": [
"def test_discriminator_loss(logits_real, logits_fake, d_loss_true):\n",
"    d_loss = discriminator_loss(torch.Tensor(logits_real).type(dtype),\n",
"                                torch.Tensor(logits_fake).type(dtype)).cpu().numpy()\n",
"    print(\"Maximum error in d_loss: %g\"%rel_error(d_loss_true, d_loss))\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 16,
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"id": "AK2fPRgNdto-",
"outputId": "6b71b6d9-92e2-4206-a311-5cda0d1060d0"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Maximum error in g_loss: 3.4188e-08\n"
]
}
],
"source": [
"def test_generator_loss(logits_fake, g_loss_true):\n",
"    g_loss = generator_loss(torch.Tensor(logits_fake).type(dtype)).cpu().numpy()\n",
"    print(\"Maximum error in g_loss: %g\"%rel_error(g_loss_true, g_loss))\n",
"\n"
]
},
{
"cell_type": "markdown",
"id": "hZ9a-AOgdtpA"
},
"source": [
"# Optimizing our loss"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "sJeiH6ZJdtpA"
},
"outputs": [],
"source": [
"def get_optimizer(model):\n",
"    \"\"\"\n",
"    Construct and return an Adam optimizer for the model with learning rate 1e-3,\n",
"    beta1=0.5, and beta2=0.999.\n",
"    \n",
"    Input:\n",
"    - model: A PyTorch model that we want to optimize.\n",
"    \n",
"    Returns:\n",
"    - An Adam optimizer for the model with the desired hyperparameters.\n",
"    \"\"\"\n",
"    optimizer = optim.Adam(model.parameters(), lr = 1e-3, betas = (0.5,0.999))\n",
"    return optimizer"
]
},
{
"cell_type": "markdown",
"id": "5eeMguyGdtpD"
},
"source": [
"# Training a GAN!"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "0Qj-KOMBdtpE"
},
"outputs": [],
"source": [
"def run_a_gan(D, G, D_solver, G_solver, discriminator_loss, generator_loss, show_every=250, \n",
"              batch_size=128, noise_size=96, num_epochs=10):\n",
"    \"\"\"\n",
"    Train a GAN!\n",
"    \n",
"    Inputs:\n",
"    - D, G: PyTorch models for the discriminator and generator\n",
"    - D_solver, G_solver: torch.optim Optimizers to use for training the\n",
"      discriminator and generator.\n",
"    - discriminator_loss, generator_loss: Functions to use for computing the\n",
"      discriminator and generator loss, respectively.\n",
"    - show_every: Show samples after every show_every iterations.\n",
"    - batch_size: Batch size to use for training.\n",
"    - noise_size: Dimension of the noise to use as input to the generator.\n",
"    - num_epochs: Number of epochs over the training dataset to use for training.\n",
"    \"\"\"\n",
"    iter_count = 0\n",
"    for epoch in range(num_epochs):\n",
"        for x, _ in loader_train:\n",
"            if len(x) != batch_size:\n",
"                continue\n",
"            D_solver.zero_grad()\n",
"            real_data = x.type(dtype)\n",
"            logits_real = D(2 * (real_data - 0.5)).type(dtype)  # rescale images to [-1, 1]\n",
"\n",
"            g_fake_seed = sample_noise(batch_size, noise_size).type(dtype)\n",
"            fake_images = G(g_fake_seed).detach()\n",
"            logits_fake = D(fake_images.view(batch_size, 1, 28, 28))\n",
"\n",
"            d_total_error = discriminator_loss(logits_real, logits_fake)\n",
"            d_total_error.backward()        \n",
"            D_solver.step()\n",
"\n",
"            G_solver.zero_grad()\n",
"            g_fake_seed = sample_noise(batch_size, noise_size).type(dtype)\n",
"            fake_images = G(g_fake_seed)\n",
"\n",
"            gen_logits_fake = D(fake_images.view(batch_size, 1, 28, 28))\n",
"            g_error = generator_loss(gen_logits_fake)\n",
"            g_error.backward()\n",
"            G_solver.step()\n",
"\n",
"            if (iter_count % show_every == 0):\n",
"                print('Iter: {}, D: {:.4}, G:{:.4}'.format(iter_count,d_total_error.item(),g_error.item()))\n",
"                imgs_numpy = fake_images.data.cpu().numpy()\n",
"                show_images(imgs_numpy[0:16])\n",
"                plt.show()\n",
"                print()\n",
"            iter_count += 1"
]
},
{
"cell_type": "code",
"execution_count": 19,
"colab": {
"base_uri": "https://localhost:8080/",
"height": 1000
},
"id": "B9miV1qfdtpG",
"outputId": "dbb0c084-b3f3-4983-ecb8-bee24e7d8dac",
"scrolled": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Iter: 0, D: 1.328, G:0.7202\n"
]
},
{
"data": {
"text/plain": [
"<Figure size 288x288 with 16 Axes>"
]
},
"needs_background": "light",
"tags": []
},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Iter: 250, D: 1.43, G:0.6752\n"
]
},
{
"data": {
"text/plain": [
"<Figure size 288x288 with 16 Axes>"
]
},
"needs_background": "light",
"tags": []
},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Iter: 500, D: 1.181, G:1.414\n"
]
},
{
"data": {
"text/plain": [
"<Figure size 288x288 with 16 Axes>"
]
},
"needs_background": "light",
"tags": []
},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Iter: 750, D: 1.204, G:1.556\n"
]
},
{
"data": {
"text/plain": [
"<Figure size 288x288 with 16 Axes>"
]
},
"needs_background": "light",
"tags": []
},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Iter: 1000, D: 1.174, G:1.126\n"
]
},
{
"data": {
"text/plain": [
"<Figure size 288x288 with 16 Axes>"
]
},
"needs_background": "light",
"tags": []
},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Iter: 1250, D: 1.255, G:1.068\n"
]
},
{
"data": {
"text/plain": [
"<Figure size 288x288 with 16 Axes>"
]
},
"needs_background": "light",
"tags": []
},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Iter: 1500, D: 1.136, G:0.971\n"
]
},
{
"data": {
"text/plain": [
"<Figure size 288x288 with 16 Axes>"
]
},
"needs_background": "light",
"tags": []
},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Iter: 1750, D: 1.317, G:0.7927\n"
]
},
{
"data": {
"text/plain": [
"<Figure size 288x288 with 16 Axes>"
]
},
"needs_background": "light",
"tags": []
},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Iter: 2000, D: 1.274, G:0.9762\n"
]
},
{
"data": {
"text/plain": [
"<Figure size 288x288 with 16 Axes>"
]
},
"needs_background": "light",
"tags": []
},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Iter: 2250, D: 1.258, G:0.9521\n"
]
},
{
"data": {
"text/plain": [
"<Figure size 288x288 with 16 Axes>"
]
},
"needs_background": "light",
"tags": []
},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Iter: 2500, D: 1.202, G:0.833\n"
]
},
{
"data": {
"text/plain": [
"<Figure size 288x288 with 16 Axes>"
]
},
"needs_background": "light",
"tags": []
},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Iter: 2750, D: 1.288, G:0.8659\n"
]
},
{
"data": {
"text/plain": [
"<Figure size 288x288 with 16 Axes>"
]
},
"needs_background": "light",
"tags": []
},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Iter: 3000, D: 1.379, G:0.824\n"
]
},
{
"data": {
"text/plain": [
"<Figure size 288x288 with 16 Axes>"
]
},
"needs_background": "light",
"tags": []
},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Iter: 3250, D: 1.392, G:0.8353\n"
]
},
{
"data": {
"text/plain": [
"<Figure size 288x288 with 16 Axes>"
]
},
"needs_background": "light",
"tags": []
},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Iter: 3500, D: 1.296, G:0.8011\n"
]
},
{
"data": {
"text/plain": [
"<Figure size 288x288 with 16 Axes>"
]
},
"needs_background": "light",
"tags": []
},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Iter: 3750, D: 1.221, G:0.841\n"
]
},
{
"data": {
"text/plain": [
"<Figure size 288x288 with 16 Axes>"
]
},
"needs_background": "light",
"tags": []
},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n"
]
}
],
"source": [
"# Make the discriminator\n",
"D = discriminator().type(dtype)\n",
"\n",
"# Make the generator\n",
"G = generator().type(dtype)\n",
"\n",
"# Use the function you wrote earlier to get optimizers for the Discriminator and the Generator\n",
"D_solver = get_optimizer(D)\n",
"G_solver = get_optimizer(G)\n",
"# Run it!\n",
"run_a_gan(D, G, D_solver, G_solver, discriminator_loss, generator_loss)"
]
},
{
"cell_type": "markdown",
"id": "vnLQHE1VdtpJ"
},
"source": [
"In the low hundreds of iterations you should see black backgrounds; fuzzy shapes as you approach iteration 1000; and decent shapes, about half of them sharp and clearly recognizable, once you pass 3000."
]
},
{
"cell_type": "markdown",
"id": "RAITXp5ZdtpK"
},
"source": [
"# Least Squares GAN\n",
"We'll now look at [Least Squares GAN](https://arxiv.org/abs/1611.04076), a newer, more stable alternative to the original GAN loss function. For this part, all we have to do is change the loss function and retrain the model. We'll implement equation (9) in the paper, with the generator loss:\n",
"$$\\ell_G = \\frac{1}{2}\\mathbb{E}_{z \\sim p(z)}\\left[\\left(D(G(z))-1\\right)^2\\right]$$\n",
"and the discriminator loss:\n",
"$$\\ell_D = \\frac{1}{2}\\mathbb{E}_{x \\sim p_\\text{data}}\\left[\\left(D(x)-1\\right)^2\\right] + \\frac{1}{2}\\mathbb{E}_{z \\sim p(z)}\\left[ \\left(D(G(z))\\right)^2\\right]$$"
]
},
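As with the original losses, a tiny hand-worked example (NumPy, made-up scores) shows what these formulas compute. Note that unlike the BCE losses above, the scores here are raw, unsquashed discriminator outputs:

```python
import numpy as np

scores_real = np.array([0.9, 0.8])  # made-up D(x) scores
scores_fake = np.array([0.1, 0.3])  # made-up D(G(z)) scores

l_G = 0.5 * np.mean((scores_fake - 1) ** 2)
l_D = 0.5 * np.mean((scores_real - 1) ** 2) + 0.5 * np.mean(scores_fake ** 2)
```

The squared penalties grow with a sample's distance from its target score, which keeps gradients from vanishing even when the discriminator is very confident; this is the stability argument made in the LSGAN paper.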
{
"cell_type": "code",
"execution_count": 20,
"id": "nuOMD1TWdtpK"
},
"outputs": [],
"source": [
"def ls_discriminator_loss(scores_real, scores_fake):\n",
"    \"\"\"\n",
"    Compute the Least-Squares GAN loss for the discriminator.\n",
"    \n",
"    Inputs:\n",
"    - scores_real: PyTorch Tensor of shape (N,) giving scores for the real data.\n",
"    - scores_fake: PyTorch Tensor of shape (N,) giving scores for the fake data.\n",
"    \n",
"    Outputs:\n",
"    - loss: A PyTorch Tensor containing the loss.\n",
"    \"\"\"\n",
"    loss = 0.5 * torch.mean((scores_real - 1) ** 2) + 0.5 * torch.mean(scores_fake ** 2)\n",
"    return loss\n",
"\n",
"def ls_generator_loss(scores_fake):\n",
"    \"\"\"\n",
"    Computes the Least-Squares GAN loss for the generator.\n",
"    \n",
"    Inputs:\n",
"    - scores_fake: PyTorch Tensor of shape (N,) giving scores for the fake data.\n",
"    \n",
"    Outputs:\n",
"    - loss: A PyTorch Tensor containing the loss.\n",
"    \"\"\"\n",
"    loss = 0.5 * torch.mean((scores_fake - 1) ** 2)\n",
"    return loss"
]
},
{
"cell_type": "markdown",
"id": "krGClF97dtpM"
},
"source": [
"Before running a GAN with our new loss function, let's check it:"
]
},
{
"cell_type": "code",
"execution_count": 21,
"colab": {
"base_uri": "https://localhost:8080/",
"height": 52
},
"id": "Wo_nel7-dtpM",
"outputId": "101f7a56-1b28-4236-8633-4171ae4283ee"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Maximum error in d_loss: 1.64377e-08\n",
"Maximum error in g_loss: 2.7837e-09\n"
]
}
],
"source": [
"def test_lsgan_loss(score_real, score_fake, d_loss_true, g_loss_true):\n",
"    score_real = torch.Tensor(score_real).type(dtype)\n",
"    score_fake = torch.Tensor(score_fake).type(dtype)\n",
"    d_loss = ls_discriminator_loss(score_real, score_fake).cpu().numpy()\n",
"    g_loss = ls_generator_loss(score_fake).cpu().numpy()\n",
"    print(\"Maximum error in d_loss: %g\"%rel_error(d_loss_true, d_loss))\n",
"    print(\"Maximum error in g_loss: %g\"%rel_error(g_loss_true, g_loss))\n",
"\n"
]
},
{
"cell_type": "markdown",
"id": "q82122yedtpO"
},
"source": [
"Run the following cell to train your model!"
]
},
{
"cell_type": "code",
"execution_count": 22,
"colab": {
"base_uri": "https://localhost:8080/",
"height": 1000
},
"id": "htEifHj5dtpP",
"outputId": "3f4566d9-02c7-4948-db13-ba5934e86437",
"scrolled": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Iter: 0, D: 0.5689, G:0.51\n"
]
},
{
"data": {
"text/plain": [
"<Figure size 288x288 with 16 Axes>"
]
},
"needs_background": "light",
"tags": []
},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Iter: 250, D: 0.1481, G:0.3264\n"
]
},
{
"data": {
"text/plain": [
"<Figure size 288x288 with 16 Axes>"
]
},
"needs_background": "light",
"tags": []
},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Iter: 500, D: 0.2063, G:0.4708\n"
]
},
{
"data": {
"text/plain": [
"<Figure size 288x288 with 16 Axes>"
]
},
"needs_background": "light",
"tags": []
},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Iter: 750, D: 0.1258, G:0.2649\n"
]
},
{
"data": {
"text/plain": [
"<Figure size 288x288 with 16 Axes>"
]
},
"metadata": {
"needs_background": "light",
"tags": []
},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Iter: 1000, D: 0.152, G:0.4361\n"
]
},
{
"data": {
"text/plain": [
"<Figure size 288x288 with 16 Axes>"
]
},
"metadata": {
"needs_background": "light",
"tags": []
},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Iter: 1250, D: 0.1842, G:0.2598\n"
]
},
{
"data": {
"text/plain": [
"<Figure size 288x288 with 16 Axes>"
]
},
"metadata": {
"needs_background": "light",
"tags": []
},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Iter: 1500, D: 0.1986, G:0.2422\n"
]
},
{
"data": {
"text/plain": [
"<Figure size 288x288 with 16 Axes>"
]
},
"metadata": {
"needs_background": "light",
"tags": []
},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Iter: 1750, D: 0.2018, G:0.2362\n"
]
},
{
"data": {
"text/plain": [
"<Figure size 288x288 with 16 Axes>"
]
},
"metadata": {
"needs_background": "light",
"tags": []
},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Iter: 2000, D: 0.2339, G:0.1912\n"
]
},
{
"data": {
"text/plain": [
"<Figure size 288x288 with 16 Axes>"
]
},
"metadata": {
"needs_background": "light",
"tags": []
},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Iter: 2250, D: 0.2559, G:0.2198\n"
]
},
{
"data": {
"text/plain": [
"<Figure size 288x288 with 16 Axes>"
]
},
"metadata": {
"needs_background": "light",
"tags": []
},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Iter: 2500, D: 0.2503, G:0.1511\n"
]
},
{
"data": {
"text/plain": [
"<Figure size 288x288 with 16 Axes>"
]
},
"metadata": {
"needs_background": "light",
"tags": []
},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Iter: 2750, D: 0.2112, G:0.1597\n"
]
},
{
"data": {
"text/plain": [
"<Figure size 288x288 with 16 Axes>"
]
},
"metadata": {
"needs_background": "light",
"tags": []
},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Iter: 3000, D: 0.2393, G:0.1796\n"
]
},
{
"data": {
"text/plain": [
"<Figure size 288x288 with 16 Axes>"
]
},
"metadata": {
"needs_background": "light",
"tags": []
},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Iter: 3250, D: 0.2336, G:0.1621\n"
]
},
{
"data": {
"text/plain": [
"<Figure size 288x288 with 16 Axes>"
]
},
"metadata": {
"needs_background": "light",
"tags": []
},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Iter: 3500, D: 0.2206, G:0.1707\n"
]
},
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAOwAAADnCAYAAAAdFLrXAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjIsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+WH4yJAAAgAElEQVR4nO2dd6ATVdqHn1sQAZUiFmwLKisqoggK6mfBgohlBQUri66uBXsvoKKrqLu62CuKHSn2riiwFkRRUCkK2BVcO9jAdr8/7v7mJOdmkkkySW7i+/wD995k5kxmct7+vlV1dXUYhlEeVJd6AYZhRMe+sIZRRtgX1jDKCPvCGkYZYV9YwygjatP9saqqquJcyHV1dVWJP1fSNfbp0weAJ554IrjGSrq+hIhGxd5D4T+nwiSsYZQRVenisH+EnavSr7HSrw/+GNcoTMIaRhlhX9hGSk1NDTU1NaVeRsFYunQpS5cuLfUyyg77whpGGZHWS2yUjt9++63USygoyy+/fKmXUJaYhDWMMqLgErZJkyYA/PLLL0m///nnnwFYbrnlgr9ttdVWAEyfPh1wUqax2nJh1+ZTXV3N77//DsBTTz0FwK677lrYxcVAVVW9o1KRhG+//RaAVq1aAdC5c2cAjjjiCJo2bRr8H9y9fPPNNwH48ccfi7TqyqZkYZ0M5y3UaYsSEqiurta5AJg7d27wAP/0008AwQPetWtXACZPnhzp2FVVVWk/u/+dtyBhnbXWWguAX3/9FYBbb70VgN122y30PdqotOk2a9YMcJ9DLhQzrKNnsXnz5vzwww+FOk0DLKxjGBVA3ipxVLVQO62kj6irqwt2se+//x6A7777DoDWrVsDbpfzz3HRRRcBMHTo0JzXHydSEWfNmgXACiusAMCcOXM48MADARg1ahQA33zzDUDwe30+/fv3B2DPPfcE4K233gLg/vvvB+CTTz7JKGELxccffwzAvffeC6SWrL4Z8+qrrwKwePFiwKnTL730EgCbbLIJAGussQbg7n1jY9ttt2XSpEkALFu2rGTrMAlrGGVEwW3YTNKgqqqKwYMHA3DbbbcBsMoqqwDw5ZdfAlBbW68IyBm12WabBe+Ncg5vPbHbP1qHHC4ffvghAE8//TQA48aN48knnwScNqCkgQ022ABwElZIE3n//feT/t6pU6eMIZ+4bVid2/+89XOzZs0Cm/zxxx8HYMcddwTcdU6bNg2AHj16pDzHkUceCcBNN92UcT3FsGGl3clZtt566wXawnPPPQfAXnvt5a8rtvObDWsYFUBsEvbrr78GoE2bNkA0yQrwwQcfsO666wL19hk4e2bBggUAdOzYEXC7tWzDTHZzKgqxO0s6XHDBBYDTEJYsWQLASiutxGuvvQa4XblTp04A9O3bF4AWLVoAsPvuuwOw5pprAvD6668DTjJF2cXjkrCff/550vX4vPDCC0D9tWy88caA034mTpwIuHskX4evHUjzSOdp9imkhG3bti0AX331FeAk/uGHHx685uabbwZg7bXXBmCPPfYAnFaUy3PpYxLWMCqA2G1YeTP33ntvwNk/+ne55ZYLfe+WW24JOM+i1tazZ08Apk6dqnVlu6yAOHdn7aiKKerafI/4vHnzAokq76limULvkcSRtrHtttsC0L17d4BIscC4JGzYsyHNR+mFkjTgrtm3yTNpBtLQVl555SjrKrgN27x5c8Ali0hDAOclVhx6yJAhWofWl/f5TcIaRgUQu4TV8Y466igAzjjjDADWWWcdAMaMGQPAIYccAkRLcm/fvj1Qb+8mnkM2YsuWLQG4+uqrOe644zKtL+/dWTvpiiuuCDhvtqSn4rCSjlpnFGQ3atceP348kBy/LnSm0/z58wFYf/31U65t9dVXj7SOkLWl/L3i0i1btsyYiloILcnXCM4880zA3cO+ffsGa5eElb8mn6ytMEzCGkYFEJuElRScMWMG4H
auV155BYDhw4cD8OKLLya9LzHTSZ7G2bNnJ73m2WefBWCXXXYBXNaMvMXZ2LRx7M46r7Jy5MnddNNNAQKv90cffRT5mCqGkHdSUkzItpOtl458Jawfd73iiisA2HfffYFkmzUqvXr1ApyPQxlPPnPmzAmegzDi1JISjgm45/aggw4C4M477wxeo0IGPYeK1S5cuBBwudZz5szJdjkNMAlrGBVAbOV1ysjRzqWd6e233wacZB07diwA/fr1a3AMxcDkEVVmk3a0QYMGAc7DqnP5MeBi0aFDB8Bd+0MPPQRkJ1mF4q6yE5VXrZheFMmaL7LF9LlKkznxxBOTfp8LysP1Javupa7z4osvDrUr48SXqN26dQNcpla7du2SXr9kyZLg+qVhSLNSBdJ7771XsPUKk7CGUUbkLWF9G1g/y2bRzjplyhQA9ttvv9Bj6TU+KvqWpD3mmGOSzlXI+tlEevfuDcDpp58OuFxhSQfFnnPh+OOPB5x9PHr0aABOPfVUwMV4ZevGie6V37bl/PPPB+DSSy9N+/4oNbqJrwUnaWWrz507F6i34QspWX2UYSbJqrpleaq1lsGDBwf2t/wyql5SFtp///vfgq83NqeTfxz/S/TII48ArmxMJHZjCEPqsz4wPzFBKkmULnz5OCykquv811xzDeDUfznH5s2bl/S+mpqa0PCVrsFPpLjhhhsAl/aYTdeNbJ1OWoPWqA1I4Z1MTqAoyGmmEJju6QMPPAC4L06UzhSFSJxQUcYXX3wBwEYbbQS4MOTUqVODZ/rRRx8F3POozebll1/OdxkB5nQyjAogNgkr6aYdSk4U7Up+CVqUMqow/FTFlVZaCagvz9tnn33SvjeO3VlpaioZk9SQypwO36Em9Lmp3E4piFKBcy0hzOb6hg0bBrhC/O233x5o6ICJAxV46DrPPffcYA16lsKuuZCpiQcccADgNEJJ0+222y5w/CmcIyn8zjvvaB1p150NJmENowLIW8JKb1eZmAL/cmAoVCBJkU8/WjlkFPLw115TUxOEjQYMGJDyGPnszrIj5XRQ+qAkk0rnNtxwQ8DZuvPnzw+KnmX3ajc+77zzAFekr8LvTz/9NOqyGpCNhD322GOZMGEC4BwvagwXpzNPyQWSrELSXGmnLVu2zHjtxWzC9uCDDwL14cg77rgDcOWeeh4GDhwI1Iek4sIkrGFUAHmHdWRP+ruxkgjCUu1ywZesaksir2ZdXR233HJL3udJpKqqKrg2FXIrBe20004DnFtfr1PxtgoeHn744WDNl112WdKxlHQhySPpot270BMA3n333aCU74033kj6m1IvFXJSqEnJHauuumrk8/h2sO7TYYcdBsCNN94Y/KyiikIk1WeLQnW1tbWBbS3vue+viQO/SWGDv8d2JsMwCk7s5XUK8KsEqRBJDf6aJcU7deoUeGzTvDdn+0fXcs455wAudiepKQ+1kkMUj91rr72CdioqjhgxYgQAV155JeAKB5Q4rgICpXZmQ7ZeYl2XJIjfZEC2uD53SX/ZujNnzsy4JqXxKTpw/fXXA3DyyScDMHLkSKDex5Epnl6q+bCKJStBQtJQcXg1E4wDs2ENowIoWAG7kBdZKV/5cPTRRwNw3XXXJf1+m222AVxz6gzry3t3liR97LHHACeR1EhMnmpRW1vLPffcA7gMH0kReZa1dtnj+aTn5VtelymOKMkiW3PJkiXB7+TtVnzV17D8VkHyQyiKUFNT0yDrK8X6SiJh9bn4zdILpEWahDWMcid2Cat2kGoFKeQNlW2WDZJOykIR/qClKMS5O4cl5KtljcrTRo4cGcRs1dQrcVAWOK1BmshVV10FuPhkNuQrYSXlJS1THB9wn/tpp52WsUDARxlEfm55KrQOSd5SSVgl+ytKoJi5ClzixCSsYVQABRvVoaoLVdJEQXadMkkUH/TXqGOrwiMbCrE7X3755YDLcFq0aBHgmpitu+66DXKrlX8qL7DswZ122inf5cTW5t
SvsIkDtQ/VvdbnkY3NXioJ+5e//AVw2U/Z+E6yxSSsYVQAsU9gV3G5pF82O6fyjGUr+KjBeDYZNsXglFNOAZxto5adyswC51mURPnzn/8MuOqcTBkupUBeb61Z15DLWmUX+9lLpRqdmS1VVVXBkDPlGKjtkfLE49COMtH4nhLDMEIp+LhJoVxSVW2kQ55UZcWcffbZQDx5tYW0f+TNVL2saiq333774G/yKMu2V+M4aSTaxfMh7nGTwveKK4a6ePHi4P9CXmDFmQs5irEYNmxVVVUDbVH50Mopl2YVB2E2bNG+sGF8/fXXgYrrz+EpBIW42X4YR+qfEj0mTpwYPOxKHC+kKlioL2xjoRRf2AMOOIDbb78dcIJDnRbj6EPsY04nw6gAYnc6ZUuxewkXAklWsf/++wNw3333Adl1FTQaF3KwtWnTJnAuqXvn5MmTAWfuFboUEkzCGkZZUXIbttiUKuheTMyGLSxxNlsLw2xYw6gASm7DGka5UUp/hElYwygj0tqwhmE0LkzCGkYZkdaGNQ9j4yPd8DClDDZp0qQivcQJzenL+h6mQw0F27RpY15iwyh3LA5b4ddY6dcHf4xrFCZhDaOMsC+s0eiorq5ulAX9jQH7VAyjjLBMJ6PRUch66HLHJKxhlBFlJWGvvvpqAI477jjA1Srajlx6yv1e+M3KU6HWPxpDoi4iarurFq5t27YF0rf90SjSjz76CIj++eUd1lGHw7XXXjvTS7PG7wOlQvGVVloJcLNUU51bH67atYg/WkggyvXlO4t2tdVWCya6FYM4JhBGTcn15/amQ4krmmL31ltvAa5D5vTp04HwrqCJWFjHMCqARpk4oX6vW2+9dcq/Dxw4EIBx48YB2U0PMwlbTz4qrD5vdcIfO3Zs0Ml/8803B5xmI5UvTsLuYZhWlQ2rrbYaAGPGjAGgV69eOR9Lfan12UyZMgWAfffdN+N7TcIaRgXQKCWs8Ncme+K7774DkjvrZ3HMRi1h5bDw59moFeznn3+e8Rj5piZKUsmm9aXwQQcdBDgHzciRI1l99dUBZ8dp4roklLrlx0Gc91DaghxJUWZBqbmeNMFWrVoBMGzYMMBpL5pyoHOssMIKQDRfgUlYw6gAGqWE1ZrkFW7dujUA5557LgDnn39+PscuuoQ999xzgzk1mv+qEIBCAvKyyobSNDtpEdk0GshXwu69996A82pusskmgPNuavKevJ+zZs0K3quJgyqF69ChQ9J1xEHYPYziAd5jjz0AN5lAUk8TB/Vzwrl0juD4zz//PADbbrst4GxUfS5dunQBnCTVZyHbOorPxSSsYVQAjSJxImxH1AgM2VDZeIPjxPeohklHNZo++OCDAWdrJ9pFulZN71YAXcfS3/v27QvAE088Efv1ZOK9994DnNSR1PTRdPiffvopuMZVVlkFcJ9JnJI1E1G0EH+i/bvvvgvA6aefDriZvppRrAnxSpIAJ1mFrt239VWMLls/jufXJKxhlBElt2FTTQVL/FvcRLVhE1PVtD5JWsUfjzzyyOA14DKznnnmGQDOOOOMpPdlOk8iO++8c9Kx0h3Dt93iKmCPGqtdunRpkgQCdz2yf2XfxUEcfghdm7QloXWn8mrr85VHXNMV9Tzoc5JWIXt4hx12AFwcNgpmwxpGBVByGzadhO/Xrx8ADzzwQLGWE6CddunSpYEXUF7RBx98EHBebMVIlR1z1llnAc47uGzZssDDut566wHhE8nFxIkTAejfv3/GtRaqVW3ULKhmzZoFr5XdKykkmzBOCZsLvhai9SoCoXVL47nmmmsAV2jy3nvvBfFWvXfEiBFJPwtJVp3ruuuuA2DjjTfO+zpMwhpGGVFyGxbCJURUGzabapNc7B9JiQULFgAuDjx8+PCk886ePRtw+bO77747kL41qTyM8hKLN998E4AzzzwTyM5bXOwmbGPGjAlGbMp+Gz9+POCGWje2TCc9W4pEyKs9YMAAwGl1el2qZ9G/py+//D
LgcuClTcnrng1mwxpGBdAoJazsR+W0xnyunHdnSYvrr79exwJc3PXuu++OvA7t2PJW+t5JeV2lPRQz0ylb1l577QZVOcoplhc2zmHH+dxD+RvOPvtsAE488UTAfd76/CU9R48eDcBhhx0WeswLLrgAcJl4yhBTXeyuu+4K1OdcRyVMwpb8C5vq/IVMkMjlZisZ4IsvvgDgtttuA+Cpp54CXOjFT9hPh76oulY/rNOpUycA7r//fiA7h0Wxv7BTpkxhu+2207kBmDFjBgDdunWL/XxxqMQyRbTuSZMmAS7N8JtvvgGcUyrxmdSXWe999dVXAVdAoNfqHBI8vtmTDlOJDaMCKFlYJ5U0KlXqYSaUYiYmTJgAwIUXXgjAs88+m/b97dq1C8IGukbfYaEwzi677ALAAQccAMQTCigU0jz8VD2ALbfcstjLyQpJUmlJp5xyCgBff/014JxRCuUsXryYSy+9FHBldNKKevbsCbjyz7lz5wKukCMstTMXTMIaRhlRMhu22LZrwnnztn+aNm0KNOyaJ1tFSQN33nknAH369Al27rBE8Shd+6JSLBtW13nqqacGEkoFEEqtLERSR7b3sKqqKliHPmdpMlrvySefDMC3334LQNeuXQEYMmRIcAy/PE5hPHXzVLhv5ZVXBpx0VhFINg3gzIY1jAqg6BJWO1xioyzZiGqPUkjykbCSpOo3q89O69Z1+J9pqsQJvUbHysaDmIlie4l79erFY489BjgNIjEtMy4222wzAGbMmJH1PVxrrbUAZ0/Kp6CCjYsuughwYRwlyUShT58+AMFnoJBWlHYzYZiENYwKoOhe4lRd0GXrNXb8UiwlA8hmUXOyu+66K+l1idL1oYceAtKXy5UbLVq0aGCfKeE9XcJBtsycOTPn9/qNwCWttV6lgg4dOjTyMTfYYAPAeYX1GSiGq2J5vS6KtiG7N4zKeWoM4w9A0WxYpa75YzXq6uqKKm1K1eZUyfD77LOPzgu41MM4Z9IU24atrq4OtI3PPvsMgM6dOwMNY9hxkO097NGjRyD15s2bBzgbtU2bNoCzaSV50zUjlx9m0KBBABx66KEAbLPNNoDzCmeSlukwG9YwKoCC27CKTcp29SWsmmBVOq+99hrQcExDuU57SyQxfq7YbCEka65MmzYtGKAmjUZand9APJ1kla/Cz9LbYostANf+VFlS2cRdo77WJKxhlBFFs2FV2eA3oip2/nApbNimTZsGDbVV3iXbSe1m4qRYNqzuXY8ePXj44YcBl19czIqr2traOmhYwqdc3sRRmOuuuy7gKq+WLFmS9B6V2ynj6Y477gDqM9B8X4u0I73mlltuAeCll14CXOMD2c3ZYDasYVQABZewYdUp/t+LRSkkbHV1dZD5o4wm2UqXXHIJAOecc05s5yu2hP3Pf/7D5MmTARd7VDxTjeviyJEWudzDNddcE3Ax8v/7v/8DnMc3z/UAbgCYCtbzseNNwhpGBVBwCSu7wtf/dd5iZ/yUetxknFU5YRRbwv7444+BBiFNSr6KnXbaSWuK7bxx3ENlPknDueqqq3JZB+A8zIq7JtrMuVLwFjFvvfUW4Dq9Z0KTuqVGFAq/e32pvrDZuPjzpVhfWCUbtGnTJig1VIfATTfdFCjMbJ187mHYlHalnaoIXYXtokOHDkHPLvVqKmRIzlRiw6gASt6ErdiUQsK2bdu2QbBdKpmcIXFSbJW4bdu2QXnan/70J8DNmA2bbJAPpTZrioFJWMOoAEzCFuka5ZRR8y8h20nSSqly+VBsCVsMuzwRk7CGYZQFJmEr/Bor/frgj3GNwiSsYZQRaSWsYRiNC5OwhlFGpM18riTbIEGTqFj755BDDgFg9OjRFWnDKrOoyqsYqaRrVIH9r7/+ajasYZQ75iX2rlFx0Tgbexcb8xKXP+YlNowKoGTjJvNBWkEhit/LWbIalY9JWMMoIwr+ha2rq0vKNfV/rq2tZcqUKQ2asxmG0RCTsIZRRsRuw8
quVHW+8L3R+vm3334LYk+qGdUx1Lg57BylRkOML7/88hKvpLJQ9wqNbSwGYZVHiWtRex81JdeQbrVOVbeNXM6n74DfqrXB++IO62hWqvq+Pvvss4Dr7RMFlaLpC7tw4cJslxFKYw0JrLjiioCbyxLGvHnzgk6EfvsbUc5hnfbt2wedF8OI4x5GLQ1UK6Pp06dz6aWXAjBnzhwAxowZA7j+XGFfOp1LE/KitFGysI5hVACxq8TqmK5/33jjDcBJWH+KeTrilKyNDTUpU/M6aRO+hNW0u/vuuw9wfX6hcc/l8aX/448/DkDfvn1Tvl6T0DNJ13zZbbfdAJg/fz7gJrJr9tGRRx4JuOd29913B+o7IarfsEyh22+/HXA9jjUH9t577wXcPFtNxHviiSfyXr9JWMMoI2K3YTVXZOuttwYa7jLZ8OSTTwLQp0+frN8bRj72T9OmTYHMk7Q1oe/jjz8OfudLHP389NNPA8650aNHDwB69eoFuLks6lgf5bOI24bVRLrZs2cDbi6Q5tK0bNkyuC716L3wwgsBOOGEEwCX8tm/f38AJk6cCMD7778PwGmnnQbUX6e0sDDnU7b3sHnz5kEzODmMunbtCtRPLQDX9lT3RT+rLeqvv/7aYEqA5iLpms4++2zANdjTvZVPRjOSo/SkNhvWMCqA2G1Y2SIiG8kqu6Jjx44AdOnSJb6FxUCYZJVnvH379gAMGzYMgIMOOgio/0zeeecdIHzSwQorrAA4G19zc9dbbz2g4ZS1YqAm77qHfjhN0qquro5OnToBzq4bNWoUAEOGDAHg/vvvB5zmoOsVmgCX2KQurrBOYrqpbFZ16R8xYgTguvXr+ZVkVWPxCy+8MLBvd9xxR4AGXmNpR7J11VBPvhjNG1JL2FwwCWsYZURsNqzsDenr2n21o0WhW7dugLN7ZTNq19MOd/rpp0c+pk8uNuyECRMAt7PKhvORFNTurXV/++23fPbZZwAsWLAACNc8/JlD8jxKQkchLhs2m/ZBRxxxBOBijdOmTQPgpJNOAuDf//530uslveXryOQX8NaVsx9C3vlZs2YBcO211wLO+y7tbocddgDcdPX77rsvsGsXL14MOO1h+vTpAFxwwQWAs2WlJeWC2bCGUQE0qgJ2rUW7raT1DTfcAMDRRx+d9Locz5Hz7qx0S9mmsscmTZqU9n1VVVVB+prirvKm3nPPPQDMnTsXgIsvvhhwElavy8aGzVXCtmzZEnASJCydNFVqqGzAs846C3AS6rnnngPgtttuS3q9NLKNN94YcFpVFHK5h/6atT5JTf0sG1paleLCvXv3DlJnFbPVc6qcAklU+TLywSSsYVQAsUvYbFusnHPOOdx5552Ai8n5Yy2UVC2PaT6UKpdY9s15550HNMw3lddUu7Y/VzebrKYoElbH9csdwdnoUSeIJ0qvfv36AS5urOfBjz/Lt6FcXeWeRyGfe+jn++pnfe5nnHEGAA888ADgnslJkyYFUQv5KHQMxVWlJcguj1MTFCZhDaOMKLgNq0w