{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved. \n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/deployment/onnx/onnx-train-pytorch-aml-deploy-mnist.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# MNIST Handwritten Digit Classification using ONNX and AzureML\n",
"\n",
"This example shows how to train a model on the MNIST data using PyTorch, save it as an ONNX model, and deploy it as a web service using Azure Machine Learning services and the ONNX Runtime.\n",
"\n",
"## What is ONNX\n",
"ONNX is an open format for representing machine learning and deep learning models. ONNX enables open and interoperable AI by enabling data scientists and developers to use the tools of their choice without worrying about lock-in and flexibility to deploy to a variety of platforms. ONNX is developed and supported by a community of partners including Microsoft, Facebook, and Amazon. For more information, explore the [ONNX website](http://onnx.ai).\n",
"\n",
"## MNIST Details\n",
"The Modified National Institute of Standards and Technology (MNIST) dataset consists of 70,000 grayscale images. Each image is a handwritten digit of 28x28 pixels, representing numbers from 0 to 9. For more information about the MNIST dataset, please visit [Yan LeCun's website](http://yann.lecun.com/exdb/mnist/). For more information about the MNIST model and how it was created can be found on the [ONNX Model Zoo github](https://github.com/onnx/models/tree/master/vision/classification/mnist). "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites\n",
"* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning\n",
"* If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) to:\n",
" * install the AML SDK\n",
" * create a workspace and its configuration file (`config.json`)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Check core SDK version number\n",
"import azureml.core\n",
"\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize workspace\n",
"Initialize a [Workspace](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#workspace) object from the existing workspace you created in the Prerequisites step. `Workspace.from_config()` creates a workspace object from the details stored in `config.json`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.workspace import Workspace\n",
"\n",
"ws = Workspace.from_config()\n",
"print('Workspace name: ' + ws.name, \n",
" 'Azure region: ' + ws.location, \n",
" 'Resource group: ' + ws.resource_group, sep = '\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train model\n",
"\n",
"### Create a remote compute target\n",
"You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) to execute your training script on. In this tutorial, you create an [Azure Batch AI](https://docs.microsoft.com/azure/batch-ai/overview) cluster as your training compute resource. This code creates a cluster for you if it does not already exist in your workspace.\n",
"\n",
"**Creation of the cluster takes approximately 5 minutes.** If the cluster is already in your workspace this code will skip the cluster creation process."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# choose a name for your cluster\n",
"cluster_name = \"gpu-cluster\"\n",
"\n",
"try:\n",
" compute_target = ComputeTarget(workspace=ws, name=cluster_name)\n",
" print('Found existing compute target.')\n",
"except ComputeTargetException:\n",
" print('Creating a new compute target...')\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size='Standard_NC6s_v3', \n",
" max_nodes=6)\n",
"\n",
" # create the cluster\n",
" compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
"\n",
"compute_target.wait_for_completion(show_output=True)\n",
"\n",
"# Use the 'status' property to get a detailed status for the current cluster. \n",
"print(compute_target.status.serialize())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The above code creates a GPU cluster. If you instead want to create a CPU cluster, provide a different VM size to the `vm_size` parameter, such as `STANDARD_D2_V2`."
]
},
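{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, the sketch below shows how such a CPU cluster could be provisioned, using the same `ComputeTarget`/`AmlCompute` APIs as above. The cluster name `cpu-cluster` and `max_nodes=4` are illustrative values, not ones used elsewhere in this notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: provision a CPU cluster instead of a GPU cluster.\n",
"# 'cpu-cluster' and max_nodes=4 are illustrative values; adjust them to your quota.\n",
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"cpu_cluster_name = \"cpu-cluster\"\n",
"\n",
"try:\n",
"    cpu_compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)\n",
"    print('Found existing CPU compute target.')\n",
"except ComputeTargetException:\n",
"    cpu_compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',\n",
"                                                               max_nodes=4)\n",
"    cpu_compute_target = ComputeTarget.create(ws, cpu_cluster_name, cpu_compute_config)\n",
"    cpu_compute_target.wait_for_completion(show_output=True)"
]
},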
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create a project directory\n",
"Create a directory that will contain all the necessary code from your local machine that you will need access to on the remote resource. This includes the training script and any additional files your training script depends on."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"project_folder = './pytorch-mnist'\n",
"os.makedirs(project_folder, exist_ok=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copy the training script [`mnist.py`](mnist.py) into your project directory. Make sure the training script has the following code to create an ONNX file:\n",
"```python\n",
"dummy_input = torch.randn(1, 1, 28, 28, device=device)\n",
"model_path = os.path.join(output_dir, 'mnist.onnx')\n",
"torch.onnx.export(model, dummy_input, model_path)\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import shutil\n",
"shutil.copy('mnist.py', project_folder)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create an experiment\n",
"Create an [Experiment](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#experiment) to track all the runs in your workspace for this transfer learning PyTorch tutorial. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Experiment\n",
"\n",
"experiment_name = 'pytorch1-mnist'\n",
"experiment = Experiment(ws, name=experiment_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create a PyTorch estimator\n",
"The AML SDK's PyTorch estimator enables you to easily submit PyTorch training jobs for both single-node and distributed runs. For more information on the PyTorch estimator, refer [here](https://docs.microsoft.com/azure/machine-learning/service/how-to-train-pytorch). The following code will define a single-node PyTorch job."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.train.dnn import PyTorch\n",
"\n",
"estimator = PyTorch(source_directory=project_folder, \n",
" script_params={'--output-dir': './outputs'},\n",
" compute_target=compute_target,\n",
" entry_script='mnist.py',\n",
" use_gpu=True)\n",
"\n",
"# upgrade to PyTorch 1.0 Preview, which has better support for ONNX\n",
"estimator.conda_dependencies.remove_conda_package('pytorch=0.4.0')\n",
"estimator.conda_dependencies.add_conda_package('pytorch-nightly')\n",
"estimator.conda_dependencies.add_channel('pytorch')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `script_params` parameter is a dictionary containing the command-line arguments to your training script `entry_script`. Please note the following:\n",
"- We specified the output directory as `./outputs`. The `outputs` directory is specially treated by AML in that all the content in this directory gets uploaded to your workspace as part of your run history. The files written to this directory are therefore accessible even once your remote run is over. In this tutorial, we will save our trained model to this output directory.\n",
"\n",
"To leverage the Azure VM's GPU for training, we set `use_gpu=True`."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Submit job\n",
"Run your experiment by submitting your estimator object. Note that this call is asynchronous."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run = experiment.submit(estimator)\n",
"print(run.get_details())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Monitor your run\n",
"You can monitor the progress of the run with a Jupyter widget. Like the run submission, the widget is asynchronous and provides live updates every 10-15 seconds until the job completes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"RunDetails(run).show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Alternatively, you can block until the script has completed training before running more code."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"run.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Download the model (optional)\n",
"\n",
"Once the run completes, you can choose to download the ONNX model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# list all the files from the run\n",
"run.get_file_names()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model_path = os.path.join('outputs', 'mnist.onnx')\n",
"run.download_file(model_path, output_file_path=model_path)"
]
},
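{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, you can sanity-check the downloaded model locally before registering it. This sketch assumes the `onnx` and `onnxruntime` packages are installed in your local environment; it validates the model's structure and runs inference on a random 28x28 input."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional local sanity check of the exported model (assumes onnx and onnxruntime are installed locally)\n",
"import numpy as np\n",
"import onnx\n",
"import onnxruntime\n",
"\n",
"onnx_model = onnx.load(model_path)\n",
"onnx.checker.check_model(onnx_model)  # validate the model structure\n",
"\n",
"session = onnxruntime.InferenceSession(model_path)\n",
"input_name = session.get_inputs()[0].name  # the model takes a single 1x1x28x28 image input\n",
"dummy_input = np.random.rand(1, 1, 28, 28).astype('float32')\n",
"print(session.run(None, {input_name: dummy_input}))  # raw scores for the 10 digit classes"
]
},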
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Register the model\n",
"You can also register the model from your run to your workspace. The `model_path` parameter takes in the relative path on the remote VM to the model file in your `outputs` directory. You can then deploy this registered model as a web service through the AML SDK."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model = run.register_model(model_name='mnist', model_path=model_path)\n",
"print(model.name, model.id, model.version, sep = '\\t')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Displaying your registered models (optional)\n",
"\n",
"You can optionally list out all the models that you have registered in this workspace."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"models = ws.models\n",
"for name, m in models.items():\n",
" print(\"Name:\", name,\"\\tVersion:\", m.version, \"\\tDescription:\", m.description, m.tags)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deploying as a web service\n",
"\n",
"### Write scoring file\n",
"\n",
"We are now going to deploy our ONNX model on Azure ML using the ONNX Runtime. We begin by writing a score.py file that will be invoked by the web service call. The `init()` function is called once when the container is started so we load the model using the ONNX Runtime into a global session object."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile score.py\n",
"import json\n",
"import time\n",
"import sys\n",
"import os\n",
"from azureml.core.model import Model\n",
"import numpy as np # we're going to use numpy to process input and output data\n",
"import onnxruntime # to inference ONNX models, we use the ONNX Runtime\n",
"\n",
"def init():\n",
" global session\n",
" # AZUREML_MODEL_DIR is an environment variable created during deployment.\n",
" # It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)\n",
" # For multiple models, it points to the folder containing all deployed models (./azureml-models)\n",
" model = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'mnist.onnx')\n",
" session = onnxruntime.InferenceSession(model)\n",
"\n",
"def preprocess(input_data_json):\n",
" # convert the JSON data into the tensor input\n",
" return np.array(json.loads(input_data_json)['data']).astype('float32')\n",
"\n",
"def postprocess(result):\n",
" # We use argmax to pick the highest confidence label\n",
" return int(np.argmax(np.array(result).squeeze(), axis=0))\n",
"\n",
"def run(input_data_json):\n",
" try:\n",
" start = time.time() # start timer\n",
" input_data = preprocess(input_data_json)\n",
" input_name = session.get_inputs()[0].name # get the id of the first input of the model \n",
" result = session.run([], {input_name: input_data})\n",
" end = time.time() # stop timer\n",
" return {\"result\": postprocess(result),\n",
" \"time\": end - start}\n",
" except Exception as e:\n",
" result = str(e)\n",
" return {\"error\": result}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create inference configuration\n",
"First we create a YAML file that specifies which dependencies we would like to see in our container. Please note that you must indicate azureml-defaults with verion >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.conda_dependencies import CondaDependencies \n",
"\n",
"myenv = CondaDependencies.create(pip_packages=[\"numpy\",\"onnxruntime==1.15.1\",\"azureml-core\", \"azureml-defaults\"])\n",
"\n",
"with open(\"myenv.yml\",\"w\") as f:\n",
" f.write(myenv.serialize_to_string())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then we setup the inference configuration "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.model import InferenceConfig\n",
"from azureml.core.environment import Environment\n",
"\n",
"\n",
"myenv = Environment.from_conda_specification(name=\"myenv\", file_path=\"myenv.yml\")\n",
"inference_config = InferenceConfig(entry_script=\"score.py\", environment=myenv)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Deploy the model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.webservice import AciWebservice\n",
"\n",
"aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1, \n",
" memory_gb = 1, \n",
" tags = {'demo': 'onnx'}, \n",
" description = 'web service for MNIST ONNX model')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following cell will likely take a few minutes to run as well."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.model import Model\n",
"from random import randint\n",
"\n",
"aci_service_name = 'onnx-demo-mnist'+str(randint(0,100))\n",
"print(\"Service\", aci_service_name)\n",
"aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)\n",
"aci_service.wait_for_deployment(True)\n",
"print(aci_service.state)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In case the deployment fails, you can check the logs. Make sure to delete your aci_service before trying again."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if aci_service.state != 'Healthy':\n",
" # run this command for debugging.\n",
" print(aci_service.get_logs())\n",
" aci_service.delete()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Success!\n",
"\n",
"If you've made it this far, you've deployed a working web service that does handwritten digit classification using an ONNX model. You can get the URL for the webservice with the code below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(aci_service.scoring_uri)"
]
},
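{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also send the service a request to try it end to end. The sketch below posts a random 28x28 image through `aci_service.run()`; in practice you would substitute a real MNIST test image, preprocessed the same way as during training, to get a meaningful prediction."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: call the deployed service with a random 28x28 input.\n",
"# Replace the random array with a real, preprocessed MNIST image for a meaningful prediction.\n",
"import json\n",
"import numpy as np\n",
"\n",
"sample_input = json.dumps({'data': np.random.rand(1, 1, 28, 28).tolist()})\n",
"response = aci_service.run(input_data=sample_input)\n",
"print(response)  # e.g. {'result': <predicted digit>, 'time': <inference time in seconds>}"
]
},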
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When you are eventually done using the web service, remember to delete it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"aci_service.delete()"
]
}
],
"metadata": {
"authors": [
{
"name": "viswamy"
}
],
"category": "deployment",
"compute": [
"AML Compute"
],
"datasets": [
"MNIST"
],
"deployment": [
"Azure Container Instance"
],
"exclude_from_index": false,
"framework": [
"ONNX"
],
"friendly_name": "Train MNIST in PyTorch, convert, and deploy with ONNX Runtime",
"index_order": 3,
"kernelspec": {
"display_name": "Python 3.8 - AzureML",
"language": "python",
"name": "python38-azureml"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
},
"star_tag": [],
"tags": [
"ONNX Converter"
],
"task": "Image Classification",
"widgets": {
"application/vnd.jupyter.widget-state+json": {
"state": {
"c899ddfc2b134ca9b89a4f278ac7c997": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.1.0",
"model_name": "LayoutModel",
"state": {}
},
"d146cbdbd4e04710b3eebc15a66957ce": {
"model_module": "azureml_widgets",
"model_module_version": "1.0.0",
"model_name": "ShowRunDetailsModel",
"state": {
"child_runs_metrics": {},
"compute_target_status": {
"current_node_count": 1,
"node_state_counts": {
"idleNodeCount": 1,
"leavingNodeCount": 0,
"preparingNodeCount": 0,
"runningNodeCount": 0,
"unusableNodeCount": 0
},
"provisioning_errors": null,
"provisioning_state": "Succeeded",
"requested_node_count": 1,
"scale_settings": {
"autoScale": {
"initialNodeCount": 0,
"maximumNodeCount": 4,
"minimumNodeCount": 0
},
"manual": null
},
"vm_size": "Standard_NC6s_v3"
},
"error": "",
"layout": "IPY_MODEL_c899ddfc2b134ca9b89a4f278ac7c997",
"run_id": "pytorch1-mnist_1537876563990",
"run_logs": "Uploading experiment status to history service.\nAdding run profile attachment azureml-logs/60_control_log.txt\nUploading experiment status to history service.\nAdding run profile attachment azureml-logs/80_driver_log.txt\nScript process exited with code 0\nUploading driver log...\nFinalizing run...\n\nDownloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz\nDownloading http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz\nDownloading http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz\nDownloading http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz\nProcessing...\nDone!\nTrain Epoch: 1 [0/60000 (0%)]\tLoss: 2.365850\nTrain Epoch: 1 [640/60000 (1%)]\tLoss: 2.305295\nTrain Epoch: 1 [1280/60000 (2%)]\tLoss: 2.301407\nTrain Epoch: 1 [1920/60000 (3%)]\tLoss: 2.316538\nTrain Epoch: 1 [2560/60000 (4%)]\tLoss: 2.255810\nTrain Epoch: 1 [3200/60000 (5%)]\tLoss: 2.224511\nTrain Epoch: 1 [3840/60000 (6%)]\tLoss: 2.216569\nTrain Epoch: 1 [4480/60000 (7%)]\tLoss: 2.181396\nTrain Epoch: 1 [5120/60000 (9%)]\tLoss: 2.116898\nTrain Epoch: 1 [5760/60000 (10%)]\tLoss: 2.045963\nTrain Epoch: 1 [6400/60000 (11%)]\tLoss: 1.973494\nTrain Epoch: 1 [7040/60000 (12%)]\tLoss: 1.968609\nTrain Epoch: 1 [7680/60000 (13%)]\tLoss: 1.787280\nTrain Epoch: 1 [8320/60000 (14%)]\tLoss: 1.735044\nTrain Epoch: 1 [8960/60000 (15%)]\tLoss: 1.680426\nTrain Epoch: 1 [9600/60000 (16%)]\tLoss: 1.486279\nTrain Epoch: 1 [10240/60000 (17%)]\tLoss: 1.545747\nTrain Epoch: 1 [10880/60000 (18%)]\tLoss: 1.193543\nTrain Epoch: 1 [11520/60000 (19%)]\tLoss: 1.652350\nTrain Epoch: 1 [12160/60000 (20%)]\tLoss: 0.982182\nTrain Epoch: 1 [12800/60000 (21%)]\tLoss: 1.331902\nTrain Epoch: 1 [13440/60000 (22%)]\tLoss: 1.089598\nTrain Epoch: 1 [14080/60000 (23%)]\tLoss: 0.998703\nTrain Epoch: 1 [14720/60000 (25%)]\tLoss: 0.992036\nTrain Epoch: 1 [15360/60000 (26%)]\tLoss: 0.979473\nTrain Epoch: 1 [16000/60000 (27%)]\tLoss: 1.141276\nTrain Epoch: 1 [16640/60000 (28%)]\tLoss: 0.836921\nTrain Epoch: 1 [17280/60000 (29%)]\tLoss: 0.764657\nTrain Epoch: 1 [17920/60000 (30%)]\tLoss: 0.826818\nTrain Epoch: 1 [18560/60000 (31%)]\tLoss: 0.837834\nTrain Epoch: 1 [19200/60000 (32%)]\tLoss: 0.899033\nTrain Epoch: 1 [19840/60000 (33%)]\tLoss: 0.868245\nTrain Epoch: 1 [20480/60000 (34%)]\tLoss: 0.930491\nTrain Epoch: 1 [21120/60000 (35%)]\tLoss: 0.795202\nTrain Epoch: 1 [21760/60000 (36%)]\tLoss: 0.575117\nTrain Epoch: 1 [22400/60000 (37%)]\tLoss: 0.577884\nTrain Epoch: 1 [23040/60000 (38%)]\tLoss: 0.708801\nTrain Epoch: 1 [23680/60000 (39%)]\tLoss: 0.927512\nTrain Epoch: 1 [24320/60000 (41%)]\tLoss: 0.598836\nTrain Epoch: 1 [24960/60000 (42%)]\tLoss: 0.944021\nTrain Epoch: 1 [25600/60000 (43%)]\tLoss: 0.811654\nTrain Epoch: 1 [26240/60000 (44%)]\tLoss: 0.590322\nTrain Epoch: 1 [26880/60000 (45%)]\tLoss: 0.555104\nTrain Epoch: 1 [27520/60000 (46%)]\tLoss: 0.795565\nTrain Epoch: 1 [28160/60000 (47%)]\tLoss: 0.603378\nTrain Epoch: 1 [28800/60000 (48%)]\tLoss: 0.552437\nTrain Epoch: 1 [29440/60000 (49%)]\tLoss: 0.662064\nTrain Epoch: 1 [30080/60000 (50%)]\tLoss: 0.682541\nTrain Epoch: 1 [30720/60000 (51%)]\tLoss: 0.659051\nTrain Epoch: 1 [31360/60000 (52%)]\tLoss: 0.781052\nTrain Epoch: 1 [32000/60000 (53%)]\tLoss: 0.595491\nTrain Epoch: 1 [32640/60000 (54%)]\tLoss: 0.367289\nTrain Epoch: 1 [33280/60000 (55%)]\tLoss: 0.459428\nTrain Epoch: 1 [33920/60000 (57%)]\tLoss: 0.819237\nTrain Epoch: 1 [34560/60000 (58%)]\tLoss: 0.773166\nTrain Epoch: 1 [35200/60000 (59%)]\tLoss: 0.557691\nTrain Epoch: 1 [35840/60000 (60%)]\tLoss: 
0.854719\nTrain Epoch: 1 [36480/60000 (61%)]\tLoss: 0.497524\nTrain Epoch: 1 [37120/60000 (62%)]\tLoss: 0.582861\nTrain Epoch: 1 [37760/60000 (63%)]\tLoss: 0.839674\nTrain Epoch: 1 [38400/60000 (64%)]\tLoss: 0.557275\nTrain Epoch: 1 [39040/60000 (65%)]\tLoss: 0.419819\nTrain Epoch: 1 [39680/60000 (66%)]\tLoss: 0.694659\nTrain Epoch: 1 [40320/60000 (67%)]\tLoss: 0.678524\nTrain Epoch: 1 [40960/60000 (68%)]\tLoss: 0.514364\nTrain Epoch: 1 [41600/60000 (69%)]\tLoss: 0.400510\nTrain Epoch: 1 [42240/60000 (70%)]\tLoss: 0.526099\nTrain Epoch: 1 [42880/60000 (71%)]\tLoss: 0.387087\nTrain Epoch: 1 [43520/60000 (72%)]\tLoss: 0.730123\nTrain Epoch: 1 [44160/60000 (74%)]\tLoss: 0.678924\nTrain Epoch: 1 [44800/60000 (75%)]\tLoss: 0.425195\nTrain Epoch: 1 [45440/60000 (76%)]\tLoss: 0.656437\nTrain Epoch: 1 [46080/60000 (77%)]\tLoss: 0.348130\nTrain Epoch: 1 [46720/60000 (78%)]\tLoss: 0.487442\nTrain Epoch: 1 [47360/60000 (79%)]\tLoss: 0.649533\nTrain Epoch: 1 [48000/60000 (80%)]\tLoss: 0.541395\nTrain Epoch: 1 [48640/60000 (81%)]\tLoss: 0.464202\nTrain Epoch: 1 [49280/60000 (82%)]\tLoss: 0.750336\nTrain Epoch: 1 [49920/60000 (83%)]\tLoss: 0.548484\nTrain Epoch: 1 [50560/60000 (84%)]\tLoss: 0.421382\nTrain Epoch: 1 [51200/60000 (85%)]\tLoss: 0.680766\nTrain Epoch: 1 [51840/60000 (86%)]\tLoss: 0.483003\nTrain Epoch: 1 [52480/60000 (87%)]\tLoss: 0.610840\nTrain Epoch: 1 [53120/60000 (88%)]\tLoss: 0.483278\nTrain Epoch: 1 [53760/60000 (90%)]\tLoss: 0.553161\nTrain Epoch: 1 [54400/60000 (91%)]\tLoss: 0.465237\nTrain Epoch: 1 [55040/60000 (92%)]\tLoss: 0.558884\nTrain Epoch: 1 [55680/60000 (93%)]\tLoss: 0.528969\nTrain Epoch: 1 [56320/60000 (94%)]\tLoss: 0.370189\nTrain Epoch: 1 [56960/60000 (95%)]\tLoss: 0.379404\nTrain Epoch: 1 [57600/60000 (96%)]\tLoss: 0.263894\nTrain Epoch: 1 [58240/60000 (97%)]\tLoss: 0.432745\nTrain Epoch: 1 [58880/60000 (98%)]\tLoss: 0.455681\nTrain Epoch: 1 [59520/60000 (99%)]\tLoss: 0.483901\n/azureml-envs/azureml_de892a6d0f01a442356c3959dd42e13b/lib/python3.6/site-packages/torch/nn/functional.py:54: UserWarning: size_average and reduce args will be deprecated, please use reduction='sum' instead.\n warnings.warn(warning.format(ret))\n\nTest set: Average loss: 0.2073, Accuracy: 9384/10000 (94%)\n\nTrain Epoch: 2 [0/60000 (0%)]\tLoss: 0.390797\nTrain Epoch: 2 [640/60000 (1%)]\tLoss: 0.214512\nTrain Epoch: 2 [1280/60000 (2%)]\tLoss: 0.226415\nTrain Epoch: 2 [1920/60000 (3%)]\tLoss: 0.491764\nTrain Epoch: 2 [2560/60000 (4%)]\tLoss: 0.333604\nTrain Epoch: 2 [3200/60000 (5%)]\tLoss: 0.514239\nTrain Epoch: 2 [3840/60000 (6%)]\tLoss: 0.430618\nTrain Epoch: 2 [4480/60000 (7%)]\tLoss: 0.579474\nTrain Epoch: 2 [5120/60000 (9%)]\tLoss: 0.259456\nTrain Epoch: 2 [5760/60000 (10%)]\tLoss: 0.651198\nTrain Epoch: 2 [6400/60000 (11%)]\tLoss: 0.338269\nTrain Epoch: 2 [7040/60000 (12%)]\tLoss: 0.335233\nTrain Epoch: 2 [7680/60000 (13%)]\tLoss: 0.518132\nTrain Epoch: 2 [8320/60000 (14%)]\tLoss: 0.363488\nTrain Epoch: 2 [8960/60000 (15%)]\tLoss: 0.437092\nTrain Epoch: 2 [9600/60000 (16%)]\tLoss: 0.362660\nTrain Epoch: 2 [10240/60000 (17%)]\tLoss: 0.432337\nTrain Epoch: 2 [10880/60000 (18%)]\tLoss: 0.360611\nTrain Epoch: 2 [11520/60000 (19%)]\tLoss: 0.305427\nTrain Epoch: 2 [12160/60000 (20%)]\tLoss: 0.347859\nTrain Epoch: 2 [12800/60000 (21%)]\tLoss: 0.408770\nTrain Epoch: 2 [13440/60000 (22%)]\tLoss: 0.469975\nTrain Epoch: 2 [14080/60000 (23%)]\tLoss: 0.673716\nTrain Epoch: 2 [14720/60000 (25%)]\tLoss: 0.388876\nTrain Epoch: 2 [15360/60000 (26%)]\tLoss: 0.462371\nTrain Epoch: 2 [16000/60000 
(27%)]\tLoss: 0.530107\nTrain Epoch: 2 [16640/60000 (28%)]\tLoss: 0.448767\nTrain Epoch: 2 [17280/60000 (29%)]\tLoss: 0.412764\nTrain Epoch: 2 [17920/60000 (30%)]\tLoss: 0.301494\nTrain Epoch: 2 [18560/60000 (31%)]\tLoss: 0.465599\nTrain Epoch: 2 [19200/60000 (32%)]\tLoss: 0.434249\nTrain Epoch: 2 [19840/60000 (33%)]\tLoss: 0.324006\nTrain Epoch: 2 [20480/60000 (34%)]\tLoss: 0.447446\nTrain Epoch: 2 [21120/60000 (35%)]\tLoss: 0.291222\nTrain Epoch: 2 [21760/60000 (36%)]\tLoss: 0.557065\nTrain Epoch: 2 [22400/60000 (37%)]\tLoss: 0.552659\nTrain Epoch: 2 [23040/60000 (38%)]\tLoss: 0.378901\nTrain Epoch: 2 [23680/60000 (39%)]\tLoss: 0.360550\nTrain Epoch: 2 [24320/60000 (41%)]\tLoss: 0.283795\nTrain Epoch: 2 [24960/60000 (42%)]\tLoss: 0.475816\nTrain Epoch: 2 [25600/60000 (43%)]\tLoss: 0.283652\nTrain Epoch: 2 [26240/60000 (44%)]\tLoss: 0.276265\nTrain Epoch: 2 [26880/60000 (45%)]\tLoss: 0.527902\nTrain Epoch: 2 [27520/60000 (46%)]\tLoss: 0.437130\nTrain Epoch: 2 [28160/60000 (47%)]\tLoss: 0.277132\nTrain Epoch: 2 [28800/60000 (48%)]\tLoss: 0.471580\nTrain Epoch: 2 [29440/60000 (49%)]\tLoss: 0.380154\nTrain Epoch: 2 [30080/60000 (50%)]\tLoss: 0.232072\nTrain Epoch: 2 [30720/60000 (51%)]\tLoss: 0.366567\nTrain Epoch: 2 [31360/60000 (52%)]\tLoss: 0.469628\nTrain Epoch: 2 [32000/60000 (53%)]\tLoss: 0.440017\nTrain Epoch: 2 [32640/60000 (54%)]\tLoss: 0.421814\nTrain Epoch: 2 [33280/60000 (55%)]\tLoss: 0.367687\nTrain Epoch: 2 [33920/60000 (57%)]\tLoss: 0.448384\nTrain Epoch: 2 [34560/60000 (58%)]\tLoss: 0.550283\nTrain Epoch: 2 [35200/60000 (59%)]\tLoss: 0.609798\nTrain Epoch: 2 [35840/60000 (60%)]\tLoss: 0.461334\nTrain Epoch: 2 [36480/60000 (61%)]\tLoss: 0.443838\nTrain Epoch: 2 [37120/60000 (62%)]\tLoss: 0.306666\nTrain Epoch: 2 [37760/60000 (63%)]\tLoss: 0.432083\nTrain Epoch: 2 [38400/60000 (64%)]\tLoss: 0.277025\nTrain Epoch: 2 [39040/60000 (65%)]\tLoss: 0.298752\nTrain Epoch: 2 [39680/60000 (66%)]\tLoss: 0.427435\nTrain Epoch: 2 [40320/60000 (67%)]\tLoss: 0.374736\nTrain Epoch: 2 [40960/60000 (68%)]\tLoss: 0.246496\nTrain Epoch: 2 [41600/60000 (69%)]\tLoss: 0.662259\nTrain Epoch: 2 [42240/60000 (70%)]\tLoss: 0.497635\nTrain Epoch: 2 [42880/60000 (71%)]\tLoss: 0.237556\nTrain Epoch: 2 [43520/60000 (72%)]\tLoss: 0.194535\nTrain Epoch: 2 [44160/60000 (74%)]\tLoss: 0.258943\nTrain Epoch: 2 [44800/60000 (75%)]\tLoss: 0.437360\nTrain Epoch: 2 [45440/60000 (76%)]\tLoss: 0.355489\nTrain Epoch: 2 [46080/60000 (77%)]\tLoss: 0.335020\nTrain Epoch: 2 [46720/60000 (78%)]\tLoss: 0.565189\nTrain Epoch: 2 [47360/60000 (79%)]\tLoss: 0.430366\nTrain Epoch: 2 [48000/60000 (80%)]\tLoss: 0.266303\nTrain Epoch: 2 [48640/60000 (81%)]\tLoss: 0.172954\nTrain Epoch: 2 [49280/60000 (82%)]\tLoss: 0.245803\nTrain Epoch: 2 [49920/60000 (83%)]\tLoss: 0.426530\nTrain Epoch: 2 [50560/60000 (84%)]\tLoss: 0.468984\nTrain Epoch: 2 [51200/60000 (85%)]\tLoss: 0.370892\nTrain Epoch: 2 [51840/60000 (86%)]\tLoss: 0.300021\nTrain Epoch: 2 [52480/60000 (87%)]\tLoss: 0.392199\nTrain Epoch: 2 [53120/60000 (88%)]\tLoss: 0.510658\nTrain Epoch: 2 [53760/60000 (90%)]\tLoss: 0.376290\nTrain Epoch: 2 [54400/60000 (91%)]\tLoss: 0.273752\nTrain Epoch: 2 [55040/60000 (92%)]\tLoss: 0.234505\nTrain Epoch: 2 [55680/60000 (93%)]\tLoss: 0.610978\nTrain Epoch: 2 [56320/60000 (94%)]\tLoss: 0.154850\nTrain Epoch: 2 [56960/60000 (95%)]\tLoss: 0.374254\nTrain Epoch: 2 [57600/60000 (96%)]\tLoss: 0.292167\nTrain Epoch: 2 [58240/60000 (97%)]\tLoss: 0.478376\nTrain Epoch: 2 [58880/60000 (98%)]\tLoss: 0.303128\nTrain Epoch: 2 [59520/60000 (99%)]\tLoss: 
0.376779\n\nTest set: Average loss: 0.1297, Accuracy: 9597/10000 (96%)\n\nTrain Epoch: 3 [0/60000 (0%)]\tLoss: 0.450588\nTrain Epoch: 3 [640/60000 (1%)]\tLoss: 0.361118\nTrain Epoch: 3 [1280/60000 (2%)]\tLoss: 0.374497\nTrain Epoch: 3 [1920/60000 (3%)]\tLoss: 0.312127\nTrain Epoch: 3 [2560/60000 (4%)]\tLoss: 0.353896\nTrain Epoch: 3 [3200/60000 (5%)]\tLoss: 0.320840\nTrain Epoch: 3 [3840/60000 (6%)]\tLoss: 0.218477\nTrain Epoch: 3 [4480/60000 (7%)]\tLoss: 0.295629\nTrain Epoch: 3 [5120/60000 (9%)]\tLoss: 0.339400\nTrain Epoch: 3 [5760/60000 (10%)]\tLoss: 0.170357\nTrain Epoch: 3 [6400/60000 (11%)]\tLoss: 0.416447\nTrain Epoch: 3 [7040/60000 (12%)]\tLoss: 0.320326\nTrain Epoch: 3 [7680/60000 (13%)]\tLoss: 0.318410\nTrain Epoch: 3 [8320/60000 (14%)]\tLoss: 0.384793\nTrain Epoch: 3 [8960/60000 (15%)]\tLoss: 0.343415\nTrain Epoch: 3 [9600/60000 (16%)]\tLoss: 0.284627\nTrain Epoch: 3 [10240/60000 (17%)]\tLoss: 0.151805\nTrain Epoch: 3 [10880/60000 (18%)]\tLoss: 0.401332\nTrain Epoch: 3 [11520/60000 (19%)]\tLoss: 0.253159\nTrain Epoch: 3 [12160/60000 (20%)]\tLoss: 0.339563\nTrain Epoch: 3 [12800/60000 (21%)]\tLoss: 0.237430\nTrain Epoch: 3 [13440/60000 (22%)]\tLoss: 0.311402\nTrain Epoch: 3 [14080/60000 (23%)]\tLoss: 0.241667\nTrain Epoch: 3 [14720/60000 (25%)]\tLoss: 0.265347\nTrain Epoch: 3 [15360/60000 (26%)]\tLoss: 0.367453\nTrain Epoch: 3 [16000/60000 (27%)]\tLoss: 0.190671\nTrain Epoch: 3 [16640/60000 (28%)]\tLoss: 0.313052\nTrain Epoch: 3 [17280/60000 (29%)]\tLoss: 0.368028\nTrain Epoch: 3 [17920/60000 (30%)]\tLoss: 0.268639\nTrain Epoch: 3 [18560/60000 (31%)]\tLoss: 0.341066\nTrain Epoch: 3 [19200/60000 (32%)]\tLoss: 0.457961\nTrain Epoch: 3 [19840/60000 (33%)]\tLoss: 0.732400\nTrain Epoch: 3 [20480/60000 (34%)]\tLoss: 0.330679\nTrain Epoch: 3 [21120/60000 (35%)]\tLoss: 0.279778\nTrain Epoch: 3 [21760/60000 (36%)]\tLoss: 0.305972\nTrain Epoch: 3 [22400/60000 (37%)]\tLoss: 0.402131\nTrain Epoch: 3 [23040/60000 (38%)]\tLoss: 0.345302\nTrain Epoch: 3 [23680/60000 (39%)]\tLoss: 0.251726\nTrain Epoch: 3 [24320/60000 (41%)]\tLoss: 0.152062\nTrain Epoch: 3 [24960/60000 (42%)]\tLoss: 0.149305\nTrain Epoch: 3 [25600/60000 (43%)]\tLoss: 0.364678\nTrain Epoch: 3 [26240/60000 (44%)]\tLoss: 0.067165\nTrain Epoch: 3 [26880/60000 (45%)]\tLoss: 0.229927\nTrain Epoch: 3 [27520/60000 (46%)]\tLoss: 0.236894\nTrain Epoch: 3 [28160/60000 (47%)]\tLoss: 0.486373\nTrain Epoch: 3 [28800/60000 (48%)]\tLoss: 0.453053\nTrain Epoch: 3 [29440/60000 (49%)]\tLoss: 0.283823\nTrain Epoch: 3 [30080/60000 (50%)]\tLoss: 0.185119\nTrain Epoch: 3 [30720/60000 (51%)]\tLoss: 0.381274\nTrain Epoch: 3 [31360/60000 (52%)]\tLoss: 0.394533\nTrain Epoch: 3 [32000/60000 (53%)]\tLoss: 0.392791\nTrain Epoch: 3 [32640/60000 (54%)]\tLoss: 0.230672\nTrain Epoch: 3 [33280/60000 (55%)]\tLoss: 0.393846\nTrain Epoch: 3 [33920/60000 (57%)]\tLoss: 0.676802\nTrain Epoch: 3 [34560/60000 (58%)]\tLoss: 0.160434\nTrain Epoch: 3 [35200/60000 (59%)]\tLoss: 0.211318\nTrain Epoch: 3 [35840/60000 (60%)]\tLoss: 0.245763\nTrain Epoch: 3 [36480/60000 (61%)]\tLoss: 0.198454\nTrain Epoch: 3 [37120/60000 (62%)]\tLoss: 0.243536\nTrain Epoch: 3 [37760/60000 (63%)]\tLoss: 0.151804\nTrain Epoch: 3 [38400/60000 (64%)]\tLoss: 0.176093\nTrain Epoch: 3 [39040/60000 (65%)]\tLoss: 0.237228\nTrain Epoch: 3 [39680/60000 (66%)]\tLoss: 0.146441\nTrain Epoch: 3 [40320/60000 (67%)]\tLoss: 0.345162\nTrain Epoch: 3 [40960/60000 (68%)]\tLoss: 0.400378\nTrain Epoch: 3 [41600/60000 (69%)]\tLoss: 0.259152\nTrain Epoch: 3 [42240/60000 (70%)]\tLoss: 0.569659\nTrain Epoch: 3 
[42880/60000 (71%)]\tLoss: 0.166401\nTrain Epoch: 3 [43520/60000 (72%)]\tLoss: 0.220592\nTrain Epoch: 3 [44160/60000 (74%)]\tLoss: 0.303227\nTrain Epoch: 3 [44800/60000 (75%)]\tLoss: 0.193691\nTrain Epoch: 3 [45440/60000 (76%)]\tLoss: 0.257408\nTrain Epoch: 3 [46080/60000 (77%)]\tLoss: 0.391211\nTrain Epoch: 3 [46720/60000 (78%)]\tLoss: 0.419841\nTrain Epoch: 3 [47360/60000 (79%)]\tLoss: 0.121861\nTrain Epoch: 3 [48000/60000 (80%)]\tLoss: 0.176442\nTrain Epoch: 3 [48640/60000 (81%)]\tLoss: 0.534631\nTrain Epoch: 3 [49280/60000 (82%)]\tLoss: 0.296596\nTrain Epoch: 3 [49920/60000 (83%)]\tLoss: 0.190096\nTrain Epoch: 3 [50560/60000 (84%)]\tLoss: 0.360826\nTrain Epoch: 3 [51200/60000 (85%)]\tLoss: 0.427482\nTrain Epoch: 3 [51840/60000 (86%)]\tLoss: 0.251076\nTrain Epoch: 3 [52480/60000 (87%)]\tLoss: 0.319904\nTrain Epoch: 3 [53120/60000 (88%)]\tLoss: 0.228778\nTrain Epoch: 3 [53760/60000 (90%)]\tLoss: 0.180340\nTrain Epoch: 3 [54400/60000 (91%)]\tLoss: 0.236512\nTrain Epoch: 3 [55040/60000 (92%)]\tLoss: 0.206779\nTrain Epoch: 3 [55680/60000 (93%)]\tLoss: 0.323677\nTrain Epoch: 3 [56320/60000 (94%)]\tLoss: 0.406382\nTrain Epoch: 3 [56960/60000 (95%)]\tLoss: 0.426768\nTrain Epoch: 3 [57600/60000 (96%)]\tLoss: 0.595419\nTrain Epoch: 3 [58240/60000 (97%)]\tLoss: 0.175457\nTrain Epoch: 3 [58880/60000 (98%)]\tLoss: 0.301019\nTrain Epoch: 3 [59520/60000 (99%)]\tLoss: 0.419139\n\nTest set: Average loss: 0.1049, Accuracy: 9686/10000 (97%)\n\nTrain Epoch: 4 [0/60000 (0%)]\tLoss: 0.352631\nTrain Epoch: 4 [640/60000 (1%)]\tLoss: 0.343671\nTrain Epoch: 4 [1280/60000 (2%)]\tLoss: 0.170439\nTrain Epoch: 4 [1920/60000 (3%)]\tLoss: 0.289486\nTrain Epoch: 4 [2560/60000 (4%)]\tLoss: 0.096597\nTrain Epoch: 4 [3200/60000 (5%)]\tLoss: 0.263759\nTrain Epoch: 4 [3840/60000 (6%)]\tLoss: 0.369941\nTrain Epoch: 4 [4480/60000 (7%)]\tLoss: 0.326594\nTrain Epoch: 4 [5120/60000 (9%)]\tLoss: 0.174094\nTrain Epoch: 4 [5760/60000 (10%)]\tLoss: 0.442069\nTrain Epoch: 4 [6400/60000 (11%)]\tLoss: 0.179002\nTrain Epoch: 4 [7040/60000 (12%)]\tLoss: 0.292742\nTrain Epoch: 4 [7680/60000 (13%)]\tLoss: 0.209898\nTrain Epoch: 4 [8320/60000 (14%)]\tLoss: 0.401671\nTrain Epoch: 4 [8960/60000 (15%)]\tLoss: 0.205146\nTrain Epoch: 4 [9600/60000 (16%)]\tLoss: 0.250836\nTrain Epoch: 4 [10240/60000 (17%)]\tLoss: 0.156622\nTrain Epoch: 4 [10880/60000 (18%)]\tLoss: 0.214578\nTrain Epoch: 4 [11520/60000 (19%)]\tLoss: 0.155916\nTrain Epoch: 4 [12160/60000 (20%)]\tLoss: 0.416294\nTrain Epoch: 4 [12800/60000 (21%)]\tLoss: 0.197429\nTrain Epoch: 4 [13440/60000 (22%)]\tLoss: 0.154103\nTrain Epoch: 4 [14080/60000 (23%)]\tLoss: 0.377950\nTrain Epoch: 4 [14720/60000 (25%)]\tLoss: 0.338084\nTrain Epoch: 4 [15360/60000 (26%)]\tLoss: 0.242834\nTrain Epoch: 4 [16000/60000 (27%)]\tLoss: 0.139219\nTrain Epoch: 4 [16640/60000 (28%)]\tLoss: 0.242067\nTrain Epoch: 4 [17280/60000 (29%)]\tLoss: 0.189929\nTrain Epoch: 4 [17920/60000 (30%)]\tLoss: 0.358215\nTrain Epoch: 4 [18560/60000 (31%)]\tLoss: 0.354969\nTrain Epoch: 4 [19200/60000 (32%)]\tLoss: 0.303644\nTrain Epoch: 4 [19840/60000 (33%)]\tLoss: 0.322343\nTrain Epoch: 4 [20480/60000 (34%)]\tLoss: 0.225422\nTrain Epoch: 4 [21120/60000 (35%)]\tLoss: 0.614347\nTrain Epoch: 4 [21760/60000 (36%)]\tLoss: 0.448674\nTrain Epoch: 4 [22400/60000 (37%)]\tLoss: 0.362976\nTrain Epoch: 4 [23040/60000 (38%)]\tLoss: 0.100357\nTrain Epoch: 4 [23680/60000 (39%)]\tLoss: 0.289331\nTrain Epoch: 4 [24320/60000 (41%)]\tLoss: 0.405818\nTrain Epoch: 4 [24960/60000 (42%)]\tLoss: 0.212617\nTrain Epoch: 4 [25600/60000 (43%)]\tLoss: 
0.348597\nTrain Epoch: 4 [26240/60000 (44%)]\tLoss: 0.351009\nTrain Epoch: 4 [26880/60000 (45%)]\tLoss: 0.341456\nTrain Epoch: 4 [27520/60000 (46%)]\tLoss: 0.297527\nTrain Epoch: 4 [28160/60000 (47%)]\tLoss: 0.281190\nTrain Epoch: 4 [28800/60000 (48%)]\tLoss: 0.187359\nTrain Epoch: 4 [29440/60000 (49%)]\tLoss: 0.178844\nTrain Epoch: 4 [30080/60000 (50%)]\tLoss: 0.201243\nTrain Epoch: 4 [30720/60000 (51%)]\tLoss: 0.305701\nTrain Epoch: 4 [31360/60000 (52%)]\tLoss: 0.370592\nTrain Epoch: 4 [32000/60000 (53%)]\tLoss: 0.241955\nTrain Epoch: 4 [32640/60000 (54%)]\tLoss: 0.278765\nTrain Epoch: 4 [33280/60000 (55%)]\tLoss: 0.284302\nTrain Epoch: 4 [33920/60000 (57%)]\tLoss: 0.337426\nTrain Epoch: 4 [34560/60000 (58%)]\tLoss: 0.277304\nTrain Epoch: 4 [35200/60000 (59%)]\tLoss: 0.221228\nTrain Epoch: 4 [35840/60000 (60%)]\tLoss: 0.150985\nTrain Epoch: 4 [36480/60000 (61%)]\tLoss: 0.312087\nTrain Epoch: 4 [37120/60000 (62%)]\tLoss: 0.170111\nTrain Epoch: 4 [37760/60000 (63%)]\tLoss: 0.291135\nTrain Epoch: 4 [38400/60000 (64%)]\tLoss: 0.160971\nTrain Epoch: 4 [39040/60000 (65%)]\tLoss: 0.390679\nTrain Epoch: 4 [39680/60000 (66%)]\tLoss: 0.434802\nTrain Epoch: 4 [40320/60000 (67%)]\tLoss: 0.281539\nTrain Epoch: 4 [40960/60000 (68%)]\tLoss: 0.172577\nTrain Epoch: 4 [41600/60000 (69%)]\tLoss: 0.348624\nTrain Epoch: 4 [42240/60000 (70%)]\tLoss: 0.380416\nTrain Epoch: 4 [42880/60000 (71%)]\tLoss: 0.483520\nTrain Epoch: 4 [43520/60000 (72%)]\tLoss: 0.216825\nTrain Epoch: 4 [44160/60000 (74%)]\tLoss: 0.320874\nTrain Epoch: 4 [44800/60000 (75%)]\tLoss: 0.213358\nTrain Epoch: 4 [45440/60000 (76%)]\tLoss: 0.218650\nTrain Epoch: 4 [46080/60000 (77%)]\tLoss: 0.221090\nTrain Epoch: 4 [46720/60000 (78%)]\tLoss: 0.325981\nTrain Epoch: 4 [47360/60000 (79%)]\tLoss: 0.283184\nTrain Epoch: 4 [48000/60000 (80%)]\tLoss: 0.072845\nTrain Epoch: 4 [48640/60000 (81%)]\tLoss: 0.206940\nTrain Epoch: 4 [49280/60000 (82%)]\tLoss: 0.423454\nTrain Epoch: 4 [49920/60000 (83%)]\tLoss: 0.475285\nTrain Epoch: 4 [50560/60000 (84%)]\tLoss: 0.128978\nTrain Epoch: 4 [51200/60000 (85%)]\tLoss: 0.195609\nTrain Epoch: 4 [51840/60000 (86%)]\tLoss: 0.125730\nTrain Epoch: 4 [52480/60000 (87%)]\tLoss: 0.137783\nTrain Epoch: 4 [53120/60000 (88%)]\tLoss: 0.375247\nTrain Epoch: 4 [53760/60000 (90%)]\tLoss: 0.243497\nTrain Epoch: 4 [54400/60000 (91%)]\tLoss: 0.236100\nTrain Epoch: 4 [55040/60000 (92%)]\tLoss: 0.266795\nTrain Epoch: 4 [55680/60000 (93%)]\tLoss: 0.229095\nTrain Epoch: 4 [56320/60000 (94%)]\tLoss: 0.167610\nTrain Epoch: 4 [56960/60000 (95%)]\tLoss: 0.240640\nTrain Epoch: 4 [57600/60000 (96%)]\tLoss: 0.153999\nTrain Epoch: 4 [58240/60000 (97%)]\tLoss: 0.753790\nTrain Epoch: 4 [58880/60000 (98%)]\tLoss: 0.143998\nTrain Epoch: 4 [59520/60000 (99%)]\tLoss: 0.310583\n\nTest set: Average loss: 0.0843, Accuracy: 9739/10000 (97%)\n\nTrain Epoch: 5 [0/60000 (0%)]\tLoss: 0.227892\nTrain Epoch: 5 [640/60000 (1%)]\tLoss: 0.162702\nTrain Epoch: 5 [1280/60000 (2%)]\tLoss: 0.227571\nTrain Epoch: 5 [1920/60000 (3%)]\tLoss: 0.148511\nTrain Epoch: 5 [2560/60000 (4%)]\tLoss: 0.187414\nTrain Epoch: 5 [3200/60000 (5%)]\tLoss: 0.194418\nTrain Epoch: 5 [3840/60000 (6%)]\tLoss: 0.276495\nTrain Epoch: 5 [4480/60000 (7%)]\tLoss: 0.268769\nTrain Epoch: 5 [5120/60000 (9%)]\tLoss: 0.163968\nTrain Epoch: 5 [5760/60000 (10%)]\tLoss: 0.349296\nTrain Epoch: 5 [6400/60000 (11%)]\tLoss: 0.217248\nTrain Epoch: 5 [7040/60000 (12%)]\tLoss: 0.195263\nTrain Epoch: 5 [7680/60000 (13%)]\tLoss: 0.339447\nTrain Epoch: 5 [8320/60000 (14%)]\tLoss: 0.224461\nTrain Epoch: 5 
[8960/60000 (15%)]\tLoss: 0.095605\nTrain Epoch: 5 [9600/60000 (16%)]\tLoss: 0.196891\nTrain Epoch: 5 [10240/60000 (17%)]\tLoss: 0.218742\nTrain Epoch: 5 [10880/60000 (18%)]\tLoss: 0.071347\nTrain Epoch: 5 [11520/60000 (19%)]\tLoss: 0.403286\nTrain Epoch: 5 [12160/60000 (20%)]\tLoss: 0.149740\nTrain Epoch: 5 [12800/60000 (21%)]\tLoss: 0.160939\nTrain Epoch: 5 [13440/60000 (22%)]\tLoss: 0.236512\nTrain Epoch: 5 [14080/60000 (23%)]\tLoss: 0.348727\nTrain Epoch: 5 [14720/60000 (25%)]\tLoss: 0.190054\nTrain Epoch: 5 [15360/60000 (26%)]\tLoss: 0.272029\nTrain Epoch: 5 [16000/60000 (27%)]\tLoss: 0.427739\nTrain Epoch: 5 [16640/60000 (28%)]\tLoss: 0.322332\nTrain Epoch: 5 [17280/60000 (29%)]\tLoss: 0.141410\nTrain Epoch: 5 [17920/60000 (30%)]\tLoss: 0.098900\nTrain Epoch: 5 [18560/60000 (31%)]\tLoss: 0.252387\nTrain Epoch: 5 [19200/60000 (32%)]\tLoss: 0.182150\nTrain Epoch: 5 [19840/60000 (33%)]\tLoss: 0.133239\nTrain Epoch: 5 [20480/60000 (34%)]\tLoss: 0.126683\nTrain Epoch: 5 [21120/60000 (35%)]\tLoss: 0.370189\nTrain Epoch: 5 [21760/60000 (36%)]\tLoss: 0.162514\nTrain Epoch: 5 [22400/60000 (37%)]\tLoss: 0.272352\nTrain Epoch: 5 [23040/60000 (38%)]\tLoss: 0.298543\nTrain Epoch: 5 [23680/60000 (39%)]\tLoss: 0.235891\nTrain Epoch: 5 [24320/60000 (41%)]\tLoss: 0.187710\nTrain Epoch: 5 [24960/60000 (42%)]\tLoss: 0.185363\nTrain Epoch: 5 [25600/60000 (43%)]\tLoss: 0.193369\nTrain Epoch: 5 [26240/60000 (44%)]\tLoss: 0.155984\nTrain Epoch: 5 [26880/60000 (45%)]\tLoss: 0.388923\nTrain Epoch: 5 [27520/60000 (46%)]\tLoss: 0.192868\nTrain Epoch: 5 [28160/60000 (47%)]\tLoss: 0.535787\nTrain Epoch: 5 [28800/60000 (48%)]\tLoss: 0.161020\nTrain Epoch: 5 [29440/60000 (49%)]\tLoss: 0.242179\nTrain Epoch: 5 [30080/60000 (50%)]\tLoss: 0.136554\nTrain Epoch: 5 [30720/60000 (51%)]\tLoss: 0.190672\nTrain Epoch: 5 [31360/60000 (52%)]\tLoss: 0.118027\nTrain Epoch: 5 [32000/60000 (53%)]\tLoss: 0.278750\nTrain Epoch: 5 [32640/60000 (54%)]\tLoss: 0.418058\nTrain Epoch: 5 [33280/60000 (55%)]\tLoss: 0.287063\nTrain Epoch: 5 [33920/60000 (57%)]\tLoss: 0.279596\nTrain Epoch: 5 [34560/60000 (58%)]\tLoss: 0.181579\nTrain Epoch: 5 [35200/60000 (59%)]\tLoss: 0.443592\nTrain Epoch: 5 [35840/60000 (60%)]\tLoss: 0.095470\nTrain Epoch: 5 [36480/60000 (61%)]\tLoss: 0.277385\nTrain Epoch: 5 [37120/60000 (62%)]\tLoss: 0.263358\nTrain Epoch: 5 [37760/60000 (63%)]\tLoss: 0.190867\nTrain Epoch: 5 [38400/60000 (64%)]\tLoss: 0.176580\nTrain Epoch: 5 [39040/60000 (65%)]\tLoss: 0.360235\nTrain Epoch: 5 [39680/60000 (66%)]\tLoss: 0.172416\nTrain Epoch: 5 [40320/60000 (67%)]\tLoss: 0.174126\nTrain Epoch: 5 [40960/60000 (68%)]\tLoss: 0.202162\nTrain Epoch: 5 [41600/60000 (69%)]\tLoss: 0.196991\nTrain Epoch: 5 [42240/60000 (70%)]\tLoss: 0.224622\nTrain Epoch: 5 [42880/60000 (71%)]\tLoss: 0.180406\nTrain Epoch: 5 [43520/60000 (72%)]\tLoss: 0.060447\nTrain Epoch: 5 [44160/60000 (74%)]\tLoss: 0.322497\nTrain Epoch: 5 [44800/60000 (75%)]\tLoss: 0.239324\nTrain Epoch: 5 [45440/60000 (76%)]\tLoss: 0.348920\nTrain Epoch: 5 [46080/60000 (77%)]\tLoss: 0.240017\nTrain Epoch: 5 [46720/60000 (78%)]\tLoss: 0.237575\nTrain Epoch: 5 [47360/60000 (79%)]\tLoss: 0.142648\nTrain Epoch: 5 [48000/60000 (80%)]\tLoss: 0.227562\nTrain Epoch: 5 [48640/60000 (81%)]\tLoss: 0.254358\nTrain Epoch: 5 [49280/60000 (82%)]\tLoss: 0.135818\nTrain Epoch: 5 [49920/60000 (83%)]\tLoss: 0.386120\nTrain Epoch: 5 [50560/60000 (84%)]\tLoss: 0.328150\nTrain Epoch: 5 [51200/60000 (85%)]\tLoss: 0.276833\nTrain Epoch: 5 [51840/60000 (86%)]\tLoss: 0.308869\nTrain Epoch: 5 [52480/60000 
(87%)]\tLoss: 0.246442\nTrain Epoch: 5 [53120/60000 (88%)]\tLoss: 0.240874\nTrain Epoch: 5 [53760/60000 (90%)]\tLoss: 0.114337\nTrain Epoch: 5 [54400/60000 (91%)]\tLoss: 0.217325\nTrain Epoch: 5 [55040/60000 (92%)]\tLoss: 0.223010\nTrain Epoch: 5 [55680/60000 (93%)]\tLoss: 0.138459\nTrain Epoch: 5 [56320/60000 (94%)]\tLoss: 0.283678\nTrain Epoch: 5 [56960/60000 (95%)]\tLoss: 0.158834\nTrain Epoch: 5 [57600/60000 (96%)]\tLoss: 0.164267\nTrain Epoch: 5 [58240/60000 (97%)]\tLoss: 0.290795\nTrain Epoch: 5 [58880/60000 (98%)]\tLoss: 0.451639\nTrain Epoch: 5 [59520/60000 (99%)]\tLoss: 0.349018\n\nTest set: Average loss: 0.0797, Accuracy: 9758/10000 (98%)\n\nTrain Epoch: 6 [0/60000 (0%)]\tLoss: 0.311334\nTrain Epoch: 6 [640/60000 (1%)]\tLoss: 0.129143\nTrain Epoch: 6 [1280/60000 (2%)]\tLoss: 0.227222\nTrain Epoch: 6 [1920/60000 (3%)]\tLoss: 0.157591\nTrain Epoch: 6 [2560/60000 (4%)]\tLoss: 0.205490\nTrain Epoch: 6 [3200/60000 (5%)]\tLoss: 0.421089\nTrain Epoch: 6 [3840/60000 (6%)]\tLoss: 0.157544\nTrain Epoch: 6 [4480/60000 (7%)]\tLoss: 0.087023\nTrain Epoch: 6 [5120/60000 (9%)]\tLoss: 0.130669\nTrain Epoch: 6 [5760/60000 (10%)]\tLoss: 0.059450\nTrain Epoch: 6 [6400/60000 (11%)]\tLoss: 0.121786\nTrain Epoch: 6 [7040/60000 (12%)]\tLoss: 0.177859\nTrain Epoch: 6 [7680/60000 (13%)]\tLoss: 0.217464\nTrain Epoch: 6 [8320/60000 (14%)]\tLoss: 0.183426\nTrain Epoch: 6 [8960/60000 (15%)]\tLoss: 0.237282\nTrain Epoch: 6 [9600/60000 (16%)]\tLoss: 0.210031\nTrain Epoch: 6 [10240/60000 (17%)]\tLoss: 0.256110\nTrain Epoch: 6 [10880/60000 (18%)]\tLoss: 0.155481\nTrain Epoch: 6 [11520/60000 (19%)]\tLoss: 0.166967\nTrain Epoch: 6 [12160/60000 (20%)]\tLoss: 0.144590\nTrain Epoch: 6 [12800/60000 (21%)]\tLoss: 0.229593\nTrain Epoch: 6 [13440/60000 (22%)]\tLoss: 0.092102\nTrain Epoch: 6 [14080/60000 (23%)]\tLoss: 0.144247\nTrain Epoch: 6 [14720/60000 (25%)]\tLoss: 0.459083\nTrain Epoch: 6 [15360/60000 (26%)]\tLoss: 0.174974\nTrain Epoch: 6 [16000/60000 (27%)]\tLoss: 0.146433\nTrain Epoch: 6 [16640/60000 (28%)]\tLoss: 0.291392\nTrain Epoch: 6 [17280/60000 (29%)]\tLoss: 0.203127\nTrain Epoch: 6 [17920/60000 (30%)]\tLoss: 0.255063\nTrain Epoch: 6 [18560/60000 (31%)]\tLoss: 0.167576\nTrain Epoch: 6 [19200/60000 (32%)]\tLoss: 0.171914\nTrain Epoch: 6 [19840/60000 (33%)]\tLoss: 0.215950\nTrain Epoch: 6 [20480/60000 (34%)]\tLoss: 0.246624\nTrain Epoch: 6 [21120/60000 (35%)]\tLoss: 0.242730\nTrain Epoch: 6 [21760/60000 (36%)]\tLoss: 0.345666\nTrain Epoch: 6 [22400/60000 (37%)]\tLoss: 0.229078\nTrain Epoch: 6 [23040/60000 (38%)]\tLoss: 0.283169\nTrain Epoch: 6 [23680/60000 (39%)]\tLoss: 0.246430\nTrain Epoch: 6 [24320/60000 (41%)]\tLoss: 0.217211\nTrain Epoch: 6 [24960/60000 (42%)]\tLoss: 0.168141\nTrain Epoch: 6 [25600/60000 (43%)]\tLoss: 0.297715\nTrain Epoch: 6 [26240/60000 (44%)]\tLoss: 0.200130\nTrain Epoch: 6 [26880/60000 (45%)]\tLoss: 0.344390\nTrain Epoch: 6 [27520/60000 (46%)]\tLoss: 0.246202\nTrain Epoch: 6 [28160/60000 (47%)]\tLoss: 0.272422\nTrain Epoch: 6 [28800/60000 (48%)]\tLoss: 0.117001\nTrain Epoch: 6 [29440/60000 (49%)]\tLoss: 0.246031\nTrain Epoch: 6 [30080/60000 (50%)]\tLoss: 0.138119\nTrain Epoch: 6 [30720/60000 (51%)]\tLoss: 0.214345\nTrain Epoch: 6 [31360/60000 (52%)]\tLoss: 0.134483\nTrain Epoch: 6 [32000/60000 (53%)]\tLoss: 0.201771\nTrain Epoch: 6 [32640/60000 (54%)]\tLoss: 0.201668\nTrain Epoch: 6 [33280/60000 (55%)]\tLoss: 0.111183\nTrain Epoch: 6 [33920/60000 (57%)]\tLoss: 0.093289\nTrain Epoch: 6 [34560/60000 (58%)]\tLoss: 0.171475\nTrain Epoch: 6 [35200/60000 (59%)]\tLoss: 0.178729\nTrain 
Epoch: 6 [35840/60000 (60%)]\tLoss: 0.144986\nTrain Epoch: 6 [36480/60000 (61%)]\tLoss: 0.302206\nTrain Epoch: 6 [37120/60000 (62%)]\tLoss: 0.389723\nTrain Epoch: 6 [37760/60000 (63%)]\tLoss: 0.268302\nTrain Epoch: 6 [38400/60000 (64%)]\tLoss: 0.358240\nTrain Epoch: 6 [39040/60000 (65%)]\tLoss: 0.241359\nTrain Epoch: 6 [39680/60000 (66%)]\tLoss: 0.282464\nTrain Epoch: 6 [40320/60000 (67%)]\tLoss: 0.205064\nTrain Epoch: 6 [40960/60000 (68%)]\tLoss: 0.106739\nTrain Epoch: 6 [41600/60000 (69%)]\tLoss: 0.076333\nTrain Epoch: 6 [42240/60000 (70%)]\tLoss: 0.157558\nTrain Epoch: 6 [42880/60000 (71%)]\tLoss: 0.217494\nTrain Epoch: 6 [43520/60000 (72%)]\tLoss: 0.183687\nTrain Epoch: 6 [44160/60000 (74%)]\tLoss: 0.217155\nTrain Epoch: 6 [44800/60000 (75%)]\tLoss: 0.108482\nTrain Epoch: 6 [45440/60000 (76%)]\tLoss: 0.324247\nTrain Epoch: 6 [46080/60000 (77%)]\tLoss: 0.352494\nTrain Epoch: 6 [46720/60000 (78%)]\tLoss: 0.163462\nTrain Epoch: 6 [47360/60000 (79%)]\tLoss: 0.154820\nTrain Epoch: 6 [48000/60000 (80%)]\tLoss: 0.174164\nTrain Epoch: 6 [48640/60000 (81%)]\tLoss: 0.196258\nTrain Epoch: 6 [49280/60000 (82%)]\tLoss: 0.226030\nTrain Epoch: 6 [49920/60000 (83%)]\tLoss: 0.306971\nTrain Epoch: 6 [50560/60000 (84%)]\tLoss: 0.387282\nTrain Epoch: 6 [51200/60000 (85%)]\tLoss: 0.213550\nTrain Epoch: 6 [51840/60000 (86%)]\tLoss: 0.133755\nTrain Epoch: 6 [52480/60000 (87%)]\tLoss: 0.176044\nTrain Epoch: 6 [53120/60000 (88%)]\tLoss: 0.282900\nTrain Epoch: 6 [53760/60000 (90%)]\tLoss: 0.154157\nTrain Epoch: 6 [54400/60000 (91%)]\tLoss: 0.138895\nTrain Epoch: 6 [55040/60000 (92%)]\tLoss: 0.254137\nTrain Epoch: 6 [55680/60000 (93%)]\tLoss: 0.107765\nTrain Epoch: 6 [56320/60000 (94%)]\tLoss: 0.118788\nTrain Epoch: 6 [56960/60000 (95%)]\tLoss: 0.142051\nTrain Epoch: 6 [57600/60000 (96%)]\tLoss: 0.176375\nTrain Epoch: 6 [58240/60000 (97%)]\tLoss: 0.131573\nTrain Epoch: 6 [58880/60000 (98%)]\tLoss: 0.347166\nTrain Epoch: 6 [59520/60000 (99%)]\tLoss: 0.217951\n\nTest set: Average loss: 0.0690, Accuracy: 9776/10000 (98%)\n\nTrain Epoch: 7 [0/60000 (0%)]\tLoss: 0.142441\nTrain Epoch: 7 [640/60000 (1%)]\tLoss: 0.078599\nTrain Epoch: 7 [1280/60000 (2%)]\tLoss: 0.121731\nTrain Epoch: 7 [1920/60000 (3%)]\tLoss: 0.070044\nTrain Epoch: 7 [2560/60000 (4%)]\tLoss: 0.224216\nTrain Epoch: 7 [3200/60000 (5%)]\tLoss: 0.104122\nTrain Epoch: 7 [3840/60000 (6%)]\tLoss: 0.228575\nTrain Epoch: 7 [4480/60000 (7%)]\tLoss: 0.377044\nTrain Epoch: 7 [5120/60000 (9%)]\tLoss: 0.296184\nTrain Epoch: 7 [5760/60000 (10%)]\tLoss: 0.099891\nTrain Epoch: 7 [6400/60000 (11%)]\tLoss: 0.269691\nTrain Epoch: 7 [7040/60000 (12%)]\tLoss: 0.240640\nTrain Epoch: 7 [7680/60000 (13%)]\tLoss: 0.171192\nTrain Epoch: 7 [8320/60000 (14%)]\tLoss: 0.306889\nTrain Epoch: 7 [8960/60000 (15%)]\tLoss: 0.238503\nTrain Epoch: 7 [9600/60000 (16%)]\tLoss: 0.286252\nTrain Epoch: 7 [10240/60000 (17%)]\tLoss: 0.171058\nTrain Epoch: 7 [10880/60000 (18%)]\tLoss: 0.208866\nTrain Epoch: 7 [11520/60000 (19%)]\tLoss: 0.418091\nTrain Epoch: 7 [12160/60000 (20%)]\tLoss: 0.115058\nTrain Epoch: 7 [12800/60000 (21%)]\tLoss: 0.159557\nTrain Epoch: 7 [13440/60000 (22%)]\tLoss: 0.085076\nTrain Epoch: 7 [14080/60000 (23%)]\tLoss: 0.244673\nTrain Epoch: 7 [14720/60000 (25%)]\tLoss: 0.316326\nTrain Epoch: 7 [15360/60000 (26%)]\tLoss: 0.370775\nTrain Epoch: 7 [16000/60000 (27%)]\tLoss: 0.235262\nTrain Epoch: 7 [16640/60000 (28%)]\tLoss: 0.296188\nTrain Epoch: 7 [17280/60000 (29%)]\tLoss: 0.224960\nTrain Epoch: 7 [17920/60000 (30%)]\tLoss: 0.162341\nTrain Epoch: 7 [18560/60000 (31%)]\tLoss: 
0.136551\n[... per-batch training log for epochs 7-10 truncated; epoch test summaries retained below ...]\n\nTest set: Average loss: 0.0657, Accuracy: 9801/10000 (98%)\n\nTest set: Average loss: 0.0632, Accuracy: 9807/10000 (98%)\n\nTest set: Average loss: 0.0559, Accuracy: 9813/10000 (98%)\n\nTest set: Average loss: 0.0531, Accuracy: 9837/10000 (98%)\n\n\n\nThe experiment completed successfully. Finalizing run...\nLogging experiment finalizing status in history service\n\n\nRun is completed.",
"run_properties": {
"SendToClient": "1",
"arguments": "--output-dir ./outputs",
"created_utc": "2018-09-25T11:56:04.832205Z",
"distributed_processes": [],
"end_time_utc": "2018-09-25T12:15:57.841467Z",
"log_files": {
"azureml-logs/55_batchai_execution.txt": "https://onnxamlistorageekgyifen.blob.core.windows.net/azureml/ExperimentRun/pytorch1-mnist_1537876563990/azureml-logs/55_batchai_execution.txt?sv=2017-04-17&sr=b&sig=NNkIC62xdG1h6156XtjtgwTJ1ScXlfxhBiBicNNoExE%3D&st=2018-09-25T12%3A06%3A00Z&se=2018-09-25T20%3A16%3A00Z&sp=r",
"azureml-logs/60_control_log.txt": "https://onnxamlistorageekgyifen.blob.core.windows.net/azureml/ExperimentRun/pytorch1-mnist_1537876563990/azureml-logs/60_control_log.txt?sv=2017-04-17&sr=b&sig=i2mtPt6w5xHkEjpkyfl%2BSD1GPpIdpzIbY6sVUQ62QMo%3D&st=2018-09-25T12%3A06%3A00Z&se=2018-09-25T20%3A16%3A00Z&sp=r",
"azureml-logs/80_driver_log.txt": "https://onnxamlistorageekgyifen.blob.core.windows.net/azureml/ExperimentRun/pytorch1-mnist_1537876563990/azureml-logs/80_driver_log.txt?sv=2017-04-17&sr=b&sig=CvqNHP18huWuXWdi%2BeiPcnztgJfI1iQQ6fV6Li25z1Y%3D&st=2018-09-25T12%3A06%3A00Z&se=2018-09-25T20%3A16%3A00Z&sp=r",
"azureml-logs/azureml.log": "https://onnxamlistorageekgyifen.blob.core.windows.net/azureml/ExperimentRun/pytorch1-mnist_1537876563990/azureml-logs/azureml.log?sv=2017-04-17&sr=b&sig=UTaxvUU4Ua%2FpsXPwQnSIV%2FbKK1zERtclIIjcTfbcSzQ%3D&st=2018-09-25T12%3A06%3A00Z&se=2018-09-25T20%3A16%3A00Z&sp=r"
},
"properties": {
"ContentSnapshotId": "727976ee-33bf-44c7-af65-ef1a1cbd2980",
"azureml.runsource": "experiment"
},
"run_duration": "0:19:53",
"run_id": "pytorch1-mnist_1537876563990",
"script_name": "mnist.py",
"status": "Completed",
"tags": {}
},
"widget_settings": {},
"workbench_uri": "https://mlworkspace.azure.ai/portal/subscriptions/75f78a03-482f-4fd8-8c71-5ddc08f92726/resourceGroups/onnxdemos/providers/Microsoft.MachineLearningServices/workspaces/onnx-aml-ignite-demo/experiment/pytorch1-mnist/run/pytorch1-mnist_1537876563990"
}
}
},
"version_major": 2,
"version_minor": 0
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}