Compare commits

...

275 Commits

Author SHA1 Message Date
Roope Astala
60158bf41a Merge pull request #341 from rastala/master
version 1.0.33
2019-04-26 13:45:47 -04:00
Roope Astala
8dbbb01b8a version 1.0.33 2019-04-26 13:44:15 -04:00
Roope Astala
6e6b2b0c48 Merge pull request #340 from rastala/master
add readme
2019-04-26 09:41:49 -04:00
Roope Astala
85f5721bf8 add readme 2019-04-26 09:40:24 -04:00
Shané Winner
6a7dd741e7 Pixel server added 2019-04-23 13:48:23 -07:00
Shané Winner
314218fc89 Added pixel server 2019-04-23 13:47:06 -07:00
Shané Winner
b50d2725c7 Added pixel server 2019-04-23 13:46:06 -07:00
Shané Winner
9a2f448792 Added pixel server 2019-04-23 13:45:05 -07:00
Shané Winner
dd620f19fd Pixel server added 2019-04-23 13:43:41 -07:00
Shané Winner
8116d31da4 Pixel Server added 2019-04-23 13:40:26 -07:00
Shané Winner
ef29dc1fa5 Added Pixel Server 2019-04-23 13:39:18 -07:00
Shané Winner
97b345cb33 Implemented Pixel Server 2019-04-23 13:37:41 -07:00
Shané Winner
282250e670 Implementing Pixel Server 2019-04-23 13:36:24 -07:00
Shané Winner
acef60c5b3 Testing pixel web app 2019-04-23 13:15:04 -07:00
Shané Winner
bfb444eb15 Testing Pixel Tracker 2019-04-23 13:07:48 -07:00
Shané Winner
6277659bf2 Testing Pixel Server 2019-04-23 11:48:55 -07:00
Shané Winner
1645e12712 Testing Tracking Pixel 2019-04-23 11:15:53 -07:00
Roope Astala
cc4a32e70b Merge pull request #337 from jeff-shepherd/master
Updated automl_setup scripts
2019-04-23 13:50:09 -04:00
Jeff Shepherd
997a35aed5 Updated automl_setup scripts 2019-04-23 10:40:33 -07:00
Roope Astala
dd6317a4a0 Merge pull request #336 from rastala/master
adding work-with-data
2019-04-23 10:05:08 -04:00
Roope Astala
82d8353d54 adding work-with-data 2019-04-23 10:04:32 -04:00
Shané Winner
59a01c17a0 Testing the pixel tracker 2019-04-22 14:45:09 -07:00
Shané Winner
e31e1d9af3 Implemented a test pixel tracker 2019-04-22 14:41:32 -07:00
Roope Astala
d38b9db255 Merge pull request #334 from rastala/master
docker update
2019-04-22 15:43:28 -04:00
Roope Astala
761ad88c93 docker update 2019-04-22 15:43:02 -04:00
Roope Astala
644729e5db Merge pull request #333 from rastala/master
version 1.0.30
2019-04-22 15:40:11 -04:00
Roope Astala
e2b1b3fcaa version 1.0.30 2019-04-22 15:39:18 -04:00
Roope Astala
dc692589a9 Merge pull request #326 from rastala/master
update aks notebook
2019-04-18 16:19:51 -04:00
Roope Astala
624b4595b5 update aks notebook 2019-04-18 16:18:33 -04:00
Roope Astala
0ed85c33c2 Delete release.json 2019-04-18 10:01:50 -04:00
Roope Astala
5b01de605f Merge pull request #318 from savitamittal1/hdinotebook
Sample HDI notebook
2019-04-18 10:01:26 -04:00
Savitam
c351ac988a Sample HDI notebook
sample HDI notebook
2019-04-15 12:35:34 -07:00
Josée Martens
759ec3934c Delete yt_cover.png 2019-04-15 12:06:25 -05:00
Josée Martens
b499b88a85 Delete python36.png 2019-04-15 12:06:16 -05:00
Josée Martens
5f4edac3c1 Update NBSETUP.md 2019-04-15 12:00:31 -05:00
Josée Martens
edfce0d936 Update README.md 2019-04-12 17:28:16 -05:00
Josée Martens
1516c7fc24 Update README.md
testing for search
2019-04-12 17:19:55 -05:00
Roope Astala
389fb668ce Add files via upload 2019-04-10 11:12:55 -04:00
Josée Martens
647d5e72a5 Merge pull request #307 from Azure/vizhur-patch-2
Create googled8147fb6c0788258.html
2019-04-09 15:21:51 -05:00
vizhur
43ac4c84bb Create googled8147fb6c0788258.html 2019-04-09 16:19:47 -04:00
Roope Astala
8a1a82b50a Merge pull request #303 from rastala/master
dockerfile and missing config update
2019-04-08 15:38:13 -04:00
Roope Astala
72f386298c dockerfile and missing config update 2019-04-08 15:37:48 -04:00
Roope Astala
41d697e298 Merge pull request #302 from rastala/master
version 1.0.23
2019-04-08 15:35:50 -04:00
Roope Astala
c3ce932029 version 1.0.23 2019-04-08 15:34:51 -04:00
Roope Astala
a956162114 Merge pull request #290 from rastala/master
update aks deployment notebook
2019-04-03 10:53:51 -04:00
Roope Astala
cb5a178e40 Merge branch 'master' of github.com:rastala/MachineLearningNotebooks 2019-04-03 10:52:40 -04:00
Roope Astala
d81c336c59 update production deploy to aks 2019-04-03 10:52:15 -04:00
Roope Astala
4244a24d81 Merge pull request #287 from jeff-shepherd/master
Fixed line termination on automl_setup_linux.sh
2019-04-03 09:21:35 -04:00
Jeff Shepherd
3b488555e5 Added back automl_setup_linux.sh with correct line termination 2019-04-02 16:24:05 -07:00
Jeff Shepherd
6abc478f33 Removed automl_setup_linux.sh 2019-04-02 16:23:11 -07:00
Roope Astala
666c2579eb Merge pull request #285 from jeff-shepherd/master
Corrected line termination for automl_setup_mac.sh
2019-04-02 09:19:53 -04:00
Jeff Shepherd
5af3aa4231 Fixed line termination 2019-04-01 16:19:00 -07:00
Jeff Shepherd
e48d828ab0 Removed automl_setup_mac.sh 2019-04-01 16:17:56 -07:00
Jeff Shepherd
44aa636c21 Merge branch 'master' of https://github.com/Azure/MachineLearningNotebooks 2019-04-01 16:07:11 -07:00
Jeff Shepherd
4678f9adc3 Merge branch 'master' of https://github.com/jeff-shepherd/MachineLearningNotebooks 2019-04-01 16:04:46 -07:00
Jeff Shepherd
5bf85edade Added automl_setup_mac.sh with correct line termination 2019-04-01 16:03:39 -07:00
Jeff Shepherd
94f381e884 Removed automl_setup_mac.sh 2019-04-01 16:02:53 -07:00
Roope Astala
ea1b7599c3 Merge pull request #267 from rastala/master
add automl files
2019-03-25 19:26:07 -04:00
Roope Astala
6b8a6befde add automl files 2019-03-25 19:25:38 -04:00
Roope Astala
c1511b7b74 Merge pull request #266 from rastala/master
1.0.21 dockerfile
2019-03-25 15:10:05 -04:00
Roope Astala
8f007a3333 1.0.21 dockerfile 2019-03-25 15:09:39 -04:00
Roope Astala
5ad3ca00e8 Merge pull request #265 from rastala/master
version 1.0.21
2019-03-25 15:07:09 -04:00
Roope Astala
556a41e223 version 1.0.21 2019-03-25 15:06:08 -04:00
Roope Astala
407b8929d0 Merge pull request #259 from jeff-shepherd/master
Added example of printing model hyperparameters
2019-03-19 09:40:25 -04:00
Jeff Shepherd
18a11bbd8d Added model printing example 2019-03-18 16:31:48 -07:00
Roope Astala
8b439a9f7c Merge pull request #256 from rastala/master
update RAPIDS 2
2019-03-18 12:09:33 -04:00
rastala
75c393a221 update RAPIDS 2 2019-03-18 12:08:10 -04:00
Roope Astala
be7176fe06 Merge pull request #255 from rastala/master
update RAPIDS sample
2019-03-18 11:42:51 -04:00
rastala
7b41675355 update RAPIDS sample 2019-03-18 11:40:43 -04:00
Jeff Shepherd
fa7685f6fa Added example of printing model hyperparameters 2019-03-15 13:18:17 -07:00
Roope Astala
6b444b1467 Merge pull request #248 from rastala/master
dockerfile 1.0.18
2019-03-11 15:33:07 -04:00
Roope Astala
c9767473ae dockerfile 1.0.18 2019-03-11 15:32:30 -04:00
Roope Astala
648b48fc0c Merge pull request #247 from rastala/master
version 1.0.18
2019-03-11 15:23:44 -04:00
Roope Astala
04db5d93e2 version 1.0.18 2019-03-11 15:22:38 -04:00
Roope Astala
4e10935701 version 1.0.18 2019-03-11 15:21:35 -04:00
Roope Astala
f737db499d Delete googleade5d7141b3f2910.html 2019-03-05 17:01:36 -05:00
Roope Astala
6b66da1558 Merge pull request #238 from rastala/master
fix link in configuration notebook
2019-03-05 17:00:31 -05:00
Roope Astala
8647aea9d9 fix link in configuration notebook 2019-03-05 16:59:38 -05:00
Roope Astala
3ee2dc3258 Merge pull request #233 from jeff-shepherd/master
Setup updated to fix remote run
2019-02-26 15:34:15 -05:00
Jeff Shepherd
9f7c4ce668 Setup updated to fix remote run 2019-02-26 11:59:20 -08:00
hning86
036ca6ac75 dockerfile 1.0.17 2019-02-26 10:57:07 -05:00
Roope Astala
0b8817ee1c Merge pull request #229 from rastala/master
version 1.0.17
2019-02-25 16:12:51 -05:00
Roope Astala
b7b5576b15 version 1.0.17 2019-02-25 16:12:02 -05:00
Hai Ning
c082b72b71 Update pr.md 2019-02-23 21:55:59 -05:00
Hai Ning
673e76d431 Merge pull request #186 from gison93/master
Fix typos
2019-02-20 23:18:15 -05:00
Hai Ning
c518a04a19 Merge pull request #203 from davidefiocco/patch-1
Typo fix
2019-02-20 23:17:14 -05:00
Hai Ning
2f34888716 Update README.md 2019-02-20 07:52:14 -05:00
Roope Astala
6ca0088991 Merge pull request #218 from jeff-shepherd/master
Fixed broken links to configuration notebook
2019-02-15 14:47:49 -05:00
Jeff Shepherd
40e3856786 Removed subsampling reference, which is not published yet 2019-02-15 11:35:45 -08:00
Jeff Shepherd
ddd025e83e Fixed links to configuration notebook. 2019-02-15 11:31:10 -08:00
Hai Ning
ece4242c8f Update README.md 2019-02-15 12:57:08 -05:00
Hai Ning
4bca2bd7db Merge pull request #217 from nishankgu/patch-1
Update README.md
2019-02-15 12:52:59 -05:00
Nishank
a927dbfa31 Update README.md 2019-02-14 14:22:05 -08:00
hning86
280c718f53 keras sample 2019-02-14 16:59:08 -05:00
Hai Ning
bf1ac2b26a Update NBSETUP.md 2019-02-14 11:02:01 -05:00
Roope Astala
954c2afbce Merge pull request #214 from rongduan-zhu/master
Updated Azure Databricks Automated ML notebook from master
2019-02-13 14:06:48 -05:00
Rongduan Zhu
fbf1ea5f1a updated notebook from latest master 2019-02-13 11:02:27 -08:00
Roope Astala
84b72d904b Merge pull request #210 from rastala/master
tutorial update
2019-02-11 16:07:47 -05:00
Roope Astala
82bb9fcac3 tutorial update 2019-02-11 16:07:10 -05:00
Roope Astala
5c6bbacd47 Merge pull request #209 from rastala/master
adb readme update
2019-02-11 15:52:34 -05:00
Roope Astala
90aaeea113 adb readme update 2019-02-11 15:51:50 -05:00
Roope Astala
eeab7284c9 Merge pull request #208 from rastala/master
few missing files
2019-02-11 15:48:22 -05:00
Roope Astala
02fd9b685c few missing files 2019-02-11 15:47:37 -05:00
hning86
d5c923b446 dockerfile updated 2019-02-11 15:21:56 -05:00
Roope Astala
f16bf27e26 Merge pull request #207 from rastala/master
release 1.0.15
2019-02-11 15:18:00 -05:00
Roope Astala
c7bec58593 update version 2019-02-11 15:17:40 -05:00
Roope Astala
cca3996eb4 release 1.0.15 2019-02-11 15:12:30 -05:00
Davide Fiocco
210efe022a Typo fix 2019-02-08 20:23:12 +01:00
Roope Astala
5fd14bac30 Merge pull request #199 from rastala/master
update automl databricks
2019-02-06 11:53:35 -05:00
Roope Astala
3fa409543b update automl databricks 2019-02-06 11:53:00 -05:00
Josée Martens
42f2822b61 Adding file to enable search performance tracking.
@rastala
2019-02-04 14:36:40 -06:00
Roope Astala
48afbe1cab Delete release.json 2019-01-31 16:07:08 -05:00
Roope Astala
1298c55dd4 Merge pull request #193 from rastala/master
fix broken link
2019-01-31 15:45:01 -05:00
Roope Astala
0aa1b248f4 fix broken link 2019-01-31 15:44:22 -05:00
Roope Astala
3012b8f5a8 Merge pull request #192 from rastala/master
add authentication notebook
2019-01-31 15:41:40 -05:00
Roope Astala
501c55bcaf add authentication notebook 2019-01-31 15:40:51 -05:00
hning86
1a38f50221 docker instructions 2019-01-31 15:16:36 -05:00
hning86
cc64be8d6f text update 2019-01-31 14:29:31 -05:00
hning86
a0127a2a64 dockerfile instruction 2019-01-31 11:46:06 -05:00
Hai Ning
7eb966bf79 Merge pull request #191 from Azure/dockerfiles
Dockerfiles
2019-01-31 10:54:55 -05:00
Roope Astala
9118f2c7ce Merge pull request #190 from rastala/master
fix NBSETUP
2019-01-31 09:33:17 -05:00
Roope Astala
0e3198f311 fix NBSETUP 2019-01-31 09:32:30 -05:00
hning86
0fdab91b97 dockefile reorg 2019-01-31 09:21:06 -05:00
hning86
b54be912d8 dockerfiles added 2019-01-30 17:04:18 -05:00
Roope Astala
3d0c7990ff Merge pull request #189 from rastala/master
update tutorial readme
2019-01-30 14:28:24 -05:00
Roope Astala
6e1ce29a94 Merge remote-tracking branch 'upstream/master' 2019-01-30 14:26:25 -05:00
Roope Astala
0d26c9986a update tutorials README 2019-01-30 14:25:17 -05:00
gison93
100ab10797 add pipeline validation 2019-01-29 14:50:00 +01:00
gison93
1307efe7bc fix typo
remove trailing \u00c2\u00a0 from variable and notebook_path
2019-01-29 14:34:07 +01:00
gison93
08d0b8cf08 fix typo
Bloband -> Blob and
2019-01-29 12:42:48 +01:00
Roope Astala
0514eee64b Merge pull request #182 from rastala/master
version 1.0.10
2019-01-28 18:10:20 -05:00
Roope Astala
4b6e34fdc0 Update train-within-notebook.ipynb 2019-01-28 18:09:36 -05:00
Roope Astala
e01216d85b Update configuration.ipynb 2019-01-28 18:08:41 -05:00
Roope Astala
b00f75edd8 version 1.0.10 2019-01-28 15:30:17 -05:00
Hai Ning
06aba388c6 Update azure-ml-with-nvidia-rapids.ipynb 2019-01-24 10:09:31 -05:00
Roope Astala
3018461dfc Merge pull request #176 from rastala/master
update tutorials
2019-01-22 14:25:28 -05:00
Roope Astala
0d91f2d697 update tutorials 2019-01-22 14:24:31 -05:00
Roope Astala
a14cb635f0 Merge pull request #175 from rastala/master
RAPIDS sample
2019-01-22 13:44:55 -05:00
Roope Astala
88f6a966cc RAPIDS sample 2019-01-22 13:32:59 -05:00
Hai Ning
4f76a844c6 Update README.md 2019-01-18 01:18:44 -05:00
Hai Ning
c1573ff949 Update NBSETUP.md 2019-01-18 01:15:53 -05:00
Hai Ning
d1b18b3771 Update NBSETUP.md 2019-01-18 01:09:13 -05:00
Roope Astala
e1a948f4cd Merge pull request #168 from rastala/master
version 1.0.8
2019-01-14 12:14:02 -08:00
Roope Astala
3ca40c0817 version 1.0.8 2019-01-14 15:13:30 -05:00
Roope Astala
f724cb4d9b Merge pull request #166 from jeff-shepherd/master
Fixed broken links in tutorials
2019-01-08 12:01:50 -08:00
Jeff Shepherd
094b4b3b13 Fixed broken links in tutorials 2019-01-08 11:58:03 -08:00
Roope Astala
d09942f521 Merge pull request #165 from rastala/master
databricks update
2019-01-08 09:24:11 -08:00
Roope Astala
0c9e527174 databricks update 2019-01-08 12:23:15 -05:00
Roope Astala
e2640e54da Merge pull request #160 from rastala/master
Create aml-pipelines-concept.png
2019-01-02 12:03:13 -08:00
Roope Astala
d348baf8a1 Create aml-pipelines-concept.png 2019-01-02 15:02:25 -05:00
Roope Astala
b41e11e30d Merge pull request #159 from jeff-shepherd/master
Removed databricks notebook link
2019-01-02 11:56:15 -08:00
Jeff Shepherd
c1aa951867 Removed databricks notebook link 2019-01-02 11:45:52 -08:00
Roope Astala
5fe5f06e07 Merge pull request #158 from rastala/master
Create Databricks_AMLSDK_1-4_6.dbc
2019-01-02 10:52:24 -08:00
Roope Astala
e8a09c49b1 Create Databricks_AMLSDK_1-4_6.dbc 2019-01-02 13:51:29 -05:00
Roope Astala
fb6a73a790 Merge pull request #145 from rastala/master
fix databricks
2018-12-20 13:11:17 -08:00
Roope Astala
c2968b6526 fix databricks 2018-12-20 16:10:27 -05:00
Roope Astala
2ac62ae1ed Merge pull request #144 from rastala/master
Version 1.0.6
2018-12-20 12:43:17 -08:00
Roope Astala
cad5d5c97c more files 2018-12-20 15:42:03 -05:00
Roope Astala
d8cf73503e update to version 1.0.6 2018-12-20 15:40:20 -05:00
Hai Ning
4a2d6d637a Update README.md 2018-12-13 11:28:02 -05:00
Hai Ning
0348e54e21 Update README.md 2018-12-13 11:27:41 -05:00
Hai Ning
a97d147d01 Create README.md 2018-12-13 11:26:58 -05:00
Hai Ning
c4ceac032b Update README.md 2018-12-08 10:11:11 -05:00
Hai Ning
1d3dff5634 Update README.md 2018-12-08 10:10:37 -05:00
Roope Astala
096dd424db Merge pull request #125 from jeff-shepherd/master
Added to troubleshooting section and fixed paths
2018-12-07 11:03:59 -08:00
Jeff Shepherd
fdefea5e82 Added to troubleshooting section 2018-12-07 10:55:45 -08:00
Roope Astala
15fb283b78 Merge pull request #124 from rastala/master
add NBSETUP
2018-12-07 10:29:46 -08:00
Roope Astala
142514a255 add NBSETUP 2018-12-07 13:29:08 -05:00
Roope Astala
26678677b3 Merge pull request #122 from rastala/master
Update distributed pytorch notebook
2018-12-07 07:00:04 -08:00
Roope Astala
2e2a943f5b Update distributed pytorch notebook 2018-12-07 09:40:29 -05:00
Roope Astala
94ba3f3665 Merge pull request #117 from rastala/master
update automl notebooks
2018-12-04 16:12:49 -08:00
rastala
da02f93fc2 update automl notebooks 2018-12-04 19:11:42 -05:00
Roope Astala
d39af379f8 Merge pull request #116 from rastala/master
add adb readme
2018-12-04 15:34:49 -08:00
rastala
6304fb1eb1 add adb readme 2018-12-04 18:34:10 -05:00
Hai Ning
b35d14ab72 Update img-classification-part1-training.ipynb 2018-12-04 13:27:19 -05:00
Roope Astala
e791456f34 Merge pull request #115 from rastala/master
Fix broken link
2018-12-04 08:52:17 -08:00
rastala
9b93e13426 another patch 3 2018-12-04 11:51:05 -05:00
rastala
3d640393aa another patch 2 2018-12-04 11:47:06 -05:00
Roope Astala
34d50b0427 Merge pull request #114 from rastala/master
another patch
2018-12-04 08:43:29 -08:00
rastala
060a53d256 another patch 2018-12-04 11:42:51 -05:00
Roope Astala
78fba3ceea Merge pull request #113 from rastala/master
notebook patches
2018-12-04 08:26:00 -08:00
rastala
01dc3d0a5b notebook patches 2018-12-04 11:24:50 -05:00
Heather Spetalnick (Shapiro)
ce7ca94a9a fix aks link 2018-12-04 10:33:07 -05:00
Hai Ning
b8877f1f92 Merge pull request #111 from Azure/jamiemaclennan-patch-1
Fix link
2018-12-04 10:30:22 -05:00
Hai Ning
b19ec15601 Merge pull request #112 from Azure/jamiemaclennan-patch-2
fix links
2018-12-04 10:29:50 -05:00
Jamie MacLennan
e99de11c25 fix links 2018-12-04 10:28:19 -05:00
Jamie MacLennan
212c2e8bf0 Fix link 2018-12-04 10:24:10 -05:00
Hai Ning
2aed0a32bf Update README.md 2018-12-04 08:55:46 -05:00
Hai Ning
25456b84f0 Update train-within-notebook.ipynb 2018-12-04 08:52:23 -05:00
Hai Ning
897ae13de1 Update train-on-remote-vm.ipynb 2018-12-04 08:52:06 -05:00
Hai Ning
e329729e0f Update train-on-local.ipynb 2018-12-04 08:51:45 -05:00
Hai Ning
aacdede890 Update logging-api.ipynb 2018-12-04 08:51:00 -05:00
Hai Ning
0471f2f8db Update logging-api.ipynb 2018-12-04 08:49:46 -05:00
Hai Ning
a8a525e704 Update logging-api.ipynb 2018-12-04 08:48:56 -05:00
Hai Ning
e37f3fa206 Update README.md 2018-12-04 08:18:27 -05:00
Roope Astala
2c4d6b5188 Merge pull request #110 from rastala/master
fix azure-databricks
2018-12-03 19:21:31 -08:00
rastala
008befcce2 fix azure-databricks 2018-12-03 22:20:06 -05:00
Roope Astala
b4895bf1f8 Merge pull request #109 from jeff-shepherd/master
Added setup and configuration files
2018-12-03 16:46:42 -08:00
Jeff Shepherd
a27cdbd478 Added setup and configuration files 2018-12-03 16:36:14 -08:00
Roope Astala
a408f3f2cb Merge pull request #108 from rastala/master
big update
2018-12-03 17:47:29 -05:00
rastala
ab2de17978 one more file 2018-12-03 17:38:46 -05:00
rastala
a63f5084e0 big update 2 2018-12-03 17:38:20 -05:00
rastala
d26a4b0323 big update 2018-12-02 21:50:53 -05:00
rastala
3c49d861df license updates 2018-12-02 21:47:13 -05:00
Roope Astala
613db3158d Merge pull request #93 from yanrez/master
Make pipeline notebooks links in readme
2018-11-28 12:33:12 -05:00
yanrez
c3a8c36297 Make pipeline notebooks links in readme 2018-11-22 17:13:31 -08:00
Roope Astala
e7ce245674 Merge pull request #92 from dipankar-ray/master
updated pipeline notebooks with expanded tutorial
2018-11-22 10:15:55 -05:00
Dipankar Ray
ef5844fffd updated pipeline notebooks with expanded tutorial 2018-11-21 20:00:07 -08:00
Roope Astala
e039b98ee6 Merge pull request #91 from rastala/master
automl notebook update
2018-11-21 20:46:21 -05:00
rastala
05713689e0 automl notebook update 2018-11-21 20:45:17 -05:00
Roope Astala
7bb906b53c Merge pull request #87 from rastala/master
Update to version 0.1.80
2018-11-20 11:02:28 -05:00
rastala
5726fe3ddb Version 0.1.80 2018-11-20 11:00:48 -05:00
rastala
d10b1fa796 Revert "Updated notebook folders"
This reverts commit 06728004b6.
2018-11-20 10:39:48 -05:00
rastala
d7127de03c Revert "Update tutorials/README.md"
This reverts commit 50787f4ccc.
2018-11-20 10:39:34 -05:00
Roope Astala
50787f4ccc Update tutorials/README.md 2018-11-19 13:35:11 -05:00
Roope Astala
06728004b6 Updated notebook folders 2018-11-19 13:28:49 -05:00
Roope Astala
f5bcc55fe3 Merge pull request #74 from yueguoguo/master
Typo in README
2018-11-09 09:51:01 -05:00
Roope Astala
f23fb58200 Merge pull request #77 from rastala/master
Fix autoscale
2018-11-09 09:47:46 -05:00
Roope Astala
dbce7b8db2 Fix autoscase 2018-11-09 09:47:01 -05:00
Roope Astala
303090adf6 Merge pull request #76 from rastala/master
Update 00.configuration.ipynb
2018-11-09 09:33:07 -05:00
Roope Astala
b091d1f5f1 Update 00.configuration.ipynb
Create computes in 00.configuration, and link to tutorial
2018-11-09 09:31:25 -05:00
Hai Ning
803d69c539 Update 03.train-hyperparameter-tune-deploy-with-tensorflow.ipynb 2018-11-07 13:54:11 -05:00
Zhang Le
37848e9686 Merge pull request #1 from yueguoguo/yueguoguo-patch-1
Typo in README
2018-11-07 13:18:31 +08:00
Zhang Le
7d9227441e Typo in README
Typo of `psutil`.
2018-11-07 13:17:53 +08:00
Roope Astala
21c454b0f2 Merge pull request #72 from rastala/master
Add logging API notebook
2018-11-06 12:46:39 -05:00
Roope Astala
c7b0960ae4 Add logging API notebook 2018-11-06 12:46:05 -05:00
Roope Astala
14e11fefd6 Delete .gitignore 2018-11-06 12:31:53 -05:00
Roope Astala
4deaeb04cf Delete 05.train-in-spark-checkpoint.ipynb 2018-11-06 12:31:32 -05:00
Roope Astala
ee78323df2 Delete 03.train-on-aci-checkpoint.ipynb 2018-11-06 12:31:18 -05:00
Roope Astala
89c2622938 Delete 02.train-on-local-checkpoint.ipynb 2018-11-06 12:31:03 -05:00
Roope Astala
96b352e3be Delete 04.train-on-remote-vm-checkpoint.ipynb 2018-11-06 12:30:43 -05:00
Roope Astala
5280201f93 Merge pull request #70 from wchill/fix_macos_sigsegv
Fix segfault under certain conditions when running AutoML pipelines on MacOS
2018-11-05 19:04:14 -05:00
Eric Ahn
3825fd2c10 Fix segfault under certain conditions on MacOS 2018-11-05 15:06:38 -08:00
Roope Astala
b936dd3505 Merge pull request #69 from rastala/master
New SDK version 0.1.74
2018-11-05 15:28:40 -05:00
Roope Astala
7339c95ea0 New SDK version 2018-11-05 15:27:36 -05:00
Hai Ning
32102e2aac Update pipeline-batch-scoring.ipynb 2018-11-02 14:18:38 -04:00
Hai Ning
a043769197 Update pr.md 2018-10-29 22:23:49 -04:00
Hai Ning
a0f3727cf4 Update pr.md 2018-10-29 22:23:39 -04:00
Roope Astala
0e8b42f8c7 Delete snowleopardgaze.jpg 2018-10-26 16:53:47 -04:00
hning86
2daafdbca1 logging api sample 2018-10-26 14:02:05 -04:00
Roope Astala
fec2e97310 Merge pull request #62 from rastala/master
Fix link in 01 getting started
2018-10-26 10:27:42 -04:00
Roope Astala
1a79e53935 Fix link in 01 getting started 2018-10-26 10:26:38 -04:00
Hai Ning
900cc7a76b remove json.loads 2018-10-25 13:03:10 -04:00
Roope Astala
3148e52258 Merge pull request #60 from rastala/master
fix json output
2018-10-25 12:48:28 -04:00
Roope Astala
dda402db83 fix json output 2018-10-25 12:47:38 -04:00
Roope Astala
603f4a6434 Merge pull request #58 from rastala/master
Tutorial fixes
2018-10-24 13:47:05 -04:00
Roope Astala
114449dd9b Tutorial fixes 2018-10-24 13:45:15 -04:00
Roope Astala
de20b6c40e Merge pull request #55 from Azure/sdgilley-patch-1
Update 03.auto-train-models.ipynb
2018-10-22 12:43:20 -04:00
Hai Ning
886ece1089 Update pr.md 2018-10-22 11:23:49 -04:00
Sheri Gilley
0dfe00d05a Update 03.auto-train-models.ipynb
fix link
2018-10-22 10:04:46 -05:00
hning86
7a6fb8067f auto updated from HaiGPU 2018-10-22 01:50:11 -04:00
hning86
bb439ab2fd removed empty folder 2018-10-22 01:41:05 -04:00
hning86
ea3abdde4f auto updated from HaiGPU 2018-10-22 01:39:38 -04:00
Hai Ning
2e4eb8785c Update pr.md 2018-10-18 15:29:26 -04:00
Hai Ning
bfccb07dae Update pr.md 2018-10-18 15:27:36 -04:00
Hai Ning
94cd37e9fb Update README.md 2018-10-18 14:49:28 -04:00
Hai Ning
cdeb4dddab Update README.md 2018-10-18 14:47:44 -04:00
Hai Ning
e12637098a Update README.md 2018-10-18 14:47:19 -04:00
Hai Ning
d5f8811f4f YT cover 2018-10-18 14:46:08 -04:00
Hai Ning
92d36a2db4 Delete ytimg_png.PNG 2018-10-18 14:45:53 -04:00
Hai Ning
c5c76e8187 Update pr.md 2018-10-18 14:45:12 -04:00
Hai Ning
833d1d0f4e Update pr.md 2018-10-18 14:44:59 -04:00
Hai Ning
dd0c0264a2 Update README.md 2018-10-18 14:43:15 -04:00
Hai Ning
52368bad81 Update README.md 2018-10-18 14:42:48 -04:00
Hai Ning
604f6c18be Update README.md 2018-10-18 14:42:23 -04:00
Hai Ning
829bc297f2 Update README.md 2018-10-18 14:41:45 -04:00
Hai Ning
9e5101ea8c Update README.md 2018-10-18 14:41:34 -04:00
Hai Ning
37e96f2ad6 youtube cover 2018-10-18 14:40:17 -04:00
Roope Astala
d0c9bb330a Merge pull request #39 from cforbe/master
Adding dataprep notebook
2018-10-18 12:39:01 -04:00
Colleen Forbes
b4c7932640 Update README.md 2018-10-17 15:44:30 -07:00
Roope Astala
8fed628390 Merge pull request #53 from rastala/master
Update automl setup
2018-10-17 17:38:28 -04:00
rastala
d940aca06d Update automl setup 2018-10-17 17:37:01 -04:00
Hai Ning
beb97b1d9f Update README.md 2018-10-17 12:00:37 -04:00
Colleen
e7e9923cfb updating README.md 2018-10-03 16:46:51 -07:00
Colleen
b5482fcd4b Adding dataprep notebook 2018-10-03 09:58:55 -07:00
345 changed files with 131443 additions and 8052 deletions


@@ -1,236 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 00. Installation and configuration\n",
"This notebook configures your library of notebooks to connect to an Azure Machine Learning Workspace. In this case, a library contains all of the notebooks in the current folder and any nested folders. You can configure this notebook to use an existing workspace or create a new workspace.\n",
"\n",
"## What is an Azure ML Workspace and why do I need one?\n",
"\n",
"An AML Workspace is an Azure resource that organizes and coordinates the actions of many other Azure resources to assist in executing and sharing machine learning workflows. In particular, an AML Workspace coordinates storage, databases, and compute resources providing added functionality for machine learning experimentation, operationalization, and the monitoring of operationalized models."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 1. Access Azure Subscription\n",
"\n",
"In order to create an AML Workspace, first you need access to an Azure Subscription. You can [create your own](https://azure.microsoft.com/en-us/free/) or get your existing subscription information from the [Azure portal](https://portal.azure.com).\n",
"\n",
"### 2. If you're running on your own local environment, install Azure ML SDK and other libraries\n",
"\n",
"If you are running in your own environment, follow [SDK installation instructions](https://docs.microsoft.com/azure/machine-learning/service/how-to-configure-environment). If you are running in Azure Notebooks or another Microsoft managed environment, the SDK is already installed.\n",
"\n",
"Also install following libraries to your environment. Many of the example notebooks depend on them\n",
"\n",
"```\n",
"(myenv) $ conda install -y matplotlib tqdm scikit-learn\n",
"```\n",
"\n",
"Once installation is complete, check the Azure ML SDK version:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"install"
]
},
"outputs": [],
"source": [
"import azureml.core\n",
"\n",
"print(\"SDK Version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3. Make sure your subscription is registered to use ACI\n",
"Azure Machine Learning makes use of Azure Container Instance (ACI). You need to ensure your subscription has been registered to use ACI in order be able to deploy a dev/test web service. If you have run through the quickstart experience you have already performed this step. Otherwise you will need to use the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) and execute the following commands.\n",
"\n",
"```shell\n",
"# check to see if ACI is already registered\n",
"(myenv) $ az provider show -n Microsoft.ContainerInstance -o table\n",
"\n",
"# if ACI is not registered, run this command.\n",
"# note you need to be the subscription owner in order to execute this command successfully.\n",
"(myenv) $ az provider register -n Microsoft.ContainerInstance\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set up your Azure Machine Learning workspace\n",
"\n",
"### Option 1: You have workspace already\n",
"If you ran the Azure Machine Learning [quickstart](https://docs.microsoft.com/en-us/azure/machine-learning/service/quickstart-get-started) in Azure Notebooks, you already have a configured workspace! You can go to your Azure Machine Learning Getting Started library, view *config.json* file, and copy-paste the values for subscription ID, resource group and workspace name below.\n",
"\n",
"If you have a workspace created another way, [these instructions](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-environment#create-workspace-configuration-file) describe how to get your subscription and workspace information.\n",
"\n",
"If this cell succeeds, you're done configuring this library! Otherwise continue to follow the instructions in the rest of the notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"subscription_id ='<subscription-id>'\n",
"resource_group ='<resource-group>'\n",
"workspace_name = '<workspace-name>'\n",
"\n",
"try:\n",
" ws = Workspace(subscription_id = subscription_id, resource_group = resource_group, workspace_name = workspace_name)\n",
" ws.write_config()\n",
" print('Workspace configuration succeeded. You are all set!')\n",
"except:\n",
" print('Workspace not found. Run the cells below.')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Option 2: You don't have workspace yet\n",
"\n",
"\n",
"#### Requirements\n",
"\n",
"Inside your Azure subscription, you will need access to a _resource group_, which organizes Azure resources and provides a default region for the resources in a group. You can see what resource groups to which you have access, or create a new one in the [Azure portal](https://portal.azure.com). If you don't have a resource group, the create workspace command will create one for you using the name you provide.\n",
"\n",
"To create or access an Azure ML Workspace, you will need to import the AML library and the following information:\n",
"* A name for your workspace\n",
"* Your subscription id\n",
"* The resource group name\n",
"\n",
"**Note**: As with other Azure services, there are limits on certain resources (for eg. BatchAI cluster size) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Supported Azure Regions\n",
"Specify a region where your workspace will be located from the list of [Azure Machine Learning regions](https://linktoregions)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"workspace_region = \"eastus2\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"subscription_id = os.environ.get(\"SUBSCRIPTION_ID\", subscription_id)\n",
"resource_group = os.environ.get(\"RESOURCE_GROUP\", resource_group)\n",
"workspace_name = os.environ.get(\"WORKSPACE_NAME\", workspace_name)\n",
"workspace_region = os.environ.get(\"WORKSPACE_REGION\", workspace_region)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Create the workspace\n",
"This cell will create an AML workspace for you in a subscription provided you have the correct permissions.\n",
"\n",
"This will fail when:\n",
"1. You do not have permission to create a workspace in the resource group\n",
"2. You do not have permission to create a resource group if it's non-existing.\n",
"2. You are not a subscription owner or contributor and no Azure ML workspaces have ever been created in this subscription\n",
"\n",
"If workspace creation fails, please work with your IT admin to provide you with the appropriate permissions or to provision the required resources."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"create workspace"
]
},
"outputs": [],
"source": [
"# import the Workspace class and check the azureml SDK version\n",
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.create(name = workspace_name,\n",
" subscription_id = subscription_id,\n",
" resource_group = resource_group, \n",
" location = workspace_region,\n",
" create_resource_group = True,\n",
" exist_ok = True)\n",
"ws.get_details()\n",
"ws.write_config()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Success!\n",
"Great, you are ready to move on to the rest of the sample notebooks."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -1,816 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 01. Train in the Notebook & Deploy Model to ACI\n",
"\n",
"* Load workspace\n",
"* Train a simple regression model directly in the Notebook python kernel\n",
"* Record run history\n",
"* Find the best model in run history and download it.\n",
"* Deploy the model as an Azure Container Instance (ACI)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites\n",
"1. Make sure you go through the [00. Installation and Configuration](00.configuration.ipynb) Notebook first if you haven't. \n",
"\n",
"2. Install following pre-requisite libraries to your conda environment and restart notebook.\n",
"```shell\n",
"(myenv) $ conda install -y matplotlib tqdm scikit-learn\n",
"```\n",
"\n",
"3. Check that ACI is registered for your Azure Subscription. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!az provider show -n Microsoft.ContainerInstance -o table"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If ACI is not registered, run following command to register it. Note that you have to be a subscription owner, or this command will fail."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!az provider register -n Microsoft.ContainerInstance"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Validate Azure ML SDK installation and get version number for debugging purposes"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"install"
]
},
"outputs": [],
"source": [
"from azureml.core import Experiment, Run, Workspace\n",
"import azureml.core\n",
"\n",
"# Check core SDK version number\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize Workspace\n",
"\n",
"Initialize a workspace object from persisted configuration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"create workspace"
]
},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"print('Workspace name: ' + ws.name, \n",
" 'Azure region: ' + ws.location, \n",
" 'Subscription id: ' + ws.subscription_id, \n",
" 'Resource group: ' + ws.resource_group, sep='\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set experiment name\n",
"Choose a name for experiment."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"experiment_name = 'train-in-notebook'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Start a training run in local Notebook"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# load diabetes dataset, a well-known small dataset that comes with scikit-learn\n",
"from sklearn.datasets import load_diabetes\n",
"from sklearn.linear_model import Ridge\n",
"from sklearn.metrics import mean_squared_error\n",
"from sklearn.model_selection import train_test_split\n",
"from sklearn.externals import joblib\n",
"\n",
"X, y = load_diabetes(return_X_y = True)\n",
"columns = ['age', 'gender', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6']\n",
"X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)\n",
"data = {\n",
" \"train\":{\"X\": X_train, \"y\": y_train}, \n",
" \"test\":{\"X\": X_test, \"y\": y_test}\n",
"}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Train a simple Ridge model\n",
"Train a very simple Ridge regression model in scikit-learn, and save it as a pickle file."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"reg = Ridge(alpha = 0.03)\n",
"reg.fit(X=data['train']['X'], y=data['train']['y'])\n",
"preds = reg.predict(data['test']['X'])\n",
"print('Mean Squared Error is', mean_squared_error(data['test']['y'], preds))\n",
"joblib.dump(value=reg, filename='model.pkl');"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Add experiment tracking\n",
"Now, let's add Azure ML experiment logging, and upload persisted model into run record as well."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"local run",
"outputs upload"
]
},
"outputs": [],
"source": [
"experiment = Experiment(workspace=ws, name=experiment_name)\n",
"run = experiment.start_logging()\n",
"\n",
"run.tag(\"Description\",\"My first run!\")\n",
"run.log('alpha', 0.03)\n",
"reg = Ridge(alpha=0.03)\n",
"reg.fit(data['train']['X'], data['train']['y'])\n",
"preds = reg.predict(data['test']['X'])\n",
"run.log('mse', mean_squared_error(data['test']['y'], preds))\n",
"joblib.dump(value=reg, filename='model.pkl')\n",
"run.upload_file(name='outputs/model.pkl', path_or_stream='./model.pkl')\n",
"\n",
"run.complete()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can browse to the recorded run. Please make sure you use Chrome to navigate the run history page."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Simple parameter sweep\n",
"Sweep over alpha values of a sklearn ridge model, and capture metrics and trained model in the Azure ML experiment."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"import os\n",
"from tqdm import tqdm\n",
"\n",
"model_name = \"model.pkl\"\n",
"\n",
"# list of numbers from 0 to 1.0 with a 0.05 interval\n",
"alphas = np.arange(0.0, 1.0, 0.05)\n",
"\n",
"# try a bunch of alpha values in a Linear Regression (Ridge) model\n",
"for alpha in tqdm(alphas):\n",
" # create a bunch of runs, each train a model with a different alpha value\n",
" with experiment.start_logging() as run:\n",
" # Use Ridge algorithm to build a regression model\n",
" reg = Ridge(alpha=alpha)\n",
" reg.fit(X=data[\"train\"][\"X\"], y=data[\"train\"][\"y\"])\n",
" preds = reg.predict(X=data[\"test\"][\"X\"])\n",
" mse = mean_squared_error(y_true=data[\"test\"][\"y\"], y_pred=preds)\n",
"\n",
" # log alpha, mean_squared_error and feature names in run history\n",
" run.log(name=\"alpha\", value=alpha)\n",
" run.log(name=\"mse\", value=mse)\n",
" run.log_list(name=\"columns\", value=columns)\n",
"\n",
" with open(model_name, \"wb\") as file:\n",
" joblib.dump(value=reg, filename=file)\n",
" \n",
" # upload the serialized model into run history record\n",
" run.upload_file(name=\"outputs/\" + model_name, path_or_stream=model_name)\n",
"\n",
" # now delete the serialized model from local folder since it is already uploaded to run history \n",
" os.remove(path=model_name)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# now let's take a look at the experiment in Azure portal.\n",
"experiment"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Select best model from the experiment\n",
"Load all experiment run metrics recursively from the experiment into a dictionary object."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"runs = {}\n",
"run_metrics = {}\n",
"\n",
"for r in tqdm(experiment.get_runs()):\n",
" metrics = r.get_metrics()\n",
" if 'mse' in metrics.keys():\n",
" runs[r.id] = r\n",
" run_metrics[r.id] = metrics"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now find the run with the lowest Mean Squared Error value"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run_id = min(run_metrics, key = lambda k: run_metrics[k]['mse'])\n",
"best_run = runs[best_run_id]\n",
"print('Best run is:', best_run_id)\n",
"print('Metrics:', run_metrics[best_run_id])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can add tags to your runs to make them easier to catalog"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"query history"
]
},
"outputs": [],
"source": [
"best_run.tag(key=\"Description\", value=\"The best one\")\n",
"best_run.get_tags()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Plot MSE over alpha\n",
"\n",
"Let's observe the best model visually by plotting the MSE values over alpha values:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%matplotlib inline\n",
"import matplotlib\n",
"import matplotlib.pyplot as plt\n",
"\n",
"best_alpha = run_metrics[best_run_id]['alpha']\n",
"min_mse = run_metrics[best_run_id]['mse']\n",
"\n",
"alpha_mse = np.array([(run_metrics[k]['alpha'], run_metrics[k]['mse']) for k in run_metrics.keys()])\n",
"sorted_alpha_mse = alpha_mse[alpha_mse[:,0].argsort()]\n",
"\n",
"plt.plot(sorted_alpha_mse[:,0], sorted_alpha_mse[:,1], 'r--')\n",
"plt.plot(sorted_alpha_mse[:,0], sorted_alpha_mse[:,1], 'bo')\n",
"\n",
"plt.xlabel('alpha', fontsize = 14)\n",
"plt.ylabel('mean squared error', fontsize = 14)\n",
"plt.title('MSE over alpha', fontsize = 16)\n",
"\n",
"# plot arrow\n",
"plt.arrow(x = best_alpha, y = min_mse + 39, dx = 0, dy = -26, ls = '-', lw = 0.4,\n",
" width = 0, head_width = .03, head_length = 8)\n",
"\n",
"# plot \"best run\" text\n",
"plt.text(x = best_alpha - 0.08, y = min_mse + 50, s = 'Best Run', fontsize = 14)\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Register the best model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Find the model file saved in the run record of best run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"query history"
]
},
"outputs": [],
"source": [
"for f in best_run.get_file_names():\n",
" print(f)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we can register this model in the model registry of the workspace"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"register model from history"
]
},
"outputs": [],
"source": [
"model = best_run.register_model(model_name='best_model', model_path='outputs/model.pkl')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Verify that the model has been registered properly. If you have done this several times you'd see the version number auto-increases each time."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"register model from history"
]
},
"outputs": [],
"source": [
"from azureml.core.model import Model\n",
"models = Model.list(workspace=ws, name='best_model')\n",
"for m in models:\n",
" print(m.name, m.version)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also download the registered model. Afterwards, you should see a `model.pkl` file in the current directory. You can then use it for local testing if you'd like."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"download file"
]
},
"outputs": [],
"source": [
"# remove the model file if it is already on disk\n",
"if os.path.isfile('model.pkl'): \n",
" os.remove('model.pkl')\n",
"# download the model\n",
"model.download(target_dir=\"./\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Scoring script\n",
"\n",
"Now we are ready to build a Docker image and deploy the model in it as a web service. The first step is creating the scoring script. For convenience, we have created the scoring script for you. It is printed below as text, but you can also run `%pfile ./score.py` in a cell to show the file.\n",
"\n",
"Tbe scoring script consists of two functions: `init` that is used to load the model to memory when starting the container, and `run` that makes the prediction when web service is called. Please pay special attention to how the model is loaded in the `init()` function. When Docker image is built for this model, the actual model file is downloaded and placed on disk, and `get_model_path` function returns the local path where the model is placed."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"with open('./score.py', 'r') as scoring_script:\n",
" print(scoring_script.read())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create environment dependency file\n",
"\n",
"We need a environment dependency file `myenv.yml` to specify which libraries are needed by the scoring script when building the Docker image for web service deployment. We can manually create this file, or we can use the `CondaDependencies` API to automatically create this file."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.conda_dependencies import CondaDependencies \n",
"\n",
"myenv = CondaDependencies()\n",
"myenv.add_conda_package(\"scikit-learn\")\n",
"print(myenv.serialize_to_string())\n",
"\n",
"with open(\"myenv.yml\",\"w\") as f:\n",
" f.write(myenv.serialize_to_string())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deploy web service into an Azure Container Instance\n",
"The deployment process takes the registered model and your scoring scrip, and builds a Docker image. It then deploys the Docker image into Azure Container Instance as a running container with an HTTP endpoint readying for scoring calls. Read more about [Azure Container Instance](https://azure.microsoft.com/en-us/services/container-instances/).\n",
"\n",
"Note ACI is great for quick and cost-effective dev/test deployment scenarios. For production workloads, please use [Azure Kubernentes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/) instead. Please follow in struction in [this notebook](11.production-deploy-to-aks.ipynb) to see how that can be done from Azure ML.\n",
" \n",
"** Note: ** The web service creation can take 6-7 minutes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"deploy service",
"aci"
]
},
"outputs": [],
"source": [
"from azureml.core.webservice import AciWebservice, Webservice\n",
"\n",
"aciconfig = AciWebservice.deploy_configuration(cpu_cores=1, \n",
" memory_gb=1, \n",
" tags={'sample name': 'AML 101'}, \n",
" description='This is a great example.')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note the below `WebService.deploy_from_model()` function takes a model object registered under the workspace. It then bakes the model file in the Docker image so it can be looked-up using the `Model.get_model_path()` function in `score.py`. \n",
"\n",
"If you have a local model file instead of a registered model object, you can also use the `WebService.deploy()` function which would register the model and then deploy."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"deploy service",
"aci"
]
},
"outputs": [],
"source": [
"from azureml.core.image import ContainerImage\n",
"image_config = ContainerImage.image_configuration(execution_script=\"score.py\", \n",
" runtime=\"python\", \n",
" conda_file=\"myenv.yml\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"deploy service",
"aci"
]
},
"outputs": [],
"source": [
"%%time\n",
"# this will take 5-10 minutes to finish\n",
"# you can also use \"az container list\" command to find the ACI being deployed\n",
"service = Webservice.deploy_from_model(name='my-aci-svc',\n",
" deployment_config=aciconfig,\n",
" models=[model],\n",
" image_config=image_config,\n",
" workspace=ws)\n",
"\n",
"service.wait_for_deployment(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"## Test web service"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"deploy service",
"aci"
]
},
"outputs": [],
"source": [
"print('web service is hosted in ACI:', service.scoring_uri)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Use the `run` API to call the web service with one row of data to get a prediction."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"deploy service",
"aci"
]
},
"outputs": [],
"source": [
"import json\n",
"# score the first row from the test set.\n",
"test_samples = json.dumps({\"data\": X_test[0:1, :].tolist()})\n",
"service.run(input_data = test_samples)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Feed the entire test set and calculate the errors (residual values)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"deploy service",
"aci"
]
},
"outputs": [],
"source": [
"# score the entire test set.\n",
"test_samples = json.dumps({'data': X_test.tolist()})\n",
"\n",
"result = json.loads(service.run(input_data = test_samples))['result']\n",
"residual = result - y_test"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also send raw HTTP request to test the web service."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"deploy service",
"aci"
]
},
"outputs": [],
"source": [
"import requests\n",
"import json\n",
"\n",
"# 2 rows of input data, each with 10 made-up numerical features\n",
"input_data = \"{\\\"data\\\": [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10], [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]]}\"\n",
"\n",
"headers = {'Content-Type':'application/json'}\n",
"\n",
"# for AKS deployment you'd need to the service key in the header as well\n",
"# api_key = service.get_key()\n",
"# headers = {'Content-Type':'application/json', 'Authorization':('Bearer '+ api_key)} \n",
"\n",
"resp = requests.post(service.scoring_uri, input_data, headers = headers)\n",
"print(resp.text)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Residual graph\n",
"Plot a residual value graph to chart the errors on the entire test set. Observe the nice bell curve."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"f, (a0, a1) = plt.subplots(1, 2, gridspec_kw={'width_ratios':[3, 1], 'wspace':0, 'hspace': 0})\n",
"f.suptitle('Residual Values', fontsize = 18)\n",
"\n",
"f.set_figheight(6)\n",
"f.set_figwidth(14)\n",
"\n",
"a0.plot(residual, 'bo', alpha=0.4);\n",
"a0.plot([0,90], [0,0], 'r', lw=2)\n",
"a0.set_ylabel('residue values', fontsize=14)\n",
"a0.set_xlabel('test data set', fontsize=14)\n",
"\n",
"a1.hist(residual, orientation='horizontal', color='blue', bins=10, histtype='step');\n",
"a1.hist(residual, orientation='horizontal', color='blue', alpha=0.2, bins=10);\n",
"a1.set_yticklabels([])\n",
"\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Delete ACI to clean up"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Deleting ACI is super fast!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"deploy service",
"aci"
]
},
"outputs": [],
"source": [
"%%time\n",
"service.delete()"
]
}
],
"metadata": {
"authors": [
{
"name": "roastala"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.4"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -1,289 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 03. Train on Azure Container Instance\n",
"\n",
"* Create Workspace\n",
"* Create `train.py` in the project folder.\n",
"* Configure an ACI (Azure Container Instance) run\n",
"* Execute in ACI"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites\n",
"Make sure you go through the [00. Installation and Configuration](00.configuration.ipynb) Notebook first if you haven't."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Check core SDK version number\n",
"import azureml.core\n",
"\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize Workspace\n",
"\n",
"Initialize a workspace object from persisted configuration"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"create workspace"
]
},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create An Experiment\n",
"\n",
"**Experiment** is a logical container in an Azure ML Workspace. It hosts run records which can include run metrics and output artifacts from your experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Experiment\n",
"experiment_name = 'train-on-aci'\n",
"experiment = Experiment(workspace = ws, name = experiment_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Remote execution on ACI\n",
"\n",
"The training script `train.py` is already created for you. Let's have a look."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"with open('./train.py', 'r') as f:\n",
" print(f.read())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure for using ACI\n",
"Linux-based ACI is available in `West US`, `East US`, `West Europe`, `North Europe`, `West US 2`, `Southeast Asia`, `Australia East`, `East US 2`, and `Central US` regions. See details [here](https://docs.microsoft.com/en-us/azure/container-instances/container-instances-quotas#region-availability)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"configure run"
]
},
"outputs": [],
"source": [
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"\n",
"# create a new runconfig object\n",
"run_config = RunConfiguration()\n",
"\n",
"# signal that you want to use ACI to execute script.\n",
"run_config.target = \"containerinstance\"\n",
"\n",
"# ACI container group is only supported in certain regions, which can be different than the region the Workspace is in.\n",
"run_config.container_instance.region = 'eastus2'\n",
"\n",
"# set the ACI CPU and Memory \n",
"run_config.container_instance.cpu_cores = 1\n",
"run_config.container_instance.memory_gb = 2\n",
"\n",
"# enable Docker \n",
"run_config.environment.docker.enabled = True\n",
"\n",
"# set Docker base image to the default CPU-based image\n",
"run_config.environment.docker.base_image = azureml.core.runconfig.DEFAULT_CPU_IMAGE\n",
"\n",
"# use conda_dependencies.yml to create a conda environment in the Docker image for execution\n",
"run_config.environment.python.user_managed_dependencies = False\n",
"\n",
"# auto-prepare the Docker image when used for execution (if it is not already prepared)\n",
"run_config.auto_prepare_environment = True\n",
"\n",
"# specify CondaDependencies obj\n",
"run_config.environment.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn'])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Submit the Experiment\n",
"Finally, run the training job on the ACI"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"remote run",
"aci"
]
},
"outputs": [],
"source": [
"%%time \n",
"from azureml.core.script_run_config import ScriptRunConfig\n",
"\n",
"script_run_config = ScriptRunConfig(source_directory='./',\n",
" script='train.py',\n",
" run_config=run_config)\n",
"\n",
"run = experiment.submit(script_run_config)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"query history"
]
},
"outputs": [],
"source": [
"# Show run details\n",
"run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"remote run",
"aci"
]
},
"outputs": [],
"source": [
"%%time\n",
"# Shows output of the run on stdout.\n",
"run.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"get metrics"
]
},
"outputs": [],
"source": [
"# get all metris logged in the run\n",
"run.get_metrics()\n",
"metrics = run.get_metrics()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"print('When alpha is {1:0.2f}, we have min MSE {0:0.2f}.'.format(\n",
" min(metrics['mse']), \n",
" metrics['alpha'][np.argmin(metrics['mse'])]\n",
"))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# show all the files stored within the run record\n",
"run.get_file_names()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now you can take a model produced here, register it and then deploy as a web service."
]
}
],
"metadata": {
"authors": [
{
"name": "roastala"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -1,452 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Enabling Data Collection for Models in Production\n",
"With this notebook, you can learn how to collect input model data from your Azure Machine Learning service in an Azure Blob storage. Once enabled, this data collected gives you the opportunity:\n",
"\n",
"* Monitor data drifts as production data enters your model\n",
"* Make better decisions on when to retrain or optimize your model\n",
"* Retrain your model with the data collected\n",
"\n",
"## What data is collected?\n",
"* Model input data (voice, images, and video are not supported) from services deployed in Azure Kubernetes Cluster (AKS)\n",
"* Model predictions using production input data.\n",
"\n",
"**Note:** pre-aggregation or pre-calculations on this data are done by user and not included in this version of the product.\n",
"\n",
"## What is different compared to standard production deployment process?\n",
"1. Update scoring file.\n",
"2. Update yml file with new dependency.\n",
"3. Update aks configuration.\n",
"4. Build new image and deploy it. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1. Import your dependencies"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace, Run\n",
"from azureml.core.compute import AksCompute, ComputeTarget\n",
"from azureml.core.webservice import Webservice, AksWebservice\n",
"from azureml.core.image import Image\n",
"from azureml.core.model import Model\n",
"\n",
"import azureml.core\n",
"print(azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2. Set up your configuration and create a workspace\n",
"Follow Notebook 00 instructions to do this.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 3. Register Model\n",
"Register an existing trained model, add descirption and tags."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#Register the model\n",
"from azureml.core.model import Model\n",
"model = Model.register(model_path = \"sklearn_regression_model.pkl\", # this points to a local file\n",
" model_name = \"sklearn_regression_model.pkl\", # this is the name the model is registered as\n",
" tags = {'area': \"diabetes\", 'type': \"regression\"},\n",
" description = \"Ridge regression model to predict diabetes\",\n",
" workspace = ws)\n",
"\n",
"print(model.name, model.description, model.version)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 4. *Update your scoring file with Data Collection*\n",
"The file below, compared to the file used in notebook 11, has the following changes:\n",
"### a. Import the module\n",
"```python \n",
"from azureml.monitoring import ModelDataCollector```\n",
"### b. In your init function add:\n",
"```python \n",
"global inputs_dc, prediction_d\n",
"inputs_dc = ModelDataCollector(\"best_model\", identifier=\"inputs\", feature_names=[\"feat1\", \"feat2\", \"feat3\", \"feat4\", \"feat5\", \"Feat6\"])\n",
"prediction_dc = ModelDataCollector(\"best_model\", identifier=\"predictions\", feature_names=[\"prediction1\", \"prediction2\"])```\n",
" \n",
"* Identifier: Identifier is later used for building the folder structure in your Blob, it can be used to divide \"raw\" data versus \"processed\".\n",
"* CorrelationId: is an optional parameter, you do not need to set it up if your model doesn't require it. Having a correlationId in place does help you for easier mapping with other data. (Examples include: LoanNumber, CustomerId, etc.)\n",
"* Feature Names: These need to be set up in the order of your features in order for them to have column names when the .csv is created.\n",
"\n",
"### c. In your run function add:\n",
"```python\n",
"inputs_dc.collect(data)\n",
"prediction_dc.collect(result)```"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile score.py\n",
"import pickle\n",
"import json\n",
"import numpy \n",
"from sklearn.externals import joblib\n",
"from sklearn.linear_model import Ridge\n",
"from azureml.core.model import Model\n",
"from azureml.monitoring import ModelDataCollector\n",
"import time\n",
"\n",
"def init():\n",
" global model\n",
" print (\"model initialized\" + time.strftime(\"%H:%M:%S\"))\n",
" # note here \"sklearn_regression_model.pkl\" is the name of the model registered under the workspace\n",
" # this call should return the path to the model.pkl file on the local disk.\n",
" model_path = Model.get_model_path(model_name = 'sklearn_regression_model.pkl')\n",
" # deserialize the model file back into a sklearn model\n",
" model = joblib.load(model_path)\n",
" global inputs_dc, prediction_dc\n",
" # this setup will help us save our inputs under the \"inputs\" path in our Azure Blob\n",
" inputs_dc = ModelDataCollector(model_name=\"sklearn_regression_model\", identifier=\"inputs\", feature_names=[\"feat1\", \"feat2\"]) \n",
" # this setup will help us save our ipredictions under the \"predictions\" path in our Azure Blob\n",
" prediction_dc = ModelDataCollector(\"sklearn_regression_model\", identifier=\"predictions\", feature_names=[\"prediction1\", \"prediction2\"]) \n",
" \n",
"# note you can pass in multiple rows for scoring\n",
"def run(raw_data):\n",
" global inputs_dc, prediction_dc\n",
" try:\n",
" data = json.loads(raw_data)['data']\n",
" data = numpy.array(data)\n",
" result = model.predict(data)\n",
" print (\"saving input data\" + time.strftime(\"%H:%M:%S\"))\n",
" inputs_dc.collect(data) #this call is saving our input data into our blob\n",
" prediction_dc.collect(result)#this call is saving our prediction data into our blob\n",
" print (\"saving prediction data\" + time.strftime(\"%H:%M:%S\"))\n",
" return json.dumps({\"result\": result.tolist()})\n",
" except Exception as e:\n",
" result = str(e)\n",
" print (result + time.strftime(\"%H:%M:%S\"))\n",
" return json.dumps({\"error\": result})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 5. *Update your myenv.yml file with the required module*"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.conda_dependencies import CondaDependencies \n",
"\n",
"myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn'])\n",
"myenv.add_pip_package(\"azureml-monitoring\")\n",
"\n",
"with open(\"myenv.yml\",\"w\") as f:\n",
" f.write(myenv.serialize_to_string())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 6. Create your new Image"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.image import ContainerImage\n",
"\n",
"image_config = ContainerImage.image_configuration(execution_script = \"score.py\",\n",
" runtime = \"python\",\n",
" conda_file = \"myenv.yml\",\n",
" description = \"Image with ridge regression model\",\n",
" tags = {'area': \"diabetes\", 'type': \"regression\"}\n",
" )\n",
"\n",
"image = ContainerImage.create(name = \"myimage1\",\n",
" # this is the model object\n",
" models = [model],\n",
" image_config = image_config,\n",
" workspace = ws)\n",
"\n",
"image.wait_for_creation(show_output = True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(model.name, model.description, model.version)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 7. Deploy to AKS service"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create AKS compute if you haven't done so (Notebook 11)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Use the default configuration (can also provide parameters to customize)\n",
"prov_config = AksCompute.provisioning_configuration()\n",
"\n",
"aks_name = 'my-aks-test1' \n",
"# Create the cluster\n",
"aks_target = ComputeTarget.create(workspace = ws, \n",
" name = aks_name, \n",
" provisioning_configuration = prov_config)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"aks_target.wait_for_completion(show_output = True)\n",
"print(aks_target.provisioning_state)\n",
"print(aks_target.provisioning_errors)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you already have a cluster you can attach the service to it:"
]
},
{
"cell_type": "markdown",
"metadata": {
"scrolled": true
},
"source": [
"```python \n",
" %%time\n",
" resource_id = '/subscriptions/<subscriptionid>/resourcegroups/<resourcegroupname>/providers/Microsoft.ContainerService/managedClusters/<aksservername>'\n",
" create_name= 'myaks4'\n",
" aks_target = AksCompute.attach(workspace = ws, \n",
" name = create_name, \n",
" resource_id=resource_id)\n",
" ## Wait for the operation to complete\n",
" aks_target.wait_for_provisioning(True)```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### a. *Activate Data Collection and App Insights through updating AKS Webservice configuration*\n",
"In order to enable Data Collection and App Insights in your service you will need to update your AKS configuration file:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#Set the web service configuration\n",
"aks_config = AksWebservice.deploy_configuration(collect_model_data=True, enable_app_insights=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### b. Deploy your service"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"aks_service_name ='aks-w-dc2'\n",
"\n",
"aks_service = Webservice.deploy_from_image(workspace = ws, \n",
" name = aks_service_name,\n",
" image = image,\n",
" deployment_config = aks_config,\n",
" deployment_target = aks_target\n",
" )\n",
"aks_service.wait_for_deployment(show_output = True)\n",
"print(aks_service.state)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 8. Test your service and send some data\n",
"**Note**: It will take around 15 mins for your data to appear in your blob.\n",
"The data will appear in your Azure Blob following this format:\n",
"\n",
"/modeldata/subscriptionid/resourcegroupname/workspacename/webservicename/modelname/modelversion/identifier/year/month/day/data.csv "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"import json\n",
"\n",
"test_sample = json.dumps({'data': [\n",
" [1,2,3,4,54,6,7,8,88,10], \n",
" [10,9,8,37,36,45,4,33,2,1]\n",
"]})\n",
"test_sample = bytes(test_sample,encoding = 'utf8')\n",
"\n",
"prediction = aks_service.run(input_data = test_sample)\n",
"print(prediction)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 9. Validate you data and analyze it\n",
"You can look into your data following this path format in your Azure Blob (it takes up to 15 minutes for the data to appear):\n",
"\n",
"/modeldata/**subscriptionid>**/**resourcegroupname>**/**workspacename>**/**webservicename>**/**modelname>**/**modelversion>>**/**identifier>**/*year/month/day*/data.csv \n",
"\n",
"For doing further analysis you have multiple options:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### a. Create DataBricks cluter and connect it to your blob\n",
"https://docs.microsoft.com/en-us/azure/azure-databricks/quickstart-create-databricks-workspace-portal or in your databricks workspace you can look for the template \"Azure Blob Storage Import Example Notebook\".\n",
"\n",
"\n",
"Here is an example for setting up the file location to extract the relevant data:\n",
"\n",
"<code> file_location = \"wasbs://mycontainer@storageaccountname.blob.core.windows.net/unknown/unknown/unknown-bigdataset-unknown/my_iterate_parking_inputs/2018/&deg;/&deg;/data.csv\" \n",
"file_type = \"csv\"</code>\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### b. Connect Blob to Power Bi (Small Data only)\n",
"1. Download and Open PowerBi Desktop\n",
"2. Select “Get Data” and click on “Azure Blob Storage” >> Connect\n",
"3. Add your storage account and enter your storage key.\n",
"4. Select the container where your Data Collection is stored and click on Edit. \n",
"5. In the query editor, click under “Name” column and add your Storage account Model path into the filter. Note: if you want to only look into files from a specific year or month, just expand the filter path. For example, just look into March data: /modeldata/subscriptionid>/resourcegroupname>/workspacename>/webservicename>/modelname>/modelversion>/identifier>/year>/3\n",
"6. Click on the double arrow aside the “Content” column to combine the files. \n",
"7. Click OK and the data will preload.\n",
"8. You can now click Close and Apply and start building your custom reports on your Model Input data."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Disable Data Collection"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"aks_service.update(collect_model_data=False)"
]
}
],
"metadata": {
"authors": [
{
"name": "marthalc"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.10"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.10" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for a production environment
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"


@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.15"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.15" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for a production environment
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"


@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.17"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.17" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for a production environment
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"


@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.18"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.18" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for a production environment
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"


@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.2"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.2" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for a production environment
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"


@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.21"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.21" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for a production environment
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"


@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.23"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.23" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for a production environment
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"


@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.30"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.30" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for a production environment
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"


@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.6"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.6" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for a production environment
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"


@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.8"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.8" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for a production environment
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"

NBSETUP.md Normal file

@@ -0,0 +1,95 @@
# Set up your notebook environment for Azure Machine Learning
To run the notebooks in this repository, use one of the following options.
## **Option 1: Use Azure Notebooks**
Azure Notebooks is a hosted Jupyter-based notebook service in the Azure cloud. Azure Machine Learning Python SDK is already pre-installed in the Azure Notebooks `Python 3.6` kernel.
1. [![Azure Notebooks](https://notebooks.azure.com/launch.png)](https://aka.ms/aml-clone-azure-notebooks)
[Import sample notebooks ](https://aka.ms/aml-clone-azure-notebooks) into Azure Notebooks
1. Follow the instructions in the [Configuration](configuration.ipynb) notebook to create and connect to a workspace
1. Open one of the sample notebooks
**Make sure the Azure Notebook kernel is set to `Python 3.6`** when you open a notebook by choosing Kernel > Change Kernel > Python 3.6 from the menus.
## **Option 2: Use your own notebook server**
### Quick installation
We recommend you create a Python virtual environment ([Miniconda](https://conda.io/miniconda.html) preferred but [virtualenv](https://virtualenv.pypa.io/en/latest/) works too) and install the SDK in it.
```sh
# install just the base SDK
pip install azureml-sdk
# clone the sample repository
git clone https://github.com/Azure/MachineLearningNotebooks.git
# below steps are optional
# install the base SDK and a Jupyter notebook server
pip install azureml-sdk[notebooks]
# install model explainability component
pip install azureml-sdk[explain]
# install automated ml components
pip install azureml-sdk[automl]
# install experimental features (not ready for production use)
pip install azureml-sdk[contrib]
```
Note the _extras_ (the keywords inside the square brackets) can be combined. For example:
```sh
# install base SDK, Jupyter notebook and automated ml components
pip install azureml-sdk[notebooks,automl]
```
### Full instructions
[Install the Azure Machine Learning SDK](https://docs.microsoft.com/en-us/azure/machine-learning/service/quickstart-create-workspace-with-python)
Please make sure you start with the [Configuration](configuration.ipynb) notebook to create and connect to a workspace.
### Video walkthrough:
[!VIDEO https://youtu.be/VIsXeTuW3FU]
## **Option 3: Use Docker**
You need to have Docker engine installed locally and running. Open a command line window and type the following command.
__Note:__ We use version `1.0.10` below as an example, but you can replace that with any available version number you like.
```sh
# clone the sample repository
git clone https://github.com/Azure/MachineLearningNotebooks.git
# change current directory to the folder
# where Dockerfile of the specific SDK version is located.
cd MachineLearningNotebooks/Dockerfiles/1.0.10
# build a Docker image with a name (azuremlsdk for example)
# and a version number tag (1.0.10 for example).
# this can take several minutes depending on your computer speed and network bandwidth.
docker build . -t azuremlsdk:1.0.10
# launch the built Docker container which also automatically starts
# a Jupyter server instance listening on port 8887 of the host machine
docker run -it -p 8887:8887 azuremlsdk:1.0.10
```
Now you can point your browser to http://localhost:8887. We recommend that you start from the `configuration.ipynb` notebook at the root directory.
If you need additional Azure ML SDK components, you can either modify the Docker files before you build the Docker images to add additional steps, or install them through the command line in the live container after you start it. For example:
```sh
# install the core SDK and automated ml components
pip install azureml-sdk[automl]
# install the core SDK and model explainability component
pip install azureml-sdk[explain]
# install the core SDK and experimental components
pip install azureml-sdk[contrib]
```


@@ -1,47 +1,56 @@
# Azure Machine Learning service example notebooks

This repository contains example notebooks demonstrating the [Azure Machine Learning](https://azure.microsoft.com/en-us/services/machine-learning-service/) Python SDK, which allows you to build, train, deploy and manage machine learning solutions using Azure. The AML SDK gives you the choice of using local or cloud compute resources, while managing and maintaining the complete data science workflow from the cloud.

![Azure ML workflow](https://raw.githubusercontent.com/MicrosoftDocs/azure-docs/master/articles/machine-learning/service/media/overview-what-is-azure-ml/aml.png)

## Quick installation
```sh
pip install azureml-sdk
```
Read more detailed instructions on [how to set up your environment](./NBSETUP.md) using the Azure Notebooks service, your own Jupyter notebook server, or Docker.

## How to navigate and use the example notebooks?
You should always run the [Configuration](./configuration.ipynb) notebook first when setting up a notebook library on a new machine or in a new environment. It configures your notebook library to connect to an Azure Machine Learning workspace, and sets up your workspace and compute to be used by many of the other examples.

If you want to...

* ...try out and explore Azure ML, start with the image classification tutorials: [Part 1 (Training)](./tutorials/img-classification-part1-training.ipynb) and [Part 2 (Deployment)](./tutorials/img-classification-part2-deploy.ipynb).
* ...prepare your data and do automated machine learning, start with the regression tutorials: [Part 1 (Data Prep)](./tutorials/regression-part1-data-prep.ipynb) and [Part 2 (Automated ML)](./tutorials/regression-part2-automated-ml.ipynb).
* ...learn about experimentation and tracking run history, first [train within a notebook](./how-to-use-azureml/training/train-within-notebook/train-within-notebook.ipynb), then try [training on a remote VM](./how-to-use-azureml/training/train-on-remote-vm/train-on-remote-vm.ipynb) and [using logging APIs](./how-to-use-azureml/training/logging-api/logging-api.ipynb).
* ...train deep learning models at scale, first learn about [Machine Learning Compute](./how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.ipynb), and then try [distributed hyperparameter tuning](./how-to-use-azureml/training-with-deep-learning/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb) and [distributed training](./how-to-use-azureml/training-with-deep-learning/distributed-pytorch-with-horovod/distributed-pytorch-with-horovod.ipynb).
* ...deploy models as a realtime scoring service, first learn the basics by [training within a notebook and deploying to Azure Container Instances](./how-to-use-azureml/training/train-within-notebook/train-within-notebook.ipynb), then learn how to [register and manage models, and create Docker images](./how-to-use-azureml/deployment/register-model-create-image-deploy-service/register-model-create-image-deploy-service.ipynb), and [production deploy models on Azure Kubernetes Service](./how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks.ipynb).
* ...deploy models as a batch scoring service, first [train a model within a notebook](./how-to-use-azureml/training/train-within-notebook/train-within-notebook.ipynb), learn how to [register and manage models](./how-to-use-azureml/deployment/register-model-create-image-deploy-service/register-model-create-image-deploy-service.ipynb), then [create Machine Learning Compute for scoring compute](./how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.ipynb), and [use Machine Learning Pipelines to deploy your model](./how-to-use-azureml/machine-learning-pipelines/pipeline-mpi-batch-prediction.ipynb).
* ...monitor your deployed models, learn about using [App Insights](./how-to-use-azureml/deployment/enable-app-insights-in-production-service/enable-app-insights-in-production-service.ipynb) and [model data collection](./how-to-use-azureml/deployment/enable-data-collection-for-models-in-aks/enable-data-collection-for-models-in-aks.ipynb).

## Tutorials

The [Tutorials](./tutorials) folder contains notebooks for the tutorials described in the [Azure Machine Learning documentation](https://aka.ms/aml-docs).

## How to use Azure ML

The [How to use Azure ML](./how-to-use-azureml) folder contains specific examples demonstrating the features of the Azure Machine Learning SDK:

- [Training](./how-to-use-azureml/training) - Examples of how to build models using Azure ML's logging and execution capabilities on local and remote compute targets
- [Training with Deep Learning](./how-to-use-azureml/training-with-deep-learning) - Examples demonstrating how to build deep learning models using estimators and parameter sweeps
- [Manage Azure ML Service](./how-to-use-azureml/manage-azureml-service) - Examples showing how to perform tasks such as authenticating against the Azure ML service in different ways
- [Automated Machine Learning](./how-to-use-azureml/automated-machine-learning) - Examples using Automated Machine Learning to automatically generate optimal machine learning pipelines and models
- [Machine Learning Pipelines](./how-to-use-azureml/machine-learning-pipelines) - Examples showing how to create and use reusable pipelines for training and batch scoring
- [Deployment](./how-to-use-azureml/deployment) - Examples showing how to deploy and manage machine learning models and solutions
- [Azure Databricks](./how-to-use-azureml/azure-databricks) - Examples showing how to use Azure ML with Azure Databricks

---

## Documentation

* Quickstarts, end-to-end tutorials, and how-tos on the [official documentation site for Azure Machine Learning service](https://docs.microsoft.com/en-us/azure/machine-learning/service/).
* [Python SDK reference](https://docs.microsoft.com/en-us/python/api/overview/azure/ml/intro?view=azure-ml-py)
* Azure ML Data Prep SDK [overview](https://aka.ms/data-prep-sdk), [Python SDK reference](https://aka.ms/aml-data-prep-apiref), and [tutorials and how-tos](https://aka.ms/aml-data-prep-notebooks).

---

## Projects using Azure Machine Learning

Visit the following repos to see projects contributed by Azure ML users:

- [Fine tune natural language processing models using Azure Machine Learning service](https://github.com/Microsoft/AzureML-BERT)
- [Fashion MNIST with Azure ML SDK](https://github.com/amynic/azureml-sdk-fashion)


@@ -1,224 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AutoML 00. Configuration\n",
"\n",
"In this example you will create an Azure Machine Learning `Workspace` object and initialize your notebook directory to easily reload this object from a configuration file. Typically you will only need to run this once per notebook directory, and all other notebooks in this directory or any sub-directories will automatically use the settings you indicate here.\n",
"\n",
"\n",
"## Prerequisites:\n",
"\n",
"Before running this notebook, run the `automl_setup` script described in README.md.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Register Machine Learning Services Resource Provider\n",
"\n",
"Microsoft.MachineLearningServices only needs to be registed once in the subscription.\n",
"To register it:\n",
"1. Start the Azure portal.\n",
"2. Select your `All services` and then `Subscription`.\n",
"3. Select the subscription that you want to use.\n",
"4. Click on `Resource providers`\n",
"3. Click the `Register` link next to Microsoft.MachineLearningServices"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Check the Azure ML Core SDK Version to Validate Your Installation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"\n",
"print(\"SDK Version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize an Azure ML Workspace\n",
"### What is an Azure ML Workspace and Why Do I Need One?\n",
"\n",
"An Azure ML workspace is an Azure resource that organizes and coordinates the actions of many other Azure resources to assist in executing and sharing machine learning workflows. In particular, an Azure ML workspace coordinates storage, databases, and compute resources providing added functionality for machine learning experimentation, operationalization, and the monitoring of operationalized models.\n",
"\n",
"\n",
"### What do I Need?\n",
"\n",
"To create or access an Azure ML workspace, you will need to import the Azure ML library and specify following information:\n",
"* A name for your workspace. You can choose one.\n",
"* Your subscription id. Use the `id` value from the `az account show` command output above.\n",
"* The resource group name. The resource group organizes Azure resources and provides a default region for the resources in the group. The resource group will be created if it doesn't exist. Resource groups can be created and viewed in the [Azure portal](https://portal.azure.com)\n",
"* Supported regions include `eastus2`, `eastus`,`westcentralus`, `southeastasia`, `westeurope`, `australiaeast`, `westus2`, `southcentralus`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"subscription_id = \"<subscription_id>\"\n",
"resource_group = \"myrg\"\n",
"workspace_name = \"myws\"\n",
"workspace_region = \"eastus2\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Creating a Workspace\n",
"If you already have access to an Azure ML workspace you want to use, you can skip this cell. Otherwise, this cell will create an Azure ML workspace for you in the specified subscription, provided you have the correct permissions for the given `subscription_id`.\n",
"\n",
"This will fail when:\n",
"1. The workspace already exists.\n",
"2. You do not have permission to create a workspace in the resource group.\n",
"3. You are not a subscription owner or contributor and no Azure ML workspaces have ever been created in this subscription.\n",
"\n",
"If workspace creation fails for any reason other than already existing, please work with your IT administrator to provide you with the appropriate permissions or to provision the required resources.\n",
"\n",
"**Note:** Creation of a new workspace can take several minutes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Import the Workspace class and check the Azure ML SDK version.\n",
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.create(name = workspace_name,\n",
" subscription_id = subscription_id,\n",
" resource_group = resource_group, \n",
" location = workspace_region)\n",
"ws.get_details()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configuring Your Local Environment\n",
"You can validate that you have access to the specified workspace and write a configuration file to the default configuration location, `./aml_config/config.json`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace(workspace_name = workspace_name,\n",
" subscription_id = subscription_id,\n",
" resource_group = resource_group)\n",
"\n",
"# Persist the subscription id, resource group name, and workspace name in aml_config/config.json.\n",
"ws.write_config()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can then load the workspace from this config file from any notebook in the current directory."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Load workspace configuration from ./aml_config/config.json file.\n",
"my_workspace = Workspace.from_config()\n",
"my_workspace.get_details()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a Folder to Host All Sample Projects\n",
"Finally, create a folder where all the sample projects will be hosted."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"sample_projects_folder = './sample_projects'\n",
"\n",
"if not os.path.isdir(sample_projects_folder):\n",
" os.mkdir(sample_projects_folder)\n",
" \n",
"print('Sample projects will be created in {}.'.format(sample_projects_folder))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Success!\n",
"Great, you are ready to move on to the rest of the sample notebooks."
]
}
],
"metadata": {
"authors": [
{
"name": "savitam"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -1,294 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AutoML 10: Multi-output\n",
"\n",
"This notebook shows how to use AutoML to train multi-output problems by leveraging the correlation between the outputs using indicator vectors.\n",
"\n",
"Make sure you have executed the [00.configuration](00.configuration.ipynb) before running this notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import random\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"set_diagnostics_collection(send_diagnostics = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Transformer Functions\n",
"The transformations of inputs `X` and `y` are happening as follows, e.g. `y = {y_1, y_2}`, then `X` becomes\n",
" \n",
"`X 1 0`\n",
" \n",
"`X 0 1`\n",
"\n",
"and `y` becomes,\n",
"\n",
"`y_1`\n",
"\n",
"`y_2`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from scipy import sparse\n",
"from scipy import linalg\n",
"\n",
"#Transformer functions\n",
"def multi_output_transform_x_y(X, y):\n",
" X_new = multi_output_transformer_x(X, y.shape[1])\n",
" y_new = multi_output_transform_y(y)\n",
" return X_new, y_new\n",
"\n",
"def multi_output_transformer_x(X, number_of_columns_y):\n",
" indicator_vecs = linalg.block_diag(*([np.ones((X.shape[0], 1))] * number_of_columns_y))\n",
" if sparse.issparse(X):\n",
" X_new = sparse.vstack(np.tile(X, number_of_columns_y))\n",
" indicator_vecs = sparse.coo_matrix(indicator_vecs)\n",
" X_new = sparse.hstack((X_new, indicator_vecs))\n",
" else:\n",
" X_new = np.tile(X, (number_of_columns_y, 1))\n",
" X_new = np.hstack((X_new, indicator_vecs))\n",
" return X_new\n",
"\n",
"def multi_output_transform_y(y):\n",
" return y.reshape(-1, order=\"F\")\n",
"\n",
"def multi_output_inverse_transform_y(y, number_of_columns_y):\n",
" return y.reshape((-1, number_of_columns_y), order = \"F\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## AutoML Experiment Setup"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# Choose a name for the experiment and specify the project folder.\n",
"experiment_name = 'automl-local-multi-output'\n",
"project_folder = './sample_projects/automl-local-multi-output'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data = output, index = ['']).T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a Random Dataset for Test Purposes"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"rng = np.random.RandomState(1)\n",
"X_train = np.sort(200 * rng.rand(600, 1) - 100, axis = 0)\n",
"y_train = np.array([np.pi * np.sin(X_train).ravel(), np.pi * np.cos(X_train).ravel()]).T\n",
"y_train += (0.5 - rng.rand(*y_train.shape))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Perform X and y transformation using the transformer function."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"X_train_transformed, y_train_transformed = multi_output_transform_x_y(X_train, y_train)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Configure AutoML using the transformed results."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_config = AutoMLConfig(task = 'regression',\n",
" debug_log = 'automl_errors_multi.log',\n",
" primary_metric = 'r2_score',\n",
" iterations = 10,\n",
" n_cross_validations = 2,\n",
" verbosity = logging.INFO,\n",
" X = X_train_transformed,\n",
" y = y_train_transformed,\n",
" path = project_folder)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Fit the Transformed Data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = experiment.submit(automl_config, show_output = True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Get the best fit model.\n",
"best_run, fitted_model = local_run.get_output()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Generate random data set for predicting.\n",
"X_test = np.sort(200 * rng.rand(200, 1) - 100, axis = 0)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Transform predict data.\n",
"X_test_transformed = multi_output_transformer_x(X_test, y_train.shape[1])\n",
"\n",
"# Predict and inverse transform the prediction.\n",
"y_predict = fitted_model.predict(X_test_transformed)\n",
"y_predict = multi_output_inverse_transform_y(y_predict, y_train.shape[1])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(y_predict)"
]
}
],
"metadata": {
"authors": [
{
"name": "savitam"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -1,232 +0,0 @@
# Table of Contents
1. [Auto ML Introduction](#introduction)
2. [Running samples in a Local Conda environment](#localconda)
3. [Auto ML SDK Sample Notebooks](#samples)
4. [Documentation](#documentation)
5. [Running using python command](#pythoncommand)
6. [Troubleshooting](#troubleshooting)
# Auto ML Introduction <a name="introduction"></a>
AutoML builds high quality Machine Learning models for you by automating model and hyperparameter selection. Bring a labelled dataset that you want to build a model for, and AutoML will give you a high quality machine learning model that you can use for predictions.
If you are new to Data Science, AutoML will help you get jumpstarted by simplifying machine learning model building. It abstracts away the need to perform model and hyperparameter selection, and in one step creates a high quality trained model for you to use.
If you are an experienced data scientist, AutoML will help increase your productivity by intelligently performing the model and hyperparameter selection for your training, and it generates high quality models much more quickly than manually specifying several combinations of parameters and running training jobs. AutoML provides visibility and access to all the training jobs and the performance characteristics of the models to help you further tune the pipeline if you desire.
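As a concrete illustration, here is a minimal sketch of a local AutoML classification run (the experiment name is a placeholder, and it assumes you have already created a workspace configuration file by running 00.configuration.ipynb):
```python
# minimal local AutoML classification run (sketch; assumes ./aml_config/config.json exists)
import logging

from sklearn import datasets

from azureml.core import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig

ws = Workspace.from_config()
digits = datasets.load_digits()

automl_config = AutoMLConfig(task = 'classification',
                             primary_metric = 'AUC_weighted',
                             iterations = 5,
                             n_cross_validations = 2,
                             verbosity = logging.INFO,
                             X = digits.data,
                             y = digits.target)

local_run = Experiment(ws, 'automl-local-classification').submit(automl_config, show_output = True)
```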
# Running samples in a Local Conda environment <a name="localconda"></a>
You can run these notebooks in Azure Notebooks without any extra installation. To run these notebooks on your own notebook server, use the installation instructions below.
It is best to create a new conda environment locally to try this SDK, so it doesn't interfere with your existing Python environment.
### 1. Install Miniconda from [here](https://conda.io/miniconda.html); choose Python 3.7 or higher.
- **Note**: if you already have conda installed, you can keep using it, but it should be version 4.4.10 or later (as shown by: `conda -V`). If you have a previous version installed, you can update it using the command: `conda update conda`.
There's no need to install Miniconda specifically.
### 2. Downloading the sample notebooks
- Download the sample notebooks from [GitHub](https://github.com/Azure/MachineLearningNotebooks) as zip and extract the contents to a local directory. The AutoML sample notebooks are in the "automl" folder.
### 3. Setup a new conda environment
The **automl/automl_setup** script creates a new conda environment, installs the necessary packages, configures the widget and starts a jupyter notebook.
It takes the conda environment name as an optional parameter. The default conda environment name is azure_automl. The exact command depends on the operating system. It can take about 30 minutes to execute.
## Windows
Start a conda command window, cd to the **automl** folder where the sample notebooks were extracted and then run:
```
automl_setup
```
## Mac
Install "Command line developer tools" if it is not already installed (you can use the command: `xcode-select --install`).
Start a Terminal windows, cd to the **automl** folder where the sample notebooks were extracted and then run:
```
bash automl_setup_mac.sh
```
## Linux
cd to the **automl** folder where the sample notebooks were extracted and then run:
```
automl_setup_linux.sh
```
### 4. Running configuration.ipynb
- Before running any samples, you first need to run the configuration notebook. Click on the 00.configuration.ipynb notebook.
- Please make sure you use the Python [conda env:azure_automl] kernel when running this notebook.
- Execute the cells in the notebook to register the Machine Learning Services resource provider and create a workspace. (*instructions in notebook*)
### 5. Running Samples
- Please make sure you use the Python [conda env:azure_automl] kernel when trying the sample Notebooks.
- Follow the instructions in the individual notebooks to explore various features in AutoML
# Auto ML SDK Sample Notebooks <a name="samples"></a>
- [00.configuration.ipynb](00.configuration.ipynb)
- Register Machine Learning Services Resource Provider
- Create new Azure ML Workspace
- Save Workspace configuration file
- [01.auto-ml-classification.ipynb](01.auto-ml-classification.ipynb)
- Dataset: scikit learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
- Simple example of using Auto ML for classification
- Uses local compute for training
- [02.auto-ml-regression.ipynb](02.auto-ml-regression.ipynb)
- Dataset: scikit learn's [diabetes dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_diabetes.html)
- Simple example of using Auto ML for regression
- Uses local compute for training
- [03.auto-ml-remote-execution.ipynb](03.auto-ml-remote-execution.ipynb)
- Dataset: scikit learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
- Example of using Auto ML for classification using a remote linux DSVM for training
- Parallel execution of iterations
- Async tracking of progress
- Cancelling individual iterations or entire run
- Retrieving models for any iteration or logged metric
- Specify automl settings as kwargs
- [03b.auto-ml-remote-batchai.ipynb](03b.auto-ml-remote-batchai.ipynb)
- Dataset: scikit learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
- Example of using Auto ML for classification using a remote Batch AI compute for training
- Parallel execution of iterations
- Async tracking of progress
- Cancelling individual iterations or entire run
- Retrieving models for any iteration or logged metric
- Specify automl settings as kwargs
- [04.auto-ml-remote-execution-text-data-blob-store.ipynb](04.auto-ml-remote-execution-text-data-blob-store.ipynb)
- Dataset: [Burning Man 2016 dataset](https://innovate.burningman.org/datasets-page/)
- handling text data with preprocess flag
- Reading data from a blob store for remote executions
- using pandas dataframes for reading data
- [05.auto-ml-missing-data-blacklist-early-termination.ipynb](05.auto-ml-missing-data-blacklist-early-termination.ipynb)
- Dataset: scikit learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
- Blacklist certain pipelines
- Specify a target metric to indicate the stopping criterion
- Handling Missing Data in the input
- [06.auto-ml-sparse-data-custom-cv-split.ipynb](06.auto-ml-sparse-data-custom-cv-split.ipynb)
- Dataset: Scikit learn's [20newsgroup](http://scikit-learn.org/stable/datasets/twenty_newsgroups.html)
- Handle sparse datasets
- Specify custom train and validation set
- [07.auto-ml-exploring-previous-runs.ipynb](07.auto-ml-exploring-previous-runs.ipynb)
- List all projects for the workspace
- List all AutoML Runs for a given project
- Get details for an AutoML Run (AutoML settings, run widget & all metrics)
- Download the fitted pipeline for any iteration
- [08.auto-ml-remote-execution-with-text-file-on-DSVM.ipynb](08.auto-ml-remote-execution-with-text-file-on-DSVM.ipynb)
- Dataset: scikit learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
- Download the data and store it in the DSVM to improve performance.
- [09.auto-ml-classification-with-deployment.ipynb](09.auto-ml-classification-with-deployment.ipynb)
- Dataset: scikit learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
- Simple example of using Auto ML for classification
- Registering the model
- Creating Image and creating aci service
- Testing the aci service
- [10.auto-ml-multi-output-example.ipynb](10.auto-ml-multi-output-example.ipynb)
- Dataset: scikit learn's [random forest multi-output regression example](http://scikit-learn.org/stable/auto_examples/ensemble/plot_random_forest_regression_multioutput.html#sphx-glr-auto-examples-ensemble-plot-random-forest-regression-multioutput-py)
- Simple example of using Auto ML for multi output regression
- Handle both dense and sparse matrices
- [11.auto-ml-sample-weight.ipynb](11.auto-ml-sample-weight.ipynb)
- How to specify sample_weight
- The difference that it makes to test results
- [12.auto-ml-retrieve-the-training-sdk-versions.ipynb](12.auto-ml-retrieve-the-training-sdk-versions.ipynb)
- How to get current and training env SDK versions
- [13.auto-ml-dataprep.ipynb](13.auto-ml-dataprep.ipynb)
- Using DataPrep for reading data
- [14a.auto-ml-classification-ensemble.ipynb](14a.auto-ml-classification-ensemble.ipynb)
- Classification with ensembling
- [14b.auto-ml-regression-ensemble.ipynb](14b.auto-ml-regression-ensemble.ipynb)
- Regression with ensembling
# Documentation <a name="documentation"></a>
## Table of Contents
1. [Auto ML Settings ](#automlsettings)
2. [Cross validation split options](#cvsplits)
3. [Get Data Syntax](#getdata)
4. [Data pre-processing and featurization](#preprocessing)
## Auto ML Settings <a name="automlsettings"></a>
|Property|Description|Default|
|-|-|-|
|**primary_metric**|This is the metric that you want to optimize.<br><br> Classification supports the following primary metrics <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>balanced_accuracy</i><br><i>average_precision_score_weighted</i><br><i>precision_score_weighted</i><br><br> Regression supports the following primary metrics <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i><br><i>normalized_root_mean_squared_log_error</i>| Classification: accuracy <br><br> Regression: spearman_correlation
|**max_time_sec**|Time limit in seconds for each iteration|None|
|**iterations**|Number of iterations. Each iteration trains a specific pipeline on the data. To get the best result, use at least 100.|100|
|**n_cross_validations**|Number of cross validation splits|None|
|**validation_size**|Size of validation set as percentage of all training samples|None|
|**concurrent_iterations**|Max number of iterations that would be executed in parallel|1|
|**preprocess**|*True/False* <br>Setting this to *True* enables preprocessing <br>on the input to handle missing data, and perform some common feature extraction<br>*Note: If input data is Sparse you cannot use preprocess=True*|False|
|**max_cores_per_iteration**| Indicates how many cores on the compute target would be used to train a single pipeline.<br> You can set it to *-1* to use all cores|1|
|**exit_score**|*double* value indicating the target for *primary_metric*. <br> Once the target is surpassed the run terminates|None|
|**blacklist_algos**|*Array* of *strings* indicating pipelines to ignore for Auto ML.<br><br> Allowed values for **Classification**<br><i>LogisticRegression</i><br><i>SGDClassifierWrapper</i><br><i>NBWrapper</i><br><i>BernoulliNB</i><br><i>SVCWrapper</i><br><i>LinearSVMWrapper</i><br><i>KNeighborsClassifier</i><br><i>DecisionTreeClassifier</i><br><i>RandomForestClassifier</i><br><i>ExtraTreesClassifier</i><br><i>gradient boosting</i><br><i>LightGBMClassifier</i><br><br>Allowed values for **Regression**<br><i>ElasticNet</i><br><i>GradientBoostingRegressor</i><br><i>DecisionTreeRegressor</i><br><i>KNeighborsRegressor</i><br><i>LassoLars</i><br><i>SGDRegressor</i><br><i>RandomForestRegressor</i><br><i>ExtraTreesRegressor</i>|None|
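As a hedged illustration (not part of the original table), several of these settings map onto an `AutoMLConfig` roughly as follows, assuming `X_train` and `y_train` are already loaded, for example from the digits dataset used in the samples:
```
# A minimal sketch, assuming X_train / y_train exist.
from azureml.train.automl import AutoMLConfig

automl_config = AutoMLConfig(task = 'classification',
                             primary_metric = 'AUC_weighted',
                             max_time_sec = 3600,            # per-iteration time limit in seconds
                             iterations = 20,
                             n_cross_validations = 5,
                             preprocess = True,
                             max_cores_per_iteration = -1,   # use all cores on the compute target
                             X = X_train,
                             y = y_train)
```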
## Cross validation split options <a name="cvsplits"></a>
### K-Folds Cross Validation
Use the *n_cross_validations* setting to specify the number of cross validations. The training data set will be randomly split into *n_cross_validations* folds of equal size. During each cross validation round, one of the folds will be used for validation of the model trained on the remaining folds. This process repeats for *n_cross_validations* rounds until each fold has been used once as the validation set. Finally, the average scores across all *n_cross_validations* rounds will be reported, and the corresponding model will be retrained on the whole training data set.
### Monte Carlo Cross Validation (a.k.a. Repeated Random Sub-Sampling)
Use *validation_size* to specify the percentage of the training data set that should be used for validation, and use *n_cross_validations* to specify the number of cross validations. During each cross validation round, a subset of size *validation_size* will be randomly selected for validation of the model trained on the remaining data. Finally, the average scores across all *n_cross_validations* rounds will be reported, and the corresponding model will be retrained on the whole training data set.
### Custom train and validation set
You can specify separate train and validation sets, either through get_data() or directly to the fit method, as in the sketch below.
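For example, a minimal sketch of passing an explicit validation set (assuming `X_train`, `y_train`, `X_valid` and `y_valid` are already prepared):
```
# Hedged sketch: supply a custom validation set instead of automatic splitting.
automl_config = AutoMLConfig(task = 'classification',
                             X = X_train,
                             y = y_train,
                             X_valid = X_valid,
                             y_valid = y_valid,
                             iterations = 20)
local_run = experiment.submit(automl_config, show_output = True)
```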
## get_data() syntax <a name="getdata"></a>
The *get_data()* function can be used to return a dictionary with these values:
|Key|Type|Dependency|Mutually Exclusive with|Description|
|:-|:-|:-|:-|:-|
|X|Pandas Dataframe or Numpy Array|y|data_train, label, columns|All features to train with|
|y|Pandas Dataframe or Numpy Array|X|label|Label data to train with. For classification, this should be an array of integers. |
|X_valid|Pandas Dataframe or Numpy Array|X, y, y_valid|data_train, label|*Optional* All features to validate with. If this is not specified, X is split between train and validate|
|y_valid|Pandas Dataframe or Numpy Array|X, y, X_valid|data_train, label|*Optional* The label data to validate with. If this is not specified, y is split between train and validate|
|sample_weight|Pandas Dataframe or Numpy Array|y|data_train, label, columns|*Optional* A weight value for each label. Higher values indicate that the sample is more important.|
|sample_weight_valid|Pandas Dataframe or Numpy Array|y_valid|data_train, label, columns|*Optional* A weight value for each validation label. Higher values indicate that the sample is more important. If this is not specified, sample_weight is split between train and validate|
|data_train|Pandas Dataframe|label|X, y, X_valid, y_valid|All data (features+label) to train with|
|label|string|data_train|X, y, X_valid, y_valid|Which column in data_train represents the label|
|columns|Array of strings|data_train||*Optional* Whitelist of columns to use for features|
|cv_splits_indices|Array of integers|data_train||*Optional* List of indexes to split the data for cross validation|
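As a hedged sketch of this contract (the file name and dataset are illustrative), a get_data.py returning the mutually exclusive X/y pair could look like:
```
# get_data.py -- minimal sketch of the get_data() contract described above.
from sklearn import datasets

def get_data():
    digits = datasets.load_digits()
    # X and y are mutually exclusive with data_train / label / columns.
    return {"X": digits.data, "y": digits.target}
```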
## Data pre-processing and featurization <a name="preprocessing"></a>
If you use "preprocess=True", the following data preprocessing steps are performed automatically for you:
### 1. Dropping high cardinality or no variance features
- Features with no useful information are dropped from training and validation sets. These include features with all values missing, same value across all rows or with extremely high cardinality (e.g., hashes, IDs or GUIDs).
### 2. Missing value imputation
- For numerical features, missing values are imputed with the average of the values in the column.
- For categorical features, missing values are imputed with the most frequent value.
### 3. Generating additional features
- For DateTime features: Year, Month, Day, Day of week, Day of year, Quarter, Week of the year, Hour, Minute, Second.
- For Text features: Term frequency based on bi-grams and tri-grams, Count vectorizer.
### 4. Transformations and encodings
- Numeric features with very few unique values are transformed into categorical features.
- Depending on the cardinality of a categorical feature, label encoding or (hashing) one-hot encoding is performed.
# Running using python command <a name="pythoncommand"></a>
Jupyter notebook provides a File / Download as / Python (.py) option for saving the notebook as a Python file.
You can then run this file using the python command.
However, on Windows the file needs to be modified before it can be run.
The following condition must be added to the main code in the file:
if __name__ == "__main__":
The main code of the file must be indented so that it is under this condition, as in the sketch below.
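A minimal sketch of the resulting file structure (the body is an illustrative placeholder, not the sample's actual code):
```
# Exported notebook .py after the Windows modification:
if __name__ == "__main__":
    # all of the notebook's original code, indented one level under the guard
    print("notebook code runs here")
```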
# Troubleshooting <a name="troubleshooting"></a>
## Iterations fail and the log contains "MemoryError"
This can be caused by insufficient memory on the DSVM. AutoML loads all training data into memory, so the available memory should be larger than the training data size.
If you are using a remote DSVM, memory is needed for each concurrent iteration. The concurrent_iterations setting specifies the maximum number of concurrent iterations. For example, if the training data size is 8 GB and concurrent_iterations is set to 10, at least 80 GB of memory is required.
To resolve this issue, allocate a DSVM with more memory or reduce the value specified for concurrent_iterations.
## Iterations show as "Not Responding" in the RunDetails widget.
This can be caused by too many concurrent iterations for a remote DSVM. Each concurrent iteration usually takes 100% of a core when it is running. Some iterations can use multiple cores. So, the concurrent_iterations setting should always be less than the number of cores of the DSVM.
To resolve this issue, try reducing the value specified for the concurrent_iterations setting.

View File

@@ -1,19 +0,0 @@
name: azure_automl
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- python=3.6
- nb_conda
- matplotlib
- numpy>=1.11.0,<1.16.0
- scipy>=0.19.0,<0.20.0
- scikit-learn>=0.18.0,<=0.19.1
- pandas>=0.22.0,<0.23.0
- pip:
# Required packages for AzureML execution, history, and data preparation.
- --extra-index-url https://pypi.python.org/simple
- azureml-sdk[automl]
- azureml-train-widgets
- pandas_ml

View File

@@ -1,42 +0,0 @@
@echo off
set conda_env_name=%1
IF "%conda_env_name%"=="" SET conda_env_name="azure_automl"
call conda activate %conda_env_name% 2>nul:
if not errorlevel 1 (
echo Upgrading azureml-sdk[automl] in existing conda environment %conda_env_name%
call pip install --upgrade azureml-sdk[automl]
if errorlevel 1 goto ErrorExit
) else (
call conda env create -f automl_env.yml -n %conda_env_name%
)
call conda activate %conda_env_name% 2>nul:
if errorlevel 1 goto ErrorExit
call pip install psutil
call jupyter nbextension install --py azureml.train.widgets
if errorlevel 1 goto ErrorExit
call jupyter nbextension enable --py azureml.train.widgets
if errorlevel 1 goto ErrorExit
echo.
echo.
echo ***************************************
echo * AutoML setup completed successfully *
echo ***************************************
echo.
echo Starting jupyter notebook - please run notebook 00.configuration
echo.
jupyter notebook --log-level=50
goto End
:ErrorExit
echo Install failed
:End

View File

@@ -1,35 +0,0 @@
#!/bin/bash
CONDA_ENV_NAME=$1
if [ "$CONDA_ENV_NAME" == "" ]
then
CONDA_ENV_NAME="azure_automl"
fi
if source activate $CONDA_ENV_NAME 2> /dev/null
then
echo "Upgrading azureml-sdk[automl] in existing conda environment" $CONDA_ENV_NAME
pip install --upgrade azureml-sdk[automl]
else
conda env create -f automl_env.yml -n $CONDA_ENV_NAME &&
source activate $CONDA_ENV_NAME &&
jupyter nbextension install --py azureml.train.widgets --user &&
jupyter nbextension enable --py azureml.train.widgets --user &&
echo "" &&
echo "" &&
echo "***************************************" &&
echo "* AutoML setup completed successfully *" &&
echo "***************************************" &&
echo "" &&
echo "Starting jupyter notebook - please run notebook 00.configuration" &&
echo "" &&
jupyter notebook --log-level=50
fi
if [ $? -gt 0 ]
then
echo "Installation failed"
fi

View File

@@ -1,36 +0,0 @@
#!/bin/bash
CONDA_ENV_NAME=$1
if [ "$CONDA_ENV_NAME" == "" ]
then
CONDA_ENV_NAME="azure_automl"
fi
if source activate $CONDA_ENV_NAME 2> /dev/null
then
echo "Upgrading azureml-sdk[automl] in existing conda environment" $CONDA_ENV_NAME
pip install --upgrade azureml-sdk[automl]
else
conda env create -f automl_env.yml -n $CONDA_ENV_NAME &&
source activate $CONDA_ENV_NAME &&
conda install lightgbm -c conda-forge -y &&
jupyter nbextension install --py azureml.train.widgets --user &&
jupyter nbextension enable --py azureml.train.widgets --user &&
echo "" &&
echo "" &&
echo "***************************************" &&
echo "* AutoML setup completed successfully *" &&
echo "***************************************" &&
echo "" &&
echo "Starting jupyter notebook - please run notebook 00.configuration" &&
echo "" &&
jupyter notebook --log-level=50
fi
if [ $? -gt 0 ]
then
echo "Installation failed"
fi

configuration.ipynb Normal file
View File

@@ -0,0 +1,376 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Configuration\n",
"\n",
"_**Setting up your Azure Machine Learning services workspace and configuring your notebook library**_\n",
"\n",
"---\n",
"---\n",
"\n",
"## Table of Contents\n",
"\n",
"1. [Introduction](#Introduction)\n",
" 1. What is an Azure Machine Learning workspace\n",
"1. [Setup](#Setup)\n",
" 1. Azure subscription\n",
" 1. Azure ML SDK and other library installation\n",
" 1. Azure Container Instance registration\n",
"1. [Configure your Azure ML Workspace](#Configure%20your%20Azure%20ML%20workspace)\n",
" 1. Workspace parameters\n",
" 1. Access your workspace\n",
" 1. Create a new workspace\n",
" 1. Create compute resources\n",
"1. [Next steps](#Next%20steps)\n",
"\n",
"---\n",
"\n",
"## Introduction\n",
"\n",
"This notebook configures your library of notebooks to connect to an Azure Machine Learning (ML) workspace. In this case, a library contains all of the notebooks in the current folder and any nested folders. You can configure this notebook library to use an existing workspace or create a new workspace.\n",
"\n",
"Typically you will need to run this notebook only once per notebook library as all other notebooks will use connection information that is written here. If you want to redirect your notebook library to work with a different workspace, then you should re-run this notebook.\n",
"\n",
"In this notebook you will\n",
"* Learn about getting an Azure subscription\n",
"* Specify your workspace parameters\n",
"* Access or create your workspace\n",
"* Add a default compute cluster for your workspace\n",
"\n",
"### What is an Azure Machine Learning workspace\n",
"\n",
"An Azure ML Workspace is an Azure resource that organizes and coordinates the actions of many other Azure resources to assist in executing and sharing machine learning workflows. In particular, an Azure ML Workspace coordinates storage, databases, and compute resources providing added functionality for machine learning experimentation, deployment, inferencing, and the monitoring of deployed models."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"This section describes activities required before you can access any Azure ML services functionality."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 1. Azure Subscription\n",
"\n",
"In order to create an Azure ML Workspace, first you need access to an Azure subscription. An Azure subscription allows you to manage storage, compute, and other assets in the Azure cloud. You can [create a new subscription](https://azure.microsoft.com/en-us/free/) or access existing subscription information from the [Azure portal](https://portal.azure.com). Later in this notebook you will need information such as your subscription ID in order to create and access AML workspaces.\n",
"\n",
"### 2. Azure ML SDK and other library installation\n",
"\n",
"If you are running in your own environment, follow [SDK installation instructions](https://docs.microsoft.com/azure/machine-learning/service/how-to-configure-environment). If you are running in Azure Notebooks or another Microsoft managed environment, the SDK is already installed.\n",
"\n",
"Also install following libraries to your environment. Many of the example notebooks depend on them\n",
"\n",
"```\n",
"(myenv) $ conda install -y matplotlib tqdm scikit-learn\n",
"```\n",
"\n",
"Once installation is complete, the following cell checks the Azure ML SDK version:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"install"
]
},
"outputs": [],
"source": [
"import azureml.core\n",
"\n",
"print(\"This notebook was created using version 1.0.23 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you are using an older version of the SDK then this notebook was created using, you should upgrade your SDK.\n",
"\n",
"### 3. Azure Container Instance registration\n",
"Azure Machine Learning uses of [Azure Container Instance (ACI)](https://azure.microsoft.com/services/container-instances) to deploy dev/test web services. An Azure subscription needs to be registered to use ACI. If you or the subscription owner have not yet registered ACI on your subscription, you will need to use the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) and execute the following commands. Note that if you ran through the AML [quickstart](https://docs.microsoft.com/en-us/azure/machine-learning/service/quickstart-get-started) you have already registered ACI. \n",
"\n",
"```shell\n",
"# check to see if ACI is already registered\n",
"(myenv) $ az provider show -n Microsoft.ContainerInstance -o table\n",
"\n",
"# if ACI is not registered, run this command.\n",
"# note you need to be the subscription owner in order to execute this command successfully.\n",
"(myenv) $ az provider register -n Microsoft.ContainerInstance\n",
"```\n",
"\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure your Azure ML workspace\n",
"\n",
"### Workspace parameters\n",
"\n",
"To use an AML Workspace, you will need to import the Azure ML SDK and supply the following information:\n",
"* Your subscription id\n",
"* A resource group name\n",
"* (optional) The region that will host your workspace\n",
"* A name for your workspace\n",
"\n",
"You can get your subscription ID from the [Azure portal](https://portal.azure.com).\n",
"\n",
"You will also need access to a [_resource group_](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-overview#resource-groups), which organizes Azure resources and provides a default region for the resources in a group. You can see what resource groups to which you have access, or create a new one in the [Azure portal](https://portal.azure.com). If you don't have a resource group, the create workspace command will create one for you using the name you provide.\n",
"\n",
"The region to host your workspace will be used if you are creating a new workspace. You do not need to specify this if you are using an existing workspace. You can find the list of supported regions [here](https://azure.microsoft.com/en-us/global-infrastructure/services/?products=machine-learning-service). You should pick a region that is close to your location or that contains your data.\n",
"\n",
"The name for your workspace is unique within the subscription and should be descriptive enough to discern among other AML Workspaces. The subscription may be used only by you, or it may be used by your department or your entire enterprise, so choose a name that makes sense for your situation.\n",
"\n",
"The following cell allows you to specify your workspace parameters. This cell uses the python method `os.getenv` to read values from environment variables which is useful for automation. If no environment variable exists, the parameters will be set to the specified default values. \n",
"\n",
"If you ran the Azure Machine Learning [quickstart](https://docs.microsoft.com/en-us/azure/machine-learning/service/quickstart-get-started) in Azure Notebooks, you already have a configured workspace! You can go to your Azure Machine Learning Getting Started library, view *config.json* file, and copy-paste the values for subscription ID, resource group and workspace name below.\n",
"\n",
"Replace the default values in the cell below with your workspace parameters"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"subscription_id = os.getenv(\"SUBSCRIPTION_ID\", default=\"<my-subscription-id>\")\n",
"resource_group = os.getenv(\"RESOURCE_GROUP\", default=\"<my-resource-group>\")\n",
"workspace_name = os.getenv(\"WORKSPACE_NAME\", default=\"<my-workspace-name>\")\n",
"workspace_region = os.getenv(\"WORKSPACE_REGION\", default=\"eastus2\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Access your workspace\n",
"\n",
"The following cell uses the Azure ML SDK to attempt to load the workspace specified by your parameters. If this cell succeeds, your notebook library will be configured to access the workspace from all notebooks using the `Workspace.from_config()` method. The cell can fail if the specified workspace doesn't exist or you don't have permissions to access it. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"try:\n",
" ws = Workspace(subscription_id = subscription_id, resource_group = resource_group, workspace_name = workspace_name)\n",
" # write the details of the workspace to a configuration file to the notebook library\n",
" ws.write_config()\n",
" print(\"Workspace configuration succeeded. Skip the workspace creation steps below\")\n",
"except:\n",
" print(\"Workspace not accessible. Change your parameters or create a new workspace below\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create a new workspace\n",
"\n",
"If you don't have an existing workspace and are the owner of the subscription or resource group, you can create a new workspace. If you don't have a resource group, the create workspace command will create one for you using the name you provide.\n",
"\n",
"**Note**: As with other Azure services, there are limits on certain resources (for example AmlCompute quota) associated with the Azure ML service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.\n",
"\n",
"This cell will create an Azure ML workspace for you in a subscription provided you have the correct permissions.\n",
"\n",
"This will fail if:\n",
"* You do not have permission to create a workspace in the resource group\n",
"* You do not have permission to create a resource group if it's non-existing.\n",
"* You are not a subscription owner or contributor and no Azure ML workspaces have ever been created in this subscription\n",
"\n",
"If workspace creation fails, please work with your IT admin to provide you with the appropriate permissions or to provision the required resources."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"create workspace"
]
},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"# Create the workspace using the specified parameters\n",
"ws = Workspace.create(name = workspace_name,\n",
" subscription_id = subscription_id,\n",
" resource_group = resource_group, \n",
" location = workspace_region,\n",
" create_resource_group = True,\n",
" exist_ok = True)\n",
"ws.get_details()\n",
"\n",
"# write the details of the workspace to a configuration file to the notebook library\n",
"ws.write_config()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create compute resources for your training experiments\n",
"\n",
"Many of the sample notebooks use Azure ML managed compute (AmlCompute) to train models using a dynamically scalable pool of compute. In this section you will create default compute clusters for use by the other notebooks and any other operations you choose.\n",
"\n",
"To create a cluster, you need to specify a compute configuration that specifies the type of machine to be used and the scalability behaviors. Then you choose a name for the cluster that is unique within the workspace that can be used to address the cluster later.\n",
"\n",
"The cluster parameters are:\n",
"* vm_size - this describes the virtual machine type and size used in the cluster. All machines in the cluster are the same type. You can get the list of vm sizes available in your region by using the CLI command\n",
"\n",
"```shell\n",
"az vm list-skus -o tsv\n",
"```\n",
"* min_nodes - this sets the minimum size of the cluster. If you set the minimum to 0 the cluster will shut down all nodes while note in use. Setting this number to a value higher than 0 will allow for faster start-up times, but you will also be billed when the cluster is not in use.\n",
"* max_nodes - this sets the maximum size of the cluster. Setting this to a larger number allows for more concurrency and a greater distributed processing of scale-out jobs.\n",
"\n",
"\n",
"To create a **CPU** cluster now, run the cell below. The autoscale settings mean that the cluster will scale down to 0 nodes when inactive and up to 4 nodes when busy."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# Choose a name for your CPU cluster\n",
"cpu_cluster_name = \"cpucluster\"\n",
"\n",
"# Verify that cluster does not exist already\n",
"try:\n",
" cpu_cluster = ComputeTarget(workspace=ws, name=cpu_cluster_name)\n",
" print(\"Found existing cpucluster\")\n",
"except ComputeTargetException:\n",
" print(\"Creating new cpucluster\")\n",
" \n",
" # Specify the configuration for the new cluster\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size=\"STANDARD_D2_V2\",\n",
" min_nodes=0,\n",
" max_nodes=4)\n",
"\n",
" # Create the cluster with the specified name and configuration\n",
" cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config)\n",
" \n",
" # Wait for the cluster to complete, show the output log\n",
" cpu_cluster.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To create a **GPU** cluster, run the cell below. Note that your subscription must have sufficient quota for GPU VMs or the command will fail. To increase quota, see [these instructions](https://docs.microsoft.com/en-us/azure/azure-supportability/resource-manager-core-quotas-request). "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# Choose a name for your GPU cluster\n",
"gpu_cluster_name = \"gpucluster\"\n",
"\n",
"# Verify that cluster does not exist already\n",
"try:\n",
" gpu_cluster = ComputeTarget(workspace=ws, name=gpu_cluster_name)\n",
" print(\"Found existing gpu cluster\")\n",
"except ComputeTargetException:\n",
" print(\"Creating new gpucluster\")\n",
" \n",
" # Specify the configuration for the new cluster\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size=\"STANDARD_NC6\",\n",
" min_nodes=0,\n",
" max_nodes=4)\n",
" # Create the cluster with the specified name and configuration\n",
" gpu_cluster = ComputeTarget.create(ws, gpu_cluster_name, compute_config)\n",
"\n",
" # Wait for the cluster to complete, show the output log\n",
" gpu_cluster.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"## Next steps\n",
"\n",
"In this notebook you configured this notebook library to connect easily to an Azure ML workspace. You can copy this notebook to your own libraries to connect them to you workspace, or use it to bootstrap new workspaces completely.\n",
"\n",
"If you came here from another notebook, you can return there and complete that exercise, or you can try out the [Tutorials](./tutorials) or jump into \"how-to\" notebooks and start creating and deploying models. A good place to start is the [train within notebook](./how-to-use-azureml/training/train-within-notebook) example that walks through a simplified but complete end to end machine learning process."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"authors": [
{
"name": "roastala"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

contrib/RAPIDS/README.md Normal file
View File

@@ -0,0 +1,307 @@
## How to use the RAPIDS on AzureML materials
### Setting up requirements
The material requires the Azure ML SDK and the Jupyter Notebook server to run the interactive execution. Please refer to the instructions to [set up the environment](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-environment#local "Local Computer Set Up"). Follow the instructions under **Local Computer**, and make sure to run the last step, `pip install <new package>`, with `progressbar2` as the new package (`pip install progressbar2`).
After following the directions, the user should end up with a conda environment (`myenv`) that can be activated in an Anaconda prompt.
The user also requires an Azure subscription with a Machine Learning Services quota, in the desired region, of 24 nodes or more (enough to select a vmSize with 4 GPUs, as used in the notebook) on the desired VM family ([NC\_v3](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu#ncv3-series), [NC\_v2](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu#ncv2-series), [ND](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu#nd-series) or [ND_v2](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu#ndv2-series-preview)); the specific vmSize to be used within the chosen family also needs to be whitelisted for Machine Learning Services usage.
&nbsp;
### Getting and running the material
Clone the AzureML Notebooks repository from GitHub by running the following command in a local directory:
* C:\local_directory>git clone https://github.com/Azure/MachineLearningNotebooks.git
In a conda prompt, navigate to the local directory, activate the conda environment (`myenv`) where the Azure ML SDK was installed, and launch Jupyter Notebook.
* (`myenv`) C:\local_directory>jupyter notebook
From the resulting browser at http://localhost:8888/tree, navigate to the master notebook:
* http://localhost:8888/tree/MachineLearningNotebooks/contrib/RAPIDS/azure-ml-with-nvidia-rapids.ipynb
&nbsp;
The following notebook will appear:
![](imgs/NotebookHome.png)
&nbsp;
### Master Jupyter Notebook
The notebook can be executed interactively step by step by pressing the Run button (circled in red in the image above).
The first couple of functional steps import the necessary AzureML libraries. If you experience any errors, please refer back to the [environment setup](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-environment#local "Local Computer Set Up") instructions.
&nbsp;
#### Setting up a Workspace
The following step gathers the information necessary to set up a workspace in which to execute the RAPIDS script. This needs to be done only once, or not at all if you already have a usable workspace set up in the Azure Portal:
![](imgs/WorkSpaceSetUp.png)
It is important to set the correct values for the subscription\_id, resource\_group, workspace\_name, and region before executing the step. An example is:
subscription_id = os.environ.get("SUBSCRIPTION_ID", "1358e503-xxxx-4043-xxxx-65b83xxxx32d")
resource_group = os.environ.get("RESOURCE_GROUP", "AML-Rapids-Testing")
workspace_name = os.environ.get("WORKSPACE_NAME", "AML_Rapids_Tester")
workspace_region = os.environ.get("WORKSPACE_REGION", "West US 2")
&nbsp;
The resource\_group and workspace_name can take any value; the region should match a region in which the subscription has the required Machine Learning Services node quota.
The first time the code is executed, it will redirect to the Azure Portal to validate subscription credentials. After the workspace is created, its related information is stored in a local file so that this step can be skipped in subsequent runs. The immediate next step simply loads the saved workspace:
![](imgs/saved_workspace.png)
Once a workspace has been created, the user can skip its creation and jump directly to this step. The configuration file resides in:
* C:\local_directory\\MachineLearningNotebooks\contrib\RAPIDS\aml_config\config.json
&nbsp;
#### Creating an AML Compute Target
The following step creates an AML Compute Target:
![](imgs/target_creation.png)
The vm\_size parameter of the AmlCompute.provisioning\_configuration() call has to be a member of one of the VM families ([NC\_v3](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu#ncv3-series), [NC\_v2](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu#ncv2-series), [ND](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu#nd-series) or [ND_v2](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu#ndv2-series-preview)), which are the ones provisioned with P40 or V100 GPUs, the GPUs supported by RAPIDS. In this particular case a Standard\_NC24s\_V2 was used.
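For reference, a hedged sketch of the provisioning call shown in the screenshot below (it mirrors the notebook cell; `ws` is the workspace loaded earlier):
```
from azureml.core.compute import AmlCompute, ComputeTarget

# Provision a single-node cluster from a RAPIDS-capable VM family.
provisioning_config = AmlCompute.provisioning_configuration(vm_size="Standard_NC24s_v2",
                                                            min_nodes=1, max_nodes=1)
gpu_cluster = ComputeTarget.create(ws, "gpucluster", provisioning_config)
gpu_cluster.wait_for_completion(show_output=True)
```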
&nbsp;
If the output of running the step shows an error of the form:
![](imgs/targeterror1.png)
It is an indication that even though the subscription has a VM node quota for that family, it does not have a Machine Learning Services node quota for that family.
You will need to request a node quota increase for that family in that region for **Machine Learning Services**.
&nbsp;
Another possible error is the following:
![](imgs/targeterror2.png)
This indicates that the specified vmSize has not been whitelisted for usage in Machine Learning Services, and a request to whitelist it should be filed.
The successful creation of the compute target produces output like the following:
![](imgs/targetsuccess.png)
&nbsp;
#### RAPIDS script uploading and viewing
The next step copies the RAPIDS script process_data.py, which is a slightly modified implementation of the [RAPIDS E2E example](https://github.com/rapidsai/notebooks/blob/master/mortgage/E2E.ipynb), into a script processing folder and presents its contents to the user. (The script is discussed in detail in the next section.)
If the user wants to use a different RAPIDS script, the references to the `process_data.py` script have to be changed.
![](imgs/scriptuploading.png)
&nbsp;
#### Data Uploading
The RAPIDS script loads and extracts features from Fannie Mae's Mortgage Dataset to train an XGBoost prediction model. The script uses two years of data.
The next few steps download and decompress the data and make it available to the script as an [Azure Machine Learning Datastore](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-access-data).
&nbsp;
The following functions are used to download and decompress the input data:
![](imgs/dcf1.png)
![](imgs/dcf2.png)
![](imgs/dcf3.png)
![](imgs/dcf4.png)
&nbsp;
The next step uses those functions to download the following file locally:
http://rapidsai-data.s3-website.us-east-2.amazonaws.com/notebook-mortgage-data/mortgage_2000-2001.tgz
and to decompress it into the local folder path = .\mortgage_2000-2001.
The step takes several minutes; the intermediate outputs provide progress indicators.
![](imgs/downamddecom.png)
&nbsp;
The decompressed data should have the following structure:
* .\mortgage_2000-2001\acq\Acquisition_<year>Q<num>.txt
* .\mortgage_2000-2001\perf\Performance_<year>Q<num>.txt
* .\mortgage_2000-2001\names.csv
The data is divided into partitions that roughly correspond to yearly quarters. RAPIDS includes support for multi-node, multi-GPU deployments, enabling scaling up and out to much larger dataset sizes. The user will be able to verify that the number of partitions the script is able to process increases with the number of GPUs used. The RAPIDS script is implemented for single-machine scenarios; an example supporting multiple nodes will be published later.
&nbsp;
The next step uploads the data into the [Azure Machine Learning Datastore](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-access-data) under the reference `fileroot = mortgage_2000-2001`.
The step takes several minutes to load the data; the output provides a progress indicator.
![](imgs/datastore.png)
Once the data has been loaded into the Azure Machine Learning Datastore, in subsequent runs the user can comment out the ds.upload line and just reference the `mortgage_2000-2001` datastore reference, as in the sketch below.
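A hedged sketch of the upload step in the screenshot (assuming `ws` is the workspace and the data was decompressed into ./mortgage_2000-2001):
```
fileroot = "mortgage_2000-2001"
ds = ws.get_default_datastore()
# Comment out this upload on subsequent runs; the data is already in the datastore.
ds.upload(src_dir="./mortgage_2000-2001", target_path=fileroot, show_progress=True)
```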
&nbsp;
#### Setting up required libraries and environment to run RAPIDS code
There are two options to set up the environment to run RAPIDS code. The following step shows how to use a prebuilt conda environment. A recommended alternative is to specify a base Docker image and package dependencies; you can find sample code for that in the notebook.
![](imgs/install2.png)
&nbsp;
#### Wrapper function to submit the RAPIDS script as an Azure Machine Learning experiment
The next step defines a wrapper function to be used when the user runs the RAPIDS script with different arguments. It takes as arguments *cpu\_training*, a flag that indicates whether the run is meant to be processed with CPU only; *gpu\_count*, the number of GPUs to be used if GPUs are used; and *part_count*, the number of data partitions to be used.
![](imgs/wrapper.png)
&nbsp;
The core of the function is configuring the run by instantiating a ScriptRunConfig object, which defines the source_directory for the script to be executed, the name of the script, and the arguments to be passed to the script.
In addition to the wrapper function arguments, two other arguments are passed: *data\_dir*, the directory where the data is stored, and *end_year*, the largest year to use partitions from.
As mentioned earlier, the size of the data that can be processed increases with the number of GPUs. In the function, the dictionary *max\_gpu\_count\_data\_partition_mapping* maps the number of GPUs used to the maximum number of partitions that we empirically found the system can handle. The function throws a warning when the number of partitions for a given number of GPUs exceeds that maximum, but the script is still executed; the user should expect an error, as an out-of-memory situation will be encountered.
If the user wants to use a different RAPIDS script, the reference to the process_data.py script has to be changed.
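A hedged sketch of such a wrapper (the function name, argument handling, experiment name, and run configuration are illustrative; the actual cell is shown in the screenshot above):
```
from azureml.core import Experiment, ScriptRunConfig

def submit_rapids_run(cpu_training, gpu_count, part_count):
    # data_dir and end_year are fixed here; run_config is the run
    # configuration prepared in the environment-setup step.
    arguments = ["--cpu_training", str(cpu_training),
                 "--gpu_count", str(gpu_count),
                 "--part_count", str(part_count),
                 "--data_dir", "mortgage_2000-2001",
                 "--end_year", "2001"]
    src = ScriptRunConfig(source_directory="scripts_folder",
                          script="process_data.py",
                          arguments=arguments,
                          run_config=run_config)
    return Experiment(ws, "rapids-mortgage").submit(src)
```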
&nbsp;
#### Submitting Experiments
We are ready to submit experiments: launching the RAPIDS script with different sets of parameters.
&nbsp;
The following couple of steps submit experiments under different conditions.
![](imgs/submission1.png)
&nbsp;
The user can change the variable num\_gpu between one and the number of GPUs supported by the chosen vmSize. The variable part\_count can take any value between 1 and 11, but if it exceeds the maximum for num_gpu, the run will result in an error.
&nbsp;
If the experiment is successfully submitted, it is placed on a queue for processing, its status appears as Queued, and output like the following appears:
![](imgs/queue.png)
&nbsp;
When the experiment starts running, its status appears as Running and the output changes to something like this:
![](imgs/running.png)
&nbsp;
#### Reproducing the performance gains plot results on the Blog Post
When the run has finished successfully, its status appears as Completed and the output changes to something like this:
&nbsp;
![](imgs/completed.png)
This is the output for an experiment run with three partitions and one GPU; notice that the reported processing time is 49.16 seconds, just as depicted in the performance gains plot in the blog post.
&nbsp;
![](imgs/2GPUs.png)
This output corresponds to a run with three partitions and two GPUs; notice that the reported processing time is 37.50 seconds, just as depicted in the performance gains plot in the blog post.
&nbsp;
![](imgs/3GPUs.png)
This output corresponds to an experiment run with three partitions and three GPUs; notice that the reported processing time is 24.40 seconds, just as depicted in the performance gains plot in the blog post.
&nbsp;
![](imgs/4gpus.png)
This output corresponds to an experiment run with three partitions and four GPUs; notice that the reported processing time is 23.33 seconds, just as depicted in the performance gains plot in the blog post.
&nbsp;
![](imgs/CPUBase.png)
This output corresponds to an experiment run with three partitions using only the CPU; notice that the reported processing time is 9 minutes and 1.21 seconds, or 541.21 seconds, just as depicted in the performance gains plot in the blog post.
&nbsp;
![](imgs/OOM.png)
This output corresponds to an experiment run with nine partitions and four GPUs; notice that the notebook throws a warning signaling that the number of partitions exceeds the maximum the system can handle with that many GPUs, and the run ends up failing, hence a status of Failed.
&nbsp;
##### Freeing Resources
In the last step the notebook deletes the compute target. (This step is optional, especially if min_nodes in the cluster is set to 0, in which case the cluster scales down to 0 nodes when there is no usage.)
![](imgs/clusterdelete.png)
&nbsp;
### RAPIDS Script
The Master Notebook runs experiments by launching a RAPIDS script with different sets of parameters. In this section, the RAPIDS script, process_data.py in the material, is analyzed.
The script first imports all the necessary libraries and parses the arguments passed by the Master Notebook.
Then all the internal functions to be used by the script are defined.
&nbsp;
#### Wrapper Auxiliary Functions:
The functions below are wrappers for a configuration module for librmm, the RAPIDS Memory Manager Python interface:
![](imgs/wap1.png)![](imgs/wap2.png)
&nbsp;
A couple of other functions are wrappers for the submission of jobs to the DASK client:
![](imgs/wap3.png)
![](imgs/wap4.png)
&nbsp;
#### Data Loading Functions:
The data is loaded through the use of the following three functions
![](imgs/DLF1.png)![](imgs/DLF2.png)![](imgs/DLF3.png)
All three functions use the library function cudf.read_csv(), the cuDF version of the well-known Pandas counterpart.
&nbsp;
#### Data Transformation and Feature Extraction Functions:
The raw data is transformed and processed to extract features by joining, slicing, grouping, aggregating, factoring, etc, the original dataframes just as is done with Pandas. The following functions in the script are used for that purpose:
![](imgs/fef1.png)![](imgs/fef2.png)![](imgs/fef3.png)![](imgs/fef4.png)![](imgs/fef5.png)
![](imgs/fef6.png)![](imgs/fef7.png)![](imgs/fef8.png)![](imgs/fef9.png)
&nbsp;
#### Main() Function
The previous functions are used in the main() function to accomplish several steps: set up the Dask client, do all the ETL operations, and set up and train an XGBoost model. The function also assigns which data needs to be processed by each Dask worker.
&nbsp;
##### Setting Up DASK client:
The following lines:
![](imgs/daskini.png)
&nbsp;
Initialize and set up a DASK client with a number of workers corresponding to the number of GPUs to be used in the run; an illustrative sketch of this pattern follows below. A successful execution of the setup will result in the following output:
![](imgs/daskoutput.png)
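A hedged sketch of the pattern (not the script's exact code) using dask.distributed:
```
from dask.distributed import Client, LocalCluster

def start_dask_client(gpu_count):
    # One single-threaded worker per GPU to be used in the run.
    cluster = LocalCluster(n_workers=gpu_count, threads_per_worker=1)
    return Client(cluster)
```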
##### All ETL functions are used in single calls to process\_quarter_gpu, one per data partition
![](imgs/ETL.png)
&nbsp;
##### Concatenating the data assigned to each DASK worker
The partitions assigned to each worker are concatenated and set up for training.
![](imgs/Dask2.png)
&nbsp;
##### Setting Training Parameters
The parameters used for the training of a gradient boosted decision tree model are set up in the following code block:
![](imgs/PArameters.png)
Notice how the parameters are modified when using the CPU-only mode.
&nbsp;
##### Launching the training of a gradient boosted decision tree model using XGBoost.
![](imgs/training.png)
The outputs of the script can be observed in the Master Notebook as the script is executed.
![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/contrib/RAPIDS/README.png)

View File

@@ -0,0 +1,559 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# NVIDIA RAPIDS in Azure Machine Learning"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The [RAPIDS](https://www.developer.nvidia.com/rapids) suite of software libraries from NVIDIA enables the execution of end-to-end data science and analytics pipelines entirely on GPUs. In many machine learning projects, a significant portion of the model training time is spent in setting up the data; this stage of the process is known as Extraction, Transformation and Loading, or ETL. By using the DataFrame API for ETL and GPU-capable ML algorithms in RAPIDS, data preparation and training models can be done in GPU-accelerated end-to-end pipelines without incurring serialization costs between the pipeline stages. This notebook demonstrates how to use NVIDIA RAPIDS to prepare data and train model in Azure.\n",
" \n",
"In this notebook, we will do the following:\n",
" \n",
"* Create an Azure Machine Learning Workspace\n",
"* Create an AMLCompute target\n",
"* Use a script to process our data and train a model\n",
"* Obtain the data required to run this sample\n",
"* Create an AML run configuration to launch a machine learning job\n",
"* Run the script to prepare data for training and train the model\n",
" \n",
"Prerequisites:\n",
"* An Azure subscription to create a Machine Learning Workspace\n",
"* Familiarity with the Azure ML SDK (refer to [notebook samples](https://github.com/Azure/MachineLearningNotebooks))\n",
"* A Jupyter notebook environment with Azure Machine Learning SDK installed. Refer to instructions to [setup the environment](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-environment#local)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Verify if Azure ML SDK is installed"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from azureml.core import Workspace, Experiment\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"from azureml.core.compute import AmlCompute, ComputeTarget\n",
"from azureml.data.data_reference import DataReference\n",
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core import ScriptRunConfig\n",
"from azureml.widgets import RunDetails"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create Azure ML Workspace"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following step is optional if you already have a workspace. If you want to use an existing workspace, then\n",
"skip this workspace creation step and move on to the next step to load the workspace.\n",
" \n",
"<font color='red'>Important</font>: in the code cell below, be sure to set the correct values for the subscription_id, \n",
"resource_group, workspace_name, region before executing this code cell."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"subscription_id = os.environ.get(\"SUBSCRIPTION_ID\", \"<subscription_id>\")\n",
"resource_group = os.environ.get(\"RESOURCE_GROUP\", \"<resource_group>\")\n",
"workspace_name = os.environ.get(\"WORKSPACE_NAME\", \"<workspace_name>\")\n",
"workspace_region = os.environ.get(\"WORKSPACE_REGION\", \"<region>\")\n",
"\n",
"ws = Workspace.create(workspace_name, subscription_id=subscription_id, resource_group=resource_group, location=workspace_region)\n",
"\n",
"# write config to a local directory for future use\n",
"ws.write_config()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load existing Workspace"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"# if a locally-saved configuration file for the workspace is not available, use the following to load workspace\n",
"# ws = Workspace(subscription_id=subscription_id, resource_group=resource_group, workspace_name=workspace_name)\n",
"print('Workspace name: ' + ws.name, \n",
" 'Azure region: ' + ws.location, \n",
" 'Subscription id: ' + ws.subscription_id, \n",
" 'Resource group: ' + ws.resource_group, sep = '\\n')\n",
"\n",
"scripts_folder = \"scripts_folder\"\n",
"\n",
"if not os.path.isdir(scripts_folder):\n",
" os.mkdir(scripts_folder)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create AML Compute Target"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Because NVIDIA RAPIDS requires P40 or V100 GPUs, the user needs to specify compute targets from one of [NC_v3](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu#ncv3-series), [NC_v2](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu#ncv2-series), [ND](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu#nd-series) or [ND_v2](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu#ndv2-series-preview) virtual machine types in Azure; these are the families of virtual machines in Azure that are provisioned with these GPUs.\n",
" \n",
"Pick one of the supported VM SKUs based on the number of GPUs you want to use for ETL and training in RAPIDS.\n",
" \n",
"The script in this notebook is implemented for single-machine scenarios. An example supporting multiple nodes will be published later."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"gpu_cluster_name = \"gpucluster\"\n",
"\n",
"if gpu_cluster_name in ws.compute_targets:\n",
" gpu_cluster = ws.compute_targets[gpu_cluster_name]\n",
" if gpu_cluster and type(gpu_cluster) is AmlCompute:\n",
" print('found compute target. just use it. ' + gpu_cluster_name)\n",
"else:\n",
" print(\"creating new cluster\")\n",
" # vm_size parameter below could be modified to one of the RAPIDS-supported VM types\n",
" provisioning_config = AmlCompute.provisioning_configuration(vm_size = \"Standard_NC6s_v2\", min_nodes=1, max_nodes = 1)\n",
"\n",
" # create the cluster\n",
" gpu_cluster = ComputeTarget.create(ws, gpu_cluster_name, provisioning_config)\n",
" gpu_cluster.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Script to process data and train model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The _process&#95;data.py_ script used in the step below is a slightly modified implementation of [RAPIDS E2E example](https://github.com/rapidsai/notebooks/blob/master/mortgage/E2E.ipynb)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# copy process_data.py into the script folder\n",
"import shutil\n",
"shutil.copy('./process_data.py', os.path.join(scripts_folder, 'process_data.py'))\n",
"\n",
"with open(os.path.join(scripts_folder, './process_data.py'), 'r') as process_data_script:\n",
" print(process_data_script.read())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Data required to run this sample"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This sample uses [Fannie Mae's Single-Family Loan Performance Data](http://www.fanniemae.com/portal/funding-the-market/data/loan-performance-data.html). Once you obtain access to the data, you will need to make this data available in an [Azure Machine Learning Datastore](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-access-data), for use in this sample. The following code shows how to do that."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Downloading Data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<font color='red'>Important</font>: Python package progressbar2 is necessary to run the following cell. If it is not available in your environment where this notebook is running, please install it."
]
},
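{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following optional cell is a minimal sketch that installs progressbar2 into the current kernel if it is missing; it assumes pip is available in this environment."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# optional: install progressbar2 if it is not already present (assumes pip is available in this kernel)\n",
"# note: the progressbar2 package provides the 'progressbar' module imported in the next cell\n",
"import importlib.util\n",
"import subprocess\n",
"import sys\n",
"\n",
"if importlib.util.find_spec('progressbar') is None:\n",
"    subprocess.check_call([sys.executable, '-m', 'pip', 'install', 'progressbar2'])"
]
},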
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import tarfile\n",
"import hashlib\n",
"from urllib.request import urlretrieve\n",
"from progressbar import ProgressBar\n",
"\n",
"def validate_downloaded_data(path):\n",
" if(os.path.isdir(path) and os.path.exists(path + '//names.csv')) :\n",
" if(os.path.isdir(path + '//acq' ) and len(os.listdir(path + '//acq')) == 8):\n",
" if(os.path.isdir(path + '//perf' ) and len(os.listdir(path + '//perf')) == 11):\n",
" print(\"Data has been downloaded and decompressed at: {0}\".format(path))\n",
" return True\n",
" print(\"Data has not been downloaded and decompressed\")\n",
" return False\n",
"\n",
"def show_progress(count, block_size, total_size):\n",
" global pbar\n",
" global processed\n",
" \n",
" if count == 0:\n",
" pbar = ProgressBar(maxval=total_size)\n",
" processed = 0\n",
" \n",
" processed += block_size\n",
" processed = min(processed,total_size)\n",
" pbar.update(processed)\n",
"\n",
" \n",
"def download_file(fileroot):\n",
" filename = fileroot + '.tgz'\n",
" if(not os.path.exists(filename) or hashlib.md5(open(filename, 'rb').read()).hexdigest() != '82dd47135053303e9526c2d5c43befd5' ):\n",
" url_format = 'http://rapidsai-data.s3-website.us-east-2.amazonaws.com/notebook-mortgage-data/{0}.tgz'\n",
" url = url_format.format(fileroot)\n",
" print(\"...Downloading file :{0}\".format(filename))\n",
" urlretrieve(url, filename,show_progress)\n",
" pbar.finish()\n",
" print(\"...File :{0} finished downloading\".format(filename))\n",
" else:\n",
" print(\"...File :{0} has been downloaded already\".format(filename))\n",
" return filename\n",
"\n",
"def decompress_file(filename,path):\n",
" tar = tarfile.open(filename)\n",
" print(\"...Getting information from {0} about files to decompress\".format(filename))\n",
" members = tar.getmembers()\n",
" numFiles = len(members)\n",
" so_far = 0\n",
" for member_info in members:\n",
" tar.extract(member_info,path=path)\n",
" show_progress(so_far, 1, numFiles)\n",
" so_far += 1\n",
" pbar.finish()\n",
" print(\"...All {0} files have been decompressed\".format(numFiles))\n",
" tar.close()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"fileroot = 'mortgage_2000-2001'\n",
"path = '.\\\\{0}'.format(fileroot)\n",
"pbar = None\n",
"processed = 0\n",
"\n",
"if(not validate_downloaded_data(path)):\n",
" print(\"Downloading and Decompressing Input Data\")\n",
" filename = download_file(fileroot)\n",
" decompress_file(filename,path)\n",
" print(\"Input Data has been Downloaded and Decompressed\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Uploading Data to Workspace"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ds = ws.get_default_datastore()\n",
"\n",
"# download and uncompress data in a local directory before uploading to data store\n",
"# directory specified in src_dir parameter below should have the acq, perf directories with data and names.csv file\n",
"ds.upload(src_dir=path, target_path=fileroot, overwrite=True, show_progress=True)\n",
"\n",
"# data already uploaded to the datastore\n",
"data_ref = DataReference(data_reference_name='data', datastore=ds, path_on_datastore=fileroot)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create AML run configuration to launch a machine learning job"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"RunConfiguration is used to submit jobs to Azure Machine Learning service. When creating RunConfiguration for a job, users can either \n",
"1. specify a Docker image with prebuilt conda environment and use it without any modifications to run the job, or \n",
"2. specify a Docker image as the base image and conda or pip packages as dependnecies to let AML build a new Docker image with a conda environment containing specified dependencies to use in the job\n",
"\n",
"The second option is the recommended option in AML. \n",
"The following steps have code for both options. You can pick the one that is more appropriate for your requirements. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Specify prebuilt conda environment"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following code shows how to use an existing image from [Docker Hub](https://hub.docker.com/r/rapidsai/rapidsai/) that has a prebuilt conda environment named 'rapids' when creating a RunConfiguration. Note that this conda environment does not include azureml-defaults package that is required for using AML functionality like metrics tracking, model management etc. This package is automatically installed when you use 'Specify package dependencies' option and that is why it is the recommended option to create RunConfiguraiton in AML."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run_config = RunConfiguration()\n",
"run_config.framework = 'python'\n",
"run_config.environment.python.user_managed_dependencies = True\n",
"run_config.environment.python.interpreter_path = '/conda/envs/rapids/bin/python'\n",
"run_config.target = gpu_cluster_name\n",
"run_config.environment.docker.enabled = True\n",
"run_config.environment.docker.gpu_support = True\n",
"run_config.environment.docker.base_image = \"rapidsai/rapidsai:cuda9.2-runtime-ubuntu18.04\"\n",
"# run_config.environment.docker.base_image_registry.address = '<registry_url>' # not required if the base_image is in Docker hub\n",
"# run_config.environment.docker.base_image_registry.username = '<user_name>' # needed only for private images\n",
"# run_config.environment.docker.base_image_registry.password = '<password>' # needed only for private images\n",
"run_config.environment.spark.precache_packages = False\n",
"run_config.data_references={'data':data_ref.to_config()}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Specify package dependencies"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following code shows how to list package dependencies in a conda environment definition file (rapids.yml) when creating a RunConfiguration"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# cd = CondaDependencies(conda_dependencies_file_path='rapids.yml')\n",
"# run_config = RunConfiguration(conda_dependencies=cd)\n",
"# run_config.framework = 'python'\n",
"# run_config.target = gpu_cluster_name\n",
"# run_config.environment.docker.enabled = True\n",
"# run_config.environment.docker.gpu_support = True\n",
"# run_config.environment.docker.base_image = \"<image>\"\n",
"# run_config.environment.docker.base_image_registry.address = '<registry_url>' # not required if the base_image is in Docker hub\n",
"# run_config.environment.docker.base_image_registry.username = '<user_name>' # needed only for private images\n",
"# run_config.environment.docker.base_image_registry.password = '<password>' # needed only for private images\n",
"# run_config.environment.spark.precache_packages = False\n",
"# run_config.data_references={'data':data_ref.to_config()}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Wrapper function to submit Azure Machine Learning experiment"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# parameter cpu_predictor indicates if training should be done on CPU. If set to true, GPUs are used *only* for ETL and *not* for training\n",
"# parameter num_gpu indicates number of GPUs to use among the GPUs available in the VM for ETL and if cpu_predictor is false, for training as well \n",
"def run_rapids_experiment(cpu_training, gpu_count, part_count):\n",
" # any value between 1-4 is allowed here depending the type of VMs available in gpu_cluster\n",
" if gpu_count not in [1, 2, 3, 4]:\n",
" raise Exception('Value specified for the number of GPUs to use {0} is invalid'.format(gpu_count))\n",
"\n",
" # following data partition mapping is empirical (specific to GPUs used and current data partitioning scheme) and may need to be tweaked\n",
" max_gpu_count_data_partition_mapping = {1: 3, 2: 4, 3: 6, 4: 8}\n",
" \n",
" if part_count > max_gpu_count_data_partition_mapping[gpu_count]:\n",
" print(\"Too many partitions for the number of GPUs, exceeding memory threshold\")\n",
" \n",
" if part_count > 11:\n",
" print(\"Warning: Maximum number of partitions available is 11\")\n",
" part_count = 11\n",
" \n",
" end_year = 2000\n",
" \n",
" if part_count > 4:\n",
" end_year = 2001 # use more data with more GPUs\n",
"\n",
" src = ScriptRunConfig(source_directory=scripts_folder, \n",
" script='process_data.py', \n",
" arguments = ['--num_gpu', gpu_count, '--data_dir', str(data_ref),\n",
" '--part_count', part_count, '--end_year', end_year,\n",
" '--cpu_predictor', cpu_training\n",
" ],\n",
" run_config=run_config\n",
" )\n",
"\n",
" exp = Experiment(ws, 'rapidstest')\n",
" run = exp.submit(config=src)\n",
" RunDetails(run).show()\n",
" return run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Submit experiment (ETL & training on GPU)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"cpu_predictor = False\n",
"# the value for num_gpu should be less than or equal to the number of GPUs available in the VM\n",
"num_gpu = 1\n",
"data_part_count = 1\n",
"# train using CPU, use GPU for both ETL and training\n",
"run = run_rapids_experiment(cpu_predictor, num_gpu, data_part_count)"
]
},
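{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, you can block until the submitted run completes and print any metrics it logged. This is a minimal sketch using the standard Run APIs; it assumes the run object returned above is still in scope."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# optional: wait for the run to finish and inspect logged metrics (assumes 'run' from the cell above)\n",
"run.wait_for_completion(show_output=True)\n",
"print(run.get_metrics())"
]
},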
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Submit experiment (ETL on GPU, training on CPU)\n",
"\n",
"To observe performance difference between GPU-accelerated RAPIDS based training with CPU-only training, set 'cpu_predictor' predictor to 'True' and rerun the experiment"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"cpu_predictor = True\n",
"# the value for num_gpu should be less than or equal to the number of GPUs available in the VM\n",
"num_gpu = 1\n",
"data_part_count = 1\n",
"# train using CPU, use GPU for ETL\n",
"run = run_rapids_experiment(cpu_predictor, num_gpu, data_part_count)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Delete cluster"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# delete the cluster\n",
"# gpu_cluster.delete()"
]
}
],
"metadata": {
"authors": [
{
"name": "ksivas"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

Binary image files added (including contrib/RAPIDS/imgs/ETL.png and contrib/RAPIDS/imgs/OOM.png); image previews, dimensions and sizes not shown.

View File

@@ -0,0 +1,495 @@
import numpy as np
import datetime
import dask_xgboost as dxgb_gpu
import dask
import dask_cudf
from dask_cuda import LocalCUDACluster
from dask.delayed import delayed
from dask.distributed import Client, wait
import xgboost as xgb
import cudf
from cudf.dataframe import DataFrame
from collections import OrderedDict
import gc
from glob import glob
import os
import argparse
def initialize_rmm_pool():
from librmm_cffi import librmm_config as rmm_cfg
rmm_cfg.use_pool_allocator = True
#rmm_cfg.initial_pool_size = 2<<30 # set to 2GiB. Default is 1/2 total GPU memory
import cudf
return cudf._gdf.rmm_initialize()
def initialize_rmm_no_pool():
from librmm_cffi import librmm_config as rmm_cfg
rmm_cfg.use_pool_allocator = False
import cudf
return cudf._gdf.rmm_initialize()
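# helper that invokes a (typically dask.delayed) function with keyword arguments and returns the resulting task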
def run_dask_task(func, **kwargs):
task = func(**kwargs)
return task
def process_quarter_gpu(client, col_names_path, acq_data_path, year=2000, quarter=1, perf_file=""):
dask_client = client
ml_arrays = run_dask_task(delayed(run_gpu_workflow),
col_path=col_names_path,
acq_path=acq_data_path,
quarter=quarter,
year=year,
perf_file=perf_file)
return dask_client.compute(ml_arrays,
optimize_graph=False,
fifo_timeout="0ms")
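# work around limited null handling in early cuDF: encode category columns as int32 and fill nulls with -1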
def null_workaround(df, **kwargs):
for column, data_type in df.dtypes.items():
if str(data_type) == "category":
df[column] = df[column].astype('int32').fillna(-1)
if str(data_type) in ['int8', 'int16', 'int32', 'int64', 'float32', 'float64']:
df[column] = df[column].fillna(-1)
return df
def run_gpu_workflow(col_path, acq_path, quarter=1, year=2000, perf_file="", **kwargs):
names = gpu_load_names(col_path=col_path)
acq_gdf = gpu_load_acquisition_csv(acquisition_path= acq_path + "/Acquisition_"
+ str(year) + "Q" + str(quarter) + ".txt")
acq_gdf = acq_gdf.merge(names, how='left', on=['seller_name'])
acq_gdf.drop_column('seller_name')
acq_gdf['seller_name'] = acq_gdf['new']
acq_gdf.drop_column('new')
perf_df_tmp = gpu_load_performance_csv(perf_file)
gdf = perf_df_tmp
everdf = create_ever_features(gdf)
delinq_merge = create_delinq_features(gdf)
everdf = join_ever_delinq_features(everdf, delinq_merge)
del(delinq_merge)
joined_df = create_joined_df(gdf, everdf)
testdf = create_12_mon_features(joined_df)
joined_df = combine_joined_12_mon(joined_df, testdf)
del(testdf)
perf_df = final_performance_delinquency(gdf, joined_df)
del(gdf, joined_df)
final_gdf = join_perf_acq_gdfs(perf_df, acq_gdf)
del(perf_df)
del(acq_gdf)
final_gdf = last_mile_cleaning(final_gdf)
return final_gdf
def gpu_load_performance_csv(performance_path, **kwargs):
""" Loads performance data
Returns
-------
GPU DataFrame
"""
cols = [
"loan_id", "monthly_reporting_period", "servicer", "interest_rate", "current_actual_upb",
"loan_age", "remaining_months_to_legal_maturity", "adj_remaining_months_to_maturity",
"maturity_date", "msa", "current_loan_delinquency_status", "mod_flag", "zero_balance_code",
"zero_balance_effective_date", "last_paid_installment_date", "foreclosed_after",
"disposition_date", "foreclosure_costs", "prop_preservation_and_repair_costs",
"asset_recovery_costs", "misc_holding_expenses", "holding_taxes", "net_sale_proceeds",
"credit_enhancement_proceeds", "repurchase_make_whole_proceeds", "other_foreclosure_proceeds",
"non_interest_bearing_upb", "principal_forgiveness_upb", "repurchase_make_whole_proceeds_flag",
"foreclosure_principal_write_off_amount", "servicing_activity_indicator"
]
dtypes = OrderedDict([
("loan_id", "int64"),
("monthly_reporting_period", "date"),
("servicer", "category"),
("interest_rate", "float64"),
("current_actual_upb", "float64"),
("loan_age", "float64"),
("remaining_months_to_legal_maturity", "float64"),
("adj_remaining_months_to_maturity", "float64"),
("maturity_date", "date"),
("msa", "float64"),
("current_loan_delinquency_status", "int32"),
("mod_flag", "category"),
("zero_balance_code", "category"),
("zero_balance_effective_date", "date"),
("last_paid_installment_date", "date"),
("foreclosed_after", "date"),
("disposition_date", "date"),
("foreclosure_costs", "float64"),
("prop_preservation_and_repair_costs", "float64"),
("asset_recovery_costs", "float64"),
("misc_holding_expenses", "float64"),
("holding_taxes", "float64"),
("net_sale_proceeds", "float64"),
("credit_enhancement_proceeds", "float64"),
("repurchase_make_whole_proceeds", "float64"),
("other_foreclosure_proceeds", "float64"),
("non_interest_bearing_upb", "float64"),
("principal_forgiveness_upb", "float64"),
("repurchase_make_whole_proceeds_flag", "category"),
("foreclosure_principal_write_off_amount", "float64"),
("servicing_activity_indicator", "category")
])
print(performance_path)
return cudf.read_csv(performance_path, names=cols, delimiter='|', dtype=list(dtypes.values()), skiprows=1)
def gpu_load_acquisition_csv(acquisition_path, **kwargs):
""" Loads acquisition data
Returns
-------
GPU DataFrame
"""
cols = [
'loan_id', 'orig_channel', 'seller_name', 'orig_interest_rate', 'orig_upb', 'orig_loan_term',
'orig_date', 'first_pay_date', 'orig_ltv', 'orig_cltv', 'num_borrowers', 'dti', 'borrower_credit_score',
'first_home_buyer', 'loan_purpose', 'property_type', 'num_units', 'occupancy_status', 'property_state',
'zip', 'mortgage_insurance_percent', 'product_type', 'coborrow_credit_score', 'mortgage_insurance_type',
'relocation_mortgage_indicator'
]
dtypes = OrderedDict([
("loan_id", "int64"),
("orig_channel", "category"),
("seller_name", "category"),
("orig_interest_rate", "float64"),
("orig_upb", "int64"),
("orig_loan_term", "int64"),
("orig_date", "date"),
("first_pay_date", "date"),
("orig_ltv", "float64"),
("orig_cltv", "float64"),
("num_borrowers", "float64"),
("dti", "float64"),
("borrower_credit_score", "float64"),
("first_home_buyer", "category"),
("loan_purpose", "category"),
("property_type", "category"),
("num_units", "int64"),
("occupancy_status", "category"),
("property_state", "category"),
("zip", "int64"),
("mortgage_insurance_percent", "float64"),
("product_type", "category"),
("coborrow_credit_score", "float64"),
("mortgage_insurance_type", "float64"),
("relocation_mortgage_indicator", "category")
])
print(acquisition_path)
return cudf.read_csv(acquisition_path, names=cols, delimiter='|', dtype=list(dtypes.values()), skiprows=1)
def gpu_load_names(col_path):
""" Loads names used for renaming the banks
Returns
-------
GPU DataFrame
"""
cols = [
'seller_name', 'new'
]
dtypes = OrderedDict([
("seller_name", "category"),
("new", "category"),
])
return cudf.read_csv(col_path, names=cols, delimiter='|', dtype=list(dtypes.values()), skiprows=1)
def create_ever_features(gdf, **kwargs):
everdf = gdf[['loan_id', 'current_loan_delinquency_status']]
everdf = everdf.groupby('loan_id', method='hash').max()
del(gdf)
everdf['ever_30'] = (everdf['max_current_loan_delinquency_status'] >= 1).astype('int8')
everdf['ever_90'] = (everdf['max_current_loan_delinquency_status'] >= 3).astype('int8')
everdf['ever_180'] = (everdf['max_current_loan_delinquency_status'] >= 6).astype('int8')
everdf.drop_column('max_current_loan_delinquency_status')
return everdf
def create_delinq_features(gdf, **kwargs):
delinq_gdf = gdf[['loan_id', 'monthly_reporting_period', 'current_loan_delinquency_status']]
del(gdf)
delinq_30 = delinq_gdf.query('current_loan_delinquency_status >= 1')[['loan_id', 'monthly_reporting_period']].groupby('loan_id', method='hash').min()
delinq_30['delinquency_30'] = delinq_30['min_monthly_reporting_period']
delinq_30.drop_column('min_monthly_reporting_period')
delinq_90 = delinq_gdf.query('current_loan_delinquency_status >= 3')[['loan_id', 'monthly_reporting_period']].groupby('loan_id', method='hash').min()
delinq_90['delinquency_90'] = delinq_90['min_monthly_reporting_period']
delinq_90.drop_column('min_monthly_reporting_period')
delinq_180 = delinq_gdf.query('current_loan_delinquency_status >= 6')[['loan_id', 'monthly_reporting_period']].groupby('loan_id', method='hash').min()
delinq_180['delinquency_180'] = delinq_180['min_monthly_reporting_period']
delinq_180.drop_column('min_monthly_reporting_period')
del(delinq_gdf)
delinq_merge = delinq_30.merge(delinq_90, how='left', on=['loan_id'], type='hash')
delinq_merge['delinquency_90'] = delinq_merge['delinquency_90'].fillna(np.dtype('datetime64[ms]').type('1970-01-01').astype('datetime64[ms]'))
delinq_merge = delinq_merge.merge(delinq_180, how='left', on=['loan_id'], type='hash')
delinq_merge['delinquency_180'] = delinq_merge['delinquency_180'].fillna(np.dtype('datetime64[ms]').type('1970-01-01').astype('datetime64[ms]'))
del(delinq_30)
del(delinq_90)
del(delinq_180)
return delinq_merge
def join_ever_delinq_features(everdf_tmp, delinq_merge, **kwargs):
everdf = everdf_tmp.merge(delinq_merge, on=['loan_id'], how='left', type='hash')
del(everdf_tmp)
del(delinq_merge)
everdf['delinquency_30'] = everdf['delinquency_30'].fillna(np.dtype('datetime64[ms]').type('1970-01-01').astype('datetime64[ms]'))
everdf['delinquency_90'] = everdf['delinquency_90'].fillna(np.dtype('datetime64[ms]').type('1970-01-01').astype('datetime64[ms]'))
everdf['delinquency_180'] = everdf['delinquency_180'].fillna(np.dtype('datetime64[ms]').type('1970-01-01').astype('datetime64[ms]'))
return everdf
def create_joined_df(gdf, everdf, **kwargs):
test = gdf[['loan_id', 'monthly_reporting_period', 'current_loan_delinquency_status', 'current_actual_upb']]
del(gdf)
test['timestamp'] = test['monthly_reporting_period']
test.drop_column('monthly_reporting_period')
test['timestamp_month'] = test['timestamp'].dt.month
test['timestamp_year'] = test['timestamp'].dt.year
test['delinquency_12'] = test['current_loan_delinquency_status']
test.drop_column('current_loan_delinquency_status')
test['upb_12'] = test['current_actual_upb']
test.drop_column('current_actual_upb')
test['upb_12'] = test['upb_12'].fillna(999999999)
test['delinquency_12'] = test['delinquency_12'].fillna(-1)
joined_df = test.merge(everdf, how='left', on=['loan_id'], type='hash')
del(everdf)
del(test)
joined_df['ever_30'] = joined_df['ever_30'].fillna(-1)
joined_df['ever_90'] = joined_df['ever_90'].fillna(-1)
joined_df['ever_180'] = joined_df['ever_180'].fillna(-1)
joined_df['delinquency_30'] = joined_df['delinquency_30'].fillna(-1)
joined_df['delinquency_90'] = joined_df['delinquency_90'].fillna(-1)
joined_df['delinquency_180'] = joined_df['delinquency_180'].fillna(-1)
joined_df['timestamp_year'] = joined_df['timestamp_year'].astype('int32')
joined_df['timestamp_month'] = joined_df['timestamp_month'].astype('int32')
return joined_df
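# note on create_12_mon_features below: 'josh_months' counts absolute months (timestamp_year * 12 + timestamp_month),
# and 'josh_mody_n' buckets those months into 12-month windows shifted by y (24000 = 12 * 2000)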
def create_12_mon_features(joined_df, **kwargs):
testdfs = []
n_months = 12
for y in range(1, n_months + 1):
tmpdf = joined_df[['loan_id', 'timestamp_year', 'timestamp_month', 'delinquency_12', 'upb_12']]
tmpdf['josh_months'] = tmpdf['timestamp_year'] * 12 + tmpdf['timestamp_month']
tmpdf['josh_mody_n'] = ((tmpdf['josh_months'].astype('float64') - 24000 - y) / 12).floor()
tmpdf = tmpdf.groupby(['loan_id', 'josh_mody_n'], method='hash').agg({'delinquency_12': 'max','upb_12': 'min'})
tmpdf['delinquency_12'] = (tmpdf['max_delinquency_12']>3).astype('int32')
tmpdf['delinquency_12'] +=(tmpdf['min_upb_12']==0).astype('int32')
tmpdf.drop_column('max_delinquency_12')
tmpdf['upb_12'] = tmpdf['min_upb_12']
tmpdf.drop_column('min_upb_12')
tmpdf['timestamp_year'] = (((tmpdf['josh_mody_n'] * n_months) + 24000 + (y - 1)) / 12).floor().astype('int16')
tmpdf['timestamp_month'] = np.int8(y)
tmpdf.drop_column('josh_mody_n')
testdfs.append(tmpdf)
del(tmpdf)
del(joined_df)
return cudf.concat(testdfs)
def combine_joined_12_mon(joined_df, testdf, **kwargs):
joined_df.drop_column('delinquency_12')
joined_df.drop_column('upb_12')
joined_df['timestamp_year'] = joined_df['timestamp_year'].astype('int16')
joined_df['timestamp_month'] = joined_df['timestamp_month'].astype('int8')
return joined_df.merge(testdf, how='left', on=['loan_id', 'timestamp_year', 'timestamp_month'], type='hash')
def final_performance_delinquency(gdf, joined_df, **kwargs):
merged = null_workaround(gdf)
joined_df = null_workaround(joined_df)
merged['timestamp_month'] = merged['monthly_reporting_period'].dt.month
merged['timestamp_month'] = merged['timestamp_month'].astype('int8')
merged['timestamp_year'] = merged['monthly_reporting_period'].dt.year
merged['timestamp_year'] = merged['timestamp_year'].astype('int16')
merged = merged.merge(joined_df, how='left', on=['loan_id', 'timestamp_year', 'timestamp_month'], type='hash')
merged.drop_column('timestamp_year')
merged.drop_column('timestamp_month')
return merged
def join_perf_acq_gdfs(perf, acq, **kwargs):
perf = null_workaround(perf)
acq = null_workaround(acq)
return perf.merge(acq, how='left', on=['loan_id'], type='hash')
def last_mile_cleaning(df, **kwargs):
drop_list = [
'loan_id', 'orig_date', 'first_pay_date', 'seller_name',
'monthly_reporting_period', 'last_paid_installment_date', 'maturity_date', 'ever_30', 'ever_90', 'ever_180',
'delinquency_30', 'delinquency_90', 'delinquency_180', 'upb_12',
'zero_balance_effective_date','foreclosed_after', 'disposition_date','timestamp'
]
for column in drop_list:
df.drop_column(column)
for col, dtype in df.dtypes.iteritems():
if str(dtype)=='category':
df[col] = df[col].cat.codes
df[col] = df[col].astype('float32')
df['delinquency_12'] = df['delinquency_12'] > 0
df['delinquency_12'] = df['delinquency_12'].fillna(False).astype('int32')
for column in df.columns:
df[column] = df[column].fillna(-1)
return df.to_arrow(preserve_index=False)
def main():
#print('XGBOOST_BUILD_DOC is ' + os.environ['XGBOOST_BUILD_DOC'])
parser = argparse.ArgumentParser("rapidssample")
parser.add_argument("--data_dir", type=str, help="location of data")
parser.add_argument("--num_gpu", type=int, help="Number of GPUs to use", default=1)
parser.add_argument("--part_count", type=int, help="Number of data files to train against", default=2)
parser.add_argument("--end_year", type=int, help="Year to end the data load", default=2000)
parser.add_argument("--cpu_predictor", type=str, help="Flag to use CPU for prediction", default='False')
parser.add_argument('-f', type=str, default='') # added for notebook execution scenarios
args = parser.parse_args()
data_dir = args.data_dir
num_gpu = args.num_gpu
part_count = args.part_count
end_year = args.end_year
cpu_predictor = args.cpu_predictor.lower() in ('yes', 'true', 't', 'y', '1')
if cpu_predictor:
        print('Training with CPUs requires num_gpu = 1')
num_gpu = 1
print('data_dir = {0}'.format(data_dir))
print('num_gpu = {0}'.format(num_gpu))
print('part_count = {0}'.format(part_count))
#part_count = part_count + 1 # adding one because the usage below is not inclusive
print('end_year = {0}'.format(end_year))
print('cpu_predictor = {0}'.format(cpu_predictor))
import subprocess
cmd = "hostname --all-ip-addresses"
process = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE)
output, error = process.communicate()
IPADDR = str(output.decode()).split()[0]
cluster = LocalCUDACluster(ip=IPADDR,n_workers=num_gpu)
client = Client(cluster)
client
print(client.ncores())
# to download data for this notebook, visit https://rapidsai.github.io/demos/datasets/mortgage-data and update the following paths accordingly
acq_data_path = "{0}/acq".format(data_dir) #"/rapids/data/mortgage/acq"
perf_data_path = "{0}/perf".format(data_dir) #"/rapids/data/mortgage/perf"
col_names_path = "{0}/names.csv".format(data_dir) # "/rapids/data/mortgage/names.csv"
start_year = 2000
#end_year = 2000 # end_year is inclusive -- converted to parameter
#part_count = 2 # the number of data files to train against -- converted to parameter
client.run(initialize_rmm_pool)
client
print(client.ncores())
# NOTE: The ETL calculates additional features which are then dropped before creating the XGBoost DMatrix.
# This can be optimized to avoid calculating the dropped features.
print("Reading ...")
t1 = datetime.datetime.now()
gpu_dfs = []
gpu_time = 0
quarter = 1
year = start_year
count = 0
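    # walk quarters from start_year through end_year, scheduling one GPU ETL task per performance file, up to part_count files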
while year <= end_year:
for file in glob(os.path.join(perf_data_path + "/Performance_" + str(year) + "Q" + str(quarter) + "*")):
if count < part_count:
gpu_dfs.append(process_quarter_gpu(client, col_names_path, acq_data_path, year=year, quarter=quarter, perf_file=file))
count += 1
print('file: {0}'.format(file))
print('count: {0}'.format(count))
quarter += 1
if quarter == 5:
year += 1
quarter = 1
wait(gpu_dfs)
t2 = datetime.datetime.now()
print("Reading time ...")
print(t2-t1)
print('len(gpu_dfs) is {0}'.format(len(gpu_dfs)))
client.run(cudf._gdf.rmm_finalize)
client.run(initialize_rmm_no_pool)
client
print(client.ncores())
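    # XGBoost / dask-xgboost training parameters; 'nround' is read back below as num_boost_round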
dxgb_gpu_params = {
'nround': 100,
'max_depth': 8,
'max_leaves': 2**8,
'alpha': 0.9,
'eta': 0.1,
'gamma': 0.1,
'learning_rate': 0.1,
'subsample': 1,
'reg_lambda': 1,
'scale_pos_weight': 2,
'min_child_weight': 30,
'tree_method': 'gpu_hist',
'n_gpus': 1,
'distributed_dask': True,
'loss': 'ls',
'objective': 'gpu:reg:linear',
'max_features': 'auto',
'criterion': 'friedman_mse',
'grow_policy': 'lossguide',
'verbose': True
}
if cpu_predictor:
print('Training using CPUs')
dxgb_gpu_params['predictor'] = 'cpu_predictor'
dxgb_gpu_params['tree_method'] = 'hist'
dxgb_gpu_params['objective'] = 'reg:linear'
else:
print('Training using GPUs')
print('Training parameters are {0}'.format(dxgb_gpu_params))
gpu_dfs = [delayed(DataFrame.from_arrow)(gpu_df) for gpu_df in gpu_dfs[:part_count]]
gpu_dfs = [gpu_df for gpu_df in gpu_dfs]
wait(gpu_dfs)
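    # group the delayed frames by the dask worker that currently holds them, so each worker concatenates only its local partitions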
tmp_map = [(gpu_df, list(client.who_has(gpu_df).values())[0]) for gpu_df in gpu_dfs]
new_map = {}
for key, value in tmp_map:
if value not in new_map:
new_map[value] = [key]
else:
new_map[value].append(key)
del(tmp_map)
gpu_dfs = []
for list_delayed in new_map.values():
gpu_dfs.append(delayed(cudf.concat)(list_delayed))
del(new_map)
gpu_dfs = [(gpu_df[['delinquency_12']], gpu_df[delayed(list)(gpu_df.columns.difference(['delinquency_12']))]) for gpu_df in gpu_dfs]
gpu_dfs = [(gpu_df[0].persist(), gpu_df[1].persist()) for gpu_df in gpu_dfs]
gpu_dfs = [dask.delayed(xgb.DMatrix)(gpu_df[1], gpu_df[0]) for gpu_df in gpu_dfs]
gpu_dfs = [gpu_df.persist() for gpu_df in gpu_dfs]
gc.collect()
wait(gpu_dfs)
labels = None
t1 = datetime.datetime.now()
bst = dxgb_gpu.train(client, dxgb_gpu_params, gpu_dfs, labels, num_boost_round=dxgb_gpu_params['nround'])
t2 = datetime.datetime.now()
print("Training time ...")
print(t2-t1)
print('str(bst) is {0}'.format(str(bst)))
print('Exiting script')
if __name__ == '__main__':
main()

35
contrib/RAPIDS/rapids.yml Normal file
View File

@@ -0,0 +1,35 @@
name: rapids
channels:
- nvidia
- numba
- conda-forge
- rapidsai
- defaults
- pytorch
dependencies:
- arrow-cpp=0.12.0
- bokeh
- cffi=1.11.5
- cmake=3.12
- cuda92
- cython==0.29
- dask=1.1.1
- distributed=1.25.3
- faiss-gpu=1.5.0
- numba=0.42
- numpy=1.15.4
- nvstrings
- pandas=0.23.4
- pyarrow=0.12.0
- scikit-learn
- scipy
- cudf
- cuml
- python=3.6.2
- jupyterlab
- pip:
- file:/rapids/xgboost/python-package/dist/xgboost-0.81-py3-none-any.whl
- git+https://github.com/rapidsai/dask-xgboost@dask-cudf
- git+https://github.com/rapidsai/dask-cudf@master
- git+https://github.com/rapidsai/dask-cuda@master

View File

@@ -1 +0,0 @@
{"cells":[{"cell_type":"markdown","source":["Azure ML & Azure Databricks notebooks by Parashar Shah.\n\nCopyright (c) Microsoft Corporation. All rights reserved.\n\nLicensed under the MIT License."],"metadata":{}},{"cell_type":"markdown","source":["Please ensure you have run this notebook before proceeding."],"metadata":{}},{"cell_type":"markdown","source":["Now we support installing AML SDK as library from GUI. When attaching a library follow this https://docs.databricks.com/user-guide/libraries.html and add the below string as your PyPi package (during private preview). You can select the option to attach the library to all clusters or just one cluster.\n\nProvide this full string to install the SDK:\n\nazureml-sdk[databricks]"],"metadata":{}},{"cell_type":"code","source":["import azureml.core\n\n# Check core SDK version number - based on build number of preview/master.\nprint(\"SDK version:\", azureml.core.VERSION)"],"metadata":{},"outputs":[],"execution_count":4},{"cell_type":"code","source":["subscription_id = \"<your-subscription-id>\"\nresource_group = \"<your-existing-resource-group>\"\nworkspace_name = \"<a-new-or-existing-workspace; it is unrelated to Databricks workspace>\"\nworkspace_region = \"<your-resource group-region>\""],"metadata":{},"outputs":[],"execution_count":5},{"cell_type":"code","source":["# import the Workspace class and check the azureml SDK version\n# exist_ok checks if workspace exists or not.\n\nfrom azureml.core import Workspace\n\nws = Workspace.create(name = workspace_name,\n subscription_id = subscription_id,\n resource_group = resource_group, \n location = workspace_region,\n exist_ok=True)\n\nws.get_details()"],"metadata":{},"outputs":[],"execution_count":6},{"cell_type":"code","source":["ws = Workspace(workspace_name = workspace_name,\n subscription_id = subscription_id,\n resource_group = resource_group)\n\n# persist the subscription id, resource group name, and workspace name in aml_config/config.json.\nws.write_config()"],"metadata":{},"outputs":[],"execution_count":7},{"cell_type":"code","source":["%sh\ncat /databricks/driver/aml_config/config.json"],"metadata":{},"outputs":[],"execution_count":8},{"cell_type":"code","source":["# import the Workspace class and check the azureml SDK version\nfrom azureml.core import Workspace\n\nws = Workspace.from_config()\nprint('Workspace name: ' + ws.name, \n 'Azure region: ' + ws.location, \n 'Subscription id: ' + ws.subscription_id, \n 'Resource group: ' + ws.resource_group, sep = '\\n')"],"metadata":{},"outputs":[],"execution_count":9},{"cell_type":"code","source":["dbutils.notebook.exit(\"success\")"],"metadata":{},"outputs":[],"execution_count":10},{"cell_type":"code","source":[""],"metadata":{},"outputs":[],"execution_count":11}],"metadata":{"name":"01.Installation_and_Configuration","notebookId":3874566296719377},"nbformat":4,"nbformat_minor":0}

View File

@@ -1 +0,0 @@
{"cells":[{"cell_type":"markdown","source":["Azure ML & Azure Databricks notebooks by Parashar Shah.\n\nCopyright (c) Microsoft Corporation. All rights reserved.\n\nLicensed under the MIT License."],"metadata":{}},{"cell_type":"markdown","source":["Please ensure you have run all previous notebooks in sequence before running this."],"metadata":{}},{"cell_type":"markdown","source":["#Data Ingestion"],"metadata":{}},{"cell_type":"code","source":["import os\nimport urllib"],"metadata":{},"outputs":[],"execution_count":4},{"cell_type":"code","source":["# Download AdultCensusIncome.csv from Azure CDN. This file has 32,561 rows.\nbasedataurl = \"https://amldockerdatasets.azureedge.net\"\ndatafile = \"AdultCensusIncome.csv\"\ndatafile_dbfs = os.path.join(\"/dbfs\", datafile)\n\nif os.path.isfile(datafile_dbfs):\n print(\"found {} at {}\".format(datafile, datafile_dbfs))\nelse:\n print(\"downloading {} to {}\".format(datafile, datafile_dbfs))\n urllib.request.urlretrieve(os.path.join(basedataurl, datafile), datafile_dbfs)"],"metadata":{},"outputs":[],"execution_count":5},{"cell_type":"code","source":["# Create a Spark dataframe out of the csv file.\ndata_all = sqlContext.read.format('csv').options(header='true', inferSchema='true', ignoreLeadingWhiteSpace='true', ignoreTrailingWhiteSpace='true').load(datafile)\nprint(\"({}, {})\".format(data_all.count(), len(data_all.columns)))\ndata_all.printSchema()"],"metadata":{},"outputs":[],"execution_count":6},{"cell_type":"code","source":["#renaming columns\ncolumns_new = [col.replace(\"-\", \"_\") for col in data_all.columns]\ndata_all = data_all.toDF(*columns_new)\ndata_all.printSchema()"],"metadata":{},"outputs":[],"execution_count":7},{"cell_type":"code","source":["display(data_all.limit(5))"],"metadata":{},"outputs":[],"execution_count":8},{"cell_type":"markdown","source":["#Data Preparation"],"metadata":{}},{"cell_type":"code","source":["# Choose feature columns and the label column.\nlabel = \"income\"\nxvals_all = set(data_all.columns) - {label}\n\n#dbutils.widgets.remove(\"xvars_multiselect\")\ndbutils.widgets.removeAll()\n\ndbutils.widgets.multiselect('xvars_multiselect', 'hours_per_week', xvals_all)\nxvars_multiselect = dbutils.widgets.get(\"xvars_multiselect\")\nxvars = xvars_multiselect.split(\",\")\n\nprint(\"label = {}\".format(label))\nprint(\"features = {}\".format(xvars))\n\ndata = data_all.select([*xvars, label])\n\n# Split data into train and test.\ntrain, test = data.randomSplit([0.75, 0.25], seed=123)\n\nprint(\"train ({}, {})\".format(train.count(), len(train.columns)))\nprint(\"test ({}, {})\".format(test.count(), len(test.columns)))"],"metadata":{},"outputs":[],"execution_count":10},{"cell_type":"markdown","source":["#Data Persistence"],"metadata":{}},{"cell_type":"code","source":["# Write the train and test data sets to intermediate storage\ntrain_data_path = \"AdultCensusIncomeTrain\"\ntest_data_path = \"AdultCensusIncomeTest\"\n\ntrain_data_path_dbfs = os.path.join(\"/dbfs\", \"AdultCensusIncomeTrain\")\ntest_data_path_dbfs = os.path.join(\"/dbfs\", \"AdultCensusIncomeTest\")\n\ntrain.write.mode('overwrite').parquet(train_data_path)\ntest.write.mode('overwrite').parquet(test_data_path)\nprint(\"train and test datasets saved to {} and {}\".format(train_data_path_dbfs, 
test_data_path_dbfs))"],"metadata":{},"outputs":[],"execution_count":12},{"cell_type":"code","source":["dbutils.notebook.exit(\"success\")"],"metadata":{},"outputs":[],"execution_count":13},{"cell_type":"code","source":[""],"metadata":{},"outputs":[],"execution_count":14}],"metadata":{"name":"02.Ingest_data","notebookId":3874566296719393},"nbformat":4,"nbformat_minor":0}

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -1 +0,0 @@
{"cells":[{"cell_type":"markdown","source":["Azure ML & Azure Databricks notebooks by Parashar Shah.\n\nCopyright (c) Microsoft Corporation. All rights reserved.\n\nLicensed under the MIT License."],"metadata":{}},{"cell_type":"markdown","source":["Please ensure you have run all previous notebooks in sequence before running this. This notebook uses image from ACI notebook for deploying to AKS."],"metadata":{}},{"cell_type":"code","source":["from azureml.core import Workspace\nimport azureml.core\n\n# Check core SDK version number\nprint(\"SDK version:\", azureml.core.VERSION)\n\n#'''\nws = Workspace.from_config()\nprint('Workspace name: ' + ws.name, \n 'Azure region: ' + ws.location, \n 'Subscription id: ' + ws.subscription_id, \n 'Resource group: ' + ws.resource_group, sep = '\\n')\n#'''"],"metadata":{},"outputs":[],"execution_count":3},{"cell_type":"code","source":["# List images by ws\n\nfrom azureml.core.image import ContainerImage\nfor i in ContainerImage.list(workspace = ws):\n print('{}(v.{} [{}]) stored at {} with build log {}'.format(i.name, i.version, i.creation_state, i.image_location, i.image_build_log_uri))"],"metadata":{},"outputs":[],"execution_count":4},{"cell_type":"code","source":["from azureml.core.image import Image\nmyimage = Image(workspace=ws, id=\"aciws:25\")"],"metadata":{},"outputs":[],"execution_count":5},{"cell_type":"code","source":["#create AKS compute\n#it may take 20-25 minutes to create a new cluster\n\nfrom azureml.core.compute import AksCompute, ComputeTarget\n\n# Use the default configuration (can also provide parameters to customize)\nprov_config = AksCompute.provisioning_configuration()\n\naks_name = 'ps-aks-clus2' \n\n# Create the cluster\naks_target = ComputeTarget.create(workspace = ws, \n name = aks_name, \n provisioning_configuration = prov_config)\n\naks_target.wait_for_completion(show_output = True)\n\nprint(aks_target.provisioning_state)\nprint(aks_target.provisioning_errors)"],"metadata":{},"outputs":[],"execution_count":6},{"cell_type":"code","source":["from azureml.core.webservice import Webservice\nhelp( Webservice.deploy_from_image)"],"metadata":{},"outputs":[],"execution_count":7},{"cell_type":"code","source":["from azureml.core.webservice import Webservice, AksWebservice\nfrom azureml.core.image import ContainerImage\n\n#Set the web service configuration (using default here)\naks_config = AksWebservice.deploy_configuration()\n\n#unique service name\nservice_name ='ps-aks-service'\n\n# Webservice creation using single command, there is a variant to use image directly as well.\naks_service = Webservice.deploy_from_image(\n workspace=ws, \n name=service_name,\n deployment_config = aks_config,\n image = myimage,\n deployment_target = aks_target\n )\n\naks_service.wait_for_deployment(show_output=True)"],"metadata":{},"outputs":[],"execution_count":8},{"cell_type":"code","source":["#for using the Web HTTP API \nprint(aks_service.scoring_uri)\nprint(aks_service.get_keys())"],"metadata":{},"outputs":[],"execution_count":9},{"cell_type":"code","source":["import json\n\n#get the some sample data\ntest_data_path = \"AdultCensusIncomeTest\"\ntest = spark.read.parquet(test_data_path).limit(5)\n\ntest_json = json.dumps(test.toJSON().collect())\n\nprint(test_json)"],"metadata":{},"outputs":[],"execution_count":10},{"cell_type":"code","source":["#using data defined above predict if income is >50K (1) or <=50K (0)\naks_service.run(input_data=test_json)"],"metadata":{},"outputs":[],"execution_count":11},{"cell_type":"code","source":["#comment to not 
delete the web service\naks_service.delete()\n#image.delete()\n#model.delete()\n#aks_target.delete()"],"metadata":{},"outputs":[],"execution_count":12},{"cell_type":"code","source":[""],"metadata":{},"outputs":[],"execution_count":13}],"metadata":{"name":"04.DeploytoACI","notebookId":3874566296719318},"nbformat":4,"nbformat_minor":0}

View File

@@ -1,29 +0,0 @@
# Azure Databricks - Azure Machine Learning SDK Sample Notebooks
**NOTE**: With the latest version of Azure Machine Learning SDK, there are some API changes due to which previous version of notebooks will not work.
Please remove the previous SDK version and install the latest SDK by installing **azureml-sdk[databricks]** as a PyPi library in Azure Databricks workspace.
**NOTE**: Please create your Azure Databricks cluster as v4.x (high concurrency preferred) with **Python 3** (dropdown).
**NOTE**: Some packages like psutil upgrade libs that can cause a conflict; please install such packages by freezing the lib version. E.g. "psutil **cryptography==1.5 pyopenssl==16.0.0 ipython=2.2.0**" to avoid install errors. This issue is related to Databricks and not to the AML SDK.
**NOTE**: You should have at least contributor access to your Azure subscription to run some of the notebooks.
The iPython Notebooks have to be run sequentially after making changes based on your subscription. The corresponding DBC archive contains all the notebooks and can be imported into your Databricks workspace. You can then run the notebooks after importing the .dbc instead of downloading them individually.
This set of notebooks is related to an income prediction experiment based on this [dataset](https://archive.ics.uci.edu/ml/datasets/adult) and demonstrates how to prep data, train and operationalize a Spark ML model with the Azure ML Python SDK from within Azure Databricks. For details on SDK concepts, please refer to [notebooks](https://github.com/Azure/MachineLearningNotebooks)
(Recommended) [Azure Databricks AML SDK notebooks](Databricks_AMLSDK_github.dbc) A single DBC package to import all notebooks in your Azure Databricks workspace.
01. [Installation and Configuration](01.Installation_and_Configuration.ipynb): Install the Azure ML Python SDK and Initialize an Azure ML Workspace and save the Workspace configuration file.
02. [Ingest data](02.Ingest_data.ipynb): Download the Adult Census Income dataset and split it into train and test sets.
03. [Build model](03a.Build_model.ipynb): Train a binary classification model in Azure Databricks with a Spark ML Pipeline.
04. [Build model with Run History](03b.Build_model_runHistory.ipynb): Train model and also capture run history (tracking) with Azure ML Python SDK.
05. [Deploy to ACI](04.Deploy_to_ACI.ipynb): Deploy model to Azure Container Instance (ACI) with Azure ML Python SDK.
06. [Deploy to AKS](04.Deploy_to_AKS_existingImage.ipynb): Deploy model to Azure Kubernetes Service (AKS) with Azure ML Python SDK from an existing Image with model, conda and score file.
Copyright (c) Microsoft Corporation. All rights reserved.
All notebooks in this folder are licensed under the MIT License.
Apache®, Apache Spark, and Spark® are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries.

View File

@@ -0,0 +1,20 @@
## Examples to get started with Azure Machine Learning service
Learn how to use Azure Machine Learning services for experimentation and model management.
As a pre-requisite, run the [configuration](../configuration.ipynb) notebook first to set up your Azure ML Workspace. Then, run the notebooks in the following recommended order.
* [train-within-notebook](./training/train-within-notebook): Train a model while tracking run history, and learn how to deploy the model as a web service to Azure Container Instance.
* [train-on-local](./training/train-on-local): Learn how to submit a run to local computer and use Azure ML managed run configuration.
* [train-on-amlcompute](./training/train-on-amlcompute): Use a 1-n node Azure ML managed compute cluster for remote runs on Azure CPU or GPU infrastructure.
* [train-on-remote-vm](./training/train-on-remote-vm): Use Data Science Virtual Machine as a target for remote runs.
* [logging-api](./training/logging-api): Learn about the details of logging metrics to run history.
* [register-model-create-image-deploy-service](./deployment/register-model-create-image-deploy-service): Learn about the details of model management.
* [production-deploy-to-aks](./deployment/production-deploy-to-aks): Deploy a model to production at scale on Azure Kubernetes Service.
* [enable-data-collection-for-models-in-aks](./deployment/enable-data-collection-for-models-in-aks): Learn about data collection APIs for deployed model.
* [enable-app-insights-in-production-service](./deployment/enable-app-insights-in-production-service): Learn how to use App Insights with production web service.
Find quickstarts, end-to-end tutorials, and how-tos on the [official documentation site for Azure Machine Learning service](https://docs.microsoft.com/en-us/azure/machine-learning/service/).
![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/README.png)

View File

@@ -0,0 +1,289 @@
# Table of Contents
1. [Automated ML Introduction](#introduction)
1. [Setup using Azure Notebooks](#jupyter)
1. [Setup using Azure Databricks](#databricks)
1. [Setup using a Local Conda environment](#localconda)
1. [Automated ML SDK Sample Notebooks](#samples)
1. [Documentation](#documentation)
1. [Running using python command](#pythoncommand)
1. [Troubleshooting](#troubleshooting)
<a name="introduction"></a>
# Automated ML introduction
Automated machine learning (automated ML) builds high-quality machine learning models for you by automating model and hyperparameter selection. Bring a labelled dataset that you want to build a model for, and automated ML will give you a high-quality machine learning model that you can use for predictions.
If you are new to Data Science, automated ML will help you get jumpstarted by simplifying machine learning model building. It abstracts away model selection and hyperparameter selection and, in one step, creates a high-quality trained model for you to use.
If you are an experienced data scientist, automated ML will help increase your productivity by intelligently performing model and hyperparameter selection for your training, and it generates high-quality models much more quickly than manually specifying several combinations of parameters and running training jobs. Automated ML provides visibility and access to all the training jobs and the performance characteristics of the models to help you further tune the pipeline if you desire.
Below are the three execution environments supported by automated ML.
<a name="jupyter"></a>
## Setup using Azure Notebooks - Jupyter based notebooks in the Azure cloud
1. [![Azure Notebooks](https://notebooks.azure.com/launch.png)](https://aka.ms/aml-clone-azure-notebooks)
[Import sample notebooks](https://aka.ms/aml-clone-azure-notebooks) into Azure Notebooks.
1. Follow the instructions in the [configuration](../../configuration.ipynb) notebook to create and connect to a workspace.
1. Open one of the sample notebooks.
<a name="databricks"></a>
## Setup using Azure Databricks
**NOTE**: Please create your Azure Databricks cluster as v4.x (high concurrency preferred) with **Python 3** (dropdown).
**NOTE**: You should have at least contributor access to your Azure subscription to run the notebook.
- Please remove any previous SDK version and install the latest SDK by installing **azureml-sdk[automl_databricks]** as a PyPi library in the Azure Databricks workspace.
- You can find detailed Readme instructions at [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/azure-databricks).
- Download the sample notebook automl-databricks-local-01.ipynb from [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/azure-databricks) and import it into the Azure Databricks workspace.
- Attach the notebook to the cluster.
<a name="localconda"></a>
## Setup using a Local Conda environment
To run these notebooks on your own notebook server, use the installation instructions below.
They will install everything you need and then start a Jupyter notebook.
### 1. Install mini-conda from [here](https://conda.io/miniconda.html), choose 64-bit Python 3.7 or higher.
- **Note**: if you already have conda installed, you can keep using it but it should be version 4.4.10 or later (as shown by: conda -V). If you have a previous version installed, you can update it using the command: conda update conda.
There's no need to install mini-conda specifically.
### 2. Downloading the sample notebooks
- Download the sample notebooks from [GitHub](https://github.com/Azure/MachineLearningNotebooks) as zip and extract the contents to a local directory. The automated ML sample notebooks are in the "automated-machine-learning" folder.
### 3. Setup a new conda environment
The **automl_setup** script creates a new conda environment, installs the necessary packages, configures the widget and starts a Jupyter notebook. It takes the conda environment name as an optional parameter; the default conda environment name is azure_automl. The exact command depends on the operating system; see the specific sections below for Windows, Mac and Linux. It can take about 10 minutes to execute.
Packages installed by the **automl_setup** script:
<ul><li>python</li><li>nb_conda</li><li>matplotlib</li><li>numpy</li><li>cython</li><li>urllib3</li><li>scipy</li><li>scikit-learn</li><li>pandas</li><li>tensorflow</li><li>py-xgboost</li><li>azureml-sdk</li><li>azureml-widgets</li><li>pandas-ml</li></ul>
For more details refer to the [automl_env.yml](./automl_env.yml)
## Windows
Start an **Anaconda Prompt** window, cd to the **how-to-use-azureml/automated-machine-learning** folder where the sample notebooks were extracted and then run:
```
automl_setup
```
## Mac
Install "Command line developer tools" if it is not already installed (you can use the command: `xcode-select --install`).
Start a Terminal window, cd to the **how-to-use-azureml/automated-machine-learning** folder where the sample notebooks were extracted and then run:
```
bash automl_setup_mac.sh
```
## Linux
cd to the **how-to-use-azureml/automated-machine-learning** folder where the sample notebooks were extracted and then run:
```
bash automl_setup_linux.sh
```
### 4. Running configuration.ipynb
- Before running any samples, you first need to run the configuration notebook. Click on the [configuration](../../configuration.ipynb) notebook.
- Execute the cells in the notebook to register the Machine Learning Services resource provider and create a workspace (instructions are in the notebook).
### 5. Running Samples
- Please make sure you use the Python [conda env:azure_automl] kernel when running the sample notebooks.
- Follow the instructions in the individual notebooks to explore various features in automated ML.
### 6. Starting jupyter notebook manually
To start your Jupyter notebook manually, use:
```
conda activate azure_automl
jupyter notebook
```
or on Mac or Linux:
```
source activate azure_automl
jupyter notebook
```
<a name="samples"></a>
# Automated ML SDK Sample Notebooks
- [auto-ml-classification.ipynb](classification/auto-ml-classification.ipynb)
  - Dataset: scikit-learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
  - Simple example of using automated ML for classification
  - Uses local compute for training
- [auto-ml-regression.ipynb](regression/auto-ml-regression.ipynb)
  - Dataset: scikit-learn's [diabetes dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_diabetes.html)
  - Simple example of using automated ML for regression
  - Uses local compute for training
- [auto-ml-remote-execution.ipynb](remote-execution/auto-ml-remote-execution.ipynb)
  - Dataset: scikit-learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
  - Example of using automated ML for classification using a remote Linux DSVM for training
  - Parallel execution of iterations
  - Async tracking of progress
  - Cancelling individual iterations or the entire run
  - Retrieving models for any iteration or logged metric
  - Specify automated ML settings as kwargs
- [auto-ml-remote-amlcompute.ipynb](remote-batchai/auto-ml-remote-amlcompute.ipynb)
  - Dataset: scikit-learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
  - Example of using automated ML for classification using remote AmlCompute for training
  - Parallel execution of iterations
  - Async tracking of progress
  - Cancelling individual iterations or the entire run
  - Retrieving models for any iteration or logged metric
  - Specify automated ML settings as kwargs
- [auto-ml-remote-attach.ipynb](remote-attach/auto-ml-remote-attach.ipynb)
  - Dataset: scikit-learn's [20newsgroup](http://scikit-learn.org/stable/datasets/twenty_newsgroups.html)
  - Handling text data with the preprocess flag
  - Reading data from a blob store for remote executions
  - Using pandas dataframes for reading data
- [auto-ml-missing-data-blacklist-early-termination.ipynb](missing-data-blacklist-early-termination/auto-ml-missing-data-blacklist-early-termination.ipynb)
  - Dataset: scikit-learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
  - Blacklist certain pipelines
  - Specify a target metric to indicate stopping criteria
  - Handling missing data in the input
- [auto-ml-sparse-data-train-test-split.ipynb](sparse-data-train-test-split/auto-ml-sparse-data-train-test-split.ipynb)
  - Dataset: scikit-learn's [20newsgroup](http://scikit-learn.org/stable/datasets/twenty_newsgroups.html)
  - Handle sparse datasets
  - Specify custom train and validation sets
- [auto-ml-exploring-previous-runs.ipynb](exploring-previous-runs/auto-ml-exploring-previous-runs.ipynb)
  - List all projects for the workspace
  - List all automated ML Runs for a given project
  - Get details for an automated ML Run (automated ML settings, run widget & all metrics)
  - Download the fitted pipeline for any iteration
- [auto-ml-remote-execution-with-datastore.ipynb](remote-execution-with-datastore/auto-ml-remote-execution-with-datastore.ipynb)
  - Dataset: scikit-learn's [20newsgroup](http://scikit-learn.org/stable/datasets/twenty_newsgroups.html)
  - Download the data and store it in DataStore
- [auto-ml-classification-with-deployment.ipynb](classification-with-deployment/auto-ml-classification-with-deployment.ipynb)
  - Dataset: scikit-learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
  - Simple example of using automated ML for classification
  - Registering the model
  - Creating an image and an ACI service
  - Testing the ACI service
- [auto-ml-sample-weight.ipynb](sample-weight/auto-ml-sample-weight.ipynb)
  - How to specify sample_weight
  - The difference it makes to test results
- [auto-ml-subsampling-local.ipynb](subsampling/auto-ml-subsampling-local.ipynb)
  - How to enable subsampling
- [auto-ml-dataprep.ipynb](dataprep/auto-ml-dataprep.ipynb)
  - Using DataPrep for reading data
- [auto-ml-dataprep-remote-execution.ipynb](dataprep-remote-execution/auto-ml-dataprep-remote-execution.ipynb)
  - Using DataPrep for reading data with remote execution
- [auto-ml-classification-with-whitelisting.ipynb](classification-with-whitelisting/auto-ml-classification-with-whitelisting.ipynb)
  - Dataset: scikit-learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
  - Simple example of using automated ML for classification, whitelisting TensorFlow models
  - Uses local compute for training
- [auto-ml-forecasting-energy-demand.ipynb](forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb)
  - Dataset: [NYC energy demand data](forecasting-a/nyc_energy.csv)
  - Example of using automated ML for training a forecasting model
- [auto-ml-forecasting-orange-juice-sales.ipynb](forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.ipynb)
  - Dataset: [Dominick's grocery sales of orange juice](forecasting-b/dominicks_OJ.csv)
  - Example of training an automated ML forecasting model on multiple time-series
- [auto-ml-classification-with-onnx.ipynb](classification-with-onnx/auto-ml-classification-with-onnx.ipynb)
  - Dataset: scikit-learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
  - Simple example of using automated ML for classification with ONNX models
  - Uses local compute for training
<a name="documentation"></a>
See [Configure automated machine learning experiments](https://docs.microsoft.com/azure/machine-learning/service/how-to-configure-auto-train) to learn more about the settings and features available for automated machine learning experiments.
<a name="pythoncommand"></a>
# Running using python command
Jupyter notebook provides a File / Download as / Python (.py) option for saving the notebook as a Python file.
You can then run this file using the python command.
However, on Windows the file needs to be modified before it can be run.
The following condition must be added to the main code in the file: `if __name__ == "__main__":`.
The main code of the file must be indented so that it is under this condition, as in the sketch below.
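A minimal sketch of the modified file (the body under the guard stands in for whatever top-level code the exported notebook contains):
```
# Sketch: on Windows, indent the exported notebook code under the __main__ guard.
import azureml.core

if __name__ == "__main__":
    # The notebook's top-level code goes here, indented under the condition.
    print(azureml.core.VERSION)
```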
<a name="troubleshooting"></a>
# Troubleshooting
## automl_setup fails
1. On Windows, make sure that you are running automl_setup from an Anaconda Prompt window rather than a regular cmd window. You can launch the "Anaconda Prompt" window by hitting the Start button and typing "Anaconda Prompt". If you don't see the application "Anaconda Prompt", you might not have conda or Miniconda installed. In that case, you can install it [here](https://conda.io/miniconda.html).
2. Check that you have conda 64-bit installed rather than 32-bit. You can check this with the command `conda info`. The `platform` should be `win-64` for Windows or `osx-64` for Mac.
3. Check that you have conda 4.4.10 or later. You can check the version with the command `conda -V`. If you have a previous version installed, you can update it using the command: `conda update conda`.
4. On Linux, if the error is `gcc: error trying to exec 'cc1plus': execvp: No such file or directory`, install build essentials using the command `sudo apt-get install build-essential`.
5. Pass a new name as the first parameter to automl_setup so that it creates a new conda environment. You can view existing conda environments using `conda env list` and remove them with `conda env remove -n <environmentname>`.
## automl_setup_linux.sh fails
If automl_setup_linux.sh fails on Ubuntu Linux with the error: `unable to execute 'gcc': No such file or directory`
1. Make sure that outbound ports 53 and 80 are enabled. On an Azure VM, you can do this from the Azure Portal by selecting the VM and clicking on Networking.
2. Run the command: `sudo apt-get update`
3. Run the command: `sudo apt-get install build-essential --fix-missing`
4. Run `automl_setup_linux.sh` again.
## configuration.ipynb fails
1) For local conda, make sure that you have successfully run automl_setup first.
2) Check that the subscription_id is correct. You can find the subscription_id in the Azure Portal by selecting All Services and then Subscriptions. The characters "<" and ">" should not be included in the subscription_id value. For example, `subscription_id = "12345678-90ab-1234-5678-1234567890abcd"` has a valid format.
3) Check that you have Contributor or Owner access to the Subscription.
4) Check that the region is one of the supported regions: `eastus2`, `eastus`, `westcentralus`, `southeastasia`, `westeurope`, `australiaeast`, `westus2`, `southcentralus`
5) Check that you have access to the region using the Azure Portal.
## workspace.from_config fails
If the call `ws = Workspace.from_config()` fails:
1) Make sure that you have run the `configuration.ipynb` notebook successfully.
2) If you are running a notebook from a folder that is not under the folder where you ran `configuration.ipynb`, copy the folder aml_config and the file config.json that it contains to the new folder. Workspace.from_config reads the config.json from the notebook folder or its parent folder. A minimal usage sketch follows this list.
3) If you are switching to a new subscription, resource group, workspace or region, make sure that you run the `configuration.ipynb` notebook again. Changing config.json directly will only work if the workspace already exists in the specified resource group under the specified subscription.
4) If you want to change the region, please change the workspace, resource group or subscription. `Workspace.create` will not create or update a workspace if it already exists, even if the region specified is different.
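A minimal sketch of loading the workspace (it assumes configuration.ipynb has already written aml_config/config.json to this folder or a parent folder):
```
# Sketch: load the workspace saved by configuration.ipynb.
from azureml.core import Workspace

ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location)
```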
## Sample notebook fails
If a sample notebook fails with an error that property, method or library does not exist:
1) Check that you have selected the correct kernel in jupyter notebook. The kernel is displayed in the top right of the notebook page. It can be changed using the `Kernel | Change Kernel` menu option. For Azure Notebooks, it should be `Python 3.6`. For local conda environments, it should be the conda environment name that you specified in automl_setup. The default is azure_automl. Note that the kernel is saved as part of the notebook. So, if you switch to a new conda environment, you will have to select the new kernel in the notebook.
2) Check that the notebook is for the SDK version that you are using. You can check the SDK version by executing `azureml.core.VERSION` in a jupyter notebook cell, as shown below. You can download previous versions of the sample notebooks from GitHub by clicking the `Branch` button, selecting the `Tags` tab and then selecting the version.
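For example, in a notebook cell:
```
import azureml.core
print(azureml.core.VERSION)
```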
## Numpy import fails on Windows
Some Windows environments see an error loading numpy with the latest Python version 3.6.8. If you see this issue, try with Python version 3.6.7.
## Numpy import fails
Check the tensorflow version in the automated ML conda environment. Supported versions are < 1.13. Uninstall tensorflow from the environment if the version is 1.13 or later.
You can check the tensorflow version and uninstall it as follows:
1) Start a command shell and activate the conda environment where the automated ML packages are installed.
2) Enter `pip freeze` and look for `tensorflow`; if found, the version listed should be < 1.13.
3) If the listed version is not a supported version, run `pip uninstall tensorflow` in the command shell and enter y for confirmation.
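For example (a sketch assuming the default environment name azure_automl; on Windows, use `findstr` instead of `grep`):
```
conda activate azure_automl
pip freeze | grep tensorflow    # the version listed should be < 1.13
pip uninstall tensorflow        # only if the listed version is >= 1.13
```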
## Remote run: DsvmCompute.create fails
There are several reasons why the DsvmCompute.create can fail. The reason is usually in the error message but you have to look at the end of the error message for the detailed reason. Some common reasons are:
1) `Compute name is invalid, it should start with a letter, be between 2 and 16 character, and only include letters (a-zA-Z), numbers (0-9) and \'-\'.` Note that underscores are not allowed in the name.
2) `The requested VM size xxxxx is not available in the current region.` You can select a different region or vm_size.
## Remote run: Unable to establish SSH connection
Automated ML uses the SSH protocol to communicate with remote DSVMs. This defaults to port 22. Possible causes for this error are:
1) The DSVM is not ready for SSH connections. When DSVM creation completes, the DSVM might still not be ready to accept SSH connections. The sample notebooks have a one minute delay to allow for this.
2) Your Azure Subscription may restrict the IP address ranges that can access the DSVM on port 22. You can check this in the Azure Portal by selecting the Virtual Machine and then clicking Networking. The Virtual Machine name is the name that you provided in the notebook plus 10 alphanumeric characters to make the name unique. The Inbound Port Rules define what can access the VM on specific ports. Note that the rules have a priority order, so a Deny entry with a low priority number will override an Allow entry with a higher priority number.
## Remote run: setup iteration fails
This is often an issue with the `get_data` method.
1) Check that the `get_data` method is valid by running it locally.
2) Make sure that `get_data` isn't referring to any local files. `get_data` is executed on the remote DSVM. So, it doesn't have direct access to local data files. Instead you can store the data files with DataStore. See [auto-ml-remote-execution-with-datastore.ipynb](remote-execution-with-datastore/auto-ml-remote-execution-with-datastore.ipynb)
3) You can get to the error log for the setup iteration by clicking the `Click here to see the run in Azure portal` link, then clicking `Back to Experiment`, selecting the highest run number, and then clicking on Logs.
## Remote run: disk full
Automated ML creates files under /tmp/azureml_runs for each iteration that it runs. It creates a folder with the iteration id, for example: AutoML_9a038a18-77cc-48f1-80fb-65abdbc33abe_93. Under this, there is an azureml-logs folder, which contains logs. If you run too many iterations on the same DSVM, these files can fill the disk.
You can delete the files under /tmp/azureml_runs, or just delete the VM and create a new one; see the commands below.
If your get_data downloads files, make sure to delete them, or they can use disk space as well.
When using DataStore, it is good to specify an absolute path for the files so that they are downloaded just once. If you specify a relative path, a fresh copy will be downloaded for each iteration.
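For example, on the DSVM (standard Linux commands, using the paths described above):
```
du -sh /tmp/azureml_runs/*   # see which iteration folders use the most space
rm -rf /tmp/azureml_runs/*   # remove old iteration folders
```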
## Remote run: Iterations fail and the log contains "MemoryError"
This can be caused by insufficient memory on the DSVM. Automated ML loads all training data into memory, so the available memory should be more than the training data size.
If you are using a remote DSVM, memory is needed for each concurrent iteration. The max_concurrent_iterations setting specifies the maximum number of concurrent iterations. For example, if the training data size is 8 GB and max_concurrent_iterations is set to 10, at least 80 GB of memory is required.
To resolve this issue, allocate a DSVM with more memory or reduce the value specified for max_concurrent_iterations.
## Remote run: Iterations show as "Not Responding" in the RunDetails widget
This can be caused by too many concurrent iterations for a remote DSVM. Each concurrent iteration usually takes 100% of a core while it is running, and some iterations can use multiple cores. So, the max_concurrent_iterations setting should always be less than the number of cores of the DSVM.
To resolve this issue, try reducing the value specified for the max_concurrent_iterations setting, as in the sketch below.
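A minimal sketch of capping concurrency in the AutoML configuration; `dsvm_compute`, `X_train`, and `y_train` are illustrative placeholders from the remote-execution samples, not defined here:
```
from azureml.train.automl import AutoMLConfig

# Sketch: keep max_concurrent_iterations below the DSVM core count
# (for example, 4 concurrent iterations on an 8-core DSVM).
automl_config = AutoMLConfig(task = 'classification',
                             primary_metric = 'AUC_weighted',
                             compute_target = dsvm_compute,   # placeholder remote compute target
                             iterations = 20,
                             max_concurrent_iterations = 4,
                             X = X_train,
                             y = y_train)
```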

View File

@@ -0,0 +1,21 @@
name: azure_automl
dependencies:
  # The python interpreter version.
  # Currently Azure ML only supports 3.5.2 and later.
  - python>=3.5.2,<3.6.8
  - nb_conda
  - matplotlib==2.1.0
  - numpy>=1.11.0,<=1.16.2
  - cython
  - urllib3<1.24
  - scipy>=1.0.0,<=1.1.0
  - scikit-learn>=0.19.0,<=0.20.3
  - pandas>=0.22.0,<0.23.0
  - py-xgboost<=0.80
  - pip:
    # Required packages for AzureML execution, history, and data preparation.
    - azureml-sdk[automl,explain]
    - azureml-widgets
    - pandas_ml

View File

@@ -0,0 +1,51 @@
@echo off
set conda_env_name=%1
set automl_env_file=%2
set options=%3
set PIP_NO_WARN_SCRIPT_LOCATION=0

IF "%conda_env_name%"=="" SET conda_env_name="azure_automl"
IF "%automl_env_file%"=="" SET automl_env_file="automl_env.yml"
IF NOT EXIST %automl_env_file% GOTO YmlMissing

call conda activate %conda_env_name% 2>nul:

if not errorlevel 1 (
    echo Upgrading azureml-sdk[automl,notebooks,explain] in existing conda environment %conda_env_name%
    call pip install --upgrade azureml-sdk[automl,notebooks,explain]
    if errorlevel 1 goto ErrorExit
) else (
    call conda env create -f %automl_env_file% -n %conda_env_name%
)

call conda activate %conda_env_name% 2>nul:
if errorlevel 1 goto ErrorExit

call python -m ipykernel install --user --name %conda_env_name% --display-name "Python (%conda_env_name%)"

REM azureml.widgets is now installed as part of the pip install under the conda env.
REM Removing the old user install so that the notebooks will use the latest widget.
call jupyter nbextension uninstall --user --py azureml.widgets

echo.
echo.
echo ***************************************
echo * AutoML setup completed successfully *
echo ***************************************
IF NOT "%options%"=="nolaunch" (
    echo.
    echo Starting jupyter notebook - please run the configuration notebook
    echo.
    jupyter notebook --log-level=50 --notebook-dir='..\..'
)
goto End

:YmlMissing
echo File %automl_env_file% not found.

:ErrorExit
echo Install failed

:End

View File

@@ -0,0 +1,52 @@
#!/bin/bash

CONDA_ENV_NAME=$1
AUTOML_ENV_FILE=$2
OPTIONS=$3
PIP_NO_WARN_SCRIPT_LOCATION=0

if [ "$CONDA_ENV_NAME" == "" ]
then
    CONDA_ENV_NAME="azure_automl"
fi

if [ "$AUTOML_ENV_FILE" == "" ]
then
    AUTOML_ENV_FILE="automl_env.yml"
fi

if [ ! -f $AUTOML_ENV_FILE ]; then
    echo "File $AUTOML_ENV_FILE not found"
    exit 1
fi

if source activate $CONDA_ENV_NAME 2> /dev/null
then
    echo "Upgrading azureml-sdk[automl,notebooks,explain] in existing conda environment" $CONDA_ENV_NAME
    pip install --upgrade azureml-sdk[automl,notebooks,explain] &&
    jupyter nbextension uninstall --user --py azureml.widgets
else
    conda env create -f $AUTOML_ENV_FILE -n $CONDA_ENV_NAME &&
    source activate $CONDA_ENV_NAME &&
    python -m ipykernel install --user --name $CONDA_ENV_NAME --display-name "Python ($CONDA_ENV_NAME)" &&
    jupyter nbextension uninstall --user --py azureml.widgets &&
    echo "" &&
    echo "" &&
    echo "***************************************" &&
    echo "* AutoML setup completed successfully *" &&
    echo "***************************************" &&
    if [ "$OPTIONS" != "nolaunch" ]
    then
        echo "" &&
        echo "Starting jupyter notebook - please run the configuration notebook" &&
        echo "" &&
        jupyter notebook --log-level=50 --notebook-dir '../..'
    fi
fi

if [ $? -gt 0 ]
then
    echo "Installation failed"
fi

View File

@@ -0,0 +1,54 @@
#!/bin/bash

CONDA_ENV_NAME=$1
AUTOML_ENV_FILE=$2
OPTIONS=$3
PIP_NO_WARN_SCRIPT_LOCATION=0

if [ "$CONDA_ENV_NAME" == "" ]
then
    CONDA_ENV_NAME="azure_automl"
fi

if [ "$AUTOML_ENV_FILE" == "" ]
then
    AUTOML_ENV_FILE="automl_env.yml"
fi

if [ ! -f $AUTOML_ENV_FILE ]; then
    echo "File $AUTOML_ENV_FILE not found"
    exit 1
fi

if source activate $CONDA_ENV_NAME 2> /dev/null
then
    echo "Upgrading azureml-sdk[automl,notebooks,explain] in existing conda environment" $CONDA_ENV_NAME
    pip install --upgrade azureml-sdk[automl,notebooks,explain] &&
    jupyter nbextension uninstall --user --py azureml.widgets
else
    conda env create -f $AUTOML_ENV_FILE -n $CONDA_ENV_NAME &&
    source activate $CONDA_ENV_NAME &&
    conda install lightgbm -c conda-forge -y &&
    python -m ipykernel install --user --name $CONDA_ENV_NAME --display-name "Python ($CONDA_ENV_NAME)" &&
    jupyter nbextension uninstall --user --py azureml.widgets &&
    echo "" &&
    echo "" &&
    echo "***************************************" &&
    echo "* AutoML setup completed successfully *" &&
    echo "***************************************" &&
    if [ "$OPTIONS" != "nolaunch" ]
    then
        echo "" &&
        echo "Starting jupyter notebook - please run the configuration notebook" &&
        echo "" &&
        jupyter notebook --log-level=50 --notebook-dir '../..'
    fi
fi

if [ $? -gt 0 ]
then
    echo "Installation failed"
fi

View File

@@ -13,11 +13,26 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"# AutoML 09: Classification with Deployment\n",
+"# Automated Machine Learning\n",
+"_**Classification with Deployment**_\n",
+"\n",
+"## Contents\n",
+"1. [Introduction](#Introduction)\n",
+"1. [Setup](#Setup)\n",
+"1. [Train](#Train)\n",
+"1. [Deploy](#Deploy)\n",
+"1. [Test](#Test)"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"## Introduction\n",
 "\n",
 "In this example we use the scikit learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) to showcase how you can use AutoML for a simple classification problem and deploy it to an Azure Container Instance (ACI).\n",
 "\n",
-"Make sure you have executed the [00.configuration](00.configuration.ipynb) before running this notebook.\n",
+"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
 "\n",
 "In this notebook you will learn how to:\n",
 "1. Create an experiment using an existing workspace.\n",
@@ -27,14 +42,14 @@
 "5. Register the model.\n",
 "6. Create a container image.\n",
 "7. Create an Azure Container Instance (ACI) service.\n",
-"8. Test the ACI service.\n"
+"8. Test the ACI service."
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Create an Experiment\n",
+"## Setup\n",
 "\n",
 "As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
 ]
@@ -47,11 +62,8 @@
 "source": [
 "import json\n",
 "import logging\n",
-"import os\n",
-"import random\n",
 "\n",
 "from matplotlib import pyplot as plt\n",
-"from matplotlib.pyplot import imshow\n",
 "import numpy as np\n",
 "import pandas as pd\n",
 "from sklearn import datasets\n",
@@ -72,9 +84,9 @@
 "ws = Workspace.from_config()\n",
 "\n",
 "# choose a name for experiment\n",
-"experiment_name = 'automl-local-classification'\n",
+"experiment_name = 'automl-classification-deployment'\n",
 "# project folder\n",
-"project_folder = './sample_projects/automl-local-classification'\n",
+"project_folder = './sample_projects/automl-classification-deployment'\n",
 "\n",
 "experiment=Experiment(ws, experiment_name)\n",
 "\n",
@@ -87,45 +99,27 @@
 "output['Project Directory'] = project_folder\n",
 "output['Experiment Name'] = experiment.name\n",
 "pd.set_option('display.max_colwidth', -1)\n",
-"pd.DataFrame(data=output, index=['']).T"
+"outputDf = pd.DataFrame(data = output, index = [''])\n",
+"outputDf.T"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Diagnostics\n",
-"\n",
-"Opt-in diagnostics for better experience, quality, and security of future releases."
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"from azureml.telemetry import set_diagnostics_collection\n",
-"set_diagnostics_collection(send_diagnostics = True)"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"## Configure AutoML\n",
+"## Train\n",
 "\n",
 "Instantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.\n",
 "\n",
 "|Property|Description|\n",
 "|-|-|\n",
 "|**task**|classification or regression|\n",
-"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>balanced_accuracy</i><br><i>average_precision_score_weighted</i><br><i>precision_score_weighted</i>|\n",
+"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n",
-"|**max_time_sec**|Time limit in seconds for each iteration.|\n",
+"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
 "|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
 "|**n_cross_validations**|Number of cross validation splits.|\n",
 "|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
-"|**y**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers.|\n",
+"|**y**|(sparse) array-like, shape = [n_samples, ], Multi-class targets.|\n",
 "|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|"
 ]
 },
@@ -143,9 +137,8 @@
 " name = experiment_name,\n",
 " debug_log = 'automl_errors.log',\n",
 " primary_metric = 'AUC_weighted',\n",
-" max_time_sec = 1200,\n",
+" iteration_timeout_minutes = 20,\n",
 " iterations = 10,\n",
-" n_cross_validations = 2,\n",
 " verbosity = logging.INFO,\n",
 " X = X_train, \n",
 " y = y_train,\n",
@@ -156,8 +149,6 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Train the Models\n",
-"\n",
 "Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n",
 "In this example, we specify `show_output = True` to print currently running iterations to the console."
 ]
@@ -171,10 +162,21 @@
 "local_run = experiment.submit(automl_config, show_output = True)"
 ]
 },
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"local_run"
+]
+},
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
+"## Deploy\n",
+"\n",
 "### Retrieve the Best Model\n",
 "\n",
 "Below we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
@@ -206,7 +208,8 @@
 "description = 'AutoML Model'\n",
 "tags = None\n",
 "model = local_run.register_model(description = description, tags = tags)\n",
-"local_run.model_id # This will be written to the script file later in the notebook."
+"\n",
+"print(local_run.model_id) # This will be written to the script file later in the notebook."
 ]
 },
 {
@@ -226,6 +229,7 @@
 "import pickle\n",
 "import json\n",
 "import numpy\n",
+"import azureml.train.automl\n",
 "from sklearn.externals import joblib\n",
 "from azureml.core.model import Model\n",
 "\n",
@@ -258,7 +262,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"To ensure the fit results are consistent with the training results, the SDK dependency versions need to be the same as the environment that trains the model. Details about retrieving the versions can be found in notebook [12.auto-ml-retrieve-the-training-sdk-versions](12.auto-ml-retrieve-the-training-sdk-versions.ipynb)."
+"To ensure the fit results are consistent with the training results, the SDK dependency versions need to be the same as the environment that trains the model. The following cells create a file, myenv.yml, which specifies the dependencies from the run."
 ]
 },
 {
@@ -267,8 +271,6 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"experiment_name = 'automl-local-classification'\n",
-"\n",
 "experiment = Experiment(ws, experiment_name)\n",
 "ml_run = AutoMLRun(experiment = experiment, run_id = local_run.id)"
 ]
@@ -298,15 +300,13 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"%%writefile myenv.yml\n",
-"name: myenv\n",
-"channels:\n",
-" - defaults\n",
-"dependencies:\n",
-" - pip:\n",
-" - numpy==1.14.2\n",
-" - scikit-learn==0.19.2\n",
-" - azureml-sdk[notebooks,automl]==<<azureml-version>>"
+"from azureml.core.conda_dependencies import CondaDependencies\n",
+"\n",
+"myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn','py-xgboost<=0.80'],\n",
+" pip_packages=['azureml-sdk[automl]'])\n",
+"\n",
+"conda_env_file_name = 'myenv.yml'\n",
+"myenv.save_to_file('.', conda_env_file_name)"
 ]
 },
 {
@@ -316,14 +316,14 @@
 "outputs": [],
 "source": [
 "# Substitute the actual version number in the environment file.\n",
-"\n",
-"conda_env_file_name = 'myenv.yml'\n",
+"# This is not strictly needed in this notebook because the model should have been generated using the current SDK version.\n",
+"# However, we include this in case this code is used on an experiment from a previous SDK version.\n",
 "\n",
 "with open(conda_env_file_name, 'r') as cefr:\n",
 " content = cefr.read()\n",
 "\n",
 "with open(conda_env_file_name, 'w') as cefw:\n",
-" cefw.write(content.replace('<<azureml-version>>', dependencies['azureml-sdk']))\n",
+" cefw.write(content.replace(azureml.core.VERSION, dependencies['azureml-sdk']))\n",
 "\n",
 "# Substitute the actual model id in the script file.\n",
 "\n",
@@ -363,7 +363,10 @@
 " image_config = image_config, \n",
 " workspace = ws)\n",
 "\n",
-"image.wait_for_creation(show_output = True)"
+"image.wait_for_creation(show_output = True)\n",
+"\n",
+"if image.creation_state == 'Failed':\n",
+" print(\"Image build log at: \" + image.image_build_log_uri)"
 ]
 },
 {
@@ -441,7 +444,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Test a Web Service"
+"## Test"
 ]
 },
 {

View File

@@ -0,0 +1,284 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning\n",
"_**Classification with Local Compute**_\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Data](#Data)\n",
"1. [Train](#Train)\n",
"1. [Results](#Results)\n",
"1. [Test](#Test)\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"\n",
"In this example we use the scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for a simple classification problem.\n",
"\n",
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
"\n",
"Please find the ONNX related documentations [here](https://github.com/onnx/onnx).\n",
"\n",
"In this notebook you will learn how to:\n",
"1. Create an `Experiment` in an existing `Workspace`.\n",
"2. Configure AutoML using `AutoMLConfig`.\n",
"3. Train the model using local compute with ONNX compatible config on.\n",
"4. Explore the results and save the ONNX model."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"\n",
"from matplotlib import pyplot as plt\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# Choose a name for the experiment and specify the project folder.\n",
"experiment_name = 'automl-classification-onnx'\n",
"project_folder = './sample_projects/automl-classification-onnx'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace Name'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"outputDf.T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Data\n",
"\n",
"This uses scikit-learn's [load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) method."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"digits = datasets.load_digits()\n",
"\n",
"# Exclude the first 100 rows from training so that they can be used for test.\n",
"X_train = digits.data[100:,:]\n",
"y_train = digits.target[100:]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train with enable ONNX compatible models config on\n",
"\n",
"Instantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.\n",
"\n",
"Set the parameter enable_onnx_compatible_models=True, if you also want to generate the ONNX compatible models. Please note, the forecasting task and TensorFlow models are not ONNX compatible yet.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**task**|classification or regression|\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n",
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
"|**y**|(sparse) array-like, shape = [n_samples, ], Multi-class targets.|\n",
"|**enable_onnx_compatible_models**|Enable the ONNX compatible models in the experiment.|\n",
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" primary_metric = 'AUC_weighted',\n",
" iteration_timeout_minutes = 60,\n",
" iterations = 10,\n",
" verbosity = logging.INFO,\n",
" X = X_train, \n",
" y = y_train,\n",
" enable_onnx_compatible_models=True,\n",
" path = project_folder)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n",
"In this example, we specify `show_output = True` to print currently running iterations to the console."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = experiment.submit(automl_config, show_output = True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Widget for Monitoring Runs\n",
"\n",
"The widget will first report a \"loading\" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.\n",
"\n",
"**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"RunDetails(local_run).show() "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best ONNX Model\n",
"\n",
"Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.\n",
"\n",
"Set the parameter return_onnx_model=True to retrieve the best ONNX model, instead of the Python model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, onnx_mdl = local_run.get_output(return_onnx_model=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Save the best ONNX model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.train.automl._vendor.automl.client.core.common.onnx_convert import OnnxConverter\n",
"onnx_fl_path = \"./best_model.onnx\"\n",
"OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)"
]
}
],
"metadata": {
"authors": [
{
"name": "savitam"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -13,25 +13,43 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"# AutoML 01: Classification with Local Compute\n",
+"# Automated Machine Learning\n",
+"_**Classification using whitelist models**_\n",
 "\n",
-"In this example we use the scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for a simple classification problem.\n",
-"\n",
-"Make sure you have executed the [00.configuration](00.configuration.ipynb) before running this notebook.\n",
-"\n",
-"In this notebook you will learn how to:\n",
-"1. Create an `Experiment` in an existing `Workspace`.\n",
-"2. Configure AutoML using `AutoMLConfig`.\n",
-"3. Train the model using local compute.\n",
-"4. Explore the results.\n",
-"5. Test the best fitted model.\n"
+"## Contents\n",
+"1. [Introduction](#Introduction)\n",
+"1. [Setup](#Setup)\n",
+"1. [Data](#Data)\n",
+"1. [Train](#Train)\n",
+"1. [Results](#Results)\n",
+"1. [Test](#Test)"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Create an Experiment\n",
+"## Introduction\n",
+"\n",
+"In this example we use the scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for a simple classification problem.\n",
+"\n",
+"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
+"This notebooks shows how can automl can be trained on a a selected list of models,see the readme.md for the models.\n",
+"This trains the model exclusively on tensorflow based models.\n",
+"\n",
+"In this notebook you will learn how to:\n",
+"1. Create an `Experiment` in an existing `Workspace`.\n",
+"2. Configure AutoML using `AutoMLConfig`.\n",
+"3. Train the model on a whilelisted models using local compute. \n",
+"4. Explore the results.\n",
+"5. Test the best fitted model."
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"## Setup\n",
 "\n",
 "As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
 ]
@@ -42,12 +60,10 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"#Note: This notebook will install tensorflow if not already installed in the enviornment..\n",
 "import logging\n",
-"import os\n",
-"import random\n",
 "\n",
 "from matplotlib import pyplot as plt\n",
-"from matplotlib.pyplot import imshow\n",
 "import numpy as np\n",
 "import pandas as pd\n",
 "from sklearn import datasets\n",
@@ -55,8 +71,18 @@
 "import azureml.core\n",
 "from azureml.core.experiment import Experiment\n",
 "from azureml.core.workspace import Workspace\n",
-"from azureml.train.automl import AutoMLConfig\n",
-"from azureml.train.automl.run import AutoMLRun"
+"import sys\n",
+"whitelist_models=[\"LightGBM\"]\n",
+"if \"3.7\" != sys.version[0:3]:\n",
+"    try:\n",
+"        import tensorflow as tf1\n",
+"    except ImportError:\n",
+"        from pip._internal import main\n",
+"        main(['install', 'tensorflow>=1.10.0,<=1.12.0'])\n",
+"        logging.getLogger().setLevel(logging.ERROR)\n",
+"        whitelist_models=[\"TensorFlowLinearClassifier\", \"TensorFlowDNN\"]\n",
+"\n",
+"from azureml.train.automl import AutoMLConfig"
 ]
 },
 {
@@ -68,8 +94,8 @@
 "ws = Workspace.from_config()\n",
 "\n",
 "# Choose a name for the experiment and specify the project folder.\n",
-"experiment_name = 'automl-local-classification'\n",
+"experiment_name = 'automl-local-whitelist'\n",
-"project_folder = './sample_projects/automl-local-classification'\n",
+"project_folder = './sample_projects/automl-local-whitelist'\n",
 "\n",
 "experiment = Experiment(ws, experiment_name)\n",
 "\n",
@@ -82,33 +108,15 @@
 "output['Project Directory'] = project_folder\n",
 "output['Experiment Name'] = experiment.name\n",
 "pd.set_option('display.max_colwidth', -1)\n",
-"pd.DataFrame(data = output, index = ['']).T"
+"outputDf = pd.DataFrame(data = output, index = [''])\n",
+"outputDf.T"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Diagnostics\n",
-"\n",
-"Opt-in diagnostics for better experience, quality, and security of future releases."
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"from azureml.telemetry import set_diagnostics_collection\n",
-"set_diagnostics_collection(send_diagnostics = True)"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"## Load Training Data\n",
+"## Data\n",
 "\n",
 "This uses scikit-learn's [load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) method."
 ]
@@ -119,8 +127,6 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"from sklearn import datasets\n",
-"\n",
 "digits = datasets.load_digits()\n",
 "\n",
 "# Exclude the first 100 rows from training so that they can be used for test.\n",
@@ -132,7 +138,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Configure AutoML\n",
+"## Train\n",
 "\n",
 "Instantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.\n",
 "\n",
@@ -140,12 +146,13 @@
 "|-|-|\n",
 "|**task**|classification or regression|\n",
 "|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>balanced_accuracy</i><br><i>average_precision_score_weighted</i><br><i>precision_score_weighted</i>|\n",
-"|**max_time_sec**|Time limit in seconds for each iteration.|\n",
+"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
 "|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
 "|**n_cross_validations**|Number of cross validation splits.|\n",
 "|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
-"|**y**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers.|\n",
+"|**y**|(sparse) array-like, shape = [n_samples, ], Multi-class targets.|\n",
-"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|"
+"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|\n",
+"|**whitelist_models**|List of models that AutoML should use. The possible values are listed [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#configure-your-experiment-settings).|"
 ]
 },
 {
@@ -157,12 +164,13 @@
 "automl_config = AutoMLConfig(task = 'classification',\n",
 " debug_log = 'automl_errors.log',\n",
 " primary_metric = 'AUC_weighted',\n",
-" max_time_sec = 3600,\n",
+" iteration_timeout_minutes = 60,\n",
-" iterations = 50,\n",
+" iterations = 10,\n",
-" n_cross_validations = 3,\n",
 " verbosity = logging.INFO,\n",
 " X = X_train, \n",
 " y = y_train,\n",
+" enable_tf=True,\n",
+" whitelist_models=whitelist_models,\n",
 " path = project_folder)"
 ]
 },
@@ -170,8 +178,6 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Train the Models\n",
-"\n",
 "Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n",
 "In this example, we specify `show_output = True` to print currently running iterations to the console."
 ]
@@ -198,35 +204,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Optionally, you can continue an interrupted local run by calling `continue_experiment` without the `iterations` parameter, or run more iterations for a completed run by specifying the `iterations` parameter:"
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"local_run = local_run.continue_experiment(X = X_train, \n",
-" y = y_train, \n",
-" show_output = True,\n",
-" iterations = 5)"
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"local_run"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"## Explore the Results"
+"## Results"
 ]
 },
 {
@@ -246,7 +224,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"from azureml.train.widgets import RunDetails\n",
+"from azureml.widgets import RunDetails\n",
 "RunDetails(local_run).show() "
 ]
 },
@@ -340,7 +318,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Test the Best Fitted Model\n",
+"## Test\n",
 "\n",
 "#### Load Test Data"
 ]

View File

@@ -0,0 +1,469 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning\n",
"_**Classification with Local Compute**_\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Data](#Data)\n",
"1. [Train](#Train)\n",
"1. [Results](#Results)\n",
"1. [Test](#Test)\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"\n",
"In this example we use the scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for a simple classification problem.\n",
"\n",
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you will learn how to:\n",
"1. Create an `Experiment` in an existing `Workspace`.\n",
"2. Configure AutoML using `AutoMLConfig`.\n",
"3. Train the model using local compute.\n",
"4. Explore the results.\n",
"5. Test the best fitted model."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"\n",
"from matplotlib import pyplot as plt\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Accessing the Azure ML workspace requires authentication with Azure.\n",
"\n",
"The default authentication is interactive authentication using the default tenant. Executing the `ws = Workspace.from_config()` line in the cell below will prompt for authentication the first time that it is run.\n",
"\n",
"If you have multiple Azure tenants, you can specify the tenant by replacing the `ws = Workspace.from_config()` line in the cell below with the following:\n",
"\n",
"```\n",
"from azureml.core.authentication import InteractiveLoginAuthentication\n",
"auth = InteractiveLoginAuthentication(tenant_id = 'mytenantid')\n",
"ws = Workspace.from_config(auth = auth)\n",
"```\n",
"\n",
"If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the `ws = Workspace.from_config()` line in the cell below with the following:\n",
"\n",
"```\n",
"from azureml.core.authentication import ServicePrincipalAuthentication\n",
"auth = auth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')\n",
"ws = Workspace.from_config(auth = auth)\n",
"```\n",
"For more details, see [aka.ms/aml-notebook-auth](http://aka.ms/aml-notebook-auth)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# Choose a name for the experiment and specify the project folder.\n",
"experiment_name = 'automl-classification'\n",
"project_folder = './sample_projects/automl-classification'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace Name'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"outputDf.T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Data\n",
"\n",
"This uses scikit-learn's [load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) method."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"digits = datasets.load_digits()\n",
"\n",
"# Exclude the first 100 rows from training so that they can be used for test.\n",
"X_train = digits.data[100:,:]\n",
"y_train = digits.target[100:]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train\n",
"\n",
"Instantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**task**|classification or regression|\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n",
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
"|**y**|(sparse) array-like, shape = [n_samples, ], Multi-class targets.|\n",
"|**n_cross_validations**|Number of cross validation splits.|\n",
"|\n",
"\n",
"Automated machine learning trains multiple machine learning pipelines. Each pipelines training is known as an iteration.\n",
"* You can specify a maximum number of iterations using the `iterations` parameter.\n",
"* You can specify a maximum time for the run using the `experiment_timeout_minutes` parameter.\n",
"* If you specify neither the `iterations` nor the `experiment_timeout_minutes`, automated ML keeps running iterations while it continues to see improvements in the scores.\n",
"\n",
"The following example doesn't specify `iterations` or `experiment_timeout_minutes` and so runs until the scores stop improving.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_config = AutoMLConfig(task = 'classification',\n",
" primary_metric = 'AUC_weighted',\n",
" X = X_train, \n",
" y = y_train,\n",
" n_cross_validations = 3)"
]
},
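{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, you can bound the search instead. The following is a minimal sketch of the same configuration with explicit `iterations` and `experiment_timeout_minutes` values; the numbers and the `bounded_automl_config` name are illustrative only, not recommendations."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional sketch: cap the run with explicit limits (illustrative values).\n",
"bounded_automl_config = AutoMLConfig(task = 'classification',\n",
"                                     primary_metric = 'AUC_weighted',\n",
"                                     iterations = 10,\n",
"                                     experiment_timeout_minutes = 15,\n",
"                                     X = X_train, \n",
"                                     y = y_train,\n",
"                                     n_cross_validations = 3)"
]
},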
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n",
"In this example, we specify `show_output = True` to print currently running iterations to the console."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = experiment.submit(automl_config, show_output = True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, you can continue an interrupted local run by calling `continue_experiment` without the `iterations` parameter, or run more iterations for a completed run by specifying the `iterations` parameter:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = local_run.continue_experiment(X = X_train, \n",
" y = y_train, \n",
" show_output = True,\n",
" iterations = 5)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Widget for Monitoring Runs\n",
"\n",
"The widget will first report a \"loading\" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.\n",
"\n",
"**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"RunDetails(local_run).show() "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"#### Retrieve All Child Runs\n",
"You can also use SDK methods to fetch all the child runs and see individual metrics that we log."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"children = list(local_run.get_children())\n",
"metricslist = {}\n",
"for run in children:\n",
" properties = run.get_properties()\n",
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}\n",
" metricslist[int(properties['iteration'])] = metrics\n",
"\n",
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
"rundata"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model\n",
"\n",
"Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = local_run.get_output()\n",
"print(best_run)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Print the properties of the model\n",
"The fitted_model is a python object and you can read the different properties of the object.\n",
"The following shows printing hyperparameters for each step in the pipeline."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from pprint import pprint\n",
"\n",
"def print_model(model, prefix=\"\"):\n",
" for step in model.steps:\n",
" print(prefix + step[0])\n",
" if hasattr(step[1], 'estimators') and hasattr(step[1], 'weights'):\n",
" pprint({'estimators': list(e[0] for e in step[1].estimators), 'weights': step[1].weights})\n",
" print()\n",
" for estimator in step[1].estimators:\n",
" print_model(estimator[1], estimator[0]+ ' - ')\n",
" else:\n",
" pprint(step[1].get_params())\n",
" print()\n",
" \n",
"print_model(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Best Model Based on Any Other Metric\n",
"Show the run and the model that has the smallest `log_loss` value:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"lookup_metric = \"log_loss\"\n",
"best_run, fitted_model = local_run.get_output(metric = lookup_metric)\n",
"print(best_run)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print_model(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Model from a Specific Iteration\n",
"Show the run and the model from the third iteration:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"iteration = 3\n",
"third_run, third_model = local_run.get_output(iteration = iteration)\n",
"print(third_run)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print_model(third_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Test \n",
"\n",
"#### Load Test Data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"digits = datasets.load_digits()\n",
"X_test = digits.data[:10, :]\n",
"y_test = digits.target[:10]\n",
"images = digits.images[:10]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Testing Our Best Fitted Model\n",
"We will try to predict 2 digits and see how our model works."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Randomly select digits and test.\n",
"for index in np.random.choice(len(y_test), 2, replace = False):\n",
" print(index)\n",
" predicted = fitted_model.predict(X_test[index:index + 1])[0]\n",
" label = y_test[index]\n",
" title = \"Label value = %d Predicted value = %d \" % (label, predicted)\n",
" fig = plt.figure(1, figsize = (3,3))\n",
" ax1 = fig.add_axes((0,0,.8,.8))\n",
" ax1.set_title(title)\n",
" plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest')\n",
" plt.show()"
]
}
],
"metadata": {
"authors": [
{
"name": "savitam"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -13,10 +13,26 @@
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"# AutoML 13: Prepare Data using `azureml.dataprep`\n", "# Automated Machine Learning\n",
"_**Prepare Data using `azureml.dataprep` for Remote Execution (DSVM)**_\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Data](#Data)\n",
"1. [Train](#Train)\n",
"1. [Results](#Results)\n",
"1. [Test](#Test)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"In this example we showcase how you can use the `azureml.dataprep` SDK to load and prepare data for AutoML. `azureml.dataprep` can also be used standalone; full documentation can be found [here](https://github.com/Microsoft/PendletonDocs).\n", "In this example we showcase how you can use the `azureml.dataprep` SDK to load and prepare data for AutoML. `azureml.dataprep` can also be used standalone; full documentation can be found [here](https://github.com/Microsoft/PendletonDocs).\n",
"\n", "\n",
"Make sure you have executed the [setup](00.configuration.ipynb) before running this notebook.\n", "Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
"\n", "\n",
"In this notebook you will learn how to:\n", "In this notebook you will learn how to:\n",
"1. Define data loading and preparation steps in a `Dataflow` using `azureml.dataprep`.\n", "1. Define data loading and preparation steps in a `Dataflow` using `azureml.dataprep`.\n",
@@ -28,43 +44,15 @@
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Install `azureml.dataprep` SDK" "## Setup\n",
] "\n",
}, "Currently, Data Prep only supports __Ubuntu 16__ and __Red Hat Enterprise Linux 7__. We are working on supporting more linux distros."
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install azureml-dataprep"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"set_diagnostics_collection(send_diagnostics = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create an Experiment\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments." "As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
] ]
}, },
@@ -75,15 +63,13 @@
"outputs": [], "outputs": [],
"source": [ "source": [
"import logging\n", "import logging\n",
"import os\n", "import time\n",
"\n", "\n",
"import pandas as pd\n", "import pandas as pd\n",
"\n", "\n",
"import azureml.core\n", "import azureml.core\n",
"from azureml.core.compute import DsvmCompute\n", "from azureml.core.compute import DsvmCompute\n",
"from azureml.core.experiment import Experiment\n", "from azureml.core.experiment import Experiment\n",
"from azureml.core.runconfig import CondaDependencies\n",
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core.workspace import Workspace\n", "from azureml.core.workspace import Workspace\n",
"import azureml.dataprep as dprep\n", "import azureml.dataprep as dprep\n",
"from azureml.train.automl import AutoMLConfig" "from azureml.train.automl import AutoMLConfig"
@@ -98,9 +84,9 @@
"ws = Workspace.from_config()\n", "ws = Workspace.from_config()\n",
" \n", " \n",
"# choose a name for experiment\n", "# choose a name for experiment\n",
"experiment_name = 'automl-dataprep-classification'\n", "experiment_name = 'automl-dataprep-remote-dsvm'\n",
"# project folder\n", "# project folder\n",
"project_folder = './sample_projects/automl-dataprep-classification'\n", "project_folder = './sample_projects/automl-dataprep-remote-dsvm'\n",
" \n", " \n",
"experiment = Experiment(ws, experiment_name)\n", "experiment = Experiment(ws, experiment_name)\n",
" \n", " \n",
@@ -113,14 +99,15 @@
"output['Project Directory'] = project_folder\n", "output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n", "output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n", "pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data = output, index = ['']).T" "outputDf = pd.DataFrame(data = output, index = [''])\n",
"outputDf.T"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Loading Data using DataPrep" "## Data"
] ]
}, },
{ {
@@ -129,10 +116,10 @@
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"# You can use `smart_read_file` which intelligently figures out delimiters and datatypes of a file.\n", "# You can use `auto_read_file` which intelligently figures out delimiters and datatypes of a file.\n",
"# The data referenced here was pulled from `sklearn.datasets.load_digits()`.\n", "# The data referenced here was pulled from `sklearn.datasets.load_digits()`.\n",
"simple_example_data_root = 'https://dprepdata.blob.core.windows.net/automl-notebook-data/'\n", "simple_example_data_root = 'https://dprepdata.blob.core.windows.net/automl-notebook-data/'\n",
"X = dprep.smart_read_file(simple_example_data_root + 'X.csv').skip(1) # Remove the header row.\n", "X = dprep.auto_read_file(simple_example_data_root + 'X.csv').skip(1) # Remove the header row.\n",
"\n", "\n",
"# You can also use `read_csv` and `to_*` transformations to read (with overridable delimiter)\n", "# You can also use `read_csv` and `to_*` transformations to read (with overridable delimiter)\n",
"# and convert column types manually.\n", "# and convert column types manually.\n",
@@ -144,8 +131,6 @@
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Review the Data Preparation Result\n",
"\n",
"You can peek the result of a Dataflow at any range using `skip(i)` and `head(j)`. Doing so evaluates only `j` records for all the steps in the Dataflow, which makes it fast even against large datasets." "You can peek the result of a Dataflow at any range using `skip(i)` and `head(j)`. Doing so evaluates only `j` records for all the steps in the Dataflow, which makes it fast even against large datasets."
] ]
}, },
@@ -162,7 +147,7 @@
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Configure AutoML\n", "## Train\n",
"\n", "\n",
"This creates a general AutoML settings object applicable for both local and remote runs." "This creates a general AutoML settings object applicable for both local and remote runs."
] ]
@@ -174,7 +159,7 @@
"outputs": [], "outputs": [],
"source": [ "source": [
"automl_settings = {\n", "automl_settings = {\n",
" \"max_time_sec\" : 600,\n", " \"iteration_timeout_minutes\" : 10,\n",
" \"iterations\" : 2,\n", " \"iterations\" : 2,\n",
" \"primary_metric\" : 'AUC_weighted',\n", " \"primary_metric\" : 'AUC_weighted',\n",
" \"preprocess\" : False,\n", " \"preprocess\" : False,\n",
@@ -183,51 +168,6 @@
"}" "}"
] ]
}, },
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Local Run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Pass Data with `Dataflow` Objects\n",
"\n",
"The `Dataflow` objects captured above can be passed to the `submit` method for a local run. AutoML will retrieve the results from the `Dataflow` for model training."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" X = X,\n",
" y = y,\n",
" **automl_settings)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = experiment.submit(automl_config, show_output = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Remote Run"
]
},
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
@@ -241,52 +181,38 @@
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"dsvm_name = 'mydsvm'\n", "dsvm_name = 'mydsvmc'\n",
"\n",
"try:\n", "try:\n",
" while ws.compute_targets[dsvm_name].provisioning_state == 'Creating':\n",
" time.sleep(1)\n",
" \n",
" dsvm_compute = DsvmCompute(ws, dsvm_name)\n", " dsvm_compute = DsvmCompute(ws, dsvm_name)\n",
" print('Found existing DVSM.')\n", " print('Found existing DVSM.')\n",
"except:\n", "except:\n",
" print('Creating a new DSVM.')\n", " print('Creating a new DSVM.')\n",
" dsvm_config = DsvmCompute.provisioning_configuration(vm_size = \"Standard_D2_v2\")\n", " dsvm_config = DsvmCompute.provisioning_configuration(vm_size = \"Standard_D2_v2\")\n",
" dsvm_compute = DsvmCompute.create(ws, name = dsvm_name, provisioning_configuration = dsvm_config)\n", " dsvm_compute = DsvmCompute.create(ws, name = dsvm_name, provisioning_configuration = dsvm_config)\n",
" dsvm_compute.wait_for_completion(show_output = True)" " dsvm_compute.wait_for_completion(show_output = True)\n",
" print(\"Waiting one minute for ssh to be accessible\")\n",
" time.sleep(90) # Wait for ssh to be accessible"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "code",
"execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [],
"source": [ "source": [
"### Update Conda Dependency file to have AutoML and DataPrep SDK\n", "from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"\n", "\n",
"Currently the AutoML and DataPrep SDKs are not installed with the Azure ML SDK by default. To circumvent this limitation, we update the conda dependency file to add these dependencies." "conda_run_config = RunConfiguration(framework=\"python\")\n",
] "\n",
}, "conda_run_config.target = dsvm_compute\n",
{ "\n",
"cell_type": "code", "cd = CondaDependencies.create(pip_packages=['azureml-sdk[automl]'], conda_packages=['numpy','py-xgboost<=0.80'])\n",
"execution_count": null, "conda_run_config.environment.python.conda_dependencies = cd"
"metadata": {},
"outputs": [],
"source": [
"cd = CondaDependencies()\n",
"cd.add_pip_package(pip_package='azureml-dataprep')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create a `RunConfiguration` with DSVM name"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run_config = RunConfiguration(conda_dependencies=cd)\n",
"run_config.target = dsvm_compute\n",
"run_config.auto_prepare_environment = True"
] ]
}, },
{ {
@@ -307,18 +233,35 @@
"automl_config = AutoMLConfig(task = 'classification',\n", "automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n", " debug_log = 'automl_errors.log',\n",
" path = project_folder,\n", " path = project_folder,\n",
" run_configuration = run_config,\n", " run_configuration=conda_run_config,\n",
" X = X,\n", " X = X,\n",
" y = y,\n", " y = y,\n",
" **automl_settings)\n", " **automl_settings)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"remote_run = experiment.submit(automl_config, show_output = True)" "remote_run = experiment.submit(automl_config, show_output = True)"
] ]
}, },
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"remote_run"
]
},
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Explore the Results" "## Results"
] ]
}, },
{ {
@@ -338,8 +281,8 @@
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"from azureml.train.widgets import RunDetails\n", "from azureml.widgets import RunDetails\n",
"RunDetails(local_run).show()" "RunDetails(remote_run).show()"
] ]
}, },
{ {
@@ -356,14 +299,13 @@
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"children = list(local_run.get_children())\n", "children = list(remote_run.get_children())\n",
"metricslist = {}\n", "metricslist = {}\n",
"for run in children:\n", "for run in children:\n",
" properties = run.get_properties()\n", " properties = run.get_properties()\n",
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}\n", " metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}\n",
" metricslist[int(properties['iteration'])] = metrics\n", " metricslist[int(properties['iteration'])] = metrics\n",
" \n", " \n",
"import pandas as pd\n",
"rundata = pd.DataFrame(metricslist).sort_index(1)\n", "rundata = pd.DataFrame(metricslist).sort_index(1)\n",
"rundata" "rundata"
] ]
@@ -383,7 +325,7 @@
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"best_run, fitted_model = local_run.get_output()\n", "best_run, fitted_model = remote_run.get_output()\n",
"print(best_run)\n", "print(best_run)\n",
"print(fitted_model)" "print(fitted_model)"
] ]
@@ -403,7 +345,7 @@
"outputs": [], "outputs": [],
"source": [ "source": [
"lookup_metric = \"log_loss\"\n", "lookup_metric = \"log_loss\"\n",
"best_run, fitted_model = local_run.get_output(metric = lookup_metric)\n", "best_run, fitted_model = remote_run.get_output(metric = lookup_metric)\n",
"print(best_run)\n", "print(best_run)\n",
"print(fitted_model)" "print(fitted_model)"
] ]
@@ -423,7 +365,7 @@
"outputs": [], "outputs": [],
"source": [ "source": [
"iteration = 0\n", "iteration = 0\n",
"best_run, fitted_model = local_run.get_output(iteration = iteration)\n", "best_run, fitted_model = remote_run.get_output(iteration = iteration)\n",
"print(best_run)\n", "print(best_run)\n",
"print(fitted_model)" "print(fitted_model)"
] ]
@@ -432,7 +374,7 @@
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"### Test the Best Fitted Model\n", "## Test\n",
"\n", "\n",
"#### Load Test Data" "#### Load Test Data"
] ]
@@ -467,8 +409,6 @@
"source": [ "source": [
"#Randomly select digits and test\n", "#Randomly select digits and test\n",
"from matplotlib import pyplot as plt\n", "from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import random\n",
"import numpy as np\n", "import numpy as np\n",
"\n", "\n",
"for index in np.random.choice(len(y_test), 2, replace = False):\n", "for index in np.random.choice(len(y_test), 2, replace = False):\n",
@@ -506,7 +446,7 @@
"outputs": [], "outputs": [],
"source": [ "source": [
"# sklearn.digits.data + target\n", "# sklearn.digits.data + target\n",
"digits_complete = dprep.smart_read_file('https://dprepdata.blob.core.windows.net/automl-notebook-data/digits-complete.csv')" "digits_complete = dprep.auto_read_file('https://dprepdata.blob.core.windows.net/automl-notebook-data/digits-complete.csv')"
] ]
}, },
{ {
@@ -522,7 +462,7 @@
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"digits_complete.to_pandas_dataframe().shape\n", "print(digits_complete.to_pandas_dataframe().shape)\n",
"labels_column = 'Column64'\n", "labels_column = 'Column64'\n",
"dflow_X = digits_complete.drop_columns(columns = [labels_column])\n", "dflow_X = digits_complete.drop_columns(columns = [labels_column])\n",
"dflow_y = digits_complete.keep_columns(columns = [labels_column])" "dflow_y = digits_complete.keep_columns(columns = [labels_column])"

View File

@@ -0,0 +1,448 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning\n",
"_**Prepare Data using `azureml.dataprep` for Local Execution**_\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Data](#Data)\n",
"1. [Train](#Train)\n",
"1. [Results](#Results)\n",
"1. [Test](#Test)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"In this example we showcase how you can use the `azureml.dataprep` SDK to load and prepare data for AutoML. `azureml.dataprep` can also be used standalone; full documentation can be found [here](https://github.com/Microsoft/PendletonDocs).\n",
"\n",
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you will learn how to:\n",
"1. Define data loading and preparation steps in a `Dataflow` using `azureml.dataprep`.\n",
"2. Pass the `Dataflow` to AutoML for a local run.\n",
"3. Pass the `Dataflow` to AutoML for a remote run."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"Currently, Data Prep only supports __Ubuntu 16__ and __Red Hat Enterprise Linux 7__. We are working on supporting more linux distros."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"\n",
"import pandas as pd\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"import azureml.dataprep as dprep\n",
"from azureml.train.automl import AutoMLConfig"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
" \n",
"# choose a name for experiment\n",
"experiment_name = 'automl-dataprep-local'\n",
"# project folder\n",
"project_folder = './sample_projects/automl-dataprep-local'\n",
" \n",
"experiment = Experiment(ws, experiment_name)\n",
" \n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace Name'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"outputDf.T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# You can use `auto_read_file` which intelligently figures out delimiters and datatypes of a file.\n",
"# The data referenced here was pulled from `sklearn.datasets.load_digits()`.\n",
"simple_example_data_root = 'https://dprepdata.blob.core.windows.net/automl-notebook-data/'\n",
"X = dprep.auto_read_file(simple_example_data_root + 'X.csv').skip(1) # Remove the header row.\n",
"\n",
"# You can also use `read_csv` and `to_*` transformations to read (with overridable delimiter)\n",
"# and convert column types manually.\n",
"# Here we read a comma delimited file and convert all columns to integers.\n",
"y = dprep.read_csv(simple_example_data_root + 'y.csv').to_long(dprep.ColumnSelector(term='.*', use_regex = True))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Review the Data Preparation Result\n",
"\n",
"You can peek the result of a Dataflow at any range using `skip(i)` and `head(j)`. Doing so evaluates only `j` records for all the steps in the Dataflow, which makes it fast even against large datasets."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"X.skip(1).head(5)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train\n",
"\n",
"This creates a general AutoML settings object applicable for both local and remote runs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_settings = {\n",
" \"iteration_timeout_minutes\" : 10,\n",
" \"iterations\" : 2,\n",
" \"primary_metric\" : 'AUC_weighted',\n",
" \"preprocess\" : False,\n",
" \"verbosity\" : logging.INFO\n",
"}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Pass Data with `Dataflow` Objects\n",
"\n",
"The `Dataflow` objects captured above can be passed to the `submit` method for a local run. AutoML will retrieve the results from the `Dataflow` for model training."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" X = X,\n",
" y = y,\n",
" **automl_settings)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = experiment.submit(automl_config, show_output = True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Widget for Monitoring Runs\n",
"\n",
"The widget will first report a \"loading\" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.\n",
"\n",
"**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"RunDetails(local_run).show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Retrieve All Child Runs\n",
"You can also use SDK methods to fetch all the child runs and see individual metrics that we log."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"children = list(local_run.get_children())\n",
"metricslist = {}\n",
"for run in children:\n",
" properties = run.get_properties()\n",
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}\n",
" metricslist[int(properties['iteration'])] = metrics\n",
" \n",
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
"rundata"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model\n",
"\n",
"Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = local_run.get_output()\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Best Model Based on Any Other Metric\n",
"Show the run and the model that has the smallest `log_loss` value:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"lookup_metric = \"log_loss\"\n",
"best_run, fitted_model = local_run.get_output(metric = lookup_metric)\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Model from a Specific Iteration\n",
"Show the run and the model from the first iteration:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"iteration = 0\n",
"best_run, fitted_model = local_run.get_output(iteration = iteration)\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Test\n",
"\n",
"#### Load Test Data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn import datasets\n",
"\n",
"digits = datasets.load_digits()\n",
"X_test = digits.data[:10, :]\n",
"y_test = digits.target[:10]\n",
"images = digits.images[:10]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Testing Our Best Fitted Model\n",
"We will try to predict 2 digits and see how our model works."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#Randomly select digits and test\n",
"from matplotlib import pyplot as plt\n",
"import numpy as np\n",
"\n",
"for index in np.random.choice(len(y_test), 2, replace = False):\n",
" print(index)\n",
" predicted = fitted_model.predict(X_test[index:index + 1])[0]\n",
" label = y_test[index]\n",
" title = \"Label value = %d Predicted value = %d \" % (label, predicted)\n",
" fig = plt.figure(1, figsize=(3,3))\n",
" ax1 = fig.add_axes((0,0,.8,.8))\n",
" ax1.set_title(title)\n",
" plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest')\n",
" plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Appendix"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Capture the `Dataflow` Objects for Later Use in AutoML\n",
"\n",
"`Dataflow` objects are immutable and are composed of a list of data preparation steps. A `Dataflow` object can be branched at any point for further usage."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# sklearn.digits.data + target\n",
"digits_complete = dprep.auto_read_file('https://dprepdata.blob.core.windows.net/automl-notebook-data/digits-complete.csv')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"`digits_complete` (sourced from `sklearn.datasets.load_digits()`) is forked into `dflow_X` to capture all the feature columns and `dflow_y` to capture the label column."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(digits_complete.to_pandas_dataframe().shape)\n",
"labels_column = 'Column64'\n",
"dflow_X = digits_complete.drop_columns(columns = [labels_column])\n",
"dflow_y = digits_complete.keep_columns(columns = [labels_column])"
]
}
],
"metadata": {
"authors": [
{
"name": "savitam"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -13,24 +13,38 @@
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"# AutoML 07: Exploring Previous Runs\n", "# Automated Machine Learning\n",
"_**Exploring Previous Runs**_\n",
"\n", "\n",
"In this example we present some examples on navigating previously executed runs. We also show how you can download a fitted model for any previous run.\n", "## Contents\n",
"\n", "1. [Introduction](#Introduction)\n",
"Make sure you have executed the [00.configuration](00.configuration.ipynb) before running this notebook.\n", "1. [Setup](#Setup)\n",
"\n", "1. [Explore](#Explore)\n",
"In this notebook you will learn how to:\n", "1. [Download](#Download)\n",
"1. List all experiments in a workspace.\n", "1. [Register](#Register)"
"2. List all AutoML runs in an experiment.\n",
"3. Get details for an AutoML run, including settings, run widget, and all metrics.\n",
"4. Download a fitted pipeline for any iteration.\n"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"# List all AutoML Experiments in a Workspace" "## Introduction\n",
"In this example we present some examples on navigating previously executed runs. We also show how you can download a fitted model for any previous run.\n",
"\n",
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you will learn how to:\n",
"1. List all experiments in a workspace.\n",
"2. List all AutoML runs in an experiment.\n",
"3. Get details for an AutoML run, including settings, run widget, and all metrics.\n",
"4. Download a fitted pipeline for any iteration."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup"
] ]
}, },
{ {
@@ -39,22 +53,11 @@
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"import logging\n",
"import os\n",
"import random\n",
"import re\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n", "import pandas as pd\n",
"from sklearn import datasets\n", "import json\n",
"\n", "\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n", "from azureml.core.experiment import Experiment\n",
"from azureml.core.run import Run\n",
"from azureml.core.workspace import Workspace\n", "from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun" "from azureml.train.automl.run import AutoMLRun"
] ]
}, },
@@ -64,17 +67,34 @@
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"ws = Workspace.from_config()\n", "ws = Workspace.from_config()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Explore"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### List Experiments"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"experiment_list = Experiment.list(workspace=ws)\n", "experiment_list = Experiment.list(workspace=ws)\n",
"\n", "\n",
"summary_df = pd.DataFrame(index = ['No of Runs'])\n", "summary_df = pd.DataFrame(index = ['No of Runs'])\n",
"pattern = re.compile('^AutoML_[^_]*$')\n",
"for experiment in experiment_list:\n", "for experiment in experiment_list:\n",
" all_runs = list(experiment.get_runs())\n", " automl_runs = list(experiment.get_runs(type='automl'))\n",
" automl_runs = []\n",
" for run in all_runs:\n",
" if(pattern.match(run.id)):\n",
" automl_runs.append(run) \n",
" summary_df[experiment.name] = [len(automl_runs)]\n", " summary_df[experiment.name] = [len(automl_runs)]\n",
" \n", " \n",
"pd.set_option('display.max_colwidth', -1)\n", "pd.set_option('display.max_colwidth', -1)\n",
@@ -85,26 +105,7 @@
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Diagnostics\n", "### List runs for an experiment\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"set_diagnostics_collection(send_diagnostics = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# List AutoML runs for an experiment\n",
"Set `experiment_name` to any experiment name from the result of the Experiment.list cell to load the AutoML runs." "Set `experiment_name` to any experiment name from the result of the Experiment.list cell to load the AutoML runs."
] ]
}, },
@@ -116,20 +117,21 @@
"source": [ "source": [
"experiment_name = 'automl-local-classification' # Replace this with any project name from previous cell.\n", "experiment_name = 'automl-local-classification' # Replace this with any project name from previous cell.\n",
"\n", "\n",
"proj = ws.experiments()[experiment_name]\n", "proj = ws.experiments[experiment_name]\n",
"summary_df = pd.DataFrame(index = ['Type', 'Status', 'Primary Metric', 'Iterations', 'Compute', 'Name'])\n", "summary_df = pd.DataFrame(index = ['Type', 'Status', 'Primary Metric', 'Iterations', 'Compute', 'Name'])\n",
"pattern = re.compile('^AutoML_[^_]*$')\n", "automl_runs = list(proj.get_runs(type='automl'))\n",
"all_runs = list(proj.get_runs(properties={'azureml.runsource': 'automl'}))\n", "automl_runs_project = []\n",
"for run in all_runs:\n", "for run in automl_runs:\n",
" if(pattern.match(run.id)):\n",
" properties = run.get_properties()\n", " properties = run.get_properties()\n",
" tags = run.get_tags()\n", " tags = run.get_tags()\n",
" amlsettings = eval(properties['RawAMLSettingsString'])\n", " amlsettings = json.loads(properties['AMLSettingsJsonString'])\n",
" if 'iterations' in tags:\n", " if 'iterations' in tags:\n",
" iterations = tags['iterations']\n", " iterations = tags['iterations']\n",
" else:\n", " else:\n",
" iterations = properties['num_iterations']\n", " iterations = properties['num_iterations']\n",
" summary_df[run.id] = [amlsettings['task_type'], run.get_details()['status'], properties['primary_metric'], iterations, properties['target'], amlsettings['name']]\n", " summary_df[run.id] = [amlsettings['task_type'], run.get_details()['status'], properties['primary_metric'], iterations, properties['target'], amlsettings['name']]\n",
" if run.get_details()['status'] == 'Completed':\n",
" automl_runs_project.append(run.id)\n",
" \n", " \n",
"from IPython.display import HTML\n", "from IPython.display import HTML\n",
"projname_html = HTML(\"<h3>{}</h3>\".format(proj.name))\n", "projname_html = HTML(\"<h3>{}</h3>\".format(proj.name))\n",
@@ -143,7 +145,7 @@
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"# Get details for an AutoML run\n", "### Get details for a run\n",
"\n", "\n",
"Copy the project name and run id from the previous cell output to find more details on a particular run." "Copy the project name and run id from the previous cell output to find more details on a particular run."
] ]
@@ -154,10 +156,10 @@
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"run_id = '' # Filling your own run_id from above run ids\n", "run_id = automl_runs_project[0] # Replace with your own run_id from above run ids\n",
"assert (run_id in summary_df.keys()),\"Run id not found! Please set run id to a value from above run ids\"\n", "assert (run_id in summary_df.keys()), \"Run id not found! Please set run id to a value from above run ids\"\n",
"\n", "\n",
"from azureml.train.widgets import RunDetails\n", "from azureml.widgets import RunDetails\n",
"\n", "\n",
"experiment = Experiment(ws, experiment_name)\n", "experiment = Experiment(ws, experiment_name)\n",
"ml_run = AutoMLRun(experiment = experiment, run_id = run_id)\n", "ml_run = AutoMLRun(experiment = experiment, run_id = run_id)\n",
@@ -166,7 +168,7 @@
"properties = ml_run.get_properties()\n", "properties = ml_run.get_properties()\n",
"tags = ml_run.get_tags()\n", "tags = ml_run.get_tags()\n",
"status = ml_run.get_details()\n", "status = ml_run.get_details()\n",
"amlsettings = eval(properties['RawAMLSettingsString'])\n", "amlsettings = json.loads(properties['AMLSettingsJsonString'])\n",
"if 'iterations' in tags:\n", "if 'iterations' in tags:\n",
" iterations = tags['iterations']\n", " iterations = tags['iterations']\n",
"else:\n", "else:\n",
@@ -204,14 +206,14 @@
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"# Download fitted models" "## Download"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Download the Best Model for Any Given Metric" "### Download the Best Model for Any Given Metric"
] ]
}, },
{ {
@@ -229,7 +231,7 @@
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Download the Model for Any Given Iteration" "### Download the Model for Any Given Iteration"
] ]
}, },
{ {
@@ -238,7 +240,7 @@
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"iteration = 4 # Replace with an iteration number.\n", "iteration = 1 # Replace with an iteration number.\n",
"best_run, fitted_model = ml_run.get_output(iteration = iteration)\n", "best_run, fitted_model = ml_run.get_output(iteration = iteration)\n",
"fitted_model" "fitted_model"
] ]
@@ -247,7 +249,14 @@
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"# Register fitted model for deployment\n", "## Register"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Register fitted model for deployment\n",
"If neither `metric` nor `iteration` are specified in the `register_model` call, the iteration with the best primary metric is registered." "If neither `metric` nor `iteration` are specified in the `register_model` call, the iteration with the best primary metric is registered."
] ]
}, },
@@ -260,14 +269,14 @@
"description = 'AutoML Model'\n", "description = 'AutoML Model'\n",
"tags = None\n", "tags = None\n",
"ml_run.register_model(description = description, tags = tags)\n", "ml_run.register_model(description = description, tags = tags)\n",
"ml_run.model_id # Use this id to deploy the model as a web service in Azure." "print(ml_run.model_id) # Use this id to deploy the model as a web service in Azure."
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Register the Best Model for Any Given Metric" "### Register the Best Model for Any Given Metric"
] ]
}, },
{ {
@@ -287,7 +296,7 @@
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Register the Model for Any Given Iteration" "### Register the Model for Any Given Iteration"
] ]
}, },
{ {
@@ -296,7 +305,7 @@
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"iteration = 4 # Replace with an iteration number.\n", "iteration = 1 # Replace with an iteration number.\n",
"description = 'AutoML Model'\n", "description = 'AutoML Model'\n",
"tags = None\n", "tags = None\n",
"ml_run.register_model(description = description, tags = tags, iteration = iteration)\n", "ml_run.register_model(description = description, tags = tags, iteration = iteration)\n",

View File

@@ -0,0 +1,493 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning\n",
"**BikeShare Demand Forecasting**\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Data](#Data)\n",
"1. [Train](#Train)\n",
"1. [Evaluate](#Evaluate)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"In this example, we show how AutoML can be used for bike share forecasting.\n",
"\n",
"The purpose is to demonstrate how to take advantage of the built-in holiday featurization, access the feature names, and further demonstrate how to work with the `forecast` function. Please also look at the additional forecasting notebooks, which document lagging, rolling windows, forecast quantiles, other ways to use the forecast function, and forecaster deployment.\n",
"\n",
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you would see\n",
"1. Creating an Experiment in an existing Workspace\n",
"2. Instantiating AutoMLConfig with new task type \"forecasting\" for timeseries data training, and other timeseries related settings: for this dataset we use the basic one: \"time_column_name\" \n",
"3. Training the Model using local compute\n",
"4. Exploring the results\n",
"5. Viewing the engineered names for featurized data and featurization summary for all raw features\n",
"6. Testing the fitted model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"import pandas as pd\n",
"import numpy as np\n",
"import logging\n",
"import warnings\n",
"# Squash warning messages for cleaner output in the notebook\n",
"warnings.showwarning = lambda *args, **kwargs: None\n",
"\n",
"\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.train.automl import AutoMLConfig\n",
"from matplotlib import pyplot as plt\n",
"from sklearn.metrics import mean_absolute_error, mean_squared_error"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As part of the setup you have already created a <b>Workspace</b>. For AutoML you would need to create an <b>Experiment</b>. An <b>Experiment</b> is a named object in a <b>Workspace</b>, which is used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# choose a name for the run history container in the workspace\n",
"experiment_name = 'automl-bikeshareforecasting'\n",
"# project folder\n",
"project_folder = './sample_projects/automl-local-bikeshareforecasting'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Run History Name'] = experiment_name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"outputDf.T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Data\n",
"Read bike share demand data from file, and preview data."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data = pd.read_csv('bike-no.csv', parse_dates=['date'])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's set up what we know abou the dataset. \n",
"\n",
"**Target column** is what we want to forecast.\n",
"\n",
"**Time column** is the time axis along which to predict.\n",
"\n",
"**Grain** is another word for an individual time series in your dataset. Grains are identified by values of the columns listed `grain_column_names`, for example \"store\" and \"item\" if your data has multiple time series of sales, one series for each combination of store and item sold.\n",
"\n",
"This dataset has only one time series. Please see the [orange juice notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales) for an example of a multi-time series dataset."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"target_column_name = 'cnt'\n",
"time_column_name = 'date'\n",
"grain_column_names = []"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Split the data\n",
"\n",
"The first split we make is into train and test sets. Note we are splitting on time."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"train = data[data[time_column_name] < '2012-09-01']\n",
"test = data[data[time_column_name] >= '2012-09-01']\n",
"\n",
"X_train = train.copy()\n",
"y_train = X_train.pop(target_column_name).values\n",
"\n",
"X_test = test.copy()\n",
"y_test = X_test.pop(target_column_name).values\n",
"\n",
"print(X_train.shape)\n",
"print(y_train.shape)\n",
"print(X_test.shape)\n",
"print(y_test.shape)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Setting forecaster maximum horizon \n",
"\n",
"Assuming your test data forms a full and regular time series(regular time intervals and no holes), \n",
"the maximum horizon you will need to forecast is the length of the longest grain in your test set."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if len(grain_column_names) == 0:\n",
" max_horizon = len(X_test)\n",
"else:\n",
" max_horizon = X_test.groupby(grain_column_names)[time_column_name].count().max()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train\n",
"\n",
"Instantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**task**|forecasting|\n",
"|**primary_metric**|This is the metric that you want to optimize.<br> Forecasting supports the following primary metrics <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i>\n",
"|**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data|\n",
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
"|**y**|(sparse) array-like, shape = [n_samples, ], targets values.|\n",
"|**n_cross_validations**|Number of cross validation splits.|\n",
"|**country**|The country used to generate holiday features. These should be ISO 3166 two-letter country codes (i.e. 'US', 'GB').|\n",
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"time_column_name = 'date'\n",
"automl_settings = {\n",
" \"time_column_name\": time_column_name,\n",
" # these columns are a breakdown of the total and therefore a leak\n",
" \"drop_column_names\": ['casual', 'registered'],\n",
" # knowing the country allows Automated ML to bring in holidays\n",
" \"country\" : 'US',\n",
" \"max_horizon\" : max_horizon,\n",
" \"target_lags\": 1 \n",
"}\n",
"\n",
"automl_config = AutoMLConfig(task = 'forecasting', \n",
" primary_metric='normalized_root_mean_squared_error',\n",
" iterations = 10,\n",
" iteration_timeout_minutes = 5,\n",
" X = X_train,\n",
" y = y_train,\n",
" n_cross_validations = 3, \n",
" path=project_folder,\n",
" verbosity = logging.INFO,\n",
" **automl_settings)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We will now run the experiment, starting with 10 iterations of model search. Experiment can be continued for more iterations if the results are not yet good. You will see the currently running iterations printing to the console."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = experiment.submit(automl_config, show_output=True)"
]
},
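{
"cell_type": "markdown",
"metadata": {},
"source": [
"If 10 iterations turn out not to be enough, the run can optionally be continued. This is a sketch based on the `continue_experiment` pattern from the local classification notebook; the extra iteration count is illustrative, and support for continuing forecasting runs may depend on your SDK version."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional sketch: run a few more iterations on the same experiment (illustrative count).\n",
"local_run = local_run.continue_experiment(X = X_train,\n",
"                                          y = y_train,\n",
"                                          show_output = True,\n",
"                                          iterations = 5)"
]
},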
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Displaying the run objects gives you links to the visual tools in the Azure Portal. Go try them!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model\n",
"Below we select the best pipeline from our iterations. The get_output method on automl_classifier returns the best run and the fitted model for the last fit invocation. There are overloads on get_output that allow you to retrieve the best run and fitted model for any logged metric or a particular iteration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = local_run.get_output()\n",
"fitted_model.steps"
]
},
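{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of those overloads follows. The variable names are illustrative; the metric must be one logged for the run, such as the forecasting metrics listed in the configuration table above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# sketch: retrieve the best run/model for a specific logged metric\n",
"best_run_r2, fitted_model_r2 = local_run.get_output(metric='r2_score')\n",
"# sketch: retrieve the run/model from a particular iteration\n",
"run_iter0, model_iter0 = local_run.get_output(iteration=0)"
]
},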
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### View the engineered names for featurized data\n",
"\n",
"You can accees the engineered feature names generated in time-series featurization. Note that a number of named holiday periods are represented. We recommend that you have at least one year of data when using this feature to ensure that all yearly holidays are captured in the training featurization."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### View the featurization summary\n",
"\n",
"You can also see what featurization steps were performed on different raw features in the user data. For each raw feature in the user data, the following information is displayed:\n",
"\n",
"- Raw feature name\n",
"- Number of engineered features formed out of this raw feature\n",
"- Type detected\n",
"- If feature was dropped\n",
"- List of feature transformations for the raw feature"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"fitted_model.named_steps['timeseriestransformer'].get_featurization_summary()"
]
},
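{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a convenience, you can tabulate the summary with pandas. This is a minimal sketch; it assumes the summary is returned as a list of per-feature records."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# sketch: view the featurization summary as a table (assumes a list of records)\n",
"featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary()\n",
"pd.DataFrame.from_records(featurization_summary)"
]
},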
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Test the Best Fitted Model\n",
"\n",
"Predict on training and test set, and calculate residual values.\n",
"\n",
"We always score on the original dataset whose schema matches the scheme of the training dataset."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"X_test.head()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"y_query = y_test.copy().astype(np.float)\n",
"y_query.fill(np.NaN)\n",
"y_fcst, X_trans = fitted_model.forecast(X_test, y_query)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def align_outputs(y_predicted, X_trans, X_test, y_test, predicted_column_name = 'predicted'):\n",
" \"\"\"\n",
" Demonstrates how to get the output aligned to the inputs\n",
" using pandas indexes. Helps understand what happened if\n",
" the output's shape differs from the input shape, or if\n",
" the data got re-sorted by time and grain during forecasting.\n",
" \n",
" Typical causes of misalignment are:\n",
" * we predicted some periods that were missing in actuals -> drop from eval\n",
" * model was asked to predict past max_horizon -> increase max horizon\n",
" * data at start of X_test was needed for lags -> provide previous periods\n",
" \"\"\"\n",
" df_fcst = pd.DataFrame({predicted_column_name : y_predicted})\n",
" # y and X outputs are aligned by forecast() function contract\n",
" df_fcst.index = X_trans.index\n",
" \n",
" # align original X_test to y_test \n",
" X_test_full = X_test.copy()\n",
" X_test_full[target_column_name] = y_test\n",
"\n",
" # X_test_full's index does not include origin, so reset for merge\n",
" df_fcst.reset_index(inplace=True)\n",
" X_test_full = X_test_full.reset_index().drop(columns='index')\n",
" together = df_fcst.merge(X_test_full, how='right')\n",
" \n",
" # drop rows where prediction or actuals are nan \n",
" # happens because of missing actuals \n",
" # or at edges of time due to lags/rolling windows\n",
" clean = together[together[[target_column_name, predicted_column_name]].notnull().all(axis=1)]\n",
" return(clean)\n",
"\n",
"df_all = align_outputs(y_fcst, X_trans, X_test, y_test)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def MAPE(actual, pred):\n",
" \"\"\"\n",
" Calculate mean absolute percentage error.\n",
" Remove NA and values where actual is close to zero\n",
" \"\"\"\n",
" not_na = ~(np.isnan(actual) | np.isnan(pred))\n",
" not_zero = ~np.isclose(actual, 0.0)\n",
" actual_safe = actual[not_na & not_zero]\n",
" pred_safe = pred[not_na & not_zero]\n",
" APE = 100*np.abs((actual_safe - pred_safe)/actual_safe)\n",
" return np.mean(APE)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(\"Simple forecasting model\")\n",
"rmse = np.sqrt(mean_squared_error(df_all[target_column_name], df_all['predicted']))\n",
"print(\"[Test Data] \\nRoot Mean squared error: %.2f\" % rmse)\n",
"mae = mean_absolute_error(df_all[target_column_name], df_all['predicted'])\n",
"print('mean_absolute_error score: %.2f' % mae)\n",
"print('MAPE: %.2f' % MAPE(df_all[target_column_name], df_all['predicted']))\n",
"\n",
"# Plot outputs\n",
"%matplotlib notebook\n",
"test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')\n",
"test_test = plt.scatter(y_test, y_test, color='g')\n",
"plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)\n",
"plt.show()"
]
}
],
"metadata": {
"authors": [
{
"name": "xiaga@microsoft.com, tosingli@microsoft.com"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -0,0 +1,732 @@
instant,date,season,yr,mnth,weekday,weathersit,temp,atemp,hum,windspeed,casual,registered,cnt
1,1/1/2011,1,0,1,6,2,0.344167,0.363625,0.805833,0.160446,331,654,985
2,1/2/2011,1,0,1,0,2,0.363478,0.353739,0.696087,0.248539,131,670,801
3,1/3/2011,1,0,1,1,1,0.196364,0.189405,0.437273,0.248309,120,1229,1349
4,1/4/2011,1,0,1,2,1,0.2,0.212122,0.590435,0.160296,108,1454,1562
5,1/5/2011,1,0,1,3,1,0.226957,0.22927,0.436957,0.1869,82,1518,1600
6,1/6/2011,1,0,1,4,1,0.204348,0.233209,0.518261,0.0895652,88,1518,1606
7,1/7/2011,1,0,1,5,2,0.196522,0.208839,0.498696,0.168726,148,1362,1510
8,1/8/2011,1,0,1,6,2,0.165,0.162254,0.535833,0.266804,68,891,959
9,1/9/2011,1,0,1,0,1,0.138333,0.116175,0.434167,0.36195,54,768,822
10,1/10/2011,1,0,1,1,1,0.150833,0.150888,0.482917,0.223267,41,1280,1321
11,1/11/2011,1,0,1,2,2,0.169091,0.191464,0.686364,0.122132,43,1220,1263
12,1/12/2011,1,0,1,3,1,0.172727,0.160473,0.599545,0.304627,25,1137,1162
13,1/13/2011,1,0,1,4,1,0.165,0.150883,0.470417,0.301,38,1368,1406
14,1/14/2011,1,0,1,5,1,0.16087,0.188413,0.537826,0.126548,54,1367,1421
15,1/15/2011,1,0,1,6,2,0.233333,0.248112,0.49875,0.157963,222,1026,1248
16,1/16/2011,1,0,1,0,1,0.231667,0.234217,0.48375,0.188433,251,953,1204
17,1/17/2011,1,0,1,1,2,0.175833,0.176771,0.5375,0.194017,117,883,1000
18,1/18/2011,1,0,1,2,2,0.216667,0.232333,0.861667,0.146775,9,674,683
19,1/19/2011,1,0,1,3,2,0.292174,0.298422,0.741739,0.208317,78,1572,1650
20,1/20/2011,1,0,1,4,2,0.261667,0.25505,0.538333,0.195904,83,1844,1927
21,1/21/2011,1,0,1,5,1,0.1775,0.157833,0.457083,0.353242,75,1468,1543
22,1/22/2011,1,0,1,6,1,0.0591304,0.0790696,0.4,0.17197,93,888,981
23,1/23/2011,1,0,1,0,1,0.0965217,0.0988391,0.436522,0.2466,150,836,986
24,1/24/2011,1,0,1,1,1,0.0973913,0.11793,0.491739,0.15833,86,1330,1416
25,1/25/2011,1,0,1,2,2,0.223478,0.234526,0.616957,0.129796,186,1799,1985
26,1/26/2011,1,0,1,3,3,0.2175,0.2036,0.8625,0.29385,34,472,506
27,1/27/2011,1,0,1,4,1,0.195,0.2197,0.6875,0.113837,15,416,431
28,1/28/2011,1,0,1,5,2,0.203478,0.223317,0.793043,0.1233,38,1129,1167
29,1/29/2011,1,0,1,6,1,0.196522,0.212126,0.651739,0.145365,123,975,1098
30,1/30/2011,1,0,1,0,1,0.216522,0.250322,0.722174,0.0739826,140,956,1096
31,1/31/2011,1,0,1,1,2,0.180833,0.18625,0.60375,0.187192,42,1459,1501
32,2/1/2011,1,0,2,2,2,0.192174,0.23453,0.829565,0.053213,47,1313,1360
33,2/2/2011,1,0,2,3,2,0.26,0.254417,0.775417,0.264308,72,1454,1526
34,2/3/2011,1,0,2,4,1,0.186957,0.177878,0.437826,0.277752,61,1489,1550
35,2/4/2011,1,0,2,5,2,0.211304,0.228587,0.585217,0.127839,88,1620,1708
36,2/5/2011,1,0,2,6,2,0.233333,0.243058,0.929167,0.161079,100,905,1005
37,2/6/2011,1,0,2,0,1,0.285833,0.291671,0.568333,0.1418,354,1269,1623
38,2/7/2011,1,0,2,1,1,0.271667,0.303658,0.738333,0.0454083,120,1592,1712
39,2/8/2011,1,0,2,2,1,0.220833,0.198246,0.537917,0.36195,64,1466,1530
40,2/9/2011,1,0,2,3,2,0.134783,0.144283,0.494783,0.188839,53,1552,1605
41,2/10/2011,1,0,2,4,1,0.144348,0.149548,0.437391,0.221935,47,1491,1538
42,2/11/2011,1,0,2,5,1,0.189091,0.213509,0.506364,0.10855,149,1597,1746
43,2/12/2011,1,0,2,6,1,0.2225,0.232954,0.544167,0.203367,288,1184,1472
44,2/13/2011,1,0,2,0,1,0.316522,0.324113,0.457391,0.260883,397,1192,1589
45,2/14/2011,1,0,2,1,1,0.415,0.39835,0.375833,0.417908,208,1705,1913
46,2/15/2011,1,0,2,2,1,0.266087,0.254274,0.314348,0.291374,140,1675,1815
47,2/16/2011,1,0,2,3,1,0.318261,0.3162,0.423478,0.251791,218,1897,2115
48,2/17/2011,1,0,2,4,1,0.435833,0.428658,0.505,0.230104,259,2216,2475
49,2/18/2011,1,0,2,5,1,0.521667,0.511983,0.516667,0.264925,579,2348,2927
50,2/19/2011,1,0,2,6,1,0.399167,0.391404,0.187917,0.507463,532,1103,1635
51,2/20/2011,1,0,2,0,1,0.285217,0.27733,0.407826,0.223235,639,1173,1812
52,2/21/2011,1,0,2,1,2,0.303333,0.284075,0.605,0.307846,195,912,1107
53,2/22/2011,1,0,2,2,1,0.182222,0.186033,0.577778,0.195683,74,1376,1450
54,2/23/2011,1,0,2,3,1,0.221739,0.245717,0.423043,0.094113,139,1778,1917
55,2/24/2011,1,0,2,4,2,0.295652,0.289191,0.697391,0.250496,100,1707,1807
56,2/25/2011,1,0,2,5,2,0.364348,0.350461,0.712174,0.346539,120,1341,1461
57,2/26/2011,1,0,2,6,1,0.2825,0.282192,0.537917,0.186571,424,1545,1969
58,2/27/2011,1,0,2,0,1,0.343478,0.351109,0.68,0.125248,694,1708,2402
59,2/28/2011,1,0,2,1,2,0.407273,0.400118,0.876364,0.289686,81,1365,1446
60,3/1/2011,1,0,3,2,1,0.266667,0.263879,0.535,0.216425,137,1714,1851
61,3/2/2011,1,0,3,3,1,0.335,0.320071,0.449583,0.307833,231,1903,2134
62,3/3/2011,1,0,3,4,1,0.198333,0.200133,0.318333,0.225754,123,1562,1685
63,3/4/2011,1,0,3,5,2,0.261667,0.255679,0.610417,0.203346,214,1730,1944
64,3/5/2011,1,0,3,6,2,0.384167,0.378779,0.789167,0.251871,640,1437,2077
65,3/6/2011,1,0,3,0,2,0.376522,0.366252,0.948261,0.343287,114,491,605
66,3/7/2011,1,0,3,1,1,0.261739,0.238461,0.551304,0.341352,244,1628,1872
67,3/8/2011,1,0,3,2,1,0.2925,0.3024,0.420833,0.12065,316,1817,2133
68,3/9/2011,1,0,3,3,2,0.295833,0.286608,0.775417,0.22015,191,1700,1891
69,3/10/2011,1,0,3,4,3,0.389091,0.385668,0,0.261877,46,577,623
70,3/11/2011,1,0,3,5,2,0.316522,0.305,0.649565,0.23297,247,1730,1977
71,3/12/2011,1,0,3,6,1,0.329167,0.32575,0.594583,0.220775,724,1408,2132
72,3/13/2011,1,0,3,0,1,0.384348,0.380091,0.527391,0.270604,982,1435,2417
73,3/14/2011,1,0,3,1,1,0.325217,0.332,0.496957,0.136926,359,1687,2046
74,3/15/2011,1,0,3,2,2,0.317391,0.318178,0.655652,0.184309,289,1767,2056
75,3/16/2011,1,0,3,3,2,0.365217,0.36693,0.776522,0.203117,321,1871,2192
76,3/17/2011,1,0,3,4,1,0.415,0.410333,0.602917,0.209579,424,2320,2744
77,3/18/2011,1,0,3,5,1,0.54,0.527009,0.525217,0.231017,884,2355,3239
78,3/19/2011,1,0,3,6,1,0.4725,0.466525,0.379167,0.368167,1424,1693,3117
79,3/20/2011,1,0,3,0,1,0.3325,0.32575,0.47375,0.207721,1047,1424,2471
80,3/21/2011,2,0,3,1,2,0.430435,0.409735,0.737391,0.288783,401,1676,2077
81,3/22/2011,2,0,3,2,1,0.441667,0.440642,0.624583,0.22575,460,2243,2703
82,3/23/2011,2,0,3,3,2,0.346957,0.337939,0.839565,0.234261,203,1918,2121
83,3/24/2011,2,0,3,4,2,0.285,0.270833,0.805833,0.243787,166,1699,1865
84,3/25/2011,2,0,3,5,1,0.264167,0.256312,0.495,0.230725,300,1910,2210
85,3/26/2011,2,0,3,6,1,0.265833,0.257571,0.394167,0.209571,981,1515,2496
86,3/27/2011,2,0,3,0,2,0.253043,0.250339,0.493913,0.1843,472,1221,1693
87,3/28/2011,2,0,3,1,1,0.264348,0.257574,0.302174,0.212204,222,1806,2028
88,3/29/2011,2,0,3,2,1,0.3025,0.292908,0.314167,0.226996,317,2108,2425
89,3/30/2011,2,0,3,3,2,0.3,0.29735,0.646667,0.172888,168,1368,1536
90,3/31/2011,2,0,3,4,3,0.268333,0.257575,0.918333,0.217646,179,1506,1685
91,4/1/2011,2,0,4,5,2,0.3,0.283454,0.68625,0.258708,307,1920,2227
92,4/2/2011,2,0,4,6,2,0.315,0.315637,0.65375,0.197146,898,1354,2252
93,4/3/2011,2,0,4,0,1,0.378333,0.378767,0.48,0.182213,1651,1598,3249
94,4/4/2011,2,0,4,1,1,0.573333,0.542929,0.42625,0.385571,734,2381,3115
95,4/5/2011,2,0,4,2,2,0.414167,0.39835,0.642083,0.388067,167,1628,1795
96,4/6/2011,2,0,4,3,1,0.390833,0.387608,0.470833,0.263063,413,2395,2808
97,4/7/2011,2,0,4,4,1,0.4375,0.433696,0.602917,0.162312,571,2570,3141
98,4/8/2011,2,0,4,5,2,0.335833,0.324479,0.83625,0.226992,172,1299,1471
99,4/9/2011,2,0,4,6,2,0.3425,0.341529,0.8775,0.133083,879,1576,2455
100,4/10/2011,2,0,4,0,2,0.426667,0.426737,0.8575,0.146767,1188,1707,2895
101,4/11/2011,2,0,4,1,2,0.595652,0.565217,0.716956,0.324474,855,2493,3348
102,4/12/2011,2,0,4,2,2,0.5025,0.493054,0.739167,0.274879,257,1777,2034
103,4/13/2011,2,0,4,3,2,0.4125,0.417283,0.819167,0.250617,209,1953,2162
104,4/14/2011,2,0,4,4,1,0.4675,0.462742,0.540417,0.1107,529,2738,3267
105,4/15/2011,2,0,4,5,1,0.446667,0.441913,0.67125,0.226375,642,2484,3126
106,4/16/2011,2,0,4,6,3,0.430833,0.425492,0.888333,0.340808,121,674,795
107,4/17/2011,2,0,4,0,1,0.456667,0.445696,0.479583,0.303496,1558,2186,3744
108,4/18/2011,2,0,4,1,1,0.5125,0.503146,0.5425,0.163567,669,2760,3429
109,4/19/2011,2,0,4,2,2,0.505833,0.489258,0.665833,0.157971,409,2795,3204
110,4/20/2011,2,0,4,3,1,0.595,0.564392,0.614167,0.241925,613,3331,3944
111,4/21/2011,2,0,4,4,1,0.459167,0.453892,0.407083,0.325258,745,3444,4189
112,4/22/2011,2,0,4,5,2,0.336667,0.321954,0.729583,0.219521,177,1506,1683
113,4/23/2011,2,0,4,6,2,0.46,0.450121,0.887917,0.230725,1462,2574,4036
114,4/24/2011,2,0,4,0,2,0.581667,0.551763,0.810833,0.192175,1710,2481,4191
115,4/25/2011,2,0,4,1,1,0.606667,0.5745,0.776667,0.185333,773,3300,4073
116,4/26/2011,2,0,4,2,1,0.631667,0.594083,0.729167,0.3265,678,3722,4400
117,4/27/2011,2,0,4,3,2,0.62,0.575142,0.835417,0.3122,547,3325,3872
118,4/28/2011,2,0,4,4,2,0.6175,0.578929,0.700833,0.320908,569,3489,4058
119,4/29/2011,2,0,4,5,1,0.51,0.497463,0.457083,0.240063,878,3717,4595
120,4/30/2011,2,0,4,6,1,0.4725,0.464021,0.503333,0.235075,1965,3347,5312
121,5/1/2011,2,0,5,0,2,0.451667,0.448204,0.762083,0.106354,1138,2213,3351
122,5/2/2011,2,0,5,1,2,0.549167,0.532833,0.73,0.183454,847,3554,4401
123,5/3/2011,2,0,5,2,2,0.616667,0.582079,0.697083,0.342667,603,3848,4451
124,5/4/2011,2,0,5,3,2,0.414167,0.40465,0.737083,0.328996,255,2378,2633
125,5/5/2011,2,0,5,4,1,0.459167,0.441917,0.444167,0.295392,614,3819,4433
126,5/6/2011,2,0,5,5,1,0.479167,0.474117,0.59,0.228246,894,3714,4608
127,5/7/2011,2,0,5,6,1,0.52,0.512621,0.54125,0.16045,1612,3102,4714
128,5/8/2011,2,0,5,0,1,0.528333,0.518933,0.631667,0.0746375,1401,2932,4333
129,5/9/2011,2,0,5,1,1,0.5325,0.525246,0.58875,0.176,664,3698,4362
130,5/10/2011,2,0,5,2,1,0.5325,0.522721,0.489167,0.115671,694,4109,4803
131,5/11/2011,2,0,5,3,1,0.5425,0.5284,0.632917,0.120642,550,3632,4182
132,5/12/2011,2,0,5,4,1,0.535,0.523363,0.7475,0.189667,695,4169,4864
133,5/13/2011,2,0,5,5,2,0.5125,0.4943,0.863333,0.179725,692,3413,4105
134,5/14/2011,2,0,5,6,2,0.520833,0.500629,0.9225,0.13495,902,2507,3409
135,5/15/2011,2,0,5,0,2,0.5625,0.536,0.867083,0.152979,1582,2971,4553
136,5/16/2011,2,0,5,1,1,0.5775,0.550512,0.787917,0.126871,773,3185,3958
137,5/17/2011,2,0,5,2,2,0.561667,0.538529,0.837917,0.277354,678,3445,4123
138,5/18/2011,2,0,5,3,2,0.55,0.527158,0.87,0.201492,536,3319,3855
139,5/19/2011,2,0,5,4,2,0.530833,0.510742,0.829583,0.108213,735,3840,4575
140,5/20/2011,2,0,5,5,1,0.536667,0.529042,0.719583,0.125013,909,4008,4917
141,5/21/2011,2,0,5,6,1,0.6025,0.571975,0.626667,0.12065,2258,3547,5805
142,5/22/2011,2,0,5,0,1,0.604167,0.5745,0.749583,0.148008,1576,3084,4660
143,5/23/2011,2,0,5,1,2,0.631667,0.590296,0.81,0.233842,836,3438,4274
144,5/24/2011,2,0,5,2,2,0.66,0.604813,0.740833,0.207092,659,3833,4492
145,5/25/2011,2,0,5,3,1,0.660833,0.615542,0.69625,0.154233,740,4238,4978
146,5/26/2011,2,0,5,4,1,0.708333,0.654688,0.6775,0.199642,758,3919,4677
147,5/27/2011,2,0,5,5,1,0.681667,0.637008,0.65375,0.240679,871,3808,4679
148,5/28/2011,2,0,5,6,1,0.655833,0.612379,0.729583,0.230092,2001,2757,4758
149,5/29/2011,2,0,5,0,1,0.6675,0.61555,0.81875,0.213938,2355,2433,4788
150,5/30/2011,2,0,5,1,1,0.733333,0.671092,0.685,0.131225,1549,2549,4098
151,5/31/2011,2,0,5,2,1,0.775,0.725383,0.636667,0.111329,673,3309,3982
152,6/1/2011,2,0,6,3,2,0.764167,0.720967,0.677083,0.207092,513,3461,3974
153,6/2/2011,2,0,6,4,1,0.715,0.643942,0.305,0.292287,736,4232,4968
154,6/3/2011,2,0,6,5,1,0.62,0.587133,0.354167,0.253121,898,4414,5312
155,6/4/2011,2,0,6,6,1,0.635,0.594696,0.45625,0.123142,1869,3473,5342
156,6/5/2011,2,0,6,0,2,0.648333,0.616804,0.6525,0.138692,1685,3221,4906
157,6/6/2011,2,0,6,1,1,0.678333,0.621858,0.6,0.121896,673,3875,4548
158,6/7/2011,2,0,6,2,1,0.7075,0.65595,0.597917,0.187808,763,4070,4833
159,6/8/2011,2,0,6,3,1,0.775833,0.727279,0.622083,0.136817,676,3725,4401
160,6/9/2011,2,0,6,4,2,0.808333,0.757579,0.568333,0.149883,563,3352,3915
161,6/10/2011,2,0,6,5,1,0.755,0.703292,0.605,0.140554,815,3771,4586
162,6/11/2011,2,0,6,6,1,0.725,0.678038,0.654583,0.15485,1729,3237,4966
163,6/12/2011,2,0,6,0,1,0.6925,0.643325,0.747917,0.163567,1467,2993,4460
164,6/13/2011,2,0,6,1,1,0.635,0.601654,0.494583,0.30535,863,4157,5020
165,6/14/2011,2,0,6,2,1,0.604167,0.591546,0.507083,0.269283,727,4164,4891
166,6/15/2011,2,0,6,3,1,0.626667,0.587754,0.471667,0.167912,769,4411,5180
167,6/16/2011,2,0,6,4,2,0.628333,0.595346,0.688333,0.206471,545,3222,3767
168,6/17/2011,2,0,6,5,1,0.649167,0.600383,0.735833,0.143029,863,3981,4844
169,6/18/2011,2,0,6,6,1,0.696667,0.643954,0.670417,0.119408,1807,3312,5119
170,6/19/2011,2,0,6,0,2,0.699167,0.645846,0.666667,0.102,1639,3105,4744
171,6/20/2011,2,0,6,1,2,0.635,0.595346,0.74625,0.155475,699,3311,4010
172,6/21/2011,3,0,6,2,2,0.680833,0.637646,0.770417,0.171025,774,4061,4835
173,6/22/2011,3,0,6,3,1,0.733333,0.693829,0.7075,0.172262,661,3846,4507
174,6/23/2011,3,0,6,4,2,0.728333,0.693833,0.703333,0.238804,746,4044,4790
175,6/24/2011,3,0,6,5,1,0.724167,0.656583,0.573333,0.222025,969,4022,4991
176,6/25/2011,3,0,6,6,1,0.695,0.643313,0.483333,0.209571,1782,3420,5202
177,6/26/2011,3,0,6,0,1,0.68,0.637629,0.513333,0.0945333,1920,3385,5305
178,6/27/2011,3,0,6,1,2,0.6825,0.637004,0.658333,0.107588,854,3854,4708
179,6/28/2011,3,0,6,2,1,0.744167,0.692558,0.634167,0.144283,732,3916,4648
180,6/29/2011,3,0,6,3,1,0.728333,0.654688,0.497917,0.261821,848,4377,5225
181,6/30/2011,3,0,6,4,1,0.696667,0.637008,0.434167,0.185312,1027,4488,5515
182,7/1/2011,3,0,7,5,1,0.7225,0.652162,0.39625,0.102608,1246,4116,5362
183,7/2/2011,3,0,7,6,1,0.738333,0.667308,0.444583,0.115062,2204,2915,5119
184,7/3/2011,3,0,7,0,2,0.716667,0.668575,0.6825,0.228858,2282,2367,4649
185,7/4/2011,3,0,7,1,2,0.726667,0.665417,0.637917,0.0814792,3065,2978,6043
186,7/5/2011,3,0,7,2,1,0.746667,0.696338,0.590417,0.126258,1031,3634,4665
187,7/6/2011,3,0,7,3,1,0.72,0.685633,0.743333,0.149883,784,3845,4629
188,7/7/2011,3,0,7,4,1,0.75,0.686871,0.65125,0.1592,754,3838,4592
189,7/8/2011,3,0,7,5,2,0.709167,0.670483,0.757917,0.225129,692,3348,4040
190,7/9/2011,3,0,7,6,1,0.733333,0.664158,0.609167,0.167912,1988,3348,5336
191,7/10/2011,3,0,7,0,1,0.7475,0.690025,0.578333,0.183471,1743,3138,4881
192,7/11/2011,3,0,7,1,1,0.7625,0.729804,0.635833,0.282337,723,3363,4086
193,7/12/2011,3,0,7,2,1,0.794167,0.739275,0.559167,0.200254,662,3596,4258
194,7/13/2011,3,0,7,3,1,0.746667,0.689404,0.631667,0.146133,748,3594,4342
195,7/14/2011,3,0,7,4,1,0.680833,0.635104,0.47625,0.240667,888,4196,5084
196,7/15/2011,3,0,7,5,1,0.663333,0.624371,0.59125,0.182833,1318,4220,5538
197,7/16/2011,3,0,7,6,1,0.686667,0.638263,0.585,0.208342,2418,3505,5923
198,7/17/2011,3,0,7,0,1,0.719167,0.669833,0.604167,0.245033,2006,3296,5302
199,7/18/2011,3,0,7,1,1,0.746667,0.703925,0.65125,0.215804,841,3617,4458
200,7/19/2011,3,0,7,2,1,0.776667,0.747479,0.650417,0.1306,752,3789,4541
201,7/20/2011,3,0,7,3,1,0.768333,0.74685,0.707083,0.113817,644,3688,4332
202,7/21/2011,3,0,7,4,2,0.815,0.826371,0.69125,0.222021,632,3152,3784
203,7/22/2011,3,0,7,5,1,0.848333,0.840896,0.580417,0.1331,562,2825,3387
204,7/23/2011,3,0,7,6,1,0.849167,0.804287,0.5,0.131221,987,2298,3285
205,7/24/2011,3,0,7,0,1,0.83,0.794829,0.550833,0.169171,1050,2556,3606
206,7/25/2011,3,0,7,1,1,0.743333,0.720958,0.757083,0.0908083,568,3272,3840
207,7/26/2011,3,0,7,2,1,0.771667,0.696979,0.540833,0.200258,750,3840,4590
208,7/27/2011,3,0,7,3,1,0.775,0.690667,0.402917,0.183463,755,3901,4656
209,7/28/2011,3,0,7,4,1,0.779167,0.7399,0.583333,0.178479,606,3784,4390
210,7/29/2011,3,0,7,5,1,0.838333,0.785967,0.5425,0.174138,670,3176,3846
211,7/30/2011,3,0,7,6,1,0.804167,0.728537,0.465833,0.168537,1559,2916,4475
212,7/31/2011,3,0,7,0,1,0.805833,0.729796,0.480833,0.164813,1524,2778,4302
213,8/1/2011,3,0,8,1,1,0.771667,0.703292,0.550833,0.156717,729,3537,4266
214,8/2/2011,3,0,8,2,1,0.783333,0.707071,0.49125,0.20585,801,4044,4845
215,8/3/2011,3,0,8,3,2,0.731667,0.679937,0.6575,0.135583,467,3107,3574
216,8/4/2011,3,0,8,4,2,0.71,0.664788,0.7575,0.19715,799,3777,4576
217,8/5/2011,3,0,8,5,1,0.710833,0.656567,0.630833,0.184696,1023,3843,4866
218,8/6/2011,3,0,8,6,2,0.716667,0.676154,0.755,0.22825,1521,2773,4294
219,8/7/2011,3,0,8,0,1,0.7425,0.715292,0.752917,0.201487,1298,2487,3785
220,8/8/2011,3,0,8,1,1,0.765,0.703283,0.592083,0.192175,846,3480,4326
221,8/9/2011,3,0,8,2,1,0.775,0.724121,0.570417,0.151121,907,3695,4602
222,8/10/2011,3,0,8,3,1,0.766667,0.684983,0.424167,0.200258,884,3896,4780
223,8/11/2011,3,0,8,4,1,0.7175,0.651521,0.42375,0.164796,812,3980,4792
224,8/12/2011,3,0,8,5,1,0.708333,0.654042,0.415,0.125621,1051,3854,4905
225,8/13/2011,3,0,8,6,2,0.685833,0.645858,0.729583,0.211454,1504,2646,4150
226,8/14/2011,3,0,8,0,2,0.676667,0.624388,0.8175,0.222633,1338,2482,3820
227,8/15/2011,3,0,8,1,1,0.665833,0.616167,0.712083,0.208954,775,3563,4338
228,8/16/2011,3,0,8,2,1,0.700833,0.645837,0.578333,0.236329,721,4004,4725
229,8/17/2011,3,0,8,3,1,0.723333,0.666671,0.575417,0.143667,668,4026,4694
230,8/18/2011,3,0,8,4,1,0.711667,0.662258,0.654583,0.233208,639,3166,3805
231,8/19/2011,3,0,8,5,2,0.685,0.633221,0.722917,0.139308,797,3356,4153
232,8/20/2011,3,0,8,6,1,0.6975,0.648996,0.674167,0.104467,1914,3277,5191
233,8/21/2011,3,0,8,0,1,0.710833,0.675525,0.77,0.248754,1249,2624,3873
234,8/22/2011,3,0,8,1,1,0.691667,0.638254,0.47,0.27675,833,3925,4758
235,8/23/2011,3,0,8,2,1,0.640833,0.606067,0.455417,0.146763,1281,4614,5895
236,8/24/2011,3,0,8,3,1,0.673333,0.630692,0.605,0.253108,949,4181,5130
237,8/25/2011,3,0,8,4,2,0.684167,0.645854,0.771667,0.210833,435,3107,3542
238,8/26/2011,3,0,8,5,1,0.7,0.659733,0.76125,0.0839625,768,3893,4661
239,8/27/2011,3,0,8,6,2,0.68,0.635556,0.85,0.375617,226,889,1115
240,8/28/2011,3,0,8,0,1,0.707059,0.647959,0.561765,0.304659,1415,2919,4334
241,8/29/2011,3,0,8,1,1,0.636667,0.607958,0.554583,0.159825,729,3905,4634
242,8/30/2011,3,0,8,2,1,0.639167,0.594704,0.548333,0.125008,775,4429,5204
243,8/31/2011,3,0,8,3,1,0.656667,0.611121,0.597917,0.0833333,688,4370,5058
244,9/1/2011,3,0,9,4,1,0.655,0.614921,0.639167,0.141796,783,4332,5115
245,9/2/2011,3,0,9,5,2,0.643333,0.604808,0.727083,0.139929,875,3852,4727
246,9/3/2011,3,0,9,6,1,0.669167,0.633213,0.716667,0.185325,1935,2549,4484
247,9/4/2011,3,0,9,0,1,0.709167,0.665429,0.742083,0.206467,2521,2419,4940
248,9/5/2011,3,0,9,1,2,0.673333,0.625646,0.790417,0.212696,1236,2115,3351
249,9/6/2011,3,0,9,2,3,0.54,0.5152,0.886957,0.343943,204,2506,2710
250,9/7/2011,3,0,9,3,3,0.599167,0.544229,0.917083,0.0970208,118,1878,1996
251,9/8/2011,3,0,9,4,3,0.633913,0.555361,0.939565,0.192748,153,1689,1842
252,9/9/2011,3,0,9,5,2,0.65,0.578946,0.897917,0.124379,417,3127,3544
253,9/10/2011,3,0,9,6,1,0.66,0.607962,0.75375,0.153608,1750,3595,5345
254,9/11/2011,3,0,9,0,1,0.653333,0.609229,0.71375,0.115054,1633,3413,5046
255,9/12/2011,3,0,9,1,1,0.644348,0.60213,0.692174,0.088913,690,4023,4713
256,9/13/2011,3,0,9,2,1,0.650833,0.603554,0.7125,0.141804,701,4062,4763
257,9/14/2011,3,0,9,3,1,0.673333,0.6269,0.697083,0.1673,647,4138,4785
258,9/15/2011,3,0,9,4,2,0.5775,0.553671,0.709167,0.271146,428,3231,3659
259,9/16/2011,3,0,9,5,2,0.469167,0.461475,0.590417,0.164183,742,4018,4760
260,9/17/2011,3,0,9,6,2,0.491667,0.478512,0.718333,0.189675,1434,3077,4511
261,9/18/2011,3,0,9,0,1,0.5075,0.490537,0.695,0.178483,1353,2921,4274
262,9/19/2011,3,0,9,1,2,0.549167,0.529675,0.69,0.151742,691,3848,4539
263,9/20/2011,3,0,9,2,2,0.561667,0.532217,0.88125,0.134954,438,3203,3641
264,9/21/2011,3,0,9,3,2,0.595,0.550533,0.9,0.0964042,539,3813,4352
265,9/22/2011,3,0,9,4,2,0.628333,0.554963,0.902083,0.128125,555,4240,4795
266,9/23/2011,4,0,9,5,2,0.609167,0.522125,0.9725,0.0783667,258,2137,2395
267,9/24/2011,4,0,9,6,2,0.606667,0.564412,0.8625,0.0783833,1776,3647,5423
268,9/25/2011,4,0,9,0,2,0.634167,0.572637,0.845,0.0503792,1544,3466,5010
269,9/26/2011,4,0,9,1,2,0.649167,0.589042,0.848333,0.1107,684,3946,4630
270,9/27/2011,4,0,9,2,2,0.636667,0.574525,0.885417,0.118171,477,3643,4120
271,9/28/2011,4,0,9,3,2,0.635,0.575158,0.84875,0.148629,480,3427,3907
272,9/29/2011,4,0,9,4,1,0.616667,0.574512,0.699167,0.172883,653,4186,4839
273,9/30/2011,4,0,9,5,1,0.564167,0.544829,0.6475,0.206475,830,4372,5202
274,10/1/2011,4,0,10,6,2,0.41,0.412863,0.75375,0.292296,480,1949,2429
275,10/2/2011,4,0,10,0,2,0.356667,0.345317,0.791667,0.222013,616,2302,2918
276,10/3/2011,4,0,10,1,2,0.384167,0.392046,0.760833,0.0833458,330,3240,3570
277,10/4/2011,4,0,10,2,1,0.484167,0.472858,0.71,0.205854,486,3970,4456
278,10/5/2011,4,0,10,3,1,0.538333,0.527138,0.647917,0.17725,559,4267,4826
279,10/6/2011,4,0,10,4,1,0.494167,0.480425,0.620833,0.134954,639,4126,4765
280,10/7/2011,4,0,10,5,1,0.510833,0.504404,0.684167,0.0223917,949,4036,4985
281,10/8/2011,4,0,10,6,1,0.521667,0.513242,0.70125,0.0454042,2235,3174,5409
282,10/9/2011,4,0,10,0,1,0.540833,0.523983,0.7275,0.06345,2397,3114,5511
283,10/10/2011,4,0,10,1,1,0.570833,0.542925,0.73375,0.0423042,1514,3603,5117
284,10/11/2011,4,0,10,2,2,0.566667,0.546096,0.80875,0.143042,667,3896,4563
285,10/12/2011,4,0,10,3,3,0.543333,0.517717,0.90625,0.24815,217,2199,2416
286,10/13/2011,4,0,10,4,2,0.589167,0.551804,0.896667,0.141787,290,2623,2913
287,10/14/2011,4,0,10,5,2,0.550833,0.529675,0.71625,0.223883,529,3115,3644
288,10/15/2011,4,0,10,6,1,0.506667,0.498725,0.483333,0.258083,1899,3318,5217
289,10/16/2011,4,0,10,0,1,0.511667,0.503154,0.486667,0.281717,1748,3293,5041
290,10/17/2011,4,0,10,1,1,0.534167,0.510725,0.579583,0.175379,713,3857,4570
291,10/18/2011,4,0,10,2,2,0.5325,0.522721,0.701667,0.110087,637,4111,4748
292,10/19/2011,4,0,10,3,3,0.541739,0.513848,0.895217,0.243339,254,2170,2424
293,10/20/2011,4,0,10,4,1,0.475833,0.466525,0.63625,0.422275,471,3724,4195
294,10/21/2011,4,0,10,5,1,0.4275,0.423596,0.574167,0.221396,676,3628,4304
295,10/22/2011,4,0,10,6,1,0.4225,0.425492,0.629167,0.0926667,1499,2809,4308
296,10/23/2011,4,0,10,0,1,0.421667,0.422333,0.74125,0.0995125,1619,2762,4381
297,10/24/2011,4,0,10,1,1,0.463333,0.457067,0.772083,0.118792,699,3488,4187
298,10/25/2011,4,0,10,2,1,0.471667,0.463375,0.622917,0.166658,695,3992,4687
299,10/26/2011,4,0,10,3,2,0.484167,0.472846,0.720417,0.148642,404,3490,3894
300,10/27/2011,4,0,10,4,2,0.47,0.457046,0.812917,0.197763,240,2419,2659
301,10/28/2011,4,0,10,5,2,0.330833,0.318812,0.585833,0.229479,456,3291,3747
302,10/29/2011,4,0,10,6,3,0.254167,0.227913,0.8825,0.351371,57,570,627
303,10/30/2011,4,0,10,0,1,0.319167,0.321329,0.62375,0.176617,885,2446,3331
304,10/31/2011,4,0,10,1,1,0.34,0.356063,0.703333,0.10635,362,3307,3669
305,11/1/2011,4,0,11,2,1,0.400833,0.397088,0.68375,0.135571,410,3658,4068
306,11/2/2011,4,0,11,3,1,0.3775,0.390133,0.71875,0.0820917,370,3816,4186
307,11/3/2011,4,0,11,4,1,0.408333,0.405921,0.702083,0.136817,318,3656,3974
308,11/4/2011,4,0,11,5,2,0.403333,0.403392,0.6225,0.271779,470,3576,4046
309,11/5/2011,4,0,11,6,1,0.326667,0.323854,0.519167,0.189062,1156,2770,3926
310,11/6/2011,4,0,11,0,1,0.348333,0.362358,0.734583,0.0920542,952,2697,3649
311,11/7/2011,4,0,11,1,1,0.395,0.400871,0.75875,0.057225,373,3662,4035
312,11/8/2011,4,0,11,2,1,0.408333,0.412246,0.721667,0.0690375,376,3829,4205
313,11/9/2011,4,0,11,3,1,0.4,0.409079,0.758333,0.0621958,305,3804,4109
314,11/10/2011,4,0,11,4,2,0.38,0.373721,0.813333,0.189067,190,2743,2933
315,11/11/2011,4,0,11,5,1,0.324167,0.306817,0.44625,0.314675,440,2928,3368
316,11/12/2011,4,0,11,6,1,0.356667,0.357942,0.552917,0.212062,1275,2792,4067
317,11/13/2011,4,0,11,0,1,0.440833,0.43055,0.458333,0.281721,1004,2713,3717
318,11/14/2011,4,0,11,1,1,0.53,0.524612,0.587083,0.306596,595,3891,4486
319,11/15/2011,4,0,11,2,2,0.53,0.507579,0.68875,0.199633,449,3746,4195
320,11/16/2011,4,0,11,3,3,0.456667,0.451988,0.93,0.136829,145,1672,1817
321,11/17/2011,4,0,11,4,2,0.341667,0.323221,0.575833,0.305362,139,2914,3053
322,11/18/2011,4,0,11,5,1,0.274167,0.272721,0.41,0.168533,245,3147,3392
323,11/19/2011,4,0,11,6,1,0.329167,0.324483,0.502083,0.224496,943,2720,3663
324,11/20/2011,4,0,11,0,2,0.463333,0.457058,0.684583,0.18595,787,2733,3520
325,11/21/2011,4,0,11,1,3,0.4475,0.445062,0.91,0.138054,220,2545,2765
326,11/22/2011,4,0,11,2,3,0.416667,0.421696,0.9625,0.118792,69,1538,1607
327,11/23/2011,4,0,11,3,2,0.440833,0.430537,0.757917,0.335825,112,2454,2566
328,11/24/2011,4,0,11,4,1,0.373333,0.372471,0.549167,0.167304,560,935,1495
329,11/25/2011,4,0,11,5,1,0.375,0.380671,0.64375,0.0988958,1095,1697,2792
330,11/26/2011,4,0,11,6,1,0.375833,0.385087,0.681667,0.0684208,1249,1819,3068
331,11/27/2011,4,0,11,0,1,0.459167,0.4558,0.698333,0.208954,810,2261,3071
332,11/28/2011,4,0,11,1,1,0.503478,0.490122,0.743043,0.142122,253,3614,3867
333,11/29/2011,4,0,11,2,2,0.458333,0.451375,0.830833,0.258092,96,2818,2914
334,11/30/2011,4,0,11,3,1,0.325,0.311221,0.613333,0.271158,188,3425,3613
335,12/1/2011,4,0,12,4,1,0.3125,0.305554,0.524583,0.220158,182,3545,3727
336,12/2/2011,4,0,12,5,1,0.314167,0.331433,0.625833,0.100754,268,3672,3940
337,12/3/2011,4,0,12,6,1,0.299167,0.310604,0.612917,0.0957833,706,2908,3614
338,12/4/2011,4,0,12,0,1,0.330833,0.3491,0.775833,0.0839583,634,2851,3485
339,12/5/2011,4,0,12,1,2,0.385833,0.393925,0.827083,0.0622083,233,3578,3811
340,12/6/2011,4,0,12,2,3,0.4625,0.4564,0.949583,0.232583,126,2468,2594
341,12/7/2011,4,0,12,3,3,0.41,0.400246,0.970417,0.266175,50,655,705
342,12/8/2011,4,0,12,4,1,0.265833,0.256938,0.58,0.240058,150,3172,3322
343,12/9/2011,4,0,12,5,1,0.290833,0.317542,0.695833,0.0827167,261,3359,3620
344,12/10/2011,4,0,12,6,1,0.275,0.266412,0.5075,0.233221,502,2688,3190
345,12/11/2011,4,0,12,0,1,0.220833,0.253154,0.49,0.0665417,377,2366,2743
346,12/12/2011,4,0,12,1,1,0.238333,0.270196,0.670833,0.06345,143,3167,3310
347,12/13/2011,4,0,12,2,1,0.2825,0.301138,0.59,0.14055,155,3368,3523
348,12/14/2011,4,0,12,3,2,0.3175,0.338362,0.66375,0.0609583,178,3562,3740
349,12/15/2011,4,0,12,4,2,0.4225,0.412237,0.634167,0.268042,181,3528,3709
350,12/16/2011,4,0,12,5,2,0.375,0.359825,0.500417,0.260575,178,3399,3577
351,12/17/2011,4,0,12,6,2,0.258333,0.249371,0.560833,0.243167,275,2464,2739
352,12/18/2011,4,0,12,0,1,0.238333,0.245579,0.58625,0.169779,220,2211,2431
353,12/19/2011,4,0,12,1,1,0.276667,0.280933,0.6375,0.172896,260,3143,3403
354,12/20/2011,4,0,12,2,2,0.385833,0.396454,0.595417,0.0615708,216,3534,3750
355,12/21/2011,1,0,12,3,2,0.428333,0.428017,0.858333,0.2214,107,2553,2660
356,12/22/2011,1,0,12,4,2,0.423333,0.426121,0.7575,0.047275,227,2841,3068
357,12/23/2011,1,0,12,5,1,0.373333,0.377513,0.68625,0.274246,163,2046,2209
358,12/24/2011,1,0,12,6,1,0.3025,0.299242,0.5425,0.190304,155,856,1011
359,12/25/2011,1,0,12,0,1,0.274783,0.279961,0.681304,0.155091,303,451,754
360,12/26/2011,1,0,12,1,1,0.321739,0.315535,0.506957,0.239465,430,887,1317
361,12/27/2011,1,0,12,2,2,0.325,0.327633,0.7625,0.18845,103,1059,1162
362,12/28/2011,1,0,12,3,1,0.29913,0.279974,0.503913,0.293961,255,2047,2302
363,12/29/2011,1,0,12,4,1,0.248333,0.263892,0.574167,0.119412,254,2169,2423
364,12/30/2011,1,0,12,5,1,0.311667,0.318812,0.636667,0.134337,491,2508,2999
365,12/31/2011,1,0,12,6,1,0.41,0.414121,0.615833,0.220154,665,1820,2485
366,1/1/2012,1,1,1,0,1,0.37,0.375621,0.6925,0.192167,686,1608,2294
367,1/2/2012,1,1,1,1,1,0.273043,0.252304,0.381304,0.329665,244,1707,1951
368,1/3/2012,1,1,1,2,1,0.15,0.126275,0.44125,0.365671,89,2147,2236
369,1/4/2012,1,1,1,3,2,0.1075,0.119337,0.414583,0.1847,95,2273,2368
370,1/5/2012,1,1,1,4,1,0.265833,0.278412,0.524167,0.129987,140,3132,3272
371,1/6/2012,1,1,1,5,1,0.334167,0.340267,0.542083,0.167908,307,3791,4098
372,1/7/2012,1,1,1,6,1,0.393333,0.390779,0.531667,0.174758,1070,3451,4521
373,1/8/2012,1,1,1,0,1,0.3375,0.340258,0.465,0.191542,599,2826,3425
374,1/9/2012,1,1,1,1,2,0.224167,0.247479,0.701667,0.0989,106,2270,2376
375,1/10/2012,1,1,1,2,1,0.308696,0.318826,0.646522,0.187552,173,3425,3598
376,1/11/2012,1,1,1,3,2,0.274167,0.282821,0.8475,0.131221,92,2085,2177
377,1/12/2012,1,1,1,4,2,0.3825,0.381938,0.802917,0.180967,269,3828,4097
378,1/13/2012,1,1,1,5,1,0.274167,0.249362,0.5075,0.378108,174,3040,3214
379,1/14/2012,1,1,1,6,1,0.18,0.183087,0.4575,0.187183,333,2160,2493
380,1/15/2012,1,1,1,0,1,0.166667,0.161625,0.419167,0.251258,284,2027,2311
381,1/16/2012,1,1,1,1,1,0.19,0.190663,0.5225,0.231358,217,2081,2298
382,1/17/2012,1,1,1,2,2,0.373043,0.364278,0.716087,0.34913,127,2808,2935
383,1/18/2012,1,1,1,3,1,0.303333,0.275254,0.443333,0.415429,109,3267,3376
384,1/19/2012,1,1,1,4,1,0.19,0.190038,0.4975,0.220158,130,3162,3292
385,1/20/2012,1,1,1,5,2,0.2175,0.220958,0.45,0.20275,115,3048,3163
386,1/21/2012,1,1,1,6,2,0.173333,0.174875,0.83125,0.222642,67,1234,1301
387,1/22/2012,1,1,1,0,2,0.1625,0.16225,0.79625,0.199638,196,1781,1977
388,1/23/2012,1,1,1,1,2,0.218333,0.243058,0.91125,0.110708,145,2287,2432
389,1/24/2012,1,1,1,2,1,0.3425,0.349108,0.835833,0.123767,439,3900,4339
390,1/25/2012,1,1,1,3,1,0.294167,0.294821,0.64375,0.161071,467,3803,4270
391,1/26/2012,1,1,1,4,2,0.341667,0.35605,0.769583,0.0733958,244,3831,4075
392,1/27/2012,1,1,1,5,2,0.425,0.415383,0.74125,0.342667,269,3187,3456
393,1/28/2012,1,1,1,6,1,0.315833,0.326379,0.543333,0.210829,775,3248,4023
394,1/29/2012,1,1,1,0,1,0.2825,0.272721,0.31125,0.24005,558,2685,3243
395,1/30/2012,1,1,1,1,1,0.269167,0.262625,0.400833,0.215792,126,3498,3624
396,1/31/2012,1,1,1,2,1,0.39,0.381317,0.416667,0.261817,324,4185,4509
397,2/1/2012,1,1,2,3,1,0.469167,0.466538,0.507917,0.189067,304,4275,4579
398,2/2/2012,1,1,2,4,2,0.399167,0.398971,0.672917,0.187187,190,3571,3761
399,2/3/2012,1,1,2,5,1,0.313333,0.309346,0.526667,0.178496,310,3841,4151
400,2/4/2012,1,1,2,6,2,0.264167,0.272725,0.779583,0.121896,384,2448,2832
401,2/5/2012,1,1,2,0,2,0.265833,0.264521,0.687917,0.175996,318,2629,2947
402,2/6/2012,1,1,2,1,1,0.282609,0.296426,0.622174,0.1538,206,3578,3784
403,2/7/2012,1,1,2,2,1,0.354167,0.361104,0.49625,0.147379,199,4176,4375
404,2/8/2012,1,1,2,3,2,0.256667,0.266421,0.722917,0.133721,109,2693,2802
405,2/9/2012,1,1,2,4,1,0.265,0.261988,0.562083,0.194037,163,3667,3830
406,2/10/2012,1,1,2,5,2,0.280833,0.293558,0.54,0.116929,227,3604,3831
407,2/11/2012,1,1,2,6,3,0.224167,0.210867,0.73125,0.289796,192,1977,2169
408,2/12/2012,1,1,2,0,1,0.1275,0.101658,0.464583,0.409212,73,1456,1529
409,2/13/2012,1,1,2,1,1,0.2225,0.227913,0.41125,0.167283,94,3328,3422
410,2/14/2012,1,1,2,2,2,0.319167,0.333946,0.50875,0.141179,135,3787,3922
411,2/15/2012,1,1,2,3,1,0.348333,0.351629,0.53125,0.1816,141,4028,4169
412,2/16/2012,1,1,2,4,2,0.316667,0.330162,0.752917,0.091425,74,2931,3005
413,2/17/2012,1,1,2,5,1,0.343333,0.351629,0.634583,0.205846,349,3805,4154
414,2/18/2012,1,1,2,6,1,0.346667,0.355425,0.534583,0.190929,1435,2883,4318
415,2/19/2012,1,1,2,0,2,0.28,0.265788,0.515833,0.253112,618,2071,2689
416,2/20/2012,1,1,2,1,1,0.28,0.273391,0.507826,0.229083,502,2627,3129
417,2/21/2012,1,1,2,2,1,0.287826,0.295113,0.594348,0.205717,163,3614,3777
418,2/22/2012,1,1,2,3,1,0.395833,0.392667,0.567917,0.234471,394,4379,4773
419,2/23/2012,1,1,2,4,1,0.454167,0.444446,0.554583,0.190913,516,4546,5062
420,2/24/2012,1,1,2,5,2,0.4075,0.410971,0.7375,0.237567,246,3241,3487
421,2/25/2012,1,1,2,6,1,0.290833,0.255675,0.395833,0.421642,317,2415,2732
422,2/26/2012,1,1,2,0,1,0.279167,0.268308,0.41,0.205229,515,2874,3389
423,2/27/2012,1,1,2,1,1,0.366667,0.357954,0.490833,0.268033,253,4069,4322
424,2/28/2012,1,1,2,2,1,0.359167,0.353525,0.395833,0.193417,229,4134,4363
425,2/29/2012,1,1,2,3,2,0.344348,0.34847,0.804783,0.179117,65,1769,1834
426,3/1/2012,1,1,3,4,1,0.485833,0.475371,0.615417,0.226987,325,4665,4990
427,3/2/2012,1,1,3,5,2,0.353333,0.359842,0.657083,0.144904,246,2948,3194
428,3/3/2012,1,1,3,6,2,0.414167,0.413492,0.62125,0.161079,956,3110,4066
429,3/4/2012,1,1,3,0,1,0.325833,0.303021,0.403333,0.334571,710,2713,3423
430,3/5/2012,1,1,3,1,1,0.243333,0.241171,0.50625,0.228858,203,3130,3333
431,3/6/2012,1,1,3,2,1,0.258333,0.255042,0.456667,0.200875,221,3735,3956
432,3/7/2012,1,1,3,3,1,0.404167,0.3851,0.513333,0.345779,432,4484,4916
433,3/8/2012,1,1,3,4,1,0.5275,0.524604,0.5675,0.441563,486,4896,5382
434,3/9/2012,1,1,3,5,2,0.410833,0.397083,0.407083,0.4148,447,4122,4569
435,3/10/2012,1,1,3,6,1,0.2875,0.277767,0.350417,0.22575,968,3150,4118
436,3/11/2012,1,1,3,0,1,0.361739,0.35967,0.476957,0.222587,1658,3253,4911
437,3/12/2012,1,1,3,1,1,0.466667,0.459592,0.489167,0.207713,838,4460,5298
438,3/13/2012,1,1,3,2,1,0.565,0.542929,0.6175,0.23695,762,5085,5847
439,3/14/2012,1,1,3,3,1,0.5725,0.548617,0.507083,0.115062,997,5315,6312
440,3/15/2012,1,1,3,4,1,0.5575,0.532825,0.579583,0.149883,1005,5187,6192
441,3/16/2012,1,1,3,5,2,0.435833,0.436229,0.842083,0.113192,548,3830,4378
442,3/17/2012,1,1,3,6,2,0.514167,0.505046,0.755833,0.110704,3155,4681,7836
443,3/18/2012,1,1,3,0,2,0.4725,0.464,0.81,0.126883,2207,3685,5892
444,3/19/2012,1,1,3,1,1,0.545,0.532821,0.72875,0.162317,982,5171,6153
445,3/20/2012,1,1,3,2,1,0.560833,0.538533,0.807917,0.121271,1051,5042,6093
446,3/21/2012,2,1,3,3,2,0.531667,0.513258,0.82125,0.0895583,1122,5108,6230
447,3/22/2012,2,1,3,4,1,0.554167,0.531567,0.83125,0.117562,1334,5537,6871
448,3/23/2012,2,1,3,5,2,0.601667,0.570067,0.694167,0.1163,2469,5893,8362
449,3/24/2012,2,1,3,6,2,0.5025,0.486733,0.885417,0.192783,1033,2339,3372
450,3/25/2012,2,1,3,0,2,0.4375,0.437488,0.880833,0.220775,1532,3464,4996
451,3/26/2012,2,1,3,1,1,0.445833,0.43875,0.477917,0.386821,795,4763,5558
452,3/27/2012,2,1,3,2,1,0.323333,0.315654,0.29,0.187192,531,4571,5102
453,3/28/2012,2,1,3,3,1,0.484167,0.47095,0.48125,0.291671,674,5024,5698
454,3/29/2012,2,1,3,4,1,0.494167,0.482304,0.439167,0.31965,834,5299,6133
455,3/30/2012,2,1,3,5,2,0.37,0.375621,0.580833,0.138067,796,4663,5459
456,3/31/2012,2,1,3,6,2,0.424167,0.421708,0.738333,0.250617,2301,3934,6235
457,4/1/2012,2,1,4,0,2,0.425833,0.417287,0.67625,0.172267,2347,3694,6041
458,4/2/2012,2,1,4,1,1,0.433913,0.427513,0.504348,0.312139,1208,4728,5936
459,4/3/2012,2,1,4,2,1,0.466667,0.461483,0.396667,0.100133,1348,5424,6772
460,4/4/2012,2,1,4,3,1,0.541667,0.53345,0.469583,0.180975,1058,5378,6436
461,4/5/2012,2,1,4,4,1,0.435,0.431163,0.374167,0.219529,1192,5265,6457
462,4/6/2012,2,1,4,5,1,0.403333,0.390767,0.377083,0.300388,1807,4653,6460
463,4/7/2012,2,1,4,6,1,0.4375,0.426129,0.254167,0.274871,3252,3605,6857
464,4/8/2012,2,1,4,0,1,0.5,0.492425,0.275833,0.232596,2230,2939,5169
465,4/9/2012,2,1,4,1,1,0.489167,0.476638,0.3175,0.358196,905,4680,5585
466,4/10/2012,2,1,4,2,1,0.446667,0.436233,0.435,0.249375,819,5099,5918
467,4/11/2012,2,1,4,3,1,0.348696,0.337274,0.469565,0.295274,482,4380,4862
468,4/12/2012,2,1,4,4,1,0.3975,0.387604,0.46625,0.290429,663,4746,5409
469,4/13/2012,2,1,4,5,1,0.4425,0.431808,0.408333,0.155471,1252,5146,6398
470,4/14/2012,2,1,4,6,1,0.495,0.487996,0.502917,0.190917,2795,4665,7460
471,4/15/2012,2,1,4,0,1,0.606667,0.573875,0.507917,0.225129,2846,4286,7132
472,4/16/2012,2,1,4,1,1,0.664167,0.614925,0.561667,0.284829,1198,5172,6370
473,4/17/2012,2,1,4,2,1,0.608333,0.598487,0.390417,0.273629,989,5702,6691
474,4/18/2012,2,1,4,3,2,0.463333,0.457038,0.569167,0.167912,347,4020,4367
475,4/19/2012,2,1,4,4,1,0.498333,0.493046,0.6125,0.0659292,846,5719,6565
476,4/20/2012,2,1,4,5,1,0.526667,0.515775,0.694583,0.149871,1340,5950,7290
477,4/21/2012,2,1,4,6,1,0.57,0.542921,0.682917,0.283587,2541,4083,6624
478,4/22/2012,2,1,4,0,3,0.396667,0.389504,0.835417,0.344546,120,907,1027
479,4/23/2012,2,1,4,1,2,0.321667,0.301125,0.766667,0.303496,195,3019,3214
480,4/24/2012,2,1,4,2,1,0.413333,0.405283,0.454167,0.249383,518,5115,5633
481,4/25/2012,2,1,4,3,1,0.476667,0.470317,0.427917,0.118792,655,5541,6196
482,4/26/2012,2,1,4,4,2,0.498333,0.483583,0.756667,0.176625,475,4551,5026
483,4/27/2012,2,1,4,5,1,0.4575,0.452637,0.400833,0.347633,1014,5219,6233
484,4/28/2012,2,1,4,6,2,0.376667,0.377504,0.489583,0.129975,1120,3100,4220
485,4/29/2012,2,1,4,0,1,0.458333,0.450121,0.587083,0.116908,2229,4075,6304
486,4/30/2012,2,1,4,1,2,0.464167,0.457696,0.57,0.171638,665,4907,5572
487,5/1/2012,2,1,5,2,2,0.613333,0.577021,0.659583,0.156096,653,5087,5740
488,5/2/2012,2,1,5,3,1,0.564167,0.537896,0.797083,0.138058,667,5502,6169
489,5/3/2012,2,1,5,4,2,0.56,0.537242,0.768333,0.133696,764,5657,6421
490,5/4/2012,2,1,5,5,1,0.6275,0.590917,0.735417,0.162938,1069,5227,6296
491,5/5/2012,2,1,5,6,2,0.621667,0.584608,0.756667,0.152992,2496,4387,6883
492,5/6/2012,2,1,5,0,2,0.5625,0.546737,0.74,0.149879,2135,4224,6359
493,5/7/2012,2,1,5,1,2,0.5375,0.527142,0.664167,0.230721,1008,5265,6273
494,5/8/2012,2,1,5,2,2,0.581667,0.557471,0.685833,0.296029,738,4990,5728
495,5/9/2012,2,1,5,3,2,0.575,0.553025,0.744167,0.216412,620,4097,4717
496,5/10/2012,2,1,5,4,1,0.505833,0.491783,0.552083,0.314063,1026,5546,6572
497,5/11/2012,2,1,5,5,1,0.533333,0.520833,0.360417,0.236937,1319,5711,7030
498,5/12/2012,2,1,5,6,1,0.564167,0.544817,0.480417,0.123133,2622,4807,7429
499,5/13/2012,2,1,5,0,1,0.6125,0.585238,0.57625,0.225117,2172,3946,6118
500,5/14/2012,2,1,5,1,2,0.573333,0.5499,0.789583,0.212692,342,2501,2843
501,5/15/2012,2,1,5,2,2,0.611667,0.576404,0.794583,0.147392,625,4490,5115
502,5/16/2012,2,1,5,3,1,0.636667,0.595975,0.697917,0.122512,991,6433,7424
503,5/17/2012,2,1,5,4,1,0.593333,0.572613,0.52,0.229475,1242,6142,7384
504,5/18/2012,2,1,5,5,1,0.564167,0.551121,0.523333,0.136817,1521,6118,7639
505,5/19/2012,2,1,5,6,1,0.6,0.566908,0.45625,0.083975,3410,4884,8294
506,5/20/2012,2,1,5,0,1,0.620833,0.583967,0.530417,0.254367,2704,4425,7129
507,5/21/2012,2,1,5,1,2,0.598333,0.565667,0.81125,0.233204,630,3729,4359
508,5/22/2012,2,1,5,2,2,0.615,0.580825,0.765833,0.118167,819,5254,6073
509,5/23/2012,2,1,5,3,2,0.621667,0.584612,0.774583,0.102,766,4494,5260
510,5/24/2012,2,1,5,4,1,0.655,0.6067,0.716667,0.172896,1059,5711,6770
511,5/25/2012,2,1,5,5,1,0.68,0.627529,0.747083,0.14055,1417,5317,6734
512,5/26/2012,2,1,5,6,1,0.6925,0.642696,0.7325,0.198992,2855,3681,6536
513,5/27/2012,2,1,5,0,1,0.69,0.641425,0.697083,0.215171,3283,3308,6591
514,5/28/2012,2,1,5,1,1,0.7125,0.6793,0.67625,0.196521,2557,3486,6043
515,5/29/2012,2,1,5,2,1,0.7225,0.672992,0.684583,0.2954,880,4863,5743
516,5/30/2012,2,1,5,3,2,0.656667,0.611129,0.67,0.134329,745,6110,6855
517,5/31/2012,2,1,5,4,1,0.68,0.631329,0.492917,0.195279,1100,6238,7338
518,6/1/2012,2,1,6,5,2,0.654167,0.607962,0.755417,0.237563,533,3594,4127
519,6/2/2012,2,1,6,6,1,0.583333,0.566288,0.549167,0.186562,2795,5325,8120
520,6/3/2012,2,1,6,0,1,0.6025,0.575133,0.493333,0.184087,2494,5147,7641
521,6/4/2012,2,1,6,1,1,0.5975,0.578283,0.487083,0.284833,1071,5927,6998
522,6/5/2012,2,1,6,2,2,0.540833,0.525892,0.613333,0.209575,968,6033,7001
523,6/6/2012,2,1,6,3,1,0.554167,0.542292,0.61125,0.077125,1027,6028,7055
524,6/7/2012,2,1,6,4,1,0.6025,0.569442,0.567083,0.15735,1038,6456,7494
525,6/8/2012,2,1,6,5,1,0.649167,0.597862,0.467917,0.175383,1488,6248,7736
526,6/9/2012,2,1,6,6,1,0.710833,0.648367,0.437083,0.144287,2708,4790,7498
527,6/10/2012,2,1,6,0,1,0.726667,0.663517,0.538333,0.133721,2224,4374,6598
528,6/11/2012,2,1,6,1,2,0.720833,0.659721,0.587917,0.207713,1017,5647,6664
529,6/12/2012,2,1,6,2,2,0.653333,0.597875,0.833333,0.214546,477,4495,4972
530,6/13/2012,2,1,6,3,1,0.655833,0.611117,0.582083,0.343279,1173,6248,7421
531,6/14/2012,2,1,6,4,1,0.648333,0.624383,0.569583,0.253733,1180,6183,7363
532,6/15/2012,2,1,6,5,1,0.639167,0.599754,0.589583,0.176617,1563,6102,7665
533,6/16/2012,2,1,6,6,1,0.631667,0.594708,0.504167,0.166667,2963,4739,7702
534,6/17/2012,2,1,6,0,1,0.5925,0.571975,0.59875,0.144904,2634,4344,6978
535,6/18/2012,2,1,6,1,2,0.568333,0.544842,0.777917,0.174746,653,4446,5099
536,6/19/2012,2,1,6,2,1,0.688333,0.654692,0.69,0.148017,968,5857,6825
537,6/20/2012,2,1,6,3,1,0.7825,0.720975,0.592083,0.113812,872,5339,6211
538,6/21/2012,3,1,6,4,1,0.805833,0.752542,0.567917,0.118787,778,5127,5905
539,6/22/2012,3,1,6,5,1,0.7775,0.724121,0.57375,0.182842,964,4859,5823
540,6/23/2012,3,1,6,6,1,0.731667,0.652792,0.534583,0.179721,2657,4801,7458
541,6/24/2012,3,1,6,0,1,0.743333,0.674254,0.479167,0.145525,2551,4340,6891
542,6/25/2012,3,1,6,1,1,0.715833,0.654042,0.504167,0.300383,1139,5640,6779
543,6/26/2012,3,1,6,2,1,0.630833,0.594704,0.373333,0.347642,1077,6365,7442
544,6/27/2012,3,1,6,3,1,0.6975,0.640792,0.36,0.271775,1077,6258,7335
545,6/28/2012,3,1,6,4,1,0.749167,0.675512,0.4225,0.17165,921,5958,6879
546,6/29/2012,3,1,6,5,1,0.834167,0.786613,0.48875,0.165417,829,4634,5463
547,6/30/2012,3,1,6,6,1,0.765,0.687508,0.60125,0.161071,1455,4232,5687
548,7/1/2012,3,1,7,0,1,0.815833,0.750629,0.51875,0.168529,1421,4110,5531
549,7/2/2012,3,1,7,1,1,0.781667,0.702038,0.447083,0.195267,904,5323,6227
550,7/3/2012,3,1,7,2,1,0.780833,0.70265,0.492083,0.126237,1052,5608,6660
551,7/4/2012,3,1,7,3,1,0.789167,0.732337,0.53875,0.13495,2562,4841,7403
552,7/5/2012,3,1,7,4,1,0.8275,0.761367,0.457917,0.194029,1405,4836,6241
553,7/6/2012,3,1,7,5,1,0.828333,0.752533,0.450833,0.146142,1366,4841,6207
554,7/7/2012,3,1,7,6,1,0.861667,0.804913,0.492083,0.163554,1448,3392,4840
555,7/8/2012,3,1,7,0,1,0.8225,0.790396,0.57375,0.125629,1203,3469,4672
556,7/9/2012,3,1,7,1,2,0.710833,0.654054,0.683333,0.180975,998,5571,6569
557,7/10/2012,3,1,7,2,2,0.720833,0.664796,0.6675,0.151737,954,5336,6290
558,7/11/2012,3,1,7,3,1,0.716667,0.650271,0.633333,0.151733,975,6289,7264
559,7/12/2012,3,1,7,4,1,0.715833,0.654683,0.529583,0.146775,1032,6414,7446
560,7/13/2012,3,1,7,5,2,0.731667,0.667933,0.485833,0.08085,1511,5988,7499
561,7/14/2012,3,1,7,6,2,0.703333,0.666042,0.699167,0.143679,2355,4614,6969
562,7/15/2012,3,1,7,0,1,0.745833,0.705196,0.717917,0.166667,1920,4111,6031
563,7/16/2012,3,1,7,1,1,0.763333,0.724125,0.645,0.164187,1088,5742,6830
564,7/17/2012,3,1,7,2,1,0.818333,0.755683,0.505833,0.114429,921,5865,6786
565,7/18/2012,3,1,7,3,1,0.793333,0.745583,0.577083,0.137442,799,4914,5713
566,7/19/2012,3,1,7,4,1,0.77,0.714642,0.600417,0.165429,888,5703,6591
567,7/20/2012,3,1,7,5,2,0.665833,0.613025,0.844167,0.208967,747,5123,5870
568,7/21/2012,3,1,7,6,3,0.595833,0.549912,0.865417,0.2133,1264,3195,4459
569,7/22/2012,3,1,7,0,2,0.6675,0.623125,0.7625,0.0939208,2544,4866,7410
570,7/23/2012,3,1,7,1,1,0.741667,0.690017,0.694167,0.138683,1135,5831,6966
571,7/24/2012,3,1,7,2,1,0.750833,0.70645,0.655,0.211454,1140,6452,7592
572,7/25/2012,3,1,7,3,1,0.724167,0.654054,0.45,0.1648,1383,6790,8173
573,7/26/2012,3,1,7,4,1,0.776667,0.739263,0.596667,0.284813,1036,5825,6861
574,7/27/2012,3,1,7,5,1,0.781667,0.734217,0.594583,0.152992,1259,5645,6904
575,7/28/2012,3,1,7,6,1,0.755833,0.697604,0.613333,0.15735,2234,4451,6685
576,7/29/2012,3,1,7,0,1,0.721667,0.667933,0.62375,0.170396,2153,4444,6597
577,7/30/2012,3,1,7,1,1,0.730833,0.684987,0.66875,0.153617,1040,6065,7105
578,7/31/2012,3,1,7,2,1,0.713333,0.662896,0.704167,0.165425,968,6248,7216
579,8/1/2012,3,1,8,3,1,0.7175,0.667308,0.6775,0.141179,1074,6506,7580
580,8/2/2012,3,1,8,4,1,0.7525,0.707088,0.659583,0.129354,983,6278,7261
581,8/3/2012,3,1,8,5,2,0.765833,0.722867,0.6425,0.215792,1328,5847,7175
582,8/4/2012,3,1,8,6,1,0.793333,0.751267,0.613333,0.257458,2345,4479,6824
583,8/5/2012,3,1,8,0,1,0.769167,0.731079,0.6525,0.290421,1707,3757,5464
584,8/6/2012,3,1,8,1,2,0.7525,0.710246,0.654167,0.129354,1233,5780,7013
585,8/7/2012,3,1,8,2,2,0.735833,0.697621,0.70375,0.116908,1278,5995,7273
586,8/8/2012,3,1,8,3,2,0.75,0.707717,0.672917,0.1107,1263,6271,7534
587,8/9/2012,3,1,8,4,1,0.755833,0.699508,0.620417,0.1561,1196,6090,7286
588,8/10/2012,3,1,8,5,2,0.715833,0.667942,0.715833,0.238813,1065,4721,5786
589,8/11/2012,3,1,8,6,2,0.6925,0.638267,0.732917,0.206479,2247,4052,6299
590,8/12/2012,3,1,8,0,1,0.700833,0.644579,0.530417,0.122512,2182,4362,6544
591,8/13/2012,3,1,8,1,1,0.720833,0.662254,0.545417,0.136212,1207,5676,6883
592,8/14/2012,3,1,8,2,1,0.726667,0.676779,0.686667,0.169158,1128,5656,6784
593,8/15/2012,3,1,8,3,1,0.706667,0.654037,0.619583,0.169771,1198,6149,7347
594,8/16/2012,3,1,8,4,1,0.719167,0.654688,0.519167,0.141796,1338,6267,7605
595,8/17/2012,3,1,8,5,1,0.723333,0.2424,0.570833,0.231354,1483,5665,7148
596,8/18/2012,3,1,8,6,1,0.678333,0.618071,0.603333,0.177867,2827,5038,7865
597,8/19/2012,3,1,8,0,2,0.635833,0.603554,0.711667,0.08645,1208,3341,4549
598,8/20/2012,3,1,8,1,2,0.635833,0.595967,0.734167,0.129979,1026,5504,6530
599,8/21/2012,3,1,8,2,1,0.649167,0.601025,0.67375,0.0727708,1081,5925,7006
600,8/22/2012,3,1,8,3,1,0.6675,0.621854,0.677083,0.0702833,1094,6281,7375
601,8/23/2012,3,1,8,4,1,0.695833,0.637008,0.635833,0.0845958,1363,6402,7765
602,8/24/2012,3,1,8,5,2,0.7025,0.6471,0.615,0.0721458,1325,6257,7582
603,8/25/2012,3,1,8,6,2,0.661667,0.618696,0.712917,0.244408,1829,4224,6053
604,8/26/2012,3,1,8,0,2,0.653333,0.595996,0.845833,0.228858,1483,3772,5255
605,8/27/2012,3,1,8,1,1,0.703333,0.654688,0.730417,0.128733,989,5928,6917
606,8/28/2012,3,1,8,2,1,0.728333,0.66605,0.62,0.190925,935,6105,7040
607,8/29/2012,3,1,8,3,1,0.685,0.635733,0.552083,0.112562,1177,6520,7697
608,8/30/2012,3,1,8,4,1,0.706667,0.652779,0.590417,0.0771167,1172,6541,7713
609,8/31/2012,3,1,8,5,1,0.764167,0.6894,0.5875,0.168533,1433,5917,7350
610,9/1/2012,3,1,9,6,2,0.753333,0.702654,0.638333,0.113187,2352,3788,6140
611,9/2/2012,3,1,9,0,2,0.696667,0.649,0.815,0.0640708,2613,3197,5810
612,9/3/2012,3,1,9,1,1,0.7075,0.661629,0.790833,0.151121,1965,4069,6034
613,9/4/2012,3,1,9,2,1,0.725833,0.686888,0.755,0.236321,867,5997,6864
614,9/5/2012,3,1,9,3,1,0.736667,0.708983,0.74125,0.187808,832,6280,7112
615,9/6/2012,3,1,9,4,2,0.696667,0.655329,0.810417,0.142421,611,5592,6203
616,9/7/2012,3,1,9,5,1,0.703333,0.657204,0.73625,0.171646,1045,6459,7504
617,9/8/2012,3,1,9,6,2,0.659167,0.611121,0.799167,0.281104,1557,4419,5976
618,9/9/2012,3,1,9,0,1,0.61,0.578925,0.5475,0.224496,2570,5657,8227
619,9/10/2012,3,1,9,1,1,0.583333,0.565654,0.50375,0.258713,1118,6407,7525
620,9/11/2012,3,1,9,2,1,0.5775,0.554292,0.52,0.0920542,1070,6697,7767
621,9/12/2012,3,1,9,3,1,0.599167,0.570075,0.577083,0.131846,1050,6820,7870
622,9/13/2012,3,1,9,4,1,0.6125,0.579558,0.637083,0.0827208,1054,6750,7804
623,9/14/2012,3,1,9,5,1,0.633333,0.594083,0.6725,0.103863,1379,6630,8009
624,9/15/2012,3,1,9,6,1,0.608333,0.585867,0.501667,0.247521,3160,5554,8714
625,9/16/2012,3,1,9,0,1,0.58,0.563125,0.57,0.0901833,2166,5167,7333
626,9/17/2012,3,1,9,1,2,0.580833,0.55305,0.734583,0.151742,1022,5847,6869
627,9/18/2012,3,1,9,2,2,0.623333,0.565067,0.8725,0.357587,371,3702,4073
628,9/19/2012,3,1,9,3,1,0.5525,0.540404,0.536667,0.215175,788,6803,7591
629,9/20/2012,3,1,9,4,1,0.546667,0.532192,0.618333,0.118167,939,6781,7720
630,9/21/2012,3,1,9,5,1,0.599167,0.571971,0.66875,0.154229,1250,6917,8167
631,9/22/2012,3,1,9,6,1,0.65,0.610488,0.646667,0.283583,2512,5883,8395
632,9/23/2012,4,1,9,0,1,0.529167,0.518933,0.467083,0.223258,2454,5453,7907
633,9/24/2012,4,1,9,1,1,0.514167,0.502513,0.492917,0.142404,1001,6435,7436
634,9/25/2012,4,1,9,2,1,0.55,0.544179,0.57,0.236321,845,6693,7538
635,9/26/2012,4,1,9,3,1,0.635,0.596613,0.630833,0.2444,787,6946,7733
636,9/27/2012,4,1,9,4,2,0.65,0.607975,0.690833,0.134342,751,6642,7393
637,9/28/2012,4,1,9,5,2,0.619167,0.585863,0.69,0.164179,1045,6370,7415
638,9/29/2012,4,1,9,6,1,0.5425,0.530296,0.542917,0.227604,2589,5966,8555
639,9/30/2012,4,1,9,0,1,0.526667,0.517663,0.583333,0.134958,2015,4874,6889
640,10/1/2012,4,1,10,1,2,0.520833,0.512,0.649167,0.0908042,763,6015,6778
641,10/2/2012,4,1,10,2,3,0.590833,0.542333,0.871667,0.104475,315,4324,4639
642,10/3/2012,4,1,10,3,2,0.6575,0.599133,0.79375,0.0665458,728,6844,7572
643,10/4/2012,4,1,10,4,2,0.6575,0.607975,0.722917,0.117546,891,6437,7328
644,10/5/2012,4,1,10,5,1,0.615,0.580187,0.6275,0.10635,1516,6640,8156
645,10/6/2012,4,1,10,6,1,0.554167,0.538521,0.664167,0.268025,3031,4934,7965
646,10/7/2012,4,1,10,0,2,0.415833,0.419813,0.708333,0.141162,781,2729,3510
647,10/8/2012,4,1,10,1,2,0.383333,0.387608,0.709583,0.189679,874,4604,5478
648,10/9/2012,4,1,10,2,2,0.446667,0.438112,0.761667,0.1903,601,5791,6392
649,10/10/2012,4,1,10,3,1,0.514167,0.503142,0.630833,0.187821,780,6911,7691
650,10/11/2012,4,1,10,4,1,0.435,0.431167,0.463333,0.181596,834,6736,7570
651,10/12/2012,4,1,10,5,1,0.4375,0.433071,0.539167,0.235092,1060,6222,7282
652,10/13/2012,4,1,10,6,1,0.393333,0.391396,0.494583,0.146142,2252,4857,7109
653,10/14/2012,4,1,10,0,1,0.521667,0.508204,0.640417,0.278612,2080,4559,6639
654,10/15/2012,4,1,10,1,2,0.561667,0.53915,0.7075,0.296037,760,5115,5875
655,10/16/2012,4,1,10,2,1,0.468333,0.460846,0.558333,0.182221,922,6612,7534
656,10/17/2012,4,1,10,3,1,0.455833,0.450108,0.692917,0.101371,979,6482,7461
657,10/18/2012,4,1,10,4,2,0.5225,0.512625,0.728333,0.236937,1008,6501,7509
658,10/19/2012,4,1,10,5,2,0.563333,0.537896,0.815,0.134954,753,4671,5424
659,10/20/2012,4,1,10,6,1,0.484167,0.472842,0.572917,0.117537,2806,5284,8090
660,10/21/2012,4,1,10,0,1,0.464167,0.456429,0.51,0.166054,2132,4692,6824
661,10/22/2012,4,1,10,1,1,0.4875,0.482942,0.568333,0.0814833,830,6228,7058
662,10/23/2012,4,1,10,2,1,0.544167,0.530304,0.641667,0.0945458,841,6625,7466
663,10/24/2012,4,1,10,3,1,0.5875,0.558721,0.63625,0.0727792,795,6898,7693
664,10/25/2012,4,1,10,4,2,0.55,0.529688,0.800417,0.124375,875,6484,7359
665,10/26/2012,4,1,10,5,2,0.545833,0.52275,0.807083,0.132467,1182,6262,7444
666,10/27/2012,4,1,10,6,2,0.53,0.515133,0.72,0.235692,2643,5209,7852
667,10/28/2012,4,1,10,0,2,0.4775,0.467771,0.694583,0.398008,998,3461,4459
668,10/29/2012,4,1,10,1,3,0.44,0.4394,0.88,0.3582,2,20,22
669,10/30/2012,4,1,10,2,2,0.318182,0.309909,0.825455,0.213009,87,1009,1096
670,10/31/2012,4,1,10,3,2,0.3575,0.3611,0.666667,0.166667,419,5147,5566
671,11/1/2012,4,1,11,4,2,0.365833,0.369942,0.581667,0.157346,466,5520,5986
672,11/2/2012,4,1,11,5,1,0.355,0.356042,0.522083,0.266175,618,5229,5847
673,11/3/2012,4,1,11,6,2,0.343333,0.323846,0.49125,0.270529,1029,4109,5138
674,11/4/2012,4,1,11,0,1,0.325833,0.329538,0.532917,0.179108,1201,3906,5107
675,11/5/2012,4,1,11,1,1,0.319167,0.308075,0.494167,0.236325,378,4881,5259
676,11/6/2012,4,1,11,2,1,0.280833,0.281567,0.567083,0.173513,466,5220,5686
677,11/7/2012,4,1,11,3,2,0.295833,0.274621,0.5475,0.304108,326,4709,5035
678,11/8/2012,4,1,11,4,1,0.352174,0.341891,0.333478,0.347835,340,4975,5315
679,11/9/2012,4,1,11,5,1,0.361667,0.355413,0.540833,0.214558,709,5283,5992
680,11/10/2012,4,1,11,6,1,0.389167,0.393937,0.645417,0.0578458,2090,4446,6536
681,11/11/2012,4,1,11,0,1,0.420833,0.421713,0.659167,0.1275,2290,4562,6852
682,11/12/2012,4,1,11,1,1,0.485,0.475383,0.741667,0.173517,1097,5172,6269
683,11/13/2012,4,1,11,2,2,0.343333,0.323225,0.662917,0.342046,327,3767,4094
684,11/14/2012,4,1,11,3,1,0.289167,0.281563,0.552083,0.199625,373,5122,5495
685,11/15/2012,4,1,11,4,2,0.321667,0.324492,0.620417,0.152987,320,5125,5445
686,11/16/2012,4,1,11,5,1,0.345,0.347204,0.524583,0.171025,484,5214,5698
687,11/17/2012,4,1,11,6,1,0.325,0.326383,0.545417,0.179729,1313,4316,5629
688,11/18/2012,4,1,11,0,1,0.3425,0.337746,0.692917,0.227612,922,3747,4669
689,11/19/2012,4,1,11,1,2,0.380833,0.375621,0.623333,0.235067,449,5050,5499
690,11/20/2012,4,1,11,2,2,0.374167,0.380667,0.685,0.082725,534,5100,5634
691,11/21/2012,4,1,11,3,1,0.353333,0.364892,0.61375,0.103246,615,4531,5146
692,11/22/2012,4,1,11,4,1,0.34,0.350371,0.580417,0.0528708,955,1470,2425
693,11/23/2012,4,1,11,5,1,0.368333,0.378779,0.56875,0.148021,1603,2307,3910
694,11/24/2012,4,1,11,6,1,0.278333,0.248742,0.404583,0.376871,532,1745,2277
695,11/25/2012,4,1,11,0,1,0.245833,0.257583,0.468333,0.1505,309,2115,2424
696,11/26/2012,4,1,11,1,1,0.313333,0.339004,0.535417,0.04665,337,4750,5087
697,11/27/2012,4,1,11,2,2,0.291667,0.281558,0.786667,0.237562,123,3836,3959
698,11/28/2012,4,1,11,3,1,0.296667,0.289762,0.50625,0.210821,198,5062,5260
699,11/29/2012,4,1,11,4,1,0.28087,0.298422,0.555652,0.115522,243,5080,5323
700,11/30/2012,4,1,11,5,1,0.298333,0.323867,0.649583,0.0584708,362,5306,5668
701,12/1/2012,4,1,12,6,2,0.298333,0.316904,0.806667,0.0597042,951,4240,5191
702,12/2/2012,4,1,12,0,2,0.3475,0.359208,0.823333,0.124379,892,3757,4649
703,12/3/2012,4,1,12,1,1,0.4525,0.455796,0.7675,0.0827208,555,5679,6234
704,12/4/2012,4,1,12,2,1,0.475833,0.469054,0.73375,0.174129,551,6055,6606
705,12/5/2012,4,1,12,3,1,0.438333,0.428012,0.485,0.324021,331,5398,5729
706,12/6/2012,4,1,12,4,1,0.255833,0.258204,0.50875,0.174754,340,5035,5375
707,12/7/2012,4,1,12,5,2,0.320833,0.321958,0.764167,0.1306,349,4659,5008
708,12/8/2012,4,1,12,6,2,0.381667,0.389508,0.91125,0.101379,1153,4429,5582
709,12/9/2012,4,1,12,0,2,0.384167,0.390146,0.905417,0.157975,441,2787,3228
710,12/10/2012,4,1,12,1,2,0.435833,0.435575,0.925,0.190308,329,4841,5170
711,12/11/2012,4,1,12,2,2,0.353333,0.338363,0.596667,0.296037,282,5219,5501
712,12/12/2012,4,1,12,3,2,0.2975,0.297338,0.538333,0.162937,310,5009,5319
713,12/13/2012,4,1,12,4,1,0.295833,0.294188,0.485833,0.174129,425,5107,5532
714,12/14/2012,4,1,12,5,1,0.281667,0.294192,0.642917,0.131229,429,5182,5611
715,12/15/2012,4,1,12,6,1,0.324167,0.338383,0.650417,0.10635,767,4280,5047
716,12/16/2012,4,1,12,0,2,0.3625,0.369938,0.83875,0.100742,538,3248,3786
717,12/17/2012,4,1,12,1,2,0.393333,0.4015,0.907083,0.0982583,212,4373,4585
718,12/18/2012,4,1,12,2,1,0.410833,0.409708,0.66625,0.221404,433,5124,5557
719,12/19/2012,4,1,12,3,1,0.3325,0.342162,0.625417,0.184092,333,4934,5267
720,12/20/2012,4,1,12,4,2,0.33,0.335217,0.667917,0.132463,314,3814,4128
721,12/21/2012,1,1,12,5,2,0.326667,0.301767,0.556667,0.374383,221,3402,3623
722,12/22/2012,1,1,12,6,1,0.265833,0.236113,0.44125,0.407346,205,1544,1749
723,12/23/2012,1,1,12,0,1,0.245833,0.259471,0.515417,0.133083,408,1379,1787
724,12/24/2012,1,1,12,1,2,0.231304,0.2589,0.791304,0.0772304,174,746,920
725,12/25/2012,1,1,12,2,2,0.291304,0.294465,0.734783,0.168726,440,573,1013
726,12/26/2012,1,1,12,3,3,0.243333,0.220333,0.823333,0.316546,9,432,441
727,12/27/2012,1,1,12,4,2,0.254167,0.226642,0.652917,0.350133,247,1867,2114
728,12/28/2012,1,1,12,5,2,0.253333,0.255046,0.59,0.155471,644,2451,3095
729,12/29/2012,1,1,12,6,2,0.253333,0.2424,0.752917,0.124383,159,1182,1341
730,12/30/2012,1,1,12,0,1,0.255833,0.2317,0.483333,0.350754,364,1432,1796
731,12/31/2012,1,1,12,1,2,0.215833,0.223487,0.5775,0.154846,439,2290,2729
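The rows above are the tail of a day-level bike-share dataset with columns instant, dteday, season, yr, mnth, weekday, weathersit, temp, atemp, hum, windspeed, casual, registered, and cnt, where cnt is the sum of casual and registered rentals. As a quick, illustrative sketch only — the file name "bike-no.csv" below is an assumption, not something this diff confirms — the file can be loaded with pandas:

    # Minimal sketch for loading the day-level bike-share file shown above.
    # "bike-no.csv" is a hypothetical local path; point it at the actual file.
    import pandas as pd

    df = pd.read_csv(
        "bike-no.csv",            # assumed path to the CSV above
        parse_dates=["dteday"],   # e.g. "10/12/2012" -> Timestamp
    )

    # Sanity check implied by the data: total count = casual + registered.
    assert (df["cnt"] == df["casual"] + df["registered"]).all()

    print(df[["dteday", "season", "weathersit", "temp", "cnt"]].tail())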

Some files were not shown because too many files have changed in this diff.