Compare commits


1126 Commits

Author SHA1 Message Date
amlrelsa-ms
08b0ba7854 update samples from Release-129 as a part of SDK release 2022-03-29 18:28:35 +00:00
Harneet Virk
ceaf82acc6 Merge pull request #1720 from Azure/release_update/Release-128
update samples from Release-128 as a part of  SDK release
2022-03-21 17:56:06 -07:00
amlrelsa-ms
dadc93cfe5 update samples from Release-128 as a part of SDK release 2022-03-22 00:51:19 +00:00
Harneet Virk
c7076bf95c Merge pull request #1715 from Azure/release_update/Release-127
update samples from Release-127 as a part of  SDK release
2022-03-15 17:02:41 -07:00
amlrelsa-ms
ebdffd5626 update samples from Release-127 as a part of SDK release 2022-03-16 00:00:00 +00:00
Harneet Virk
d123880562 Merge pull request #1711 from Azure/release_update/Release-126
update samples from Release-126 as a part of  SDK release
2022-03-11 16:53:06 -08:00
amlrelsa-ms
4864e8ea60 update samples from Release-126 as a part of SDK release 2022-03-12 00:47:46 +00:00
Harneet Virk
c86db0d7fd Merge pull request #1707 from Azure/release_update/Release-124
update samples from Release-124 as a part of  SDK release
2022-03-08 09:15:45 -08:00
amlrelsa-ms
ccfbbb3b14 update samples from Release-124 as a part of SDK release 2022-03-08 00:37:35 +00:00
Harneet Virk
c42ba64b15 Merge pull request #1700 from Azure/release_update/Release-123
update samples from Release-123 as a part of  SDK release
2022-03-01 16:33:02 -08:00
amlrelsa-ms
6d8bf32243 update samples from Release-123 as a part of SDK release 2022-02-28 17:20:57 +00:00
Harneet Virk
9094da4085 Merge pull request #1684 from Azure/release_update/Release-122
update samples from Release-122 as a part of  SDK release
2022-02-14 11:38:49 -08:00
amlrelsa-ms
ebf9d2855c update samples from Release-122 as a part of SDK release 2022-02-14 19:24:27 +00:00
v-pbavanari
1bbd78eb33 update samples from Release-121 as a part of SDK release (#1678)
Co-authored-by: amlrelsa-ms <amlrelsa@microsoft.com>
2022-02-02 12:28:49 -05:00
v-pbavanari
77f5a69e04 update samples from Release-120 as a part of SDK release (#1676)
Co-authored-by: amlrelsa-ms <amlrelsa@microsoft.com>
2022-01-28 12:51:49 -05:00
raja7592
ce82af2ab0 update samples from Release-118 as a part of SDK release (#1673)
Co-authored-by: amlrelsa-ms <amlrelsa@microsoft.com>
2022-01-24 20:07:35 -05:00
Harneet Virk
2a2d2efa17 Merge pull request #1658 from Azure/release_update/Release-117
Update samples from Release sdk 1.37.0 as a part of  SDK release
2021-12-13 10:36:08 -08:00
amlrelsa-ms
dd494e9cac update samples from Release-117 as a part of SDK release 2021-12-13 16:57:22 +00:00
Harneet Virk
352adb7487 Merge pull request #1629 from Azure/release_update/Release-116
Update samples from Release as a part of SDK release 1.36.0
2021-11-08 09:48:25 -08:00
amlrelsa-ms
aebe34b4e8 update samples from Release-116 as a part of SDK release 2021-11-08 16:09:41 +00:00
Harneet Virk
c7e1241e20 Merge pull request #1612 from Azure/release_update/Release-115
Update samples from Release-115 as a part of  SDK release
2021-10-11 12:01:59 -07:00
amlrelsa-ms
6529298c24 update samples from Release-115 as a part of SDK release 2021-10-11 16:09:57 +00:00
Harneet Virk
e2dddfde85 Merge pull request #1601 from Azure/release_update/Release-114
update samples from Release-114 as a part of  SDK release
2021-09-29 14:21:59 -07:00
amlrelsa-ms
36d96f96ec update samples from Release-114 as a part of SDK release 2021-09-29 20:16:51 +00:00
Harneet Virk
7ebcfea5a3 Merge pull request #1600 from Azure/release_update/Release-113
update samples from Release-113 as a part of  SDK release
2021-09-28 12:53:57 -07:00
amlrelsa-ms
b20bfed33a update samples from Release-113 as a part of SDK release 2021-09-28 19:44:58 +00:00
Harneet Virk
a66a92e338 Merge pull request #1597 from Azure/release_update/Release-112
update samples from Release-112 as a part of  SDK release
2021-09-24 14:44:53 -07:00
amlrelsa-ms
c56c2c3525 update samples from Release-112 as a part of SDK release 2021-09-24 21:40:44 +00:00
Harneet Virk
4cac072fa4 Merge pull request #1588 from Azure/release_update/Release-111
Update samples from Release-111 as a part of SDK 1.34.0 release
2021-09-09 09:02:38 -07:00
amlrelsa-ms
aeab6b3e28 update samples from Release-111 as a part of SDK release 2021-09-07 17:32:15 +00:00
Harneet Virk
015e261f29 Merge pull request #1581 from Azure/release_update/Release-110
update samples from Release-110 as a part of  SDK release
2021-08-20 09:21:08 -07:00
amlrelsa-ms
d2a423dde9 update samples from Release-110 as a part of SDK release 2021-08-20 00:28:42 +00:00
Harneet Virk
3ecbfd6532 Merge pull request #1578 from Azure/release_update/Release-109
update samples from Release-109 as a part of  SDK release
2021-08-18 18:16:31 -07:00
amlrelsa-ms
02ecb2d755 update samples from Release-109 as a part of SDK release 2021-08-18 22:07:12 +00:00
Harneet Virk
122df6e846 Merge pull request #1576 from Azure/release_update/Release-108
update samples from Release-108 as a part of  SDK release
2021-08-18 09:47:34 -07:00
amlrelsa-ms
7d6a0a2051 update samples from Release-108 as a part of SDK release 2021-08-18 00:33:54 +00:00
Harneet Virk
6cc8af80a2 Merge pull request #1565 from Azure/release_update/Release-107
update samples from Release-107 as a part of  SDK release 1.33
2021-08-02 13:14:30 -07:00
amlrelsa-ms
f61898f718 update samples from Release-107 as a part of SDK release 2021-08-02 18:01:38 +00:00
Harneet Virk
5cb465171e Merge pull request #1556 from Azure/update-spark-notebook
updating spark notebook
2021-07-26 17:09:42 -07:00
Shivani Santosh Sambare
0ce37dd18f updating spark notebook 2021-07-26 15:51:54 -07:00
Cody
d835b183a5 update README.md (#1552) 2021-07-15 10:43:22 -07:00
Cody
d3cafebff9 add code of conduct (#1551) 2021-07-15 08:08:44 -07:00
Harneet Virk
354b194a25 Merge pull request #1543 from Azure/release_update/Release-106
update samples from Release-106 as a part of  SDK release
2021-07-06 11:05:55 -07:00
amlrelsa-ms
a52d67bb84 update samples from Release-106 as a part of SDK release 2021-07-06 17:17:27 +00:00
Harneet Virk
421ea3d920 Merge pull request #1530 from Azure/release_update/Release-105
update samples from Release-105 as a part of  SDK release
2021-06-25 09:58:05 -07:00
amlrelsa-ms
24f53f1aa1 update samples from Release-105 as a part of SDK release 2021-06-24 23:00:13 +00:00
Harneet Virk
6fc5d11de2 Merge pull request #1518 from Azure/release_update/Release-104
update samples from Release-104 as a part of  SDK release
2021-06-21 10:29:53 -07:00
amlrelsa-ms
d17547d890 update samples from Release-104 as a part of SDK release 2021-06-21 17:16:09 +00:00
Harneet Virk
928e0d4327 Merge pull request #1510 from Azure/release_update/Release-103
update samples from Release-103 as a part of  SDK release
2021-06-14 10:33:34 -07:00
amlrelsa-ms
05327cfbb9 update samples from Release-103 as a part of SDK release 2021-06-14 17:30:30 +00:00
Harneet Virk
8f7717014b Merge pull request #1506 from Azure/release_update/Release-102
update samples from Release-102 as a part of  SDK release 1.30.0
2021-06-07 11:14:02 -07:00
amlrelsa-ms
a47e50b79a update samples from Release-102 as a part of SDK release 2021-06-07 17:34:51 +00:00
Harneet Virk
8f89d88def Merge pull request #1505 from Azure/release_update/Release-101
update samples from Release-101 as a part of  SDK release
2021-06-04 19:54:53 -07:00
amlrelsa-ms
ec97207bb1 update samples from Release-101 as a part of SDK release 2021-06-05 02:54:13 +00:00
Harneet Virk
a2d20b0f47 Merge pull request #1493 from Azure/release_update/Release-98
update samples from Release-98 as a part of  SDK release
2021-05-28 08:04:58 -07:00
amlrelsa-ms
8180cebd75 update samples from Release-98 as a part of SDK release 2021-05-28 03:44:25 +00:00
Harneet Virk
700ab2d782 Merge pull request #1489 from Azure/release_update/Release-97
update samples from Release-97 as a part of  SDK  1.29.0 release
2021-05-25 07:43:14 -07:00
amlrelsa-ms
ec9a5a061d update samples from Release-97 as a part of SDK release 2021-05-24 17:39:23 +00:00
Harneet Virk
467630f955 Merge pull request #1466 from Azure/release_update/Release-96
update samples from Release-96 as a part of  SDK release 1.28.0
2021-05-10 22:48:19 -07:00
amlrelsa-ms
eac6b69bae update samples from Release-96 as a part of SDK release 2021-05-10 18:38:34 +00:00
Harneet Virk
441a5b0141 Merge pull request #1440 from Azure/release_update/Release-95
update samples from Release-95 as a part of  SDK 1.27 release
2021-04-19 11:51:21 -07:00
amlrelsa-ms
70902df6da update samples from Release-95 as a part of SDK release 2021-04-19 18:42:58 +00:00
nikAI77
6f893ff0b4 update samples from Release-94 as a part of SDK release (#1418)
Co-authored-by: amlrelsa-ms <amlrelsa@microsoft.com>
2021-04-06 12:36:12 -04:00
Harneet Virk
bda592a236 Merge pull request #1406 from Azure/release_update/Release-93
update samples from Release-93 as a part of  SDK release
2021-03-24 11:25:00 -07:00
amlrelsa-ms
8b32e8d5ad update samples from Release-93 as a part of SDK release 2021-03-24 16:45:36 +00:00
Harneet Virk
54a065c698 Merge pull request #1386 from yunjie-hub/master
Add synapse sample notebooks
2021-03-09 18:05:10 -08:00
yunjie-hub
b9718678b3 Add files via upload 2021-03-09 18:02:27 -08:00
Harneet Virk
3fa40d2c6d Merge pull request #1385 from Azure/release_update/Release-92
update samples from Release-92 as a part of  SDK release
2021-03-09 17:51:27 -08:00
amlrelsa-ms
883e4a4c59 update samples from Release-92 as a part of SDK release 2021-03-10 01:48:54 +00:00
Harneet Virk
e90826b331 Merge pull request #1384 from yunjie-hub/master
Add synapse sample notebooks
2021-03-09 12:40:33 -08:00
yunjie-hub
ac04172f6d Add files via upload 2021-03-09 12:38:23 -08:00
Harneet Virk
8c0000beb4 Merge pull request #1382 from Azure/release_update/Release-91
update samples from Release-91 as a part of  SDK release
2021-03-08 21:43:10 -08:00
amlrelsa-ms
35287ab0d8 update samples from Release-91 as a part of SDK release 2021-03-09 05:36:08 +00:00
Harneet Virk
3fe4f8b038 Merge pull request #1375 from Azure/release_update/Release-90
update samples from Release-90 as a part of  SDK release
2021-03-01 09:15:14 -08:00
amlrelsa-ms
1722678469 update samples from Release-90 as a part of SDK release 2021-03-01 17:13:25 +00:00
Harneet Virk
17da7e8706 Merge pull request #1364 from Azure/release_update/Release-89
update samples from Release-89 as a part of  SDK release
2021-02-23 17:27:27 -08:00
amlrelsa-ms
d2e7213ff3 update samples from Release-89 as a part of SDK release 2021-02-24 01:26:17 +00:00
mx-iao
882cb76e8a Merge pull request #1361 from Azure/minxia/distr-pytorch
Update distributed pytorch example
2021-02-23 12:07:20 -08:00
mx-iao
37f37a46c1 Delete pytorch_mnist.py 2021-02-23 11:19:39 -08:00
mx-iao
0cd1412421 Delete distributed-pytorch-with-nccl-gloo.ipynb 2021-02-23 11:19:33 -08:00
mx-iao
c3ae9f00f6 Add files via upload 2021-02-23 11:19:02 -08:00
mx-iao
11b02c650c Rename how-to-use-azureml/ml-frameworks/pytorch/distributed-pytorch-with-distributeddataparallel.ipynb to how-to-use-azureml/ml-frameworks/pytorch/distributed-pytorch-with-distributeddataparallel/distributed-pytorch-with-distributeddataparallel.ipynb 2021-02-23 11:18:43 -08:00
mx-iao
606048c71f Add files via upload 2021-02-23 11:18:10 -08:00
Harneet Virk
cb1c354d44 Merge pull request #1353 from Azure/release_update/Release-88
update samples from Release-88 as a part of  SDK release 1.23.0
2021-02-22 11:49:02 -08:00
amlrelsa-ms
c868fff5a2 update samples from Release-88 as a part of SDK release 2021-02-22 19:23:04 +00:00
Harneet Virk
bc4e6611c4 Merge pull request #1342 from Azure/release_update/Release-87
update samples from Release-87 as a part of  SDK release
2021-02-16 18:43:49 -08:00
amlrelsa-ms
0a58881b70 update samples from Release-87 as a part of SDK release 2021-02-17 02:13:51 +00:00
Harneet Virk
2544e85c5f Merge pull request #1333 from Azure/release_update/Release-85
SDK release 1.22.0
2021-02-10 07:59:22 -08:00
amlrelsa-ms
7fe27501d1 update samples from Release-85 as a part of SDK release 2021-02-10 15:27:28 +00:00
Harneet Virk
624c46e7f9 Merge pull request #1321 from Azure/release_update/Release-84
update samples from Release-84 as a part of  SDK release
2021-02-05 19:10:29 -08:00
amlrelsa-ms
40fbadd85c update samples from Release-84 as a part of SDK release 2021-02-06 03:09:22 +00:00
Harneet Virk
0c1fc25542 Merge pull request #1317 from Azure/release_update/Release-83
update samples from Release-83 as a part of  SDK release
2021-02-03 14:31:31 -08:00
amlrelsa-ms
e8e1357229 update samples from Release-83 as a part of SDK release 2021-02-03 05:22:32 +00:00
Harneet Virk
ad44f8fa2b Merge pull request #1313 from zronaghi/contrib-rapids
Update RAPIDS README
2021-01-29 10:33:47 -08:00
Zahra Ronaghi
ee63e759f0 Update RAPIDS README 2021-01-28 22:19:27 -06:00
Harneet Virk
b81d97ebbf Merge pull request #1303 from Azure/release_update/Release-82
update samples from Release-82 as a part of  SDK release 1.21.0
2021-01-25 11:09:12 -08:00
amlrelsa-ms
249fb6bbb5 update samples from Release-82 as a part of SDK release 2021-01-25 19:03:14 +00:00
Harneet Virk
cda1f3e4cf Merge pull request #1289 from Azure/release_update/Release-81
update samples from Release-81 as a part of  SDK release
2021-01-11 12:52:48 -07:00
amlrelsa-ms
1d05efaac2 update samples from Release-81 as a part of SDK release 2021-01-11 19:35:54 +00:00
Harneet Virk
3adebd1127 Merge pull request #1262 from Azure/release_update/Release-80
update samples from Release-80 as a part of  SDK release
2020-12-11 16:49:33 -08:00
amlrelsa-ms
a6817063df update samples from Release-80 as a part of SDK release 2020-12-12 00:45:42 +00:00
Harneet Virk
a79f8c254a Merge pull request #1255 from Azure/release_update/Release-79
update samples from Release-79 as a part of  SDK release
2020-12-07 11:11:32 -08:00
amlrelsa-ms
fb4f287458 update samples from Release-79 as a part of SDK release 2020-12-07 19:09:59 +00:00
Harneet Virk
41366a4af0 Merge pull request #1238 from Azure/release_update/Release-78
update samples from Release-78 as a part of  SDK release
2020-11-11 13:00:22 -08:00
amlrelsa-ms
74deb14fac update samples from Release-78 as a part of SDK release 2020-11-11 19:32:32 +00:00
Harneet Virk
4ed1d445ae Merge pull request #1236 from Azure/release_update/Release-77
update samples from Release-77 as a part of  SDK release
2020-11-10 10:52:23 -08:00
amlrelsa-ms
b5c15db0b4 update samples from Release-77 as a part of SDK release 2020-11-10 18:46:23 +00:00
Harneet Virk
91d43bade6 Merge pull request #1235 from Azure/release_update_stablev2/Release-44
update samples from Release-44 as a part of 1.18.0 SDK stable release
2020-11-10 08:52:24 -08:00
amlrelsa-ms
bd750f5817 update samples from Release-44 as a part of 1.18.0 SDK stable release 2020-11-10 03:42:03 +00:00
mx-iao
637bcc5973 Merge pull request #1229 from Azure/lostmygithubaccount-patch-3
Update README.md
2020-11-03 15:18:37 -10:00
Cody
ba741fb18d Update README.md 2020-11-03 17:16:28 -08:00
Harneet Virk
ac0ad8d487 Merge pull request #1228 from Azure/release_update/Release-76
update samples from Release-76 as a part of  SDK release
2020-11-03 16:12:15 -08:00
amlrelsa-ms
5019ad6c5a update samples from Release-76 as a part of SDK release 2020-11-03 22:31:02 +00:00
Cody
41a2ebd2b3 Merge pull request #1226 from Azure/lostmygithubaccount-patch-3
Update README.md
2020-11-03 11:25:10 -08:00
Cody
53e3283d1d Update README.md 2020-11-03 11:17:41 -08:00
Harneet Virk
ba9c4c5465 Merge pull request #1225 from Azure/release_update/Release-75
update samples from Release-75 as a part of  SDK release
2020-11-03 11:11:11 -08:00
amlrelsa-ms
a6c65f00ec update samples from Release-75 as a part of SDK release 2020-11-03 19:07:12 +00:00
Cody
95072eabc2 Merge pull request #1221 from Azure/lostmygithubaccount-patch-2
Update README.md
2020-11-02 11:52:05 -08:00
Cody
12905ef254 Update README.md 2020-11-02 06:59:44 -08:00
Harneet Virk
4cf56eee91 Merge pull request #1217 from Azure/release_update/Release-74
update samples from Release-74 as a part of  SDK release
2020-10-30 17:27:02 -07:00
amlrelsa-ms
d345ff6c37 update samples from Release-74 as a part of SDK release 2020-10-30 22:20:10 +00:00
Harneet Virk
560dcac0a0 Merge pull request #1214 from Azure/release_update/Release-73
update samples from Release-73 as a part of  SDK release
2020-10-29 23:38:02 -07:00
amlrelsa-ms
322087a58c update samples from Release-73 as a part of SDK release 2020-10-30 06:37:05 +00:00
Harneet Virk
e255c000ab Merge pull request #1211 from Azure/release_update/Release-72
update samples from Release-72 as a part of  SDK release
2020-10-28 14:30:50 -07:00
amlrelsa-ms
7871e37ec0 update samples from Release-72 as a part of SDK release 2020-10-28 21:24:40 +00:00
Cody
58e584e7eb Update README.md (#1209) 2020-10-27 21:00:38 -04:00
Harneet Virk
1b0d75cb45 Merge pull request #1206 from Azure/release_update/Release-71
update samples from Release-71 as a part of  SDK 1.17.0 release
2020-10-26 22:29:48 -07:00
amlrelsa-ms
5c38272fb4 update samples from Release-71 as a part of SDK release 2020-10-27 04:11:39 +00:00
Harneet Virk
e026c56f19 Merge pull request #1200 from Azure/cody/add-new-repo-link
update readme
2020-10-22 10:50:03 -07:00
Cody
4aad830f1c update readme 2020-10-22 09:13:20 -07:00
Harneet Virk
c1b125025a Merge pull request #1198 from harneetvirk/master
Fixing/Removing broken links
2020-10-20 12:30:46 -07:00
Harneet Virk
9f364f7638 Update README.md 2020-10-20 12:30:03 -07:00
Harneet Virk
4beb749a76 Fixing/Removing the broken links 2020-10-20 12:28:45 -07:00
Harneet Virk
04fe8c4580 Merge pull request #1191 from savitamittal1/patch-4
Update README.md
2020-10-17 08:48:20 -07:00
Harneet Virk
498018451a Merge pull request #1193 from savitamittal1/patch-6
Update automl-databricks-local-with-deployment.ipynb
2020-10-17 08:47:54 -07:00
savitamittal1
04305e33f0 Update automl-databricks-local-with-deployment.ipynb 2020-10-16 23:58:12 -07:00
savitamittal1
d22e76d5e0 Update README.md 2020-10-16 23:53:41 -07:00
Harneet Virk
d71c482f75 Merge pull request #1184 from Azure/release_update/Release-70
update samples from Release-70 as a part of  SDK 1.16.0 release
2020-10-12 22:24:25 -07:00
amlrelsa-ms
5775f8a78f update samples from Release-70 as a part of SDK release 2020-10-13 05:19:49 +00:00
Cody
aae823ecd8 Merge pull request #1181 from samuel100/quickstart-notebook
quickstart nb added
2020-10-09 10:54:32 -07:00
Sam Kemp
f1126e07f9 quickstart nb added 2020-10-09 10:35:19 +01:00
Harneet Virk
0e4b27a233 Merge pull request #1171 from savitamittal1/patch-2
Update automl-databricks-local-01.ipynb
2020-10-02 09:41:14 -07:00
Harneet Virk
0a3d5f68a1 Merge pull request #1172 from savitamittal1/patch-3
Update automl-databricks-local-with-deployment.ipynb
2020-10-02 09:41:02 -07:00
savitamittal1
a6fe2affcb Update automl-databricks-local-with-deployment.ipynb
fixed link to readme
2020-10-01 19:38:11 -07:00
savitamittal1
ce469ddf6a Update automl-databricks-local-01.ipynb
fixed link for readme
2020-10-01 19:36:06 -07:00
mx-iao
9fe459be79 Merge pull request #1166 from Azure/minxia/patch
patch for resume training notebook
2020-09-29 17:30:24 -07:00
mx-iao
89c35c8ed6 Update train-tensorflow-resume-training.ipynb 2020-09-29 17:28:17 -07:00
mx-iao
33168c7f5d Update train-tensorflow-resume-training.ipynb 2020-09-29 17:27:23 -07:00
Cody
1d0766bd46 Merge pull request #1165 from samuel100/quickstart-add
quickstart added
2020-09-29 13:13:36 -07:00
Sam Kemp
9903e56882 quickstart added 2020-09-29 21:09:55 +01:00
Harneet Virk
a039166b90 Merge pull request #1162 from Azure/release_update/Release-69
update samples from Release-69 as a part of  SDK 1.15.0 release
2020-09-28 23:54:05 -07:00
amlrelsa-ms
4e4bf48013 update samples from Release-69 as a part of SDK release 2020-09-29 06:48:31 +00:00
Harneet Virk
0a2408300a Merge pull request #1158 from Azure/release_update/Release-68
update samples from Release-68 as a part of  SDK release
2020-09-25 09:23:59 -07:00
amlrelsa-ms
d99c3f5470 update samples from Release-68 as a part of SDK release 2020-09-25 16:10:59 +00:00
Harneet Virk
3f62fe7d47 Merge pull request #1157 from Azure/release_update/Release-67
update samples from Release-67 as a part of  SDK release
2020-09-23 15:51:20 -07:00
amlrelsa-ms
6059c1dc0c update samples from Release-67 as a part of SDK release 2020-09-23 22:48:56 +00:00
Harneet Virk
8e2032fcde Merge pull request #1153 from Azure/release_update/Release-66
update samples from Release-66 as a part of  SDK release
2020-09-21 16:04:23 -07:00
amlrelsa-ms
824d844cd7 update samples from Release-66 as a part of SDK release 2020-09-21 23:02:01 +00:00
Harneet Virk
bb1c7db690 Merge pull request #1148 from Azure/release_update/Release-65
update samples from Release-65 as a part of  SDK release
2020-09-16 18:23:12 -07:00
amlrelsa-ms
8dad09a42f update samples from Release-65 as a part of SDK release 2020-09-17 01:14:32 +00:00
Harneet Virk
db2bf8ae93 Merge pull request #1137 from Azure/release_update/Release-64
update samples from Release-64 as a part of  SDK release
2020-09-09 15:31:51 -07:00
amlrelsa-ms
820c09734f update samples from Release-64 as a part of SDK release 2020-09-09 22:30:45 +00:00
Cody
a2a33c70a6 Merge pull request #1123 from oliverw1/patch-2
docs: bring docs in line with code
2020-09-02 11:12:31 -07:00
Cody
2ff791968a Merge pull request #1122 from oliverw1/patch-1
docs: Move unintended side columns below the main rows
2020-09-02 11:11:58 -07:00
Harneet Virk
7186127804 Merge pull request #1128 from Azure/release_update/Release-63
update samples from Release-63 as a part of  SDK release
2020-08-31 13:23:08 -07:00
amlrelsa-ms
b01c52bfd6 update samples from Release-63 as a part of SDK release 2020-08-31 20:00:07 +00:00
Oliver W
28be7bcf58 docs: bring docs in line with code
A non-existant name was being referred to, which only serves confusion.
2020-08-28 10:24:24 +02:00
Oliver W
37a9350fde Properly format markdown table
Remove the unintended two columns that appeared on the right side
2020-08-28 09:29:46 +02:00
Harneet Virk
5080053a35 Merge pull request #1120 from Azure/release_update/Release-62
update samples from Release-62 as a part of  SDK release
2020-08-27 17:12:05 -07:00
amlrelsa-ms
3c02102691 update samples from Release-62 as a part of SDK release 2020-08-27 23:28:05 +00:00
Sheri Gilley
07e1676762 Merge pull request #1010 from GinSiuCheng/patch-1
Include additional details on user authentication
2020-08-25 11:45:58 -05:00
Sheri Gilley
919a3c078f fix code blocks 2020-08-25 11:13:24 -05:00
Sheri Gilley
9b53c924ed add code block for better formatting 2020-08-25 11:09:56 -05:00
Sheri Gilley
04ad58056f fix quotes 2020-08-25 11:06:18 -05:00
Sheri Gilley
576bf386b5 fix quotes 2020-08-25 11:05:25 -05:00
Cody
7e62d1cfd6 Merge pull request #891 from Fokko/patch-1
Don't print the access token
2020-08-22 18:28:33 -07:00
Cody
ec67a569af Merge pull request #804 from omartin2010/patch-3
typo
2020-08-17 14:35:55 -07:00
Cody
6d1e80bcef Merge pull request #1031 from hyoshioka0128/patch-1
Typo "Mircosoft"→"Microsoft"
2020-08-17 14:32:44 -07:00
mx-iao
db00d9ad3c Merge pull request #1100 from Azure/lostmygithubaccount-patch-1
fix minor typo in how-to-use-azureml/README.md
2020-08-17 14:30:18 -07:00
Harneet Virk
d33c75abc3 Merge pull request #1104 from Azure/release_update/Release-61
update samples from Release-61 as a part of  SDK release
2020-08-17 10:59:39 -07:00
amlrelsa-ms
d0dc4836ae update samples from Release-61 as a part of SDK release 2020-08-17 17:45:26 +00:00
Cody
982f8fcc1d Update README.md 2020-08-14 15:25:39 -07:00
Akshaya Annavajhala
79739b5e1b Remove broken links (#1095)
* Remove broken links

* Update README.md
2020-08-10 19:35:41 -04:00
Harneet Virk
aac4fa1fb9 Merge pull request #1081 from Azure/release_update/Release-60
update samples from Release-60 as a part of  SDK 1.11.0 release
2020-08-04 00:04:38 -07:00
amlrelsa-ms
5b684070e1 update samples from Release-60 as a part of SDK release 2020-08-04 06:12:06 +00:00
Harneet Virk
0ab8b141ee Merge pull request #1078 from Azure/release_update/Release-59
update samples from Release-59 as a part of  SDK release
2020-07-31 10:52:22 -07:00
amlrelsa-ms
b9ef23ad4b update samples from Release-59 as a part of SDK release 2020-07-31 17:23:17 +00:00
Harneet Virk
7e2c1ca152 Merge pull request #1063 from Azure/release_update/Release-58
update samples from Release-58 as a part of  SDK release
2020-07-20 13:46:37 -07:00
amlrelsa-ms
d096535e48 update samples from Release-58 as a part of SDK release 2020-07-20 20:44:42 +00:00
Harneet Virk
f80512a6db Merge pull request #1056 from wchill/wchill-patch-1
Update README.md with KeyError: brand workaround
2020-07-15 10:22:18 -07:00
Eric Ahn
b54111620e Update README.md 2020-07-14 17:47:23 -07:00
Harneet Virk
8dd52ee2df Merge pull request #1036 from Azure/release_update/Release-57
update samples from Release-57 as a part of  SDK release
2020-07-06 15:06:14 -07:00
amlrelsa-ms
6c629f1eda update samples from Release-57 as a part of SDK release 2020-07-06 22:05:24 +00:00
Hiroshi Yoshioka
9c32ca9db5 Typo "Mircosoft"→"Microsoft"
https://docs.microsoft.com/en-us/samples/azure/machinelearningnotebooks/azure-machine-learning-service-example-notebooks/
2020-06-29 12:21:23 +09:00
Harneet Virk
053efde8c9 Merge pull request #1022 from Azure/release_update/Release-56
update samples from Release-56 as a part of  SDK release
2020-06-22 11:12:31 -07:00
amlrelsa-ms
5189691f06 update samples from Release-56 as a part of SDK release 2020-06-22 18:11:40 +00:00
Gin
745b4f0624 Include additional details on user authentication
Additional details should be included for user authentication esp. for enterprise users who may have more than one single aad tenant linked to a user.
2020-06-13 21:24:56 -04:00
Harneet Virk
fb900916e3 Update README.md 2020-06-11 13:26:04 -07:00
Harneet Virk
738347f3da Merge pull request #996 from Azure/release_update/Release-55
update samples from Release-55 as a part of  SDK release
2020-06-08 15:31:35 -07:00
amlrelsa-ms
34a67c1f8b update samples from Release-55 as a part of SDK release 2020-06-08 22:28:25 +00:00
Harneet Virk
34898828be Merge pull request #992 from Azure/release_update/Release-54
update samples from Release-54 as a part of  SDK release
2020-06-02 14:42:02 -07:00
vizhur
a7c3a0fdb8 update samples from Release-54 as a part of SDK release 2020-06-02 21:34:10 +00:00
Harneet Virk
6d11cdfa0a Merge pull request #984 from Azure/release_update/Release-53
update samples from Release-53 as a part of  SDK release
2020-05-26 19:59:58 -07:00
vizhur
11e8ed2bab update samples from Release-53 as a part of SDK release 2020-05-27 02:45:07 +00:00
Harneet Virk
12c06a4168 Merge pull request #978 from ahcan76/patch-1
Fix image paths in tutorial-1st-experiment-sdk-train.ipynb
2020-05-18 12:58:21 -07:00
ahcan76
1f75dc9725 Update tutorial-1st-experiment-sdk-train.ipynb
Fix the image path
2020-05-18 22:40:54 +03:00
Harneet Virk
1a1a42d525 Merge pull request #977 from Azure/release_update/Release-52
update samples from Release-52 as a part of  SDK release
2020-05-18 12:22:48 -07:00
vizhur
879a272a8d update samples from Release-52 as a part of SDK release 2020-05-18 19:21:05 +00:00
Harneet Virk
bc65bde097 Merge pull request #971 from Azure/release_update/Release-51
update samples from Release-51 as a part of  SDK release
2020-05-13 22:17:45 -07:00
vizhur
690bdfbdbe update samples from Release-51 as a part of SDK release 2020-05-14 05:03:47 +00:00
Harneet Virk
3c02bd8782 Merge pull request #967 from Azure/release_update/Release-50
update samples from Release-50 as a part of  SDK release
2020-05-12 19:57:40 -07:00
vizhur
5c14610a1c update samples from Release-50 as a part of SDK release 2020-05-13 02:45:40 +00:00
Harneet Virk
4e3afae6fb Merge pull request #965 from Azure/release_update/Release-49
update samples from Release-49 as a part of  SDK release
2020-05-11 19:25:28 -07:00
vizhur
a2144aa083 update samples from Release-49 as a part of SDK release 2020-05-12 02:24:34 +00:00
Harneet Virk
0e6334178f Merge pull request #963 from Azure/release_update/Release-46
update samples from Release-46 as a part of  SDK release
2020-05-11 14:49:34 -07:00
vizhur
4ec9178d22 update samples from Release-46 as a part of SDK release 2020-05-11 21:48:31 +00:00
Harneet Virk
2aa7c53b0c Merge pull request #962 from Azure/release_update_stablev2/Release-11
update samples from Release-11 as a part of 1.5.0 SDK stable release
2020-05-11 12:42:32 -07:00
vizhur
553fa43e17 update samples from Release-11 as a part of 1.5.0 SDK stable release 2020-05-11 18:59:22 +00:00
Harneet Virk
e98131729e Merge pull request #949 from Azure/release_update_stablev2/Release-8
update samples from Release-8 as a part of 1.4.0 SDK stable release
2020-04-27 11:00:37 -07:00
vizhur
fd2b09e2c2 update samples from Release-8 as a part of 1.4.0 SDK stable release 2020-04-27 17:44:41 +00:00
Harneet Virk
7970209069 Merge pull request #930 from Azure/release_update/Release-44
update samples from Release-44 as a part of  SDK release
2020-04-17 12:46:29 -07:00
vizhur
24f8651bb5 update samples from Release-44 as a part of SDK release 2020-04-17 19:45:37 +00:00
Harneet Virk
b881f78e46 Merge pull request #918 from Azure/release_update_stablev2/Release-6
update samples from Release-6 as a part of 1.3.0 SDK stable release
2020-04-13 09:23:38 -07:00
vizhur
057e22b253 update samples from Release-6 as a part of 1.3.0 SDK stable release 2020-04-13 16:22:23 +00:00
Fokko Driesprong
119fd0a8f6 Don't print the access token
That's never a good idea, no exceptions :)
2020-03-31 08:14:05 +02:00
Harneet Virk
c520bd1d41 Merge pull request #884 from Azure/release_update/Release-43
update samples from Release-43 as a part of  SDK release
2020-03-23 16:49:27 -07:00
vizhur
d3f1212440 update samples from Release-43 as a part of SDK release 2020-03-23 23:39:45 +00:00
Harneet Virk
b95a65eef4 Merge pull request #883 from Azure/release_update_stablev2/Release-3
update samples from Release-3 as a part of 1.2.0 SDK stable release
2020-03-23 16:21:53 -07:00
vizhur
2218af619f update samples from Release-3 as a part of 1.2.0 SDK stable release 2020-03-23 23:11:53 +00:00
Harneet Virk
0401128638 Merge pull request #878 from Azure/release_update/Release-42
update samples from Release-42 as a part of  SDK release
2020-03-20 11:14:02 -07:00
vizhur
59fcb54998 update samples from Release-42 as a part of SDK release 2020-03-20 18:10:08 +00:00
Harneet Virk
e0ea99a6bb Merge pull request #862 from Azure/release_update/Release-41
update samples from Release-41 as a part of  SDK release
2020-03-13 14:57:58 -07:00
vizhur
b06f5ce269 update samples from Release-41 as a part of SDK release 2020-03-13 21:57:04 +00:00
Harneet Virk
ed0ce9e895 Merge pull request #856 from Azure/release_update/Release-40
update samples from Release-40 as a part of  SDK release
2020-03-12 12:28:18 -07:00
vizhur
71053d705b update samples from Release-40 as a part of SDK release 2020-03-12 19:25:26 +00:00
Harneet Virk
77f98bf75f Merge pull request #852 from Azure/release_update_stable/Release-6
update samples from Release-6 as a part of 1.1.5 SDK stable release
2020-03-11 15:37:59 -06:00
vizhur
e443fd1342 update samples from Release-6 as a part of 1.1.5rc0 SDK stable release 2020-03-11 19:51:02 +00:00
Harneet Virk
2165cf308e update samples from Release-25 as a part of 1.1.2rc0 SDK experimental release (#829)
Co-authored-by: vizhur <vizhur@live.com>
2020-03-02 15:42:04 -05:00
Olivier Martin
d4a486827d typo 2020-02-17 17:16:47 -05:00
Harneet Virk
3d6caa10a3 Merge pull request #801 from Azure/release_update/Release-39
update samples from Release-39 as a part of  SDK release
2020-02-13 19:03:36 -07:00
vizhur
4df079db1c update samples from Release-39 as a part of SDK release 2020-02-14 02:01:41 +00:00
Sander Vanhove
67d0b02ef9 Fix broken link in README (#797) 2020-02-13 08:20:28 -05:00
Harneet Virk
4e7b3784d5 Merge pull request #788 from Azure/release_update/Release-38
update samples from Release-38 as a part of  SDK release
2020-02-11 13:16:15 -07:00
vizhur
ed91e39d7e update samples from Release-38 as a part of SDK release 2020-02-11 20:00:16 +00:00
Harneet Virk
a09a1a16a7 Merge pull request #780 from Azure/release_update/Release-37
update samples from Release-37 as a part of  SDK release
2020-02-07 21:52:34 -07:00
vizhur
9662505517 update samples from Release-37 as a part of SDK release 2020-02-08 04:49:27 +00:00
Harneet Virk
8e103c02ff Merge pull request #779 from Azure/release_update/Release-36
update samples from Release-36 as a part of  SDK release
2020-02-07 21:40:57 -07:00
vizhur
ecb5157add update samples from Release-36 as a part of SDK release 2020-02-08 04:35:14 +00:00
Shané Winner
d7d23d5e7c Update index.md 2020-02-05 22:41:22 -08:00
Harneet Virk
83a21ba53a update samples from Release-35 as a part of SDK release (#765)
Co-authored-by: vizhur <vizhur@live.com>
2020-02-05 20:03:41 -05:00
Harneet Virk
3c9cb89c1a update samples from Release-18 as a part of 1.1.0rc0 SDK experimental release (#760)
Co-authored-by: vizhur <vizhur@live.com>
2020-02-04 22:19:52 -05:00
Sheri Gilley
cca7c2e26f add cell metadata 2020-02-04 11:31:07 -06:00
Harneet Virk
e895d7c2bf update samples - test (#758)
Co-authored-by: vizhur <vizhur@live.com>
2020-01-31 15:19:58 -05:00
Shané Winner
3588eb9665 Update index.md 2020-01-23 15:46:43 -08:00
Harneet Virk
a09e726f31 update samples - test (#748)
Co-authored-by: vizhur <vizhur@live.com>
2020-01-23 16:50:29 -05:00
Shané Winner
4fb1d9ee5b Update index.md 2020-01-22 11:38:24 -08:00
Harneet Virk
b05ff80e9d update samples from Release-169 as a part of 1.0.85 SDK release (#742)
Co-authored-by: vizhur <vizhur@live.com>
2020-01-21 18:00:15 -05:00
Shané Winner
512630472b Update index.md 2020-01-08 14:52:23 -08:00
vizhur
ae1337fe70 Merge pull request #724 from Azure/release_update/Release-167
update samples from Release-167 as a part of 1.0.83 SDK release
2020-01-06 15:38:25 -05:00
vizhur
c95f970dc8 update samples from Release-167 as a part of 1.0.83 SDK release 2020-01-06 20:16:21 +00:00
Shané Winner
9b9d112719 Update index.md 2019-12-24 07:40:48 -08:00
vizhur
fe8fcd4b48 Merge pull request #712 from Azure/release_update/Release-31
update samples - test
2019-12-23 20:28:02 -05:00
vizhur
296ae01587 update samples - test 2019-12-24 00:42:48 +00:00
Shané Winner
8f4efe15eb Update index.md 2019-12-10 09:05:23 -08:00
vizhur
d179080467 Merge pull request #690 from Azure/release_update/Release-163
update samples from Release-163 as a part of 1.0.79 SDK release
2019-12-09 15:41:03 -05:00
vizhur
0040644e7a update samples from Release-163 as a part of 1.0.79 SDK release 2019-12-09 20:09:30 +00:00
Shané Winner
8aa04307fb Update index.md 2019-12-03 10:24:18 -08:00
Shané Winner
a525da4488 Update index.md 2019-11-27 13:08:21 -08:00
Shané Winner
e149565a8a Merge pull request #679 from Azure/release_update/Release-30
update samples - test
2019-11-27 13:05:00 -08:00
vizhur
75610ec31c update samples - test 2019-11-27 21:02:21 +00:00
Shané Winner
0c2c450b6b Update index.md 2019-11-25 14:34:48 -08:00
Shané Winner
0d548eabff Merge pull request #677 from Azure/release_update/Release-29
update samples - test
2019-11-25 14:31:50 -08:00
vizhur
e4029801e6 update samples - test 2019-11-25 22:24:09 +00:00
Shané Winner
156974ee7b Update index.md 2019-11-25 11:42:53 -08:00
Shané Winner
1f05157d24 Merge pull request #676 from Azure/release_update/Release-160
update samples from Release-160 as a part of 1.0.76 SDK release
2019-11-25 11:39:27 -08:00
vizhur
2214ea8616 update samples from Release-160 as a part of 1.0.76 SDK release 2019-11-25 19:28:19 +00:00
Sheri Gilley
b54b2566de Merge pull request #667 from Azure/sdk-codetest
remove deprecated auto_prepare_environment
2019-11-21 09:25:15 -06:00
Sheri Gilley
57b0f701f8 remove deprecated auto_prepare_environment 2019-11-20 17:28:44 -06:00
Shané Winner
d658c85208 Update index.md 2019-11-12 14:59:15 -08:00
vizhur
a5f627a9b6 Merge pull request #655 from Azure/release_update/Release-28
update samples - test
2019-11-12 17:11:45 -05:00
vizhur
a8b08bdff0 update samples - test 2019-11-12 21:53:12 +00:00
Shané Winner
0dc3f34b86 Update index.md 2019-11-11 14:49:44 -08:00
Shané Winner
9ba7d5e5bb Update index.md 2019-11-11 14:48:05 -08:00
Shané Winner
c6ad2f8ec0 Merge pull request #654 from Azure/release_update/Release-158
update samples from Release-158 as a part of 1.0.74 SDK release
2019-11-11 10:25:18 -08:00
vizhur
33d6def8c3 update samples from Release-158 as a part of 1.0.74 SDK release 2019-11-11 16:57:02 +00:00
Shané Winner
69d4344dff Update index.md 2019-11-04 10:09:41 -08:00
Shané Winner
34aeec1439 Update index.md 2019-11-04 10:08:10 -08:00
Shané Winner
a9b9ebbf7d Merge pull request #641 from Azure/release_update/Release-27
update samples - test
2019-11-04 10:02:25 -08:00
vizhur
41fa508d53 update samples - test 2019-11-04 17:57:28 +00:00
Shané Winner
e1bfa98844 Update index.md 2019-11-04 08:41:15 -08:00
Shané Winner
2bcee9aa20 Update index.md 2019-11-04 08:40:29 -08:00
Shané Winner
37541b1071 Merge pull request #638 from Azure/release_update/Release-26
update samples - test
2019-11-04 08:31:59 -08:00
Shané Winner
4aff1310a7 Merge branch 'master' into release_update/Release-26 2019-11-04 08:31:37 -08:00
Shané Winner
51ecb7c54f Update index.md 2019-11-01 10:38:46 -07:00
Shané Winner
4e7fc7c82c Update index.md 2019-11-01 10:36:02 -07:00
vizhur
4ed3f0767a update samples - test 2019-11-01 14:48:01 +00:00
vizhur
46ec74f8df Merge pull request #627 from jingyanwangms/jingywa/lightgbm-notebook
add Lightgbm Estimator notebook
2019-10-22 20:54:33 -04:00
Jingyan Wang
8d2e362a10 add Lightgbm notebook 2019-10-22 17:40:32 -07:00
vizhur
86c1b3d760 adding missing files for rapids 2019-10-21 12:20:15 -04:00
Shané Winner
41dc05952f Update index.md 2019-10-15 16:37:53 -07:00
vizhur
df2e08e4a3 Merge pull request #622 from Azure/release_update/Release-25
update samples - test
2019-10-15 18:34:28 -04:00
vizhur
828a976907 update samples - test 2019-10-15 22:01:55 +00:00
vizhur
1a373f11a0 Merge pull request #621 from Azure/ak/revert-db-overwrite
Revert automatic overwrite of databricks content
2019-10-15 16:07:37 -04:00
Akshaya Annavajhala (AK)
60de701207 revert overwrites 2019-10-15 12:33:31 -07:00
Akshaya Annavajhala (AK)
5841fa4a42 revert overwrites 2019-10-15 12:27:56 -07:00
Shané Winner
659fb7abc3 Merge pull request #619 from Azure/release_update/Release-153
update samples from Release-153 as a part of 1.0.69 SDK release
2019-10-14 15:39:40 -07:00
vizhur
2e404cfc3a update samples from Release-153 as a part of 1.0.69 SDK release 2019-10-14 22:30:58 +00:00
Shané Winner
5fcf4887bc Update index.md 2019-10-06 11:44:35 -07:00
Shané Winner
1e7f3117ae Update index.md 2019-10-06 11:44:01 -07:00
Shané Winner
bbb3f85da9 Update README.md 2019-10-06 11:33:56 -07:00
Shané Winner
c816dfb479 Update index.md 2019-10-06 11:29:58 -07:00
Shané Winner
8c128640b1 Update index.md 2019-10-06 11:28:34 -07:00
vizhur
4d2b937846 Merge pull request #600 from Azure/release_update/Release-24
Fix for Tensorflow 2.0 related Notebook Failures
2019-10-02 16:27:31 -04:00
vizhur
5492f52faf update samples - test 2019-10-02 20:23:54 +00:00
Shané Winner
735db9ebe7 Update index.md 2019-10-01 09:59:10 -07:00
Shané Winner
573030b990 Update README.md 2019-10-01 09:52:10 -07:00
Shané Winner
392a059000 Update index.md 2019-10-01 09:44:37 -07:00
Shané Winner
3580e54fbb Update index.md 2019-10-01 09:42:20 -07:00
Shané Winner
2017bcd716 Update index.md 2019-10-01 09:41:33 -07:00
Roope Astala
4a3f8e7025 Merge pull request #594 from Azure/release_update/Release-149
update samples from Release-149 as a part of 1.0.65 SDK release
2019-09-30 13:29:57 -04:00
vizhur
45880114db update samples from Release-149 as a part of 1.0.65 SDK release 2019-09-30 17:08:52 +00:00
Roope Astala
314bad72a4 Merge pull request #588 from skaarthik/rapids
updating to use AML base image and system managed dependencies
2019-09-25 07:44:31 -04:00
Kaarthik Sivashanmugam
f252308005 updating to use AML base image and system managed dependencies 2019-09-24 20:47:15 -07:00
Kaarthik Sivashanmugam
6622a6c5f2 Merge pull request #1 from Azure/master
merge latest changes from Azure/MLNB repo
2019-09-24 20:40:43 -07:00
Roope Astala
6b19e2f263 Merge pull request #587 from Azure/akshaya-a-patch-3
Update README.md to remove confusing reference
2019-09-24 16:13:16 -04:00
Akshaya Annavajhala
42fd4598cb Update README.md 2019-09-24 15:28:30 -04:00
Roope Astala
476d945439 Merge pull request #580 from akshaya-a/master
Add documentation on the preview ADB linking experience
2019-09-24 09:31:45 -04:00
Shané Winner
e96bb9bef2 Delete manage-runs.yml 2019-09-22 20:37:17 -07:00
Shané Winner
2be4a5e54d Delete manage-runs.ipynb 2019-09-22 20:37:07 -07:00
Shané Winner
247a25f280 Delete hello_with_delay.py 2019-09-22 20:36:50 -07:00
Shané Winner
5d9d8eade6 Delete hello_with_children.py 2019-09-22 20:36:39 -07:00
Shané Winner
dba978e42a Delete hello.py 2019-09-22 20:36:29 -07:00
Shané Winner
7f4101c33e Delete run_details.PNG 2019-09-22 20:36:12 -07:00
Shané Winner
62b0d5df69 Delete run_history.png 2019-09-22 20:36:01 -07:00
Shané Winner
f10b55a1bc Delete logging-api.ipynb 2019-09-22 20:35:47 -07:00
Shané Winner
da9e86635e Delete logging-api.yml 2019-09-22 20:35:36 -07:00
Shané Winner
9ca6388996 Delete datasets-diff.ipynb 2019-09-19 14:14:59 -07:00
Akshaya Annavajhala
3ce779063b address PR feedback 2019-09-18 15:48:42 -04:00
Akshaya Annavajhala
ce635ce4fe add the word mlflow 2019-09-18 13:25:41 -04:00
Akshaya Annavajhala
f08e68c8e9 add linking docs 2019-09-18 11:08:46 -04:00
Shané Winner
93a1d232db Update index.md 2019-09-17 10:00:57 -07:00
Shané Winner
923483528c Update index.md 2019-09-17 09:59:23 -07:00
Shané Winner
cbeacb2ab2 Delete sklearn_regression_model.pkl 2019-09-17 09:37:44 -07:00
Shané Winner
c928c50707 Delete score.py 2019-09-17 09:37:34 -07:00
Shané Winner
efb42bacf9 Delete register-model-deploy-local.ipynb 2019-09-17 09:37:26 -07:00
Shané Winner
d8f349a1ae Delete register-model-deploy-local-advanced.ipynb 2019-09-17 09:37:17 -07:00
Shané Winner
96a61fdc78 Delete myenv.yml 2019-09-17 09:37:08 -07:00
Shané Winner
ff8128f023 Delete helloworld.txt 2019-09-17 09:36:59 -07:00
Shané Winner
8260302a68 Delete dockerSharedDrive.JPG 2019-09-17 09:36:50 -07:00
Shané Winner
fbd7f4a55b Delete README.md 2019-09-17 09:36:41 -07:00
Shané Winner
d4e4206179 Delete helloworld.txt 2019-09-17 09:35:38 -07:00
Shané Winner
a98b918feb Delete model-register-and-deploy.ipynb 2019-09-17 09:35:29 -07:00
Shané Winner
890490ec70 Delete model-register-and-deploy.yml 2019-09-17 09:35:17 -07:00
Shané Winner
c068c9b979 Delete myenv.yml 2019-09-17 09:34:54 -07:00
Shané Winner
f334a3516f Delete score.py 2019-09-17 09:34:44 -07:00
Shané Winner
96248d8dff Delete sklearn_regression_model.pkl 2019-09-17 09:34:27 -07:00
Shané Winner
c42e865700 Delete README.md 2019-09-17 09:29:20 -07:00
vizhur
9233ce089a Merge pull request #577 from Azure/release_update/Release-146
update samples from Release-146 as a part of 1.0.62 SDK release
2019-09-16 19:44:43 -04:00
vizhur
6bb1e2a3e3 update samples from Release-146 as a part of 1.0.62 SDK release 2019-09-16 23:21:57 +00:00
Shané Winner
e1724c8a89 Merge pull request #573 from lostmygithubaccount/master
adding timeseries dataset example notebook
2019-09-16 11:00:30 -07:00
Shané Winner
446e0768cc Delete datasets-diff.ipynb 2019-09-16 10:53:16 -07:00
Cody Peterson
8a2f114a16 adding timeseries dataset example notebook 2019-09-13 08:30:26 -07:00
Shané Winner
80c0d4d30f Merge pull request #570 from trevorbye/master
new pipeline tutorial
2019-09-11 09:28:40 -07:00
Trevor Bye
e8f4708a5a adding index metadata 2019-09-11 09:24:41 -07:00
Trevor Bye
fbaeb84204 adding tutorial 2019-09-11 09:02:06 -07:00
Trevor Bye
da1fab0a77 removing dprep file from old deleted tutorial 2019-09-10 12:31:57 -07:00
Shané Winner
94d2890bb5 Update index.md 2019-09-06 06:37:35 -07:00
Shané Winner
4d1ec4f7d4 Update index.md 2019-09-06 06:30:54 -07:00
Shané Winner
ace3153831 Update index.md 2019-09-06 06:28:50 -07:00
Shané Winner
58bbfe57b2 Update index.md 2019-09-06 06:15:36 -07:00
vizhur
11ea00b1d9 Update index.md 2019-09-06 09:14:30 -04:00
Shané Winner
b81efca3e5 Update index.md 2019-09-06 06:13:03 -07:00
vizhur
d7ceb9bca2 Update index.md 2019-09-06 09:08:02 -04:00
Shané Winner
17730dc69a Merge pull request #564 from MayMSFT/patch-1
Update file-dataset-img-classification.ipynb
2019-09-04 13:31:08 -07:00
May Hu
3a029d48a2 Update file-dataset-img-classification.ipynb
made edit on the sdk version
2019-09-04 13:25:10 -07:00
vizhur
06d43956f3 Merge pull request #558 from Azure/release_update/Release-144
update samples from Release-144 as a part of 1.0.60 SDK release
2019-09-03 22:09:33 -04:00
vizhur
a1cb9b33a5 update samples from Release-144 as a part of 1.0.60 SDK release 2019-09-03 22:39:55 +00:00
Shané Winner
fdc3fe2a53 Delete README.md 2019-08-29 10:22:24 -07:00
Shané Winner
628b35912c Delete train-remote.yml 2019-08-29 10:22:15 -07:00
Shané Winner
3f4cc22e94 Delete train-remote.ipynb 2019-08-29 10:22:07 -07:00
Shané Winner
18d7afb707 Delete train_diabetes.py 2019-08-29 10:21:59 -07:00
Shané Winner
cd35ca30d4 Delete train-local.ipynb 2019-08-29 10:21:48 -07:00
Shané Winner
30eae0b46c Delete train-local.yml 2019-08-29 10:21:40 -07:00
Shané Winner
f16951387f Delete train.py 2019-08-29 10:21:27 -07:00
Shané Winner
0d8de29147 Delete train-and-deploy-pytorch.ipynb 2019-08-29 10:21:16 -07:00
Shané Winner
836354640c Delete train-and-deploy-pytorch.yml 2019-08-29 10:21:08 -07:00
Shané Winner
6162e80972 Delete deploy-model.yml 2019-08-29 10:20:55 -07:00
Shané Winner
fe9fe3392d Delete deploy-model.ipynb 2019-08-29 10:20:46 -07:00
Shané Winner
5ec6d8861b Delete auto-ml-dataprep-remote-execution.yml 2019-08-27 11:19:38 -07:00
Shané Winner
ae188f324e Delete auto-ml-dataprep-remote-execution.ipynb 2019-08-27 11:19:27 -07:00
Shané Winner
4c30c2bdb9 Delete auto-ml-dataprep.yml 2019-08-27 11:19:00 -07:00
Shané Winner
b891440e2d Delete auto-ml-dataprep.ipynb 2019-08-27 11:18:50 -07:00
Shané Winner
784827cdd2 Update README.md 2019-08-27 09:23:40 -07:00
vizhur
0957af04ca Merge pull request #545 from Azure/imatiach-msft-patch-1
add dataprep dependency to notebook
2019-08-23 13:14:30 -04:00
Ilya Matiach
a3bdd193d1 add dataprep dependency to notebook
add dataprep dependency to train-explain-model-on-amlcompute-and-deploy.ipynb notebook for azureml-explain-model package
2019-08-23 13:11:36 -04:00
Shané Winner
dff09970ac Update README.md 2019-08-23 08:38:01 -07:00
Shané Winner
abc7d21711 Update README.md 2019-08-23 05:28:45 +00:00
Shané Winner
ec12ef635f Delete azure-ml-datadrift.ipynb 2019-08-21 10:32:40 -07:00
Shané Winner
81b3e6f09f Delete azure-ml-datadrift.yml 2019-08-21 10:32:32 -07:00
Shané Winner
cc167dceda Delete score.py 2019-08-21 10:32:23 -07:00
Shané Winner
bc52a6d8ee Delete datasets-diff.ipynb 2019-08-21 10:31:50 -07:00
Shané Winner
5bbbdbe73c Delete Titanic.csv 2019-08-21 10:31:38 -07:00
Shané Winner
fd4de05ddd Delete train.py 2019-08-21 10:31:26 -07:00
Shané Winner
9eaab2189d Delete datasets-tutorial.ipynb 2019-08-21 10:31:15 -07:00
Shané Winner
12147754b2 Delete datasets-diff.ipynb 2019-08-21 10:31:05 -07:00
Shané Winner
90ef263823 Delete README.md 2019-08-21 10:30:54 -07:00
Shané Winner
143590cfb4 Delete new-york-taxi_scale-out.ipynb 2019-08-21 10:30:39 -07:00
Shané Winner
40379014ad Delete new-york-taxi.ipynb 2019-08-21 10:30:29 -07:00
Shané Winner
f7b0e99fa1 Delete part-00000-34f8a7a7-c3cd-4926-92b2-ba2dcd3f95b7.gz.parquet 2019-08-21 10:30:18 -07:00
Shané Winner
7a7ac48411 Delete part-00000-34f8a7a7-c3cd-4926-92b2-ba2dcd3f95b7.gz.parquet 2019-08-21 10:30:04 -07:00
Shané Winner
50107c5b1e Delete part-00007-0b08e77b-f17a-4c20-972c-aa382e830fca-c000.csv 2019-08-21 10:29:51 -07:00
Shané Winner
e41d7e6819 Delete part-00006-0b08e77b-f17a-4c20-972c-aa382e830fca-c000.csv 2019-08-21 10:29:36 -07:00
Shané Winner
691e038e84 Delete part-00005-0b08e77b-f17a-4c20-972c-aa382e830fca-c000.csv 2019-08-21 10:29:18 -07:00
Shané Winner
426e79d635 Delete part-00004-0b08e77b-f17a-4c20-972c-aa382e830fca-c000.csv 2019-08-21 10:29:02 -07:00
Shané Winner
326677e87f Delete part-00003-0b08e77b-f17a-4c20-972c-aa382e830fca-c000.csv 2019-08-21 10:28:45 -07:00
Shané Winner
44988e30ae Delete part-00002-0b08e77b-f17a-4c20-972c-aa382e830fca-c000.csv 2019-08-21 10:28:31 -07:00
Shané Winner
646ae37384 Delete part-00001-0b08e77b-f17a-4c20-972c-aa382e830fca-c000.csv 2019-08-21 10:28:18 -07:00
Shané Winner
457e29a663 Delete part-00000-0b08e77b-f17a-4c20-972c-aa382e830fca-c000.csv 2019-08-21 10:28:03 -07:00
Shané Winner
2771edfb2c Delete _SUCCESS 2019-08-21 10:27:45 -07:00
Shané Winner
f0001ec322 Delete adls-dpreptestfiles.crt 2019-08-21 10:27:31 -07:00
Shané Winner
d3e02a017d Delete chicago-aldermen-2015.csv 2019-08-21 10:27:05 -07:00
Shané Winner
a0ebed6876 Delete crime-dirty.csv 2019-08-21 10:26:55 -07:00
Shané Winner
dc0ab6db47 Delete crime-spring.csv 2019-08-21 10:26:45 -07:00
Shané Winner
ea7900f82c Delete crime-winter.csv 2019-08-21 10:26:35 -07:00
Shané Winner
0cb3fd180d Delete crime.parquet 2019-08-21 10:26:26 -07:00
Shané Winner
b05c3e46bb Delete crime.txt 2019-08-21 10:26:17 -07:00
Shané Winner
a1b7d298d3 Delete crime.xlsx 2019-08-21 10:25:41 -07:00
Shané Winner
cc5516c3b3 Delete crime_duplicate_headers.csv 2019-08-21 10:25:32 -07:00
Shané Winner
4fb6070b89 Delete crime.zip 2019-08-21 10:25:23 -07:00
Shané Winner
1b926cdf53 Delete crime-full.csv 2019-08-21 10:25:13 -07:00
Shané Winner
72fc00fb65 Delete crime.dprep 2019-08-21 10:24:56 -07:00
Shané Winner
ddc6b57253 Delete ADLSgen2-datapreptest.crt 2019-08-21 10:24:47 -07:00
Shané Winner
e8b3b98338 Delete crime_fixed_width_file.txt 2019-08-21 10:24:38 -07:00
Shané Winner
66325a1405 Delete crime_multiple_separators.csv 2019-08-21 10:24:29 -07:00
Shané Winner
0efbeaf4b8 Delete json.json 2019-08-21 10:24:12 -07:00
Shané Winner
11d487fb28 Merge pull request #542 from Azure/sgilley/update-deploy
change deployment to model-centric approach
2019-08-21 10:22:13 -07:00
Shané Winner
073e319ef9 Delete large_dflow.json 2019-08-21 10:21:41 -07:00
Shané Winner
3ed75f28d1 Delete map_func.py 2019-08-21 10:21:23 -07:00
Shané Winner
bfc0367f54 Delete median_income.csv 2019-08-21 10:21:14 -07:00
Shané Winner
075eeb583f Delete median_income_transformed.csv 2019-08-21 10:21:05 -07:00
Shané Winner
b7531d3b9e Delete parquet.parquet 2019-08-21 10:20:55 -07:00
Shané Winner
41dc3bd1cf Delete secrets.dprep 2019-08-21 10:20:45 -07:00
Shané Winner
b790b385a4 Delete stream-path.csv 2019-08-21 10:20:36 -07:00
Shané Winner
8700328fe9 Delete summarize.ipynb 2019-08-21 10:17:21 -07:00
Shané Winner
adbd2c8200 Delete subsetting-sampling.ipynb 2019-08-21 10:17:12 -07:00
Shané Winner
7d552effb0 Delete split-column-by-example.ipynb 2019-08-21 10:17:01 -07:00
Shané Winner
bc81d2a5a7 Delete semantic-types.ipynb 2019-08-21 10:16:52 -07:00
Shané Winner
7620de2d91 Delete secrets.ipynb 2019-08-21 10:16:42 -07:00
Shané Winner
07a43a0444 Delete replace-fill-error.ipynb 2019-08-21 10:16:33 -07:00
Shané Winner
f4d5874e09 Delete replace-datasource-replace-reference.ipynb 2019-08-21 10:16:23 -07:00
Shané Winner
8a0b4d24bd Delete random-split.ipynb 2019-08-21 10:16:14 -07:00
Shané Winner
636f19be1f Delete quantile-transformation.ipynb 2019-08-21 10:16:04 -07:00
Shané Winner
0fd7f7d9b2 Delete open-save-dataflows.ipynb 2019-08-21 10:15:54 -07:00
Shané Winner
ab6c66534f Delete one-hot-encoder.ipynb 2019-08-21 10:15:45 -07:00
Shané Winner
faccf13759 Delete min-max-scaler.ipynb 2019-08-21 10:15:36 -07:00
Shané Winner
4c6a28e4ed Delete label-encoder.ipynb 2019-08-21 10:15:25 -07:00
Shané Winner
64ad88e2cb Delete join.ipynb 2019-08-21 10:15:17 -07:00
Shané Winner
969ac90d39 Delete impute-missing-values.ipynb 2019-08-21 10:12:12 -07:00
Shané Winner
fb977c1e95 Delete fuzzy-group.ipynb 2019-08-21 10:12:03 -07:00
Shané Winner
d5ba3916f7 Delete filtering.ipynb 2019-08-21 10:11:53 -07:00
Shané Winner
f7f1087337 Delete external-references.ipynb 2019-08-21 10:11:43 -07:00
Shané Winner
47ea2dbc03 Delete derive-column-by-example.ipynb 2019-08-21 10:11:33 -07:00
Shané Winner
bd2cf534e5 Delete datastore.ipynb 2019-08-21 10:11:24 -07:00
Shané Winner
65f1668d69 Delete data-profile.ipynb 2019-08-21 10:11:16 -07:00
Shané Winner
e0fb7df0aa Delete data-ingestion.ipynb 2019-08-21 10:11:06 -07:00
Shané Winner
7047f76299 Delete custom-python-transforms.ipynb 2019-08-21 10:10:56 -07:00
Shané Winner
c39f2d5eb6 Delete column-type-transforms.ipynb 2019-08-21 10:10:45 -07:00
Shané Winner
5fda69a388 Delete column-manipulations.ipynb 2019-08-21 10:10:36 -07:00
Shané Winner
87ce954eef Delete cache.ipynb 2019-08-21 10:10:26 -07:00
Shané Winner
ebbeac413a Delete auto-read-file.ipynb 2019-08-21 10:10:15 -07:00
Shané Winner
a68bbaaab4 Delete assertions.ipynb 2019-08-21 10:10:05 -07:00
Shané Winner
8784dc979f Delete append-columns-and-rows.ipynb 2019-08-21 10:09:55 -07:00
Shané Winner
f8047544fc Delete add-column-using-expression.ipynb 2019-08-21 10:09:44 -07:00
Shané Winner
eeb2a05e4f Delete working-with-file-streams.ipynb 2019-08-21 10:09:33 -07:00
Shané Winner
6db9d7bd8b Delete writing-data.ipynb 2019-08-21 10:09:19 -07:00
Shané Winner
80e2fde734 Delete getting-started.ipynb 2019-08-21 10:09:04 -07:00
Shané Winner
ae4f5d40ee Delete README.md 2019-08-21 10:08:53 -07:00
Shané Winner
5516edadfd Delete README.md 2019-08-21 10:08:13 -07:00
Sheri Gilley
475afbf44b change deployment to model-centric approach 2019-08-21 09:50:49 -05:00
Shané Winner
197eaf1aab Merge pull request #541 from Azure/sdgilley/update-tutorial
Update img-classification-part1-training.ipynb
2019-08-20 15:59:24 -07:00
Sheri Gilley
184680f1d2 Update img-classification-part1-training.ipynb
updated explanation of datastore
2019-08-20 17:52:45 -05:00
Shané Winner
474f58bd0b Merge pull request #540 from trevorbye/master
removing tutorials for single combined tutorial
2019-08-20 15:22:47 -07:00
Trevor Bye
22c8433897 removing tutorials for single combined tutorial 2019-08-20 12:09:21 -07:00
Josée Martens
822cdd0f01 Update issue templates 2019-08-20 08:35:00 -05:00
Josée Martens
6e65d42986 Update issue templates 2019-08-20 08:26:45 -05:00
Harneet Virk
4c0cbac834 Merge pull request #537 from Azure/release_update/Release-141
update samples from Release-141 as a part of 1.0.57 SDK release
2019-08-19 18:32:44 -07:00
vizhur
44a7481ed1 update samples from Release-141 as a part of 1.0.57 SDK release 2019-08-19 23:33:44 +00:00
Ilya Matiach
8f418b216d Merge pull request #526 from imatiach-msft/ilmat/remove-old-explain-dirs
removing old explain model directories
2019-08-13 12:37:00 -04:00
Ilya Matiach
2d549ecad3 removing old directories 2019-08-13 12:31:51 -04:00
Josée Martens
4dbb024529 Update issue templates 2019-08-11 18:02:17 -05:00
Josée Martens
142a1a510e Update issue templates 2019-08-11 18:00:12 -05:00
vizhur
2522486c26 Merge pull request #519 from wamartin-aml/master
Add dataprep dependency
2019-08-08 09:34:36 -04:00
Walter Martin
6d5226e47c Add dataprep dependency 2019-08-08 09:31:18 -04:00
Shané Winner
e7676d7cdc Delete README.md 2019-08-07 13:14:39 -07:00
Shané Winner
a84f6636f1 Delete README.md 2019-08-07 13:14:24 -07:00
Roope Astala
41be10d1c1 Delete authentication-in-azure-ml.ipynb 2019-08-07 10:12:48 -04:00
vizhur
429eb43914 Merge pull request #513 from Azure/release_update/Release-139
update samples from Release-139 as a part of 1.0.55 SDK release
2019-08-05 16:22:25 -04:00
vizhur
c0dae0c645 update samples from Release-139 as a part of 1.0.55 SDK release 2019-08-05 18:39:19 +00:00
Shané Winner
e4d9a2b4c5 Delete score.py 2019-07-29 09:33:11 -07:00
Shané Winner
7648e8f516 Delete readme.md 2019-07-29 09:32:55 -07:00
Shané Winner
b5ed94b4eb Delete azure-ml-datadrift.ipynb 2019-07-29 09:32:47 -07:00
Shané Winner
85e487f74f Delete new-york-taxi_scale-out.ipynb 2019-07-28 00:38:05 -07:00
Shané Winner
c0a5b2de79 Delete new-york-taxi.ipynb 2019-07-28 00:37:56 -07:00
Shané Winner
0a9e076e5f Delete stream-path.csv 2019-07-28 00:37:44 -07:00
Shané Winner
e3b974811d Delete secrets.dprep 2019-07-28 00:37:33 -07:00
Shané Winner
381d1a6f35 Delete parquet.parquet 2019-07-28 00:37:20 -07:00
Shané Winner
adaa55675e Delete median_income_transformed.csv 2019-07-28 00:37:12 -07:00
Shané Winner
5e3c592d4b Delete median_income.csv 2019-07-28 00:37:02 -07:00
Shané Winner
9c6f1e2571 Delete map_func.py 2019-07-28 00:36:52 -07:00
Shané Winner
bd1bedd563 Delete large_dflow.json 2019-07-28 00:36:43 -07:00
Shané Winner
9716f3614e Delete json.json 2019-07-28 00:36:30 -07:00
Shané Winner
d2c72ca149 Delete crime_multiple_separators.csv 2019-07-28 00:36:19 -07:00
Shané Winner
4f62f64207 Delete crime_fixed_width_file.txt 2019-07-28 00:36:10 -07:00
Shané Winner
16473eb33e Delete crime_duplicate_headers.csv 2019-07-28 00:36:01 -07:00
Shané Winner
d10474c249 Delete crime.zip 2019-07-28 00:35:51 -07:00
Shané Winner
6389cc16f9 Delete crime.xlsx 2019-07-28 00:35:41 -07:00
Shané Winner
bc0a8e0152 Delete crime.txt 2019-07-28 00:35:30 -07:00
Shané Winner
39384aea52 Delete crime.parquet 2019-07-28 00:35:20 -07:00
Shané Winner
5bf4b0bafe Delete crime.dprep 2019-07-28 00:35:11 -07:00
Shané Winner
f22adb7949 Delete crime-winter.csv 2019-07-28 00:35:00 -07:00
Shané Winner
8409ab7133 Delete crime-spring.csv 2019-07-28 00:34:50 -07:00
Shané Winner
32acd55774 Delete crime-full.csv 2019-07-28 00:34:39 -07:00
Shané Winner
7f65c1a255 Delete crime-dirty.csv 2019-07-28 00:34:27 -07:00
Shané Winner
bc7ccc7ef3 Delete chicago-aldermen-2015.csv 2019-07-28 00:34:17 -07:00
Shané Winner
1cc79a71e9 Delete adls-dpreptestfiles.crt 2019-07-28 00:34:05 -07:00
Shané Winner
c0bec5f110 Delete part-00000-34f8a7a7-c3cd-4926-92b2-ba2dcd3f95b7.gz.parquet 2019-07-28 00:33:51 -07:00
Shané Winner
77e5664482 Delete part-00000-34f8a7a7-c3cd-4926-92b2-ba2dcd3f95b7.gz.parquet 2019-07-28 00:33:38 -07:00
Shané Winner
e2eb64372a Delete part-00007-0b08e77b-f17a-4c20-972c-aa382e830fca-c000.csv 2019-07-28 00:33:23 -07:00
Shané Winner
03cbb6a3a2 Delete part-00006-0b08e77b-f17a-4c20-972c-aa382e830fca-c000.csv 2019-07-28 00:33:12 -07:00
Shané Winner
44d3d998a8 Delete part-00005-0b08e77b-f17a-4c20-972c-aa382e830fca-c000.csv 2019-07-28 00:33:00 -07:00
Shané Winner
c626f37057 Delete part-00004-0b08e77b-f17a-4c20-972c-aa382e830fca-c000.csv 2019-07-28 00:32:48 -07:00
Shané Winner
0175574864 Delete part-00003-0b08e77b-f17a-4c20-972c-aa382e830fca-c000.csv 2019-07-28 00:32:37 -07:00
Shané Winner
f6e8d57da3 Delete part-00002-0b08e77b-f17a-4c20-972c-aa382e830fca-c000.csv 2019-07-28 00:32:25 -07:00
Shané Winner
01cd31ce44 Delete part-00001-0b08e77b-f17a-4c20-972c-aa382e830fca-c000.csv 2019-07-28 00:32:13 -07:00
Shané Winner
eb2024b3e0 Delete part-00000-0b08e77b-f17a-4c20-972c-aa382e830fca-c000.csv 2019-07-28 00:32:01 -07:00
Shané Winner
6bce41b3d7 Delete _SUCCESS 2019-07-28 00:31:49 -07:00
Shané Winner
bbdabbb552 Delete writing-data.ipynb 2019-07-28 00:31:32 -07:00
Shané Winner
65343fc263 Delete working-with-file-streams.ipynb 2019-07-28 00:31:22 -07:00
Shané Winner
b6b27fded6 Delete summarize.ipynb 2019-07-28 00:26:56 -07:00
Shané Winner
7e492cbeb6 Delete subsetting-sampling.ipynb 2019-07-28 00:26:41 -07:00
Shané Winner
4cc8f4c6af Delete split-column-by-example.ipynb 2019-07-28 00:26:25 -07:00
Shané Winner
9fba46821b Delete semantic-types.ipynb 2019-07-28 00:26:11 -07:00
Shané Winner
a45954a58f Delete secrets.ipynb 2019-07-28 00:25:58 -07:00
Shané Winner
f16dfb0e5b Delete replace-fill-error.ipynb 2019-07-28 00:25:45 -07:00
Shané Winner
edabbf9031 Delete replace-datasource-replace-reference.ipynb 2019-07-28 00:25:32 -07:00
Shané Winner
63d1d57dfb Delete random-split.ipynb 2019-07-28 00:25:21 -07:00
Shané Winner
10f7004161 Delete quantile-transformation.ipynb 2019-07-28 00:25:10 -07:00
Shané Winner
86ba4e7406 Delete open-save-dataflows.ipynb 2019-07-28 00:24:54 -07:00
Shané Winner
33bda032b8 Delete one-hot-encoder.ipynb 2019-07-28 00:24:43 -07:00
Shané Winner
0fd4bfbc56 Delete min-max-scaler.ipynb 2019-07-28 00:24:32 -07:00
Shané Winner
3fe08c944e Delete label-encoder.ipynb 2019-07-28 00:24:21 -07:00
Shané Winner
d587ea5676 Delete join.ipynb 2019-07-28 00:24:08 -07:00
Shané Winner
edd8562102 Delete impute-missing-values.ipynb 2019-07-28 00:23:55 -07:00
Shané Winner
5ac2c63336 Delete fuzzy-group.ipynb 2019-07-28 00:23:41 -07:00
Shané Winner
1f4e4cdda2 Delete filtering.ipynb 2019-07-28 00:23:28 -07:00
Shané Winner
2e245c1691 Delete external-references.ipynb 2019-07-28 00:23:11 -07:00
Shané Winner
e1b09f71fa Delete derive-column-by-example.ipynb 2019-07-28 00:22:54 -07:00
Shané Winner
8e2220d397 Delete datastore.ipynb 2019-07-28 00:22:43 -07:00
Shané Winner
f74ccf5048 Delete data-profile.ipynb 2019-07-28 00:22:32 -07:00
Shané Winner
97a6d9ca43 Delete data-ingestion.ipynb 2019-07-28 00:22:21 -07:00
Shané Winner
a0ff1c6b64 Delete custom-python-transforms.ipynb 2019-07-28 00:22:11 -07:00
Shané Winner
08f15ef4cf Delete column-type-transforms.ipynb 2019-07-28 00:21:58 -07:00
Shané Winner
7160416c0b Delete column-manipulations.ipynb 2019-07-28 00:21:47 -07:00
Shané Winner
218fed3d65 Delete cache.ipynb 2019-07-28 00:21:35 -07:00
Shané Winner
b8499dfb98 Delete auto-read-file.ipynb 2019-07-28 00:21:22 -07:00
Shané Winner
6bfd472cc2 Delete assertions.ipynb 2019-07-28 00:20:55 -07:00
Shané Winner
ecefb229e9 Delete append-columns-and-rows.ipynb 2019-07-28 00:20:40 -07:00
Shané Winner
883ad806ba Delete add-column-using-expression.ipynb 2019-07-28 00:20:22 -07:00
Shané Winner
848b5bc302 Delete getting-started.ipynb 2019-07-28 00:19:59 -07:00
Shané Winner
58087b53a0 Delete README.md 2019-07-28 00:19:45 -07:00
Shané Winner
ff4d5450a7 Delete README.md 2019-07-28 00:19:29 -07:00
Shané Winner
e2b2b89842 Delete datasets-tutorial.ipynb 2019-07-28 00:19:13 -07:00
Shané Winner
390be2ba24 Delete train.py 2019-07-28 00:19:00 -07:00
Shané Winner
cd1258f81d Delete Titanic.csv 2019-07-28 00:18:41 -07:00
Shané Winner
8a0b48ea48 Delete README.md 2019-07-28 00:18:14 -07:00
Roope Astala
b0dc904189 Merge pull request #502 from msdavx/patch-1
Add demo notebook for datasets diff attribute.
2019-07-26 19:16:13 -04:00
msdavx
82bede239a Add demo notebook for datasets diff attribute. 2019-07-26 11:10:37 -07:00
vizhur
774517e173 Merge pull request #500 from Azure/release_update/Release-137
update samples from Release-137 as a part of 1.0.53 SDK release
2019-07-25 16:36:25 -04:00
Shané Winner
c3ce2bc7fe Delete README.md 2019-07-25 13:28:15 -07:00
Shané Winner
5dd09a1f7c Delete README.md 2019-07-25 13:28:01 -07:00
vizhur
ee1da0ee19 update samples from Release-137 as a part of 1.0.53 SDK release 2019-07-24 22:37:36 +00:00
Paula Ledgerwood
ddfce6b24c Merge pull request #498 from Azure/revert-461-master
Revert "Finetune SSD VGG"
2019-07-24 14:25:43 -07:00
Paula Ledgerwood
31dfc3dc55 Revert "Finetune SSD VGG" 2019-07-24 14:08:00 -07:00
Paula Ledgerwood
168c45b188 Merge pull request #461 from borisneal/master
Finetune SSD VGG
2019-07-24 14:07:15 -07:00
fierval
159948db67 moving notice.txt 2019-07-24 08:50:41 -07:00
fierval
d842731a3b remove tf prereq item 2019-07-23 14:58:51 -07:00
fierval
7822fd4c13 notice + attribution for anchors 2019-07-23 14:49:20 -07:00
fierval
d9fbe4cd87 new folder structure 2019-07-22 10:31:22 -07:00
Shané Winner
a64f4d331a Merge pull request #488 from trevorbye/master
adding new notebook
2019-07-18 10:40:36 -07:00
Trevor Bye
c41f449208 adding new notebook 2019-07-18 10:27:21 -07:00
vizhur
4fe8c1702d Merge pull request #486 from Azure/release_update/Release-22
Fix for automl remote env
2019-07-12 19:18:13 -04:00
vizhur
18cd152591 update samples - test 2019-07-12 22:51:17 +00:00
vizhur
4170a394ed Merge pull request #474 from Azure/release_update/Release-132
update samples from Release-132 as a part of 1.0.48 SDK release
2019-07-09 19:14:29 -04:00
vizhur
475ea36106 update samples from Release-132 as a part of 1.0.48 SDK release 2019-07-09 22:02:57 +00:00
Roope Astala
9e0fc4f0e7 Merge pull request #459 from datashinobi/yassine/datadrift2
fix link to config nb & settingwithcopywarning
2019-07-03 12:41:31 -04:00
fierval
b025816c92 remove config.json 2019-07-02 17:32:56 -07:00
fierval
c75e820107 ssd vgg 2019-07-02 17:23:56 -07:00
Yassine Khelifi
e97e4742ba fix link to config nb & settingwithcopywarning 2019-07-02 16:56:21 +00:00
Roope Astala
14ecfb0bf3 Merge pull request #448 from jeff-shepherd/master
Update new notebooks to use dataprep and add sql files
2019-06-27 09:07:47 -04:00
Jeff Shepherd
61b396be4f Added sql files 2019-06-26 14:26:01 -07:00
Jeff Shepherd
3d2552174d Updated notebooks to use dataprep 2019-06-26 14:23:20 -07:00
Roope Astala
cd3c980a6e Merge pull request #447 from Azure/release-1.0.45
Merged notebook changes from release 1.0.45
2019-06-26 16:32:09 -04:00
Heather Shapiro
249bcac3c7 Merged notebook changes from release 1.0.45 2019-06-26 14:39:09 -04:00
Roope Astala
4a6bcebccc Update configuration.ipynb 2019-06-21 09:35:13 -04:00
Roope Astala
56e0ebc5ac Merge pull request #438 from rastala/master
add pipeline scripts
2019-06-19 18:56:42 -04:00
rastala
2aa39f2f4a add pipeline scripts 2019-06-19 18:55:32 -04:00
Roope Astala
4d247c1877 Merge pull request #437 from rastala/master
pytorch with mlflow
2019-06-19 17:23:06 -04:00
rastala
f6682f6f6d pytorch with mlflow 2019-06-19 17:21:52 -04:00
Roope Astala
26ecf25233 Merge pull request #436 from rastala/master
Update readme
2019-06-19 11:52:23 -04:00
Roope Astala
44c3a486c0 update readme 2019-06-19 11:49:49 -04:00
Roope Astala
c574f429b8 update readme 2019-06-19 11:48:52 -04:00
Roope Astala
77d557a5dc Merge pull request #435 from ganzhi/jamgan/drift
Add demo notebook for AML Data Drift
2019-06-17 16:39:46 -04:00
James Gan
13dedec4a4 Make it in same folder as internal repo 2019-06-17 13:38:27 -07:00
James Gan
6f5c52676f Add notebook to demo data drift 2019-06-17 13:33:30 -07:00
James Gan
90c105537c Add demo notebook for AML Data Drift 2019-06-17 13:31:08 -07:00
Roope Astala
ef264b1073 Merge pull request #434 from rastala/master
update pytorch
2019-06-17 11:57:29 -04:00
Roope Astala
824ac5e021 update pytorch 2019-06-17 11:56:42 -04:00
Roope Astala
e9a7b95716 Merge pull request #421 from csteegz/csteegz-add-warning
Add warning for using prediction client on azure notebooks
2019-06-13 20:27:34 -04:00
Roope Astala
789ee26357 Merge pull request #431 from jeff-shepherd/master
Fixed path for auto-ml-remote-amlcompute notebook
2019-06-13 16:56:25 -04:00
Jeff Shepherd
fc541706e7 Fixed path for auto-ml-remote-amlcompute 2019-06-13 13:12:32 -07:00
Roope Astala
64b8aa2a55 Merge pull request #429 from jeff-shepherd/master
Removed deprecated notebooks from readme
2019-06-13 14:40:57 -04:00
Jeff Shepherd
d3dc35dbb6 Removed deprecated notebooks from readme 2019-06-13 11:03:25 -07:00
Roope Astala
b55ac368e7 Merge pull request #428 from rastala/master
update cluster creation
2019-06-13 12:16:30 -04:00
Roope Astala
de162316d7 update cluster creation 2019-06-13 12:14:58 -04:00
Roope Astala
4ecc58dfe2 Merge pull request #427 from rastala/master
dockerfile
2019-06-12 10:24:34 -04:00
Roope Astala
daf27a76e4 dockerfile 2019-06-12 10:23:34 -04:00
Roope Astala
a05444845b Merge pull request #426 from rastala/master
version 1.0.43
2019-06-12 10:09:08 -04:00
Roope Astala
79c9f50c15 version 1.0.43 2019-06-12 10:08:35 -04:00
Roope Astala
67e10e0f6b Merge pull request #417 from lan-tang/patch-1
Create readme.md in data-drift
2019-06-11 13:47:55 -04:00
Roope Astala
1ef0331a0f Merge pull request #423 from rastala/master
add sklearn estimator
2019-06-11 11:30:37 -04:00
Roope Astala
5e91c836b9 add sklearn estimator 2019-06-11 11:29:56 -04:00
Colin Versteeg
661762854a add warning to training 2019-06-10 16:51:33 -07:00
Colin Versteeg
fbc90ba74f add to quickstart 2019-06-10 16:50:59 -07:00
Colin Versteeg
0d9c83d0a8 Update accelerated-models-object-detection.ipynb 2019-06-10 16:48:17 -07:00
Colin Versteeg
ca4cab1de9 Merge pull request #1 from Azure/master
pull from master
2019-06-10 16:45:12 -07:00
Roope Astala
ddbb3c45f6 Merge pull request #420 from rastala/master
mlflow integration preview
2019-06-10 15:12:36 -04:00
rastala
8eed4e39d0 mlflow integration preview 2019-06-10 15:10:57 -04:00
Lan Tang
b37c0297db Create readme.md 2019-06-07 12:32:32 -07:00
Roope Astala
968cc798d0 Update README.md 2019-06-05 12:15:33 -04:00
Roope Astala
5c9ca452fb Create README.md 2019-06-05 12:15:19 -04:00
Shané Winner
5e82680272 Update README.md 2019-05-31 10:58:39 -07:00
Roope Astala
41841fc8c0 Update README.md 2019-05-31 13:00:41 -04:00
Roope Astala
896bf63736 Merge pull request #397 from rastala/master
dockerfile
2019-05-29 11:05:18 -04:00
Roope Astala
d4751bf6ec dockerfile 2019-05-29 11:04:19 -04:00
Roope Astala
3531fe8a21 Merge pull request #396 from rastala/master
version 1.0.41
2019-05-29 11:01:15 -04:00
Roope Astala
db6ae67940 version 1.0.41 2019-05-29 10:59:59 -04:00
Shané Winner
2a479bb01e Merge pull request #395 from imatiach-msft/ilmat/fix-typo
fix typo
2019-05-28 14:02:33 -07:00
Ilya Matiach
d05eec92af fix typo 2019-05-28 16:59:59 -04:00
Josée Martens
70fdab0a28 Update auto-ml-classification-with-deployment.ipynb 2019-05-24 13:45:04 -05:00
Josée Martens
7ce5a43b58 Update auto-ml-classification-with-deployment.ipynb 2019-05-24 13:44:35 -05:00
Josée Martens
d2a9dbb582 Update auto-ml-classification-with-deployment.ipynb 2019-05-24 13:43:38 -05:00
Roope Astala
a5d774683d Merge pull request #390 from rastala/master
fix default cluster creation in config notebook
2019-05-23 12:30:09 -04:00
Roope Astala
0e850f0917 fix default cluster creation in config notebook 2019-05-23 12:27:53 -04:00
Shané Winner
59f34b7179 Delete configtest.ipynb 2019-05-22 10:47:50 -07:00
Shané Winner
2a3cb69004 Create configtest.ipynb 2019-05-22 10:41:16 -07:00
Shané Winner
42894ff81a Delete LICENSE.txt 2019-05-22 10:22:05 -07:00
Shané Winner
2163cab50b Delete LICENSE.txt 2019-05-22 10:21:42 -07:00
Shané Winner
255edb04c0 Rename LICENSE.txt to LICENSE 2019-05-22 10:13:08 -07:00
Shané Winner
cfce079278 Rename LICENSES to LICENSE.txt 2019-05-22 10:06:31 -07:00
Shané Winner
ae6f067c81 Deleted index.html
cleaning up root directory
2019-05-22 10:04:23 -07:00
Shané Winner
1b7ff724f3 Deleted pr.md
Contents of this file moved to the README in the root directory.
2019-05-22 10:03:40 -07:00
Shané Winner
8bba850db1 moved the content in the pr.md file
moved the content in the pr.md file to under 'Projects using Azure Machine Learning'
2019-05-21 07:51:28 -07:00
Shané Winner
b9e35ea0cb Create LICENSE 2019-05-21 07:44:10 -07:00
Shané Winner
ffa28aa89c Delete sdk 2019-05-21 07:43:06 -07:00
Shané Winner
6ab85a20e3 Create LICENSES 2019-05-21 07:42:07 -07:00
Shané Winner
486c44d157 Create sdk 2019-05-21 07:39:43 -07:00
Shané Winner
cd80040dd8 Delete Licenses 2019-05-21 07:39:03 -07:00
Shané Winner
465a5b13b1 Create Licenses 2019-05-21 07:38:52 -07:00
Shané Winner
dcd2d58880 Added notice on the data/telemetry 2019-05-20 14:44:43 -07:00
Roope Astala
93bf4393f2 Merge pull request #381 from jeff-shepherd/master
Revert change to default amlcompute cluster
2019-05-16 15:35:43 -04:00
Jeff Shepherd
d6ebb484a6 Revert change to default amlcomputecluster to support existing resource
groups
2019-05-16 12:27:23 -07:00
Roope Astala
35afd43193 Merge pull request #372 from rogerhe/master
adding macOS specific yml. Install nomkl to workaround openmp issue
2019-05-14 19:07:42 -04:00
Roope Astala
2d68535de2 Merge pull request #376 from rastala/master
version 1.0.39
2019-05-14 16:04:09 -04:00
Roope Astala
0d448892a3 version check 2019-05-14 16:03:39 -04:00
Roope Astala
2d41c00488 version 1.0.39 2019-05-14 16:01:14 -04:00
Roger He
22597ac684 adding macOS specific yml. Install nomkl to workaround openmp issue 2019-05-09 16:51:51 -07:00
Josée Martens
8b1bffc200 Update README.md 2019-05-08 12:36:49 -05:00
Josée Martens
a240ac319f Update README.md 2019-05-08 12:27:57 -05:00
Josée Martens
83cfe3b9b3 Update README.md 2019-05-08 12:25:41 -05:00
Paula Ledgerwood
dcce6f227f Merge pull request #360 from Azure/paledger/update-readme
Update readme/cluster location from PM's instructions
2019-05-06 10:08:22 -07:00
Paula Ledgerwood
5328186d68 Update python kernel version 2019-05-06 09:45:20 -07:00
Paula Ledgerwood
7ccaa2cf57 Update readme from PM's instructions 2019-05-06 09:41:54 -07:00
Shané Winner
56b0664b6b Update img-classification-part1-training.ipynb 2019-05-05 17:47:31 -07:00
Shané Winner
4c1167edc4 Update img-classification-part1-training.ipynb 2019-05-05 17:45:48 -07:00
Shané Winner
eb643fe213 Update README.md 2019-05-05 17:26:29 -07:00
Shané Winner
5faa9d293c Update README.md 2019-05-05 15:34:27 -07:00
Shané Winner
32e2b5f647 Update train-hyperparameter-tune-deploy-with-tensorflow.ipynb 2019-05-05 15:32:19 -07:00
Shané Winner
ae25654882 Update train-hyperparameter-tune-deploy-with-pytorch.ipynb 2019-05-05 15:29:42 -07:00
Shané Winner
0ca05093bd Update train-hyperparameter-tune-deploy-with-keras.ipynb 2019-05-05 15:28:16 -07:00
Shané Winner
5e39582de3 Update train-hyperparameter-tune-deploy-with-chainer.ipynb 2019-05-05 15:24:14 -07:00
Shané Winner
6b6a6da9dc Update tensorboard.ipynb 2019-05-05 15:22:28 -07:00
Shané Winner
cba2c6b9e2 Update how-to-use-estimator.ipynb 2019-05-05 15:20:50 -07:00
Shané Winner
58557abd20 Update export-run-history-to-tensorboard.ipynb 2019-05-05 15:18:48 -07:00
Shané Winner
59452a3141 Update distributed-tensorflow-with-parameter-server.ipynb 2019-05-05 15:17:15 -07:00
Shané Winner
463718e26b Update distributed-tensorflow-with-horovod.ipynb 2019-05-05 15:15:13 -07:00
Shané Winner
9ea0ba5131 Update distributed-pytorch-with-horovod.ipynb 2019-05-05 15:13:28 -07:00
Shané Winner
2804a8d859 Update distributed-cntk-with-custom-docker.ipynb 2019-05-05 15:11:51 -07:00
Shané Winner
4761b668ff Update distributed-chainer.ipynb 2019-05-05 15:09:28 -07:00
Shané Winner
c4163017c2 Update using-environments.ipynb 2019-05-05 00:11:40 -07:00
Shané Winner
71e8e9bd23 Update train-within-notebook.ipynb 2019-05-05 00:09:26 -07:00
Shané Winner
6ff06dd137 Update train-on-remote-vm.ipynb 2019-05-05 00:06:23 -07:00
Shané Winner
73db8ae04d Update train-on-local.ipynb 2019-05-04 23:52:01 -07:00
Shané Winner
3637dce58a Update train-on-amlcompute.ipynb 2019-05-04 23:48:16 -07:00
Shané Winner
23771fc599 added tracking pixel and edited config text 2019-05-04 21:08:10 -07:00
Shané Winner
5f04a467b7 added tracking pixel 2019-05-04 21:03:08 -07:00
Shané Winner
532f65c998 added tracking pixel and edited config text 2019-05-04 20:59:50 -07:00
Shané Winner
f36dda0c2d added tracking pixel and edited the config text 2019-05-04 20:54:32 -07:00
Shané Winner
c7b56929bc added tracking pixel and edited config text 2019-05-04 20:50:57 -07:00
Shané Winner
5f19d75a42 added tracking pixel and edited the config text 2019-05-04 20:48:04 -07:00
Shané Winner
a1968aafa2 updated config text and added tracking pixel 2019-05-04 20:43:54 -07:00
Shané Winner
6b82991017 edited config text and added tracking pixel 2019-05-04 20:40:23 -07:00
Shané Winner
725013511e added tracking pixel 2019-05-04 20:34:58 -07:00
Shané Winner
6a20160173 added tracking pixel 2019-05-04 20:02:01 -07:00
Shané Winner
137db8aec0 added tracking pixel 2019-05-04 19:49:50 -07:00
Shané Winner
b7b10c394b added tracking pixel 2019-05-04 19:47:28 -07:00
Shané Winner
46206716a4 added tracking pixel 2019-05-04 19:44:23 -07:00
Shané Winner
92bb98ac62 added tracking pixel 2019-05-04 19:41:33 -07:00
Shané Winner
b398c24262 added tracking pixel 2019-05-04 19:38:28 -07:00
Shané Winner
e0618302e3 added tracking pixel 2019-05-04 19:35:57 -07:00
Shané Winner
b6cddafa3e edited config text and added the pixel tracker 2019-05-04 19:31:59 -07:00
Shané Winner
4188bd2474 updated the config text and added the tracking pixel 2019-05-04 19:25:26 -07:00
Shané Winner
69126edfcb update config text and added tracking pixel 2019-05-04 19:20:46 -07:00
Shané Winner
4e14c35b9b added pixel tracker 2019-05-04 16:31:07 -07:00
Shané Winner
1608c19aa6 updated tracking pixel and and config text 2019-05-04 15:12:53 -07:00
Shané Winner
46b8611b74 tracking pixel and edited config text 2019-05-04 15:08:57 -07:00
Shané Winner
fbb01bde70 update the config text and added pixel tracker server 2019-05-04 15:01:35 -07:00
Shané Winner
cefe2f0811 updated the config text and added the tracking pixel 2019-05-04 14:58:45 -07:00
Shané Winner
42e0a31f88 updated the config text and the tracking pixel 2019-05-04 14:54:37 -07:00
Shané Winner
8b0998ac9f updated the config text and the tracking pixel 2019-05-04 14:49:29 -07:00
Shané Winner
046c6051fb updated config text and added tracking pixel 2019-05-04 14:38:39 -07:00
Shané Winner
bdb7db15ef updated tracking pixel and the config text 2019-05-04 14:35:28 -07:00
Shané Winner
b13139f103 update the config text and the tracking pixel 2019-05-04 14:31:25 -07:00
Shané Winner
8adb206ae3 updated config text and pixel tracker 2019-05-04 13:56:09 -07:00
Shané Winner
484b6bbb7a updated the config text and pixel server 2019-05-04 13:51:12 -07:00
Shané Winner
55ef0bda6a updated config text 2019-05-04 13:46:43 -07:00
Shané Winner
1401cdef33 updated config text 2019-05-04 13:41:34 -07:00
Shané Winner
5d02206cbd updated with tracking pixel 2019-05-04 13:34:11 -07:00
Shané Winner
c24b65d4ae updated with tracking pixel 2019-05-04 13:32:14 -07:00
Shané Winner
57c5ef318f updated with pixel tracker 2019-05-04 13:25:11 -07:00
Shané Winner
ba033d72f8 Update train-in-spark.ipynb 2019-05-04 09:33:07 -07:00
Shané Winner
aa657ac528 Update manage-runs.ipynb 2019-05-04 09:29:00 -07:00
Shané Winner
7d8289679d added the tracking pixel and the edited the config text 2019-05-04 08:40:18 -07:00
Shané Winner
a7c3db0560 Update model-register-and-deploy.ipynb 2019-05-03 23:21:58 -07:00
Shané Winner
e548847881 pixel text and config text update 2019-05-03 23:20:57 -07:00
Shané Winner
08c6b1f4ed tracking pixel test 2019-05-03 23:15:28 -07:00
Shané Winner
78abb65f5e updated configuration text 2019-05-03 23:08:55 -07:00
Shané Winner
3c6c090732 Update README.md 2019-05-03 22:54:31 -07:00
Shané Winner
513e36d9b2 updated the config verbiage and tracking pixel 2019-05-03 22:54:02 -07:00
Ilya Matiach
9db91a7fb8 Merge pull request #351 from imatiach-msft/ilmat/update-raw-features-notebook
Update raw features explanation notebook
2019-05-03 12:47:28 -04:00
Roope Astala
d9b26b655b Merge pull request #356 from rastala/master
how to use environments
2019-05-03 10:27:33 -04:00
Roope Astala
cb8dc41766 how to use environments 2019-05-03 10:25:39 -04:00
Ilya Matiach
9c9b4bb122 Update raw features explanation notebook 2019-05-02 14:29:53 -04:00
Roope Astala
f5c896c70f Merge pull request #345 from csteegz/add-gpu-deploy
Create production-deploy-to-aks-gpu.ipynb
2019-05-02 14:13:50 -04:00
Colleen Forbes
3b572eddb2 Merge pull request #350 from MayMSFT/master
add dataset tutorial
2019-05-02 09:33:25 -07:00
May Hu
51523db294 add dataset tutorial 2019-05-02 09:07:11 -07:00
Ilya Matiach
3b4998941c Merge pull request #348 from imatiach-msft/ilmat/update-explain-model-nb
updating model explanation notebooks
2019-04-30 17:27:44 -04:00
Ilya Matiach
6cdbfb8722 updating model explanation notebooks 2019-04-30 17:12:54 -04:00
Colin Versteeg
c086bd69c7 Create production-deploy-to-aks-gpu.ipynb
Add deploy to aks GPU notebook
2019-04-29 16:26:42 -07:00
Shané Winner
279c9b8dc4 Pixel Tracker 2019-04-29 11:27:03 -07:00
Shané Winner
98589fe335 Testing Pixel Tracker 2019-04-29 11:16:08 -07:00
Shané Winner
77f21058a2 Testing Pixel Tracker 2019-04-29 11:04:05 -07:00
Roope Astala
baa65d0886 Merge pull request #343 from Azure/paledger/add-accel-models
Initial commit to add AccelModels notebooks from AzureMlCli repo
2019-04-29 13:56:06 -04:00
Paula Ledgerwood
0fffa11b2a Update links and code formatting 2019-04-29 10:20:55 -07:00
Paula Ledgerwood
20ec225343 Initial commit to add notebooks from AzureMlCli repo 2019-04-26 11:16:33 -07:00
Roope Astala
845e9d653e Merge pull request #342 from rastala/master
dockerfile 1.0.33
2019-04-26 14:01:55 -04:00
Roope Astala
639ef81636 dockerfile 1.0.33 2019-04-26 13:57:46 -04:00
Roope Astala
60158bf41a Merge pull request #341 from rastala/master
version 1.0.33
2019-04-26 13:45:47 -04:00
Roope Astala
8dbbb01b8a version 1.0.33 2019-04-26 13:44:15 -04:00
Roope Astala
6e6b2b0c48 Merge pull request #340 from rastala/master
add readme
2019-04-26 09:41:49 -04:00
Roope Astala
85f5721bf8 add readme 2019-04-26 09:40:24 -04:00
Shané Winner
6a7dd741e7 Pixel server added 2019-04-23 13:48:23 -07:00
Shané Winner
314218fc89 Added pixel server 2019-04-23 13:47:06 -07:00
Shané Winner
b50d2725c7 Added pixel server 2019-04-23 13:46:06 -07:00
Shané Winner
9a2f448792 Added pixel server 2019-04-23 13:45:05 -07:00
Shané Winner
dd620f19fd Pixel server added 2019-04-23 13:43:41 -07:00
Shané Winner
8116d31da4 Pixel Server added 2019-04-23 13:40:26 -07:00
Shané Winner
ef29dc1fa5 Added Pixel Server 2019-04-23 13:39:18 -07:00
Shané Winner
97b345cb33 Implemented Pixel Server 2019-04-23 13:37:41 -07:00
Shané Winner
282250e670 Implementing Pixel Server 2019-04-23 13:36:24 -07:00
Shané Winner
acef60c5b3 Testing pixel web app 2019-04-23 13:15:04 -07:00
Shané Winner
bfb444eb15 Testing Pixel Tracker 2019-04-23 13:07:48 -07:00
Shané Winner
6277659bf2 Testing Pixel Server 2019-04-23 11:48:55 -07:00
Shané Winner
1645e12712 Testing Tracking Pixel 2019-04-23 11:15:53 -07:00
Roope Astala
cc4a32e70b Merge pull request #337 from jeff-shepherd/master
Updated automl_setup scripts
2019-04-23 13:50:09 -04:00
Jeff Shepherd
997a35aed5 Updated automl_setup scripts 2019-04-23 10:40:33 -07:00
Roope Astala
dd6317a4a0 Merge pull request #336 from rastala/master
adding work-with-data
2019-04-23 10:05:08 -04:00
Roope Astala
82d8353d54 adding work-with-data 2019-04-23 10:04:32 -04:00
Shané Winner
59a01c17a0 Testing the pixel tracker 2019-04-22 14:45:09 -07:00
Shané Winner
e31e1d9af3 Implemented a test pixel tracker 2019-04-22 14:41:32 -07:00
Roope Astala
d38b9db255 Merge pull request #334 from rastala/master
docker update
2019-04-22 15:43:28 -04:00
Roope Astala
761ad88c93 docker update 2019-04-22 15:43:02 -04:00
Roope Astala
644729e5db Merge pull request #333 from rastala/master
version 1.0.30
2019-04-22 15:40:11 -04:00
Roope Astala
e2b1b3fcaa version 1.0.30 2019-04-22 15:39:18 -04:00
Roope Astala
dc692589a9 Merge pull request #326 from rastala/master
update aks notebook
2019-04-18 16:19:51 -04:00
Roope Astala
624b4595b5 update aks notebook 2019-04-18 16:18:33 -04:00
Roope Astala
0ed85c33c2 Delete release.json 2019-04-18 10:01:50 -04:00
Roope Astala
5b01de605f Merge pull request #318 from savitamittal1/hdinotebook
Sample HDI notebook
2019-04-18 10:01:26 -04:00
Savitam
c351ac988a Sample HDI notebook
sample HDI notebook
2019-04-15 12:35:34 -07:00
Josée Martens
759ec3934c Delete yt_cover.png 2019-04-15 12:06:25 -05:00
Josée Martens
b499b88a85 Delete python36.png 2019-04-15 12:06:16 -05:00
Josée Martens
5f4edac3c1 Update NBSETUP.md 2019-04-15 12:00:31 -05:00
Josée Martens
edfce0d936 Update README.md 2019-04-12 17:28:16 -05:00
Josée Martens
1516c7fc24 Update README.md
testing for search
2019-04-12 17:19:55 -05:00
Roope Astala
389fb668ce Add files via upload 2019-04-10 11:12:55 -04:00
Josée Martens
647d5e72a5 Merge pull request #307 from Azure/vizhur-patch-2
Create googled8147fb6c0788258.html
2019-04-09 15:21:51 -05:00
vizhur
43ac4c84bb Create googled8147fb6c0788258.html 2019-04-09 16:19:47 -04:00
Roope Astala
8a1a82b50a Merge pull request #303 from rastala/master
dockerfile and missing config update
2019-04-08 15:38:13 -04:00
Roope Astala
72f386298c dockerfile and missing config update 2019-04-08 15:37:48 -04:00
Roope Astala
41d697e298 Merge pull request #302 from rastala/master
version 1.0.23
2019-04-08 15:35:50 -04:00
Roope Astala
c3ce932029 version 1.0.23 2019-04-08 15:34:51 -04:00
Roope Astala
a956162114 Merge pull request #290 from rastala/master
update aks deployment notebook
2019-04-03 10:53:51 -04:00
Roope Astala
cb5a178e40 Merge branch 'master' of github.com:rastala/MachineLearningNotebooks 2019-04-03 10:52:40 -04:00
Roope Astala
d81c336c59 update production deploy to aks 2019-04-03 10:52:15 -04:00
Roope Astala
4244a24d81 Merge pull request #287 from jeff-shepherd/master
Fixed line termination on automl_setup_linux.sh
2019-04-03 09:21:35 -04:00
Jeff Shepherd
3b488555e5 Added back automl_setup_linux.sh with correct line termination 2019-04-02 16:24:05 -07:00
Jeff Shepherd
6abc478f33 Removed automl_setup_linux.sh 2019-04-02 16:23:11 -07:00
Roope Astala
666c2579eb Merge pull request #285 from jeff-shepherd/master
Corrected line termination for automl_setup_mac.sh
2019-04-02 09:19:53 -04:00
Jeff Shepherd
5af3aa4231 Fixed line termination 2019-04-01 16:19:00 -07:00
Jeff Shepherd
e48d828ab0 Removed automl_setup_mac.sh 2019-04-01 16:17:56 -07:00
Jeff Shepherd
44aa636c21 Merge branch 'master' of https://github.com/Azure/MachineLearningNotebooks 2019-04-01 16:07:11 -07:00
Jeff Shepherd
4678f9adc3 Merge branch 'master' of https://github.com/jeff-shepherd/MachineLearningNotebooks 2019-04-01 16:04:46 -07:00
Jeff Shepherd
5bf85edade Added automl_setup_mac.sh with correct line termination 2019-04-01 16:03:39 -07:00
Jeff Shepherd
94f381e884 Removed automl_setup_mac.sh 2019-04-01 16:02:53 -07:00
Roope Astala
ea1b7599c3 Merge pull request #267 from rastala/master
add automl files
2019-03-25 19:26:07 -04:00
Roope Astala
6b8a6befde add automl files 2019-03-25 19:25:38 -04:00
Roope Astala
c1511b7b74 Merge pull request #266 from rastala/master
1.0.21 dockerfile
2019-03-25 15:10:05 -04:00
Roope Astala
8f007a3333 1.0.21 dockerfile 2019-03-25 15:09:39 -04:00
Roope Astala
5ad3ca00e8 Merge pull request #265 from rastala/master
version 1.0.21
2019-03-25 15:07:09 -04:00
Roope Astala
556a41e223 version 1.0.21 2019-03-25 15:06:08 -04:00
Roope Astala
407b8929d0 Merge pull request #259 from jeff-shepherd/master
Added example of printing model hyperparameters
2019-03-19 09:40:25 -04:00
Jeff Shepherd
18a11bbd8d Added model printing example 2019-03-18 16:31:48 -07:00
Roope Astala
8b439a9f7c Merge pull request #256 from rastala/master
update RAPIDS 2
2019-03-18 12:09:33 -04:00
rastala
75c393a221 update RAPIDS 2 2019-03-18 12:08:10 -04:00
Roope Astala
be7176fe06 Merge pull request #255 from rastala/master
update RAPIDS sample
2019-03-18 11:42:51 -04:00
rastala
7b41675355 update RAPIDS sample 2019-03-18 11:40:43 -04:00
Jeff Shepherd
fa7685f6fa Added example of printing model hyperparameters 2019-03-15 13:18:17 -07:00
Roope Astala
6b444b1467 Merge pull request #248 from rastala/master
dockerfile 1.0.18
2019-03-11 15:33:07 -04:00
Roope Astala
c9767473ae dockerfile 1.0.18 2019-03-11 15:32:30 -04:00
Roope Astala
648b48fc0c Merge pull request #247 from rastala/master
version 1.0.18
2019-03-11 15:23:44 -04:00
Roope Astala
04db5d93e2 version 1.0.18 2019-03-11 15:22:38 -04:00
Roope Astala
4e10935701 version 1.0.18 2019-03-11 15:21:35 -04:00
Roope Astala
f737db499d Delete googleade5d7141b3f2910.html 2019-03-05 17:01:36 -05:00
Roope Astala
6b66da1558 Merge pull request #238 from rastala/master
fix link in configuration notebook
2019-03-05 17:00:31 -05:00
Roope Astala
8647aea9d9 fix link in configuration notebook 2019-03-05 16:59:38 -05:00
Roope Astala
3ee2dc3258 Merge pull request #233 from jeff-shepherd/master
Setup updated to fix remote run
2019-02-26 15:34:15 -05:00
Jeff Shepherd
9f7c4ce668 Setup updated to fix remote run 2019-02-26 11:59:20 -08:00
hning86
036ca6ac75 dockerfile 1.0.17 2019-02-26 10:57:07 -05:00
Roope Astala
0b8817ee1c Merge pull request #229 from rastala/master
version 1.0.17
2019-02-25 16:12:51 -05:00
Roope Astala
b7b5576b15 version 1.0.17 2019-02-25 16:12:02 -05:00
Hai Ning
c082b72b71 Update pr.md 2019-02-23 21:55:59 -05:00
Hai Ning
673e76d431 Merge pull request #186 from gison93/master
Fix typos
2019-02-20 23:18:15 -05:00
Hai Ning
c518a04a19 Merge pull request #203 from davidefiocco/patch-1
Typo fix
2019-02-20 23:17:14 -05:00
Hai Ning
2f34888716 Update README.md 2019-02-20 07:52:14 -05:00
Roope Astala
6ca0088991 Merge pull request #218 from jeff-shepherd/master
Fixed broken links to configuration notebook
2019-02-15 14:47:49 -05:00
Jeff Shepherd
40e3856786 Removed subsampling reference, which is not published yet 2019-02-15 11:35:45 -08:00
Jeff Shepherd
ddd025e83e Fixed links to configuration notebook. 2019-02-15 11:31:10 -08:00
Hai Ning
ece4242c8f Update README.md 2019-02-15 12:57:08 -05:00
Hai Ning
4bca2bd7db Merge pull request #217 from nishankgu/patch-1
Update README.md
2019-02-15 12:52:59 -05:00
Nishank
a927dbfa31 Update README.md 2019-02-14 14:22:05 -08:00
hning86
280c718f53 keras sample 2019-02-14 16:59:08 -05:00
Hai Ning
bf1ac2b26a Update NBSETUP.md 2019-02-14 11:02:01 -05:00
Roope Astala
954c2afbce Merge pull request #214 from rongduan-zhu/master
Updated Azure Databricks Automated ML notebook from master
2019-02-13 14:06:48 -05:00
Rongduan Zhu
fbf1ea5f1a updated notebook from latest master 2019-02-13 11:02:27 -08:00
Roope Astala
84b72d904b Merge pull request #210 from rastala/master
tutorial update
2019-02-11 16:07:47 -05:00
Roope Astala
82bb9fcac3 tutorial update 2019-02-11 16:07:10 -05:00
Roope Astala
5c6bbacd47 Merge pull request #209 from rastala/master
adb readme update
2019-02-11 15:52:34 -05:00
Roope Astala
90aaeea113 adb readme update 2019-02-11 15:51:50 -05:00
Roope Astala
eeab7284c9 Merge pull request #208 from rastala/master
few missing files
2019-02-11 15:48:22 -05:00
Roope Astala
02fd9b685c few missing files 2019-02-11 15:47:37 -05:00
hning86
d5c923b446 dockerfile updated 2019-02-11 15:21:56 -05:00
Roope Astala
f16bf27e26 Merge pull request #207 from rastala/master
release 1.0.15
2019-02-11 15:18:00 -05:00
Roope Astala
c7bec58593 update version 2019-02-11 15:17:40 -05:00
Roope Astala
cca3996eb4 release 1.0.15 2019-02-11 15:12:30 -05:00
Davide Fiocco
210efe022a Typo fix 2019-02-08 20:23:12 +01:00
Roope Astala
5fd14bac30 Merge pull request #199 from rastala/master
update automl databricks
2019-02-06 11:53:35 -05:00
Roope Astala
3fa409543b update automl databricks 2019-02-06 11:53:00 -05:00
Josée Martens
42f2822b61 Adding file to enable search performance tracking.
@rastala
2019-02-04 14:36:40 -06:00
Roope Astala
48afbe1cab Delete release.json 2019-01-31 16:07:08 -05:00
Roope Astala
1298c55dd4 Merge pull request #193 from rastala/master
fix broken link
2019-01-31 15:45:01 -05:00
Roope Astala
0aa1b248f4 fix broken link 2019-01-31 15:44:22 -05:00
Roope Astala
3012b8f5a8 Merge pull request #192 from rastala/master
add authentication notebook
2019-01-31 15:41:40 -05:00
Roope Astala
501c55bcaf add authentication notebook 2019-01-31 15:40:51 -05:00
hning86
1a38f50221 docker instructions 2019-01-31 15:16:36 -05:00
hning86
cc64be8d6f text update 2019-01-31 14:29:31 -05:00
hning86
a0127a2a64 dockerfile instruction 2019-01-31 11:46:06 -05:00
Hai Ning
7eb966bf79 Merge pull request #191 from Azure/dockerfiles
Dockerfiles
2019-01-31 10:54:55 -05:00
Roope Astala
9118f2c7ce Merge pull request #190 from rastala/master
fix NBSETUP
2019-01-31 09:33:17 -05:00
Roope Astala
0e3198f311 fix NBSETUP 2019-01-31 09:32:30 -05:00
hning86
0fdab91b97 dockefile reorg 2019-01-31 09:21:06 -05:00
hning86
b54be912d8 dockerfiles added 2019-01-30 17:04:18 -05:00
Roope Astala
3d0c7990ff Merge pull request #189 from rastala/master
update tutorial readme
2019-01-30 14:28:24 -05:00
Roope Astala
6e1ce29a94 Merge remote-tracking branch 'upstream/master' 2019-01-30 14:26:25 -05:00
Roope Astala
0d26c9986a update tutorials README 2019-01-30 14:25:17 -05:00
gison93
100ab10797 add pipeline validation 2019-01-29 14:50:00 +01:00
gison93
1307efe7bc fix typo
remove trailing \u00c2\u00a0 from variable and notebook_path
2019-01-29 14:34:07 +01:00
gison93
08d0b8cf08 fix typo
Bloband -> Blob and
2019-01-29 12:42:48 +01:00
Roope Astala
0514eee64b Merge pull request #182 from rastala/master
version 1.0.10
2019-01-28 18:10:20 -05:00
Roope Astala
4b6e34fdc0 Update train-within-notebook.ipynb 2019-01-28 18:09:36 -05:00
Roope Astala
e01216d85b Update configuration.ipynb 2019-01-28 18:08:41 -05:00
Roope Astala
b00f75edd8 version 1.0.10 2019-01-28 15:30:17 -05:00
Hai Ning
06aba388c6 Update azure-ml-with-nvidia-rapids.ipynb 2019-01-24 10:09:31 -05:00
Sheri Gilley
7db93bcb1d update comments 2019-01-22 17:18:19 -06:00
Roope Astala
3018461dfc Merge pull request #176 from rastala/master
update tutorials
2019-01-22 14:25:28 -05:00
Roope Astala
0d91f2d697 update tutorials 2019-01-22 14:24:31 -05:00
Roope Astala
a14cb635f0 Merge pull request #175 from rastala/master
RAPIDS sample
2019-01-22 13:44:55 -05:00
Roope Astala
88f6a966cc RAPIDS sample 2019-01-22 13:32:59 -05:00
Hai Ning
4f76a844c6 Update README.md 2019-01-18 01:18:44 -05:00
Hai Ning
c1573ff949 Update NBSETUP.md 2019-01-18 01:15:53 -05:00
Hai Ning
d1b18b3771 Update NBSETUP.md 2019-01-18 01:09:13 -05:00
Roope Astala
e1a948f4cd Merge pull request #168 from rastala/master
version 1.0.8
2019-01-14 12:14:02 -08:00
Roope Astala
3ca40c0817 version 1.0.8 2019-01-14 15:13:30 -05:00
Roope Astala
f724cb4d9b Merge pull request #166 from jeff-shepherd/master
Fixed broken links in tutorials
2019-01-08 12:01:50 -08:00
Jeff Shepherd
094b4b3b13 Fixed broken links in tutorials 2019-01-08 11:58:03 -08:00
Roope Astala
d09942f521 Merge pull request #165 from rastala/master
databricks update
2019-01-08 09:24:11 -08:00
Roope Astala
0c9e527174 databricks update 2019-01-08 12:23:15 -05:00
Sheri Gilley
fcbe925640 Merge branch 'sdk-codetest' of https://github.com/Azure/MachineLearningNotebooks into sdk-codetest 2019-01-07 13:06:12 -06:00
Sheri Gilley
bedfbd649e fix files 2019-01-07 13:06:02 -06:00
Sheri Gilley
fb760f648d Delete temp.py 2019-01-07 12:58:32 -06:00
Sheri Gilley
a9a0713d2f Delete donotupload.py 2019-01-07 12:57:58 -06:00
Sheri Gilley
c9d018b52c remove prepare environment 2019-01-07 12:56:54 -06:00
Sheri Gilley
53dbd0afcf hdi run config code 2019-01-07 11:29:40 -06:00
Sheri Gilley
e3a64b1f16 code for remote vm 2019-01-04 12:51:11 -06:00
Sheri Gilley
732eecfc7c update names 2019-01-04 12:45:28 -06:00
Sheri Gilley
6995c086ff change snippet names 2019-01-03 22:39:06 -06:00
Sheri Gilley
80bba4c7ae code for amlcompute section 2019-01-03 18:55:31 -06:00
Sheri Gilley
3c581b533f for local computer 2019-01-03 18:07:12 -06:00
Sheri Gilley
cc688caa4e change names 2019-01-03 08:53:49 -06:00
Sheri Gilley
da225e116e new code 2019-01-03 08:02:35 -06:00
Roope Astala
e2640e54da Merge pull request #160 from rastala/master
Create aml-pipelines-concept.png
2019-01-02 12:03:13 -08:00
Roope Astala
d348baf8a1 Create aml-pipelines-concept.png 2019-01-02 15:02:25 -05:00
Roope Astala
b41e11e30d Merge pull request #159 from jeff-shepherd/master
Removed databricks notebook link
2019-01-02 11:56:15 -08:00
Jeff Shepherd
c1aa951867 Removed databricks notebook link 2019-01-02 11:45:52 -08:00
Roope Astala
5fe5f06e07 Merge pull request #158 from rastala/master
Create Databricks_AMLSDK_1-4_6.dbc
2019-01-02 10:52:24 -08:00
Roope Astala
e8a09c49b1 Create Databricks_AMLSDK_1-4_6.dbc 2019-01-02 13:51:29 -05:00
Roope Astala
fb6a73a790 Merge pull request #145 from rastala/master
fix databricks
2018-12-20 13:11:17 -08:00
Roope Astala
c2968b6526 fix databricks 2018-12-20 16:10:27 -05:00
Roope Astala
2ac62ae1ed Merge pull request #144 from rastala/master
Version 1.0.6
2018-12-20 12:43:17 -08:00
Roope Astala
cad5d5c97c more files 2018-12-20 15:42:03 -05:00
Roope Astala
d8cf73503e update to version 1.0.6 2018-12-20 15:40:20 -05:00
Sheri Gilley
73c5d02880 Update quickstart.py 2018-12-17 12:23:03 -06:00
Sheri Gilley
e472b54f1b Update quickstart.py 2018-12-17 12:22:40 -06:00
Hai Ning
4a2d6d637a Update README.md 2018-12-13 11:28:02 -05:00
Hai Ning
0348e54e21 Update README.md 2018-12-13 11:27:41 -05:00
Hai Ning
a97d147d01 Create README.md 2018-12-13 11:26:58 -05:00
Hai Ning
c4ceac032b Update README.md 2018-12-08 10:11:11 -05:00
Hai Ning
1d3dff5634 Update README.md 2018-12-08 10:10:37 -05:00
Roope Astala
096dd424db Merge pull request #125 from jeff-shepherd/master
Added to troubleshooting section and fixed paths
2018-12-07 11:03:59 -08:00
Jeff Shepherd
fdefea5e82 Added to troubleshooting section 2018-12-07 10:55:45 -08:00
Roope Astala
15fb283b78 Merge pull request #124 from rastala/master
add NBSETUP
2018-12-07 10:29:46 -08:00
Roope Astala
142514a255 add NBSETUP 2018-12-07 13:29:08 -05:00
Roope Astala
26678677b3 Merge pull request #122 from rastala/master
Update distributed pytorch notebook
2018-12-07 07:00:04 -08:00
Roope Astala
2e2a943f5b Update distributed pytorch notebook 2018-12-07 09:40:29 -05:00
Roope Astala
94ba3f3665 Merge pull request #117 from rastala/master
update automl notebooks
2018-12-04 16:12:49 -08:00
rastala
da02f93fc2 update automl notebooks 2018-12-04 19:11:42 -05:00
Roope Astala
d39af379f8 Merge pull request #116 from rastala/master
add adb readme
2018-12-04 15:34:49 -08:00
rastala
6304fb1eb1 add adb readme 2018-12-04 18:34:10 -05:00
Hai Ning
b35d14ab72 Update img-classification-part1-training.ipynb 2018-12-04 13:27:19 -05:00
Roope Astala
e791456f34 Merge pull request #115 from rastala/master
Fix broken link
2018-12-04 08:52:17 -08:00
rastala
9b93e13426 another patch 3 2018-12-04 11:51:05 -05:00
rastala
3d640393aa another patch 2 2018-12-04 11:47:06 -05:00
Roope Astala
34d50b0427 Merge pull request #114 from rastala/master
another patch
2018-12-04 08:43:29 -08:00
rastala
060a53d256 another patch 2018-12-04 11:42:51 -05:00
Roope Astala
78fba3ceea Merge pull request #113 from rastala/master
notebook patches
2018-12-04 08:26:00 -08:00
rastala
01dc3d0a5b notebook patches 2018-12-04 11:24:50 -05:00
Heather Spetalnick (Shapiro)
ce7ca94a9a fix aks link 2018-12-04 10:33:07 -05:00
Hai Ning
b8877f1f92 Merge pull request #111 from Azure/jamiemaclennan-patch-1
Fix link
2018-12-04 10:30:22 -05:00
Hai Ning
b19ec15601 Merge pull request #112 from Azure/jamiemaclennan-patch-2
fix links
2018-12-04 10:29:50 -05:00
Jamie MacLennan
e99de11c25 fix links 2018-12-04 10:28:19 -05:00
Jamie MacLennan
212c2e8bf0 Fix link 2018-12-04 10:24:10 -05:00
Hai Ning
2aed0a32bf Update README.md 2018-12-04 08:55:46 -05:00
Hai Ning
25456b84f0 Update train-within-notebook.ipynb 2018-12-04 08:52:23 -05:00
Hai Ning
897ae13de1 Update train-on-remote-vm.ipynb 2018-12-04 08:52:06 -05:00
Hai Ning
e329729e0f Update train-on-local.ipynb 2018-12-04 08:51:45 -05:00
Hai Ning
aacdede890 Update logging-api.ipynb 2018-12-04 08:51:00 -05:00
Hai Ning
0471f2f8db Update logging-api.ipynb 2018-12-04 08:49:46 -05:00
Hai Ning
a8a525e704 Update logging-api.ipynb 2018-12-04 08:48:56 -05:00
Hai Ning
e37f3fa206 Update README.md 2018-12-04 08:18:27 -05:00
Roope Astala
2c4d6b5188 Merge pull request #110 from rastala/master
fix azure-databricks
2018-12-03 19:21:31 -08:00
rastala
008befcce2 fix azure-databricks 2018-12-03 22:20:06 -05:00
Roope Astala
b4895bf1f8 Merge pull request #109 from jeff-shepherd/master
Added setup and configuration files
2018-12-03 16:46:42 -08:00
Jeff Shepherd
a27cdbd478 Added setup and configuration files 2018-12-03 16:36:14 -08:00
Roope Astala
a408f3f2cb Merge pull request #108 from rastala/master
big update
2018-12-03 17:47:29 -05:00
rastala
ab2de17978 one more file 2018-12-03 17:38:46 -05:00
rastala
a63f5084e0 big update 2 2018-12-03 17:38:20 -05:00
rastala
d26a4b0323 big update 2018-12-02 21:50:53 -05:00
rastala
3c49d861df license updates 2018-12-02 21:47:13 -05:00
Roope Astala
613db3158d Merge pull request #93 from yanrez/master
Make pipeline notebooks links in readme
2018-11-28 12:33:12 -05:00
yanrez
c3a8c36297 Make pipeline notebooks links in readme 2018-11-22 17:13:31 -08:00
Roope Astala
e7ce245674 Merge pull request #92 from dipankar-ray/master
updated pipeline notebooks with expanded tutorial
2018-11-22 10:15:55 -05:00
Dipankar Ray
ef5844fffd updated pipeline notebooks with expanded tutorial 2018-11-21 20:00:07 -08:00
Roope Astala
e039b98ee6 Merge pull request #91 from rastala/master
automl notebook update
2018-11-21 20:46:21 -05:00
rastala
05713689e0 automl notebook update 2018-11-21 20:45:17 -05:00
Roope Astala
7bb906b53c Merge pull request #87 from rastala/master
Update to version 0.1.80
2018-11-20 11:02:28 -05:00
rastala
5726fe3ddb Version 0.1.80 2018-11-20 11:00:48 -05:00
rastala
d10b1fa796 Revert "Updated notebook folders"
This reverts commit 06728004b6.
2018-11-20 10:39:48 -05:00
rastala
d7127de03c Revert "Update tutorials/README.md"
This reverts commit 50787f4ccc.
2018-11-20 10:39:34 -05:00
Roope Astala
50787f4ccc Update tutorials/README.md 2018-11-19 13:35:11 -05:00
Roope Astala
06728004b6 Updated notebook folders 2018-11-19 13:28:49 -05:00
Roope Astala
f5bcc55fe3 Merge pull request #74 from yueguoguo/master
Typo in README
2018-11-09 09:51:01 -05:00
Roope Astala
f23fb58200 Merge pull request #77 from rastala/master
Fix autoscale
2018-11-09 09:47:46 -05:00
Roope Astala
dbce7b8db2 Fix autoscase 2018-11-09 09:47:01 -05:00
Roope Astala
303090adf6 Merge pull request #76 from rastala/master
Update 00.configuration.ipynb
2018-11-09 09:33:07 -05:00
Roope Astala
b091d1f5f1 Update 00.configuration.ipynb
Create computes in 00.configuration, and link to tutorial
2018-11-09 09:31:25 -05:00
Hai Ning
803d69c539 Update 03.train-hyperparameter-tune-deploy-with-tensorflow.ipynb 2018-11-07 13:54:11 -05:00
Zhang Le
37848e9686 Merge pull request #1 from yueguoguo/yueguoguo-patch-1
Typo in README
2018-11-07 13:18:31 +08:00
Zhang Le
7d9227441e Typo in README
Typo of `psutil`.
2018-11-07 13:17:53 +08:00
Roope Astala
21c454b0f2 Merge pull request #72 from rastala/master
Add logging API notebook
2018-11-06 12:46:39 -05:00
Roope Astala
c7b0960ae4 Add logging API notebook 2018-11-06 12:46:05 -05:00
Roope Astala
14e11fefd6 Delete .gitignore 2018-11-06 12:31:53 -05:00
Roope Astala
4deaeb04cf Delete 05.train-in-spark-checkpoint.ipynb 2018-11-06 12:31:32 -05:00
Roope Astala
ee78323df2 Delete 03.train-on-aci-checkpoint.ipynb 2018-11-06 12:31:18 -05:00
Roope Astala
89c2622938 Delete 02.train-on-local-checkpoint.ipynb 2018-11-06 12:31:03 -05:00
Roope Astala
96b352e3be Delete 04.train-on-remote-vm-checkpoint.ipynb 2018-11-06 12:30:43 -05:00
Sheri Gilley
716c6d8bb1 add quickstart code 2018-11-06 11:27:58 -06:00
Roope Astala
5280201f93 Merge pull request #70 from wchill/fix_macos_sigsegv
Fix segfault under certain conditions when running AutoML pipelines on MacOS
2018-11-05 19:04:14 -05:00
Eric Ahn
3825fd2c10 Fix segfault under certain conditions on MacOS 2018-11-05 15:06:38 -08:00
Roope Astala
b936dd3505 Merge pull request #69 from rastala/master
New SDK version 0.1.74
2018-11-05 15:28:40 -05:00
Roope Astala
7339c95ea0 New SDK version 2018-11-05 15:27:36 -05:00
Hai Ning
32102e2aac Update pipeline-batch-scoring.ipynb 2018-11-02 14:18:38 -04:00
Hai Ning
a043769197 Update pr.md 2018-10-29 22:23:49 -04:00
Hai Ning
a0f3727cf4 Update pr.md 2018-10-29 22:23:39 -04:00
Roope Astala
0e8b42f8c7 Delete snowleopardgaze.jpg 2018-10-26 16:53:47 -04:00
hning86
2daafdbca1 logging api sample 2018-10-26 14:02:05 -04:00
Roope Astala
fec2e97310 Merge pull request #62 from rastala/master
Fix link in 01 getting started
2018-10-26 10:27:42 -04:00
Roope Astala
1a79e53935 Fix link in 01 getting started 2018-10-26 10:26:38 -04:00
Hai Ning
900cc7a76b remove json.loads 2018-10-25 13:03:10 -04:00
Roope Astala
3148e52258 Merge pull request #60 from rastala/master
fix json output
2018-10-25 12:48:28 -04:00
Roope Astala
dda402db83 fix json output 2018-10-25 12:47:38 -04:00
Roope Astala
603f4a6434 Merge pull request #58 from rastala/master
Tutorial fixes
2018-10-24 13:47:05 -04:00
Roope Astala
114449dd9b Tutorial fixes 2018-10-24 13:45:15 -04:00
Roope Astala
de20b6c40e Merge pull request #55 from Azure/sdgilley-patch-1
Update 03.auto-train-models.ipynb
2018-10-22 12:43:20 -04:00
Hai Ning
886ece1089 Update pr.md 2018-10-22 11:23:49 -04:00
Sheri Gilley
0dfe00d05a Update 03.auto-train-models.ipynb
fix link
2018-10-22 10:04:46 -05:00
hning86
7a6fb8067f auto updated from HaiGPU 2018-10-22 01:50:11 -04:00
hning86
bb439ab2fd removed empty folder 2018-10-22 01:41:05 -04:00
hning86
ea3abdde4f auto updated from HaiGPU 2018-10-22 01:39:38 -04:00
Hai Ning
2e4eb8785c Update pr.md 2018-10-18 15:29:26 -04:00
Hai Ning
bfccb07dae Update pr.md 2018-10-18 15:27:36 -04:00
Hai Ning
94cd37e9fb Update README.md 2018-10-18 14:49:28 -04:00
Hai Ning
cdeb4dddab Update README.md 2018-10-18 14:47:44 -04:00
Hai Ning
e12637098a Update README.md 2018-10-18 14:47:19 -04:00
Hai Ning
d5f8811f4f YT cover 2018-10-18 14:46:08 -04:00
Hai Ning
92d36a2db4 Delete ytimg_png.PNG 2018-10-18 14:45:53 -04:00
Hai Ning
c5c76e8187 Update pr.md 2018-10-18 14:45:12 -04:00
Hai Ning
833d1d0f4e Update pr.md 2018-10-18 14:44:59 -04:00
Hai Ning
dd0c0264a2 Update README.md 2018-10-18 14:43:15 -04:00
Hai Ning
52368bad81 Update README.md 2018-10-18 14:42:48 -04:00
Hai Ning
604f6c18be Update README.md 2018-10-18 14:42:23 -04:00
Hai Ning
829bc297f2 Update README.md 2018-10-18 14:41:45 -04:00
Hai Ning
9e5101ea8c Update README.md 2018-10-18 14:41:34 -04:00
Hai Ning
37e96f2ad6 youtube cover 2018-10-18 14:40:17 -04:00
Roope Astala
d0c9bb330a Merge pull request #39 from cforbe/master
Adding dataprep notebook
2018-10-18 12:39:01 -04:00
Colleen Forbes
b4c7932640 Update README.md 2018-10-17 15:44:30 -07:00
Roope Astala
8fed628390 Merge pull request #53 from rastala/master
Update automl setup
2018-10-17 17:38:28 -04:00
rastala
d940aca06d Update automl setup 2018-10-17 17:37:01 -04:00
Sheri Gilley
23189c6f40 move folder 2018-10-17 16:24:46 -05:00
Sheri Gilley
361b57ed29 change all names to camelCase 2018-10-17 11:47:09 -05:00
Sheri Gilley
3f531fd211 try camelCase 2018-10-17 11:09:46 -05:00
Hai Ning
beb97b1d9f Update README.md 2018-10-17 12:00:37 -04:00
Sheri Gilley
111f5e8d73 playing around 2018-10-17 10:46:33 -05:00
Sheri Gilley
96c59d5c2b testing 2018-10-17 09:56:04 -05:00
Sheri Gilley
ce3214b7c6 fix name 2018-10-16 17:33:24 -05:00
Sheri Gilley
53199d17de add delete 2018-10-16 16:54:08 -05:00
Sheri Gilley
54c883412c add test service 2018-10-16 16:49:41 -05:00
Roope Astala
d58d57ca44 Merge pull request #48 from rastala/master
Update notebooks with new version
2018-10-12 14:44:10 -04:00
Roope Astala
b3cc1b61a2 more updates 2018-10-12 14:43:18 -04:00
Roope Astala
a4792d95ac Update notebooks 2018-10-12 14:39:33 -04:00
Hai Ning
216aa8b6a1 Update pr.md 2018-10-12 11:37:33 -04:00
Hai Ning
9814955b37 Update pr.md 2018-10-12 11:34:51 -04:00
Hai Ning
c96e9fdd5a Update pr.md 2018-10-12 11:33:25 -04:00
Hai Ning
47bd530c6b Update pr.md 2018-10-12 11:32:24 -04:00
Hai Ning
7e53333af6 Update pr.md 2018-10-12 11:02:06 -04:00
Hai Ning
0888050389 Update pr.md 2018-10-12 10:04:52 -04:00
Hai Ning
fb567152a4 pr 2018-10-11 23:56:26 -04:00
Josée Martens
6d50401af4 Update README.md 2018-10-11 12:03:27 -05:00
Josée Martens
b1bde7328b Update README.md 2018-10-11 10:58:49 -05:00
Josée Martens
7fc6b29de8 Update README.md 2018-10-11 10:58:02 -05:00
Roope Astala
cff9606bf9 Merge pull request #47 from rastala/master
Update project-brainwave/project-brainwave-quickstart.ipynb
2018-10-09 17:00:34 -04:00
Roope Astala
532799a22c Update project-brainwave/project-brainwave-quickstart.ipynb 2018-10-09 16:58:22 -04:00
Roope Astala
90454d5a32 Merge pull request #42 from mx-iao/master
Add readme for training/ notebooks
2018-10-09 15:24:20 -04:00
mx-iao
076b206515 Update readme.md 2018-10-09 12:22:11 -07:00
Roope Astala
b8b660e5a8 Merge pull request #46 from rastala/master
Update automl examples
2018-10-09 14:38:33 -04:00
Roope Astala
6005c0987d Update automl examples 2018-10-09 14:35:45 -04:00
mx-iao
34eec6abc2 Create readme.md 2018-10-08 12:08:47 -07:00
Hai Ning
208c36b903 Update README.md 2018-10-06 10:49:50 -04:00
Hai Ning
80e8a5e323 Update 04.train-on-remote-vm.ipynb 2018-10-04 11:47:48 -04:00
Colleen
e7e9923cfb updating README.md 2018-10-03 16:46:51 -07:00
Roope Astala
989511c581 Merge pull request #40 from rastala/master
Update automl readme
2018-10-03 14:24:20 -04:00
Roope Astala
d5c247b005 Update automl readme 2018-10-03 14:23:49 -04:00
Colleen
b5482fcd4b Adding dataprep notebook 2018-10-03 09:58:55 -07:00
Roope Astala
2bdd131b0c Merge pull request #37 from rastala/master
update pipeline notebook
2018-10-02 16:13:00 -04:00
Roope Astala
2c391a4486 update pipeline notebook 2018-10-02 16:12:22 -04:00
Roope Astala
87b6114156 Merge pull request #36 from rastala/master
Update to automl notebooks
2018-10-02 14:34:22 -04:00
Roope Astala
9b701ebaeb updates to automl tutorial 2018-10-02 14:33:28 -04:00
Roope Astala
758b0ee808 updates to automl notebooks 2018-10-02 14:32:18 -04:00
Roope Astala
eeb4d92d7c Merge pull request #34 from rastala/master
update notebooks for new version
2018-10-01 13:48:51 -04:00
Roope Astala
b4df74c72e adding nb 13 for app insights 2018-10-01 13:47:58 -04:00
Roope Astala
231c1062a8 update notebooks for new version 2018-10-01 13:45:50 -04:00
Sheri Gilley
92be6bfd19 Merge pull request #30 from Azure/sdgilley-fix-links
fix links
2018-09-28 17:25:04 -05:00
Sheri Gilley
b0b0756aed fix links 2018-09-28 17:24:37 -05:00
Roope Astala
ff19151d0a Merge pull request #29 from rastala/master
onnx update
2018-09-28 16:42:09 -04:00
rastala
933c1ffc4e onnx update 2018-09-28 16:41:21 -04:00
Roope Astala
f75faaa31e Merge pull request #28 from rastala/master
mitigation to image creation issue
2018-09-27 12:52:47 -04:00
Roope Astala
ae8874ad32 mitigation to image creation issue 2018-09-27 12:50:38 -04:00
Hai Ning
6c3abe2d03 Update train.py 2018-09-27 11:30:22 -04:00
Hai Ning
4627080ff4 Update 04.train-on-remote-vm.ipynb 2018-09-27 11:30:01 -04:00
Sheri Gilley
69af6e36fe Merge pull request #23 from Azure/sdg-update
update readme
2018-09-26 17:57:57 -05:00
Hai Ning
e27ab9a58e Update 04.train-on-remote-vm.ipynb 2018-09-26 14:02:27 -04:00
Hai Ning
c85e7e52af Update 04.train-on-remote-vm.ipynb 2018-09-26 14:01:39 -04:00
Hai Ning
5598e07729 Update 05.train-in-spark.ipynb 2018-09-26 14:00:38 -04:00
Roope Astala
d9b62ad651 Merge pull request #26 from rastala/master
Updating Azure Databricks examples
2018-09-26 09:34:40 -04:00
Roope Astala
8aa287dadf Updating Azure Databricks examples 2018-09-26 09:32:24 -04:00
Roope Astala
9ab092a4d0 Merge pull request #21 from sitomani/master
Fixed an error on Data Exploration chapter
2018-09-25 19:14:30 -04:00
Sheri Gilley
1a1a81621f remove file 2018-09-25 16:24:00 -05:00
Sheri Gilley
d93daa3f38 create link for nb 2018-09-25 16:21:14 -05:00
Sheri Gilley
2fb910b0e0 fix link 2018-09-25 16:17:34 -05:00
Sheri Gilley
2879e00884 update 2018-09-25 16:09:01 -05:00
Sheri Gilley
b574bfd3cf updates 2018-09-25 16:03:57 -05:00
Sheri Gilley
6a3b814394 Update README.md 2018-09-25 15:07:24 -05:00
Sheri Gilley
1009ffab36 Update README.md 2018-09-25 15:06:20 -05:00
Sheri Gilley
995fb1ac8c Update README.md 2018-09-25 14:46:25 -05:00
Sheri Gilley
e418e4fbb2 Update README.md 2018-09-25 14:46:01 -05:00
Sheri Gilley
cdbfa203e1 Update README.md 2018-09-25 14:36:26 -05:00
Roope Astala
a9a9635e72 Merge pull request #22 from rastala/master
Update 00, 01 and 10 notebooks
2018-09-25 15:02:28 -04:00
Roope Astala
b568dc364f Update 00, 01 and 10 notebooks 2018-09-25 15:01:08 -04:00
Aleksi Sitomaniemi
59bdd5a858 Fixed an error on Data Exploration chapter
The sample code comment discusses using a portion of the full dataset to make training faster, suggesting that the following code takes 100 first samples from the dataset. However, the code in repository actually leaves the first 100 items out, and picks the rest of the full set into the evaluated X_digits, Y_digits matrices.
2018-09-25 12:28:51 +03:00
263 changed files with 29025 additions and 20093 deletions


@@ -1,223 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 00. Installation and configuration\n",
"\n",
"## Prerequisites:\n",
"\n",
"### 1. Install Azure ML SDK\n",
"Follow [SDK installation instructions](https://docs.microsoft.com/azure/machine-learning/service/how-to-configure-environment).\n",
"\n",
"### 2. Install some additional packages\n",
"This Notebook requires some additional libraries. In the conda environment, run below commands: \n",
"```shell\n",
"(myenv) $ conda install -y matplotlib tqdm scikit-learn\n",
"```\n",
"\n",
"### 3. Make sure your subscription is registered to use ACI.\n",
"This Notebook makes use of Azure Container Instance (ACI). You need to ensure your subscription has been registered to use ACI in order be able to deploy a dev/test web service.\n",
"```shell\n",
"# check to see if ACI is already registered\n",
"(myenv) $ az provider show -n Microsoft.ContainerInstance -o table\n",
"\n",
"# if ACI is not registered, run this command.\n",
"# note you need to be the subscription owner in order to execute this command successfully.\n",
"(myenv) $ az provider register -n Microsoft.ContainerInstance\n",
"```\n",
"\n",
"In this example you will optionally create an Azure Machine Learning Workspace and initialize your notebook directory to easily use this workspace. Typically you will only need to run this once per notebook directory, and all other notebooks in this directory or any sub-directories will automatically use the settings you indicate here.\n",
"\n",
"This notebook also contains optional cells to install and update the require Azure Machine Learning libraries."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"install"
]
},
"outputs": [],
"source": [
"# Check core SDK version number for debugging purposes\n",
"import azureml.core\n",
"\n",
"print(\"SDK Version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize an Azure ML Workspace\n",
"### What is an Azure ML Workspace and why do I need one?\n",
"\n",
"An AML Workspace is an Azure resource that organaizes and coordinates the actions of many other Azure resources to assist in executing and sharing machine learning workflows. In particular, an AML Workspace coordinates storage, databases, and compute resources providing added functionality for machine learning experimentation, operationalization, and the monitoring of operationalized models.\n",
"\n",
"### What do I need\n",
"\n",
"In order to use an AML Workspace, first you need access to an Azure Subscription. You can [create your own](https://azure.microsoft.com/en-us/free/) or get your existing subscription information from the [Azure portal](https://portal.azure.com). Inside your subscription, you will need access to a _resource group_, which organizes Azure resources and provides a default region for the resources in a group. You can see what resource groups to which you have access, or create a new one in the [Azure portal](https://portal.azure.com)\n",
"\n",
"You can also easily create a new resource group using azure-cli.\n",
"\n",
"```sh\n",
"(myenv) $ az group create -n my_resource_group -l eastus2\n",
"```\n",
"\n",
"To create or access an Azure ML Workspace, you will need to import the AML library and the following information:\n",
"* A name for your workspace\n",
"* Your subscription id\n",
"* The resource group name\n",
"\n",
"**Note**: As with other Azure services, there are limits on certain resources (for eg. BatchAI cluster size) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Supported Azure Regions\n",
"Please specify the Azure subscription Id, resource group name, workspace name, and the region in which you want to create the workspace, for example \"eastus2\". "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"subscription_id = os.environ.get(\"SUBSCRIPTION_ID\", \"<my-subscription-id>\")\n",
"resource_group = os.environ.get(\"RESOURCE_GROUP\", \"<my-rg>\")\n",
"workspace_name = os.environ.get(\"WORKSPACE_NAME\", \"<my-workspace>\")\n",
"workspace_region = os.environ.get(\"WORKSPACE_REGION\", \"eastus2\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Creating a workspace\n",
"If you already have access to an AML Workspace you want to use, you can skip this cell. Otherwise, this cell will create an AML workspace for you in a subscription provided you have the correct permissions.\n",
"\n",
"This will fail when:\n",
"1. You do not have permission to create a workspace in the resource group\n",
"2. You are not a subscription owner or contributor and no Azure ML workspaces have ever been created in this subscription\n",
"\n",
"If workspace creation fails, please work with your IT admin to provide you with the appropriate permissions or to provision the required resources."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"create workspace"
]
},
"outputs": [],
"source": [
"# import the Workspace class and check the azureml SDK version\n",
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.create(name = workspace_name,\n",
" subscription_id = subscription_id,\n",
" resource_group = resource_group, \n",
" location = workspace_region,\n",
" exist_ok = True)\n",
"ws.get_details()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configuring your local environment\n",
"You can validate that you have access to the specified workspace and write a configuration file to the default configuration location, `./aml_config/config.json`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"create workspace"
]
},
"outputs": [],
"source": [
"ws = Workspace(workspace_name = workspace_name,\n",
" subscription_id = subscription_id,\n",
" resource_group = resource_group)\n",
"\n",
"# persist the subscription id, resource group name, and workspace name in aml_config/config.json.\n",
"ws.write_config()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can then load the workspace from this config file from any notebook in the current directory."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"create workspace"
]
},
"outputs": [],
"source": [
"# load workspace configuratio from ./aml_config/config.json file.\n",
"my_workspace = Workspace.from_config()\n",
"my_workspace.get_details()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Success!\n",
"Great, you are ready to move on to the rest of the sample notebooks."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.4"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
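
The deleted configuration notebook above persists the workspace settings with `ws.write_config()` and later reloads them with `Workspace.from_config()`. As a quick sanity check, a minimal sketch like the following (not part of the original notebook) can inspect the generated file; the `aml_config/config.json` path and the printed keys are assumptions based on the notebook text and may differ across SDK versions.

```python
# Sketch: inspect the config file written by ws.write_config().
# Assumption: the SDK wrote it to ./aml_config/config.json as plain JSON.
import json
import os

config_path = os.path.join("aml_config", "config.json")

if os.path.isfile(config_path):
    with open(config_path) as f:
        cfg = json.load(f)
    # print whatever keys were persisted, without assuming a fixed schema
    for key, value in cfg.items():
        print("{}: {}".format(key, value))
else:
    print("No config file found at {}; run ws.write_config() first.".format(config_path))
```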

View File

@@ -1,810 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 01. Train in the Notebook & Deploy Model to ACI\n",
"\n",
"* Load workspace\n",
"* Train a simple regression model directly in the Notebook python kernel\n",
"* Record run history\n",
"* Find the best model in run history and download it.\n",
"* Deploy the model as an Azure Container Instance (ACI)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites\n",
"1. Make sure you go through the [00. Installation and Configuration](00.configuration.ipynb) Notebook first if you haven't. \n",
"\n",
"2. Install following pre-requisite libraries to your conda environment and restart notebook.\n",
"```shell\n",
"(myenv) $ conda install -y matplotlib tqdm scikit-learn\n",
"```\n",
"\n",
"3. Check that ACI is registered for your Azure Subscription. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!az provider show -n Microsoft.ContainerInstance -o table"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If ACI is not registered, run following command to register it. Note that you have to be a subscription owner, or this command will fail."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!az provider register -n Microsoft.ContainerInstance"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Validate Azure ML SDK installation and get version number for debugging purposes"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"install"
]
},
"outputs": [],
"source": [
"from azureml.core import Experiment, Run, Workspace\n",
"import azureml.core\n",
"\n",
"# Check core SDK version number\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize Workspace\n",
"\n",
"Initialize a workspace object from persisted configuration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"create workspace"
]
},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"print('Workspace name: ' + ws.name, \n",
" 'Azure region: ' + ws.location, \n",
" 'Subscription id: ' + ws.subscription_id, \n",
" 'Resource group: ' + ws.resource_group, sep='\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set experiment name\n",
"Choose a name for experiment."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"experiment_name = 'train-in-notebook'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Start a training run in local Notebook"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# load diabetes dataset, a well-known small dataset that comes with scikit-learn\n",
"from sklearn.datasets import load_diabetes\n",
"from sklearn.linear_model import Ridge\n",
"from sklearn.metrics import mean_squared_error\n",
"from sklearn.model_selection import train_test_split\n",
"from sklearn.externals import joblib\n",
"\n",
"X, y = load_diabetes(return_X_y = True)\n",
"columns = ['age', 'gender', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6']\n",
"X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)\n",
"data = {\n",
" \"train\":{\"X\": X_train, \"y\": y_train}, \n",
" \"test\":{\"X\": X_test, \"y\": y_test}\n",
"}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Train a simple Ridge model\n",
"Train a very simple Ridge regression model in scikit-learn, and save it as a pickle file."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"reg = Ridge(alpha = 0.03)\n",
"reg.fit(X=data['train']['X'], y=data['train']['y'])\n",
"preds = reg.predict(data['test']['X'])\n",
"print('Mean Squared Error is', mean_squared_error(data['test']['y'], preds))\n",
"joblib.dump(value=reg, filename='model.pkl');"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Add experiment tracking\n",
"Now, let's add Azure ML experiment logging, and upload persisted model into run record as well."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"local run",
"outputs upload"
]
},
"outputs": [],
"source": [
"experiment = Experiment(workspace=ws, name=experiment_name)\n",
"run = experiment.start_logging()\n",
"\n",
"run.tag(\"Description\",\"My first run!\")\n",
"run.log('alpha', 0.03)\n",
"reg = Ridge(alpha=0.03)\n",
"reg.fit(data['train']['X'], data['train']['y'])\n",
"preds = reg.predict(data['test']['X'])\n",
"run.log('mse', mean_squared_error(data['test']['y'], preds))\n",
"joblib.dump(value=reg, filename='model.pkl')\n",
"run.upload_file(name='outputs/model.pkl', path_or_stream='./model.pkl')\n",
"\n",
"run.complete()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can browse to the recorded run. Please make sure you use Chrome to navigate the run history page."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Simple parameter sweep\n",
"Sweep over alpha values of a sklearn ridge model, and capture metrics and trained model in the Azure ML experiment."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"import os\n",
"from tqdm import tqdm\n",
"\n",
"model_name = \"model.pkl\"\n",
"\n",
"# list of numbers from 0 to 1.0 with a 0.05 interval\n",
"alphas = np.arange(0.0, 1.0, 0.05)\n",
"\n",
"# try a bunch of alpha values in a Linear Regression (Ridge) model\n",
"for alpha in tqdm(alphas):\n",
" # create a bunch of runs, each train a model with a different alpha value\n",
" with experiment.start_logging() as run:\n",
" # Use Ridge algorithm to build a regression model\n",
" reg = Ridge(alpha=alpha)\n",
" reg.fit(X=data[\"train\"][\"X\"], y=data[\"train\"][\"y\"])\n",
" preds = reg.predict(X=data[\"test\"][\"X\"])\n",
" mse = mean_squared_error(y_true=data[\"test\"][\"y\"], y_pred=preds)\n",
"\n",
" # log alpha, mean_squared_error and feature names in run history\n",
" run.log(name=\"alpha\", value=alpha)\n",
" run.log(name=\"mse\", value=mse)\n",
" run.log_list(name=\"columns\", value=columns)\n",
"\n",
" with open(model_name, \"wb\") as file:\n",
" joblib.dump(value=reg, filename=file)\n",
" \n",
" # upload the serialized model into run history record\n",
" run.upload_file(name=\"outputs/\" + model_name, path_or_stream=model_name)\n",
"\n",
" # now delete the serialized model from local folder since it is already uploaded to run history \n",
" os.remove(path=model_name)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# now let's take a look at the experiment in Azure portal.\n",
"experiment"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Select best model from the experiment\n",
"Load all experiment run metrics recursively from the experiment into a dictionary object."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"runs = {}\n",
"run_metrics = {}\n",
"\n",
"for r in tqdm(experiment.get_runs()):\n",
" metrics = r.get_metrics()\n",
" if 'mse' in metrics.keys():\n",
" runs[r.id] = r\n",
" run_metrics[r.id] = metrics"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now find the run with the lowest Mean Squared Error value"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run_id = min(run_metrics, key = lambda k: run_metrics[k]['mse'])\n",
"best_run = runs[best_run_id]\n",
"print('Best run is:', best_run_id)\n",
"print('Metrics:', run_metrics[best_run_id])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can add tags to your runs to make them easier to catalog"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"query history"
]
},
"outputs": [],
"source": [
"best_run.tag(key=\"Description\", value=\"The best one\")\n",
"best_run.get_tags()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Plot MSE over alpha\n",
"\n",
"Let's observe the best model visually by plotting the MSE values over alpha values:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%matplotlib inline\n",
"import matplotlib\n",
"import matplotlib.pyplot as plt\n",
"\n",
"best_alpha = run_metrics[best_run_id]['alpha']\n",
"min_mse = run_metrics[best_run_id]['mse']\n",
"\n",
"alpha_mse = np.array([(run_metrics[k]['alpha'], run_metrics[k]['mse']) for k in run_metrics.keys()])\n",
"sorted_alpha_mse = alpha_mse[alpha_mse[:,0].argsort()]\n",
"\n",
"plt.plot(sorted_alpha_mse[:,0], sorted_alpha_mse[:,1], 'r--')\n",
"plt.plot(sorted_alpha_mse[:,0], sorted_alpha_mse[:,1], 'bo')\n",
"\n",
"plt.xlabel('alpha', fontsize = 14)\n",
"plt.ylabel('mean squared error', fontsize = 14)\n",
"plt.title('MSE over alpha', fontsize = 16)\n",
"\n",
"# plot arrow\n",
"plt.arrow(x = best_alpha, y = min_mse + 39, dx = 0, dy = -26, ls = '-', lw = 0.4,\n",
" width = 0, head_width = .03, head_length = 8)\n",
"\n",
"# plot \"best run\" text\n",
"plt.text(x = best_alpha - 0.08, y = min_mse + 50, s = 'Best Run', fontsize = 14)\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Register the best model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Find the model file saved in the run record of best run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"query history"
]
},
"outputs": [],
"source": [
"for f in best_run.get_file_names():\n",
" print(f)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we can register this model in the model registry of the workspace"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"register model from history"
]
},
"outputs": [],
"source": [
"model = best_run.register_model(model_name='best_model', model_path='outputs/model.pkl')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Verify that the model has been registered properly. If you have done this several times you'd see the version number auto-increases each time."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"register model from history"
]
},
"outputs": [],
"source": [
"models = ws.models(name='best_model')\n",
"for m in models:\n",
" print(m.name, m.version)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also download the registered model. Afterwards, you should see a `model.pkl` file in the current directory. You can then use it for local testing if you'd like."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"download file"
]
},
"outputs": [],
"source": [
"# remove the model file if it is already on disk\n",
"if os.path.isfile('model.pkl'): \n",
" os.remove('model.pkl')\n",
"# download the model\n",
"model.download(target_dir=\"./\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Scoring script\n",
"\n",
"Now we are ready to build a Docker image and deploy the model in it as a web service. The first step is creating the scoring script. For convenience, we have created the scoring script for you. It is printed below as text, but you can also run `%pfile ./score.py` in a cell to show the file.\n",
"\n",
"Tbe scoring script consists of two functions: `init` that is used to load the model to memory when starting the container, and `run` that makes the prediction when web service is called. Please pay special attention to how the model is loaded in the `init()` function. When Docker image is built for this model, the actual model file is downloaded and placed on disk, and `get_model_path` function returns the local path where the model is placed."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"with open('./score.py', 'r') as scoring_script:\n",
" print(scoring_script.read())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create environment dependency file\n",
"\n",
"We need a environment dependency file `myenv.yml` to specify which libraries are needed by the scoring script when building the Docker image for web service deployment. We can manually create this file, or we can use the `CondaDependencies` API to automatically create this file."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.conda_dependencies import CondaDependencies \n",
"\n",
"myenv = CondaDependencies()\n",
"myenv.add_conda_package(\"scikit-learn\")\n",
"print(myenv.serialize_to_string())\n",
"\n",
"with open(\"myenv.yml\",\"w\") as f:\n",
" f.write(myenv.serialize_to_string())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deploy web service into an Azure Container Instance\n",
"The deployment process takes the registered model and your scoring scrip, and builds a Docker image. It then deploys the Docker image into Azure Container Instance as a running container with an HTTP endpoint readying for scoring calls. Read more about [Azure Container Instance](https://azure.microsoft.com/en-us/services/container-instances/).\n",
"\n",
"Note ACI is great for quick and cost-effective dev/test deployment scenarios. For production workloads, please use [Azure Kubernentes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/) instead. Please follow in struction in [this notebook](11.production-deploy-to-aks.ipynb) to see how that can be done from Azure ML.\n",
" \n",
"** Note: ** The web service creation can take 6-7 minutes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"deploy service",
"aci"
]
},
"outputs": [],
"source": [
"from azureml.core.webservice import AciWebservice, Webservice\n",
"\n",
"aciconfig = AciWebservice.deploy_configuration(cpu_cores=1, \n",
" memory_gb=1, \n",
" tags={'sample name': 'AML 101'}, \n",
" description='This is a great example.')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note the below `WebService.deploy_from_model()` function takes a model object registered under the workspace. It then bakes the model file in the Docker image so it can be looked-up using the `Model.get_model_path()` function in `score.py`. \n",
"\n",
"If you have a local model file instead of a registered model object, you can also use the `WebService.deploy()` function which would register the model and then deploy."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"deploy service",
"aci"
]
},
"outputs": [],
"source": [
"from azureml.core.image import ContainerImage\n",
"image_config = ContainerImage.image_configuration(execution_script=\"score.py\", \n",
" runtime=\"python\", \n",
" conda_file=\"myenv.yml\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"deploy service",
"aci"
]
},
"outputs": [],
"source": [
"%%time\n",
"# this will take 5-10 minutes to finish\n",
"# you can also use \"az container list\" command to find the ACI being deployed\n",
"service = Webservice.deploy_from_model(name='my-aci-svc',\n",
" deployment_config=aciconfig,\n",
" models=[model],\n",
" image_config=image_config,\n",
" workspace=ws)\n",
"\n",
"service.wait_for_deployment(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"## Test web service"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"deploy service",
"aci"
]
},
"outputs": [],
"source": [
"print('web service is hosted in ACI:', service.scoring_uri)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Use the `run` API to call the web service with one row of data to get a prediction."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"deploy service",
"aci"
]
},
"outputs": [],
"source": [
"import json\n",
"# score the first row from the test set.\n",
"test_samples = json.dumps({\"data\": X_test[0:1, :].tolist()})\n",
"service.run(input_data = test_samples)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Feed the entire test set and calculate the errors (residual values)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"deploy service",
"aci"
]
},
"outputs": [],
"source": [
"# score the entire test set.\n",
"test_samples = json.dumps({'data': X_test.tolist()})\n",
"\n",
"result = json.loads(service.run(input_data = test_samples))['result']\n",
"residual = result - y_test"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also send raw HTTP request to test the web service."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"deploy service",
"aci"
]
},
"outputs": [],
"source": [
"import requests\n",
"import json\n",
"\n",
"# 2 rows of input data, each with 10 made-up numerical features\n",
"input_data = \"{\\\"data\\\": [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10], [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]]}\"\n",
"\n",
"headers = {'Content-Type':'application/json'}\n",
"\n",
"# for AKS deployment you'd need to the service key in the header as well\n",
"# api_key = service.get_key()\n",
"# headers = {'Content-Type':'application/json', 'Authorization':('Bearer '+ api_key)} \n",
"\n",
"resp = requests.post(service.scoring_uri, input_data, headers = headers)\n",
"print(resp.text)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Residual graph\n",
"Plot a residual value graph to chart the errors on the entire test set. Observe the nice bell curve."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"f, (a0, a1) = plt.subplots(1, 2, gridspec_kw={'width_ratios':[3, 1], 'wspace':0, 'hspace': 0})\n",
"f.suptitle('Residual Values', fontsize = 18)\n",
"\n",
"f.set_figheight(6)\n",
"f.set_figwidth(14)\n",
"\n",
"a0.plot(residual, 'bo', alpha=0.4);\n",
"a0.plot([0,90], [0,0], 'r', lw=2)\n",
"a0.set_ylabel('residue values', fontsize=14)\n",
"a0.set_xlabel('test data set', fontsize=14)\n",
"\n",
"a1.hist(residual, orientation='horizontal', color='blue', bins=10, histtype='step');\n",
"a1.hist(residual, orientation='horizontal', color='blue', alpha=0.2, bins=10);\n",
"a1.set_yticklabels([])\n",
"\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Delete ACI to clean up"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Deleting ACI is super fast!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"deploy service",
"aci"
]
},
"outputs": [],
"source": [
"%%time\n",
"service.delete()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.4"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
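
The notebook above deploys the Ridge model to ACI and plots the residuals of the web service predictions. For readers without a deployed service, the sketch below (not part of the deleted notebook) reproduces the same residual quantity locally and summarizes it numerically; it only assumes scikit-learn and the diabetes dataset used throughout the notebook.

```python
# Minimal local sketch: recompute the residuals the notebook plots, without ACI.
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

reg = Ridge(alpha=0.03).fit(X_train, y_train)
residual = reg.predict(X_test) - y_test  # same quantity the notebook plots

print("RMSE:", np.sqrt(np.mean(residual ** 2)))
print("Mean residual:", residual.mean())
print("Std of residuals:", residual.std())
```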

View File

@@ -1,27 +0,0 @@
import pickle
import json
import numpy as np
from sklearn.externals import joblib
from sklearn.linear_model import Ridge
from azureml.core.model import Model
def init():
global model
# note here "best_model" is the name of the model registered under the workspace
# this call should return the path to the model.pkl file on the local disk.
model_path = Model.get_model_path(model_name='best_model')
# deserialize the model file back into a sklearn model
model = joblib.load(model_path)
# note you can pass in multiple rows for scoring
def run(raw_data):
try:
data = json.loads(raw_data)['data']
data = np.array(data)
result = model.predict(data)
return json.dumps({"result": result.tolist()})
except Exception as e:
result = str(e)
return json.dumps({"error": result})
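
The deleted `score.py` above relies on `Model.get_model_path()`, which only resolves inside an Azure ML context such as the Docker image built for the service. A hypothetical local smoke test of the same scoring logic might look like the sketch below; it bypasses the registry lookup and assumes a `model.pkl` pickle is present in the current directory (for example, downloaded earlier with `model.download(target_dir="./")`).

```python
# Sketch: local smoke test of the scoring logic (not how the deployed container loads the model).
import json
import numpy as np
from sklearn.externals import joblib  # on newer scikit-learn, use "import joblib" instead

# assumption: model.pkl exists locally, e.g. downloaded from the registered model
model = joblib.load("model.pkl")

def run(raw_data):
    try:
        data = np.array(json.loads(raw_data)["data"])
        return json.dumps({"result": model.predict(data).tolist()})
    except Exception as e:
        return json.dumps({"error": str(e)})

# one made-up row with 10 numeric features, mirroring the web service test above
print(run(json.dumps({"data": [[0.02, -0.04, 0.06, 0.01, -0.03, 0.02, -0.01, 0.0, 0.04, -0.02]]})))
```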

View File

@@ -1,465 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 02. Train locally\n",
"* Create or load workspace.\n",
"* Create scripts locally.\n",
"* Create `train.py` in a folder, along with a `my.lib` file.\n",
"* Configure & execute a local run in a user-managed Python environment.\n",
"* Configure & execute a local run in a system-managed Python environment.\n",
"* Configure & execute a local run in a Docker environment.\n",
"* Query run metrics to find the best model\n",
"* Register model for operationalization."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites\n",
"Make sure you go through the [00. Installation and Configuration](00.configuration.ipynb) Notebook first if you haven't."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Check core SDK version number\n",
"import azureml.core\n",
"\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize Workspace\n",
"\n",
"Initialize a workspace object from persisted configuration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.workspace import Workspace\n",
"\n",
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create An Experiment\n",
"**Experiment** is a logical container in an Azure ML Workspace. It hosts run records which can include run metrics and output artifacts from your experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Experiment\n",
"experiment_name = 'train-on-local'\n",
"exp = Experiment(workspace=ws, name=experiment_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## View `train.py`\n",
"\n",
"`train.py` is already created for you."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"with open('./train.py', 'r') as f:\n",
" print(f.read())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note `train.py` also references a `mylib.py` file."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"with open('./mylib.py', 'r') as f:\n",
" print(f.read())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure & Run\n",
"### User-managed environment\n",
"Below, we use a user-managed run, which means you are responsible to ensure all the necessary packages are available in the Python environment you choose to run the script."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.runconfig import RunConfiguration\n",
"\n",
"# Editing a run configuration property on-fly.\n",
"run_config_user_managed = RunConfiguration()\n",
"\n",
"run_config_user_managed.environment.python.user_managed_dependencies = True\n",
"\n",
"# You can choose a specific Python environment by pointing to a Python path \n",
"#run_config.environment.python.interpreter_path = '/home/johndoe/miniconda3/envs/sdk2/bin/python'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Submit script to run in the user-managed environment\n",
"Note whole script folder is submitted for execution, including the `mylib.py` file."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import ScriptRunConfig\n",
"\n",
"src = ScriptRunConfig(source_directory='./', script='train.py', run_config=run_config_user_managed)\n",
"run = exp.submit(src)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Get run history details"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Block to wait till run finishes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### System-managed environment\n",
"You can also ask the system to build a new conda environment and execute your scripts in it. The environment is built once and will be reused in subsequent executions as long as the conda dependencies remain unchanged. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"\n",
"run_config_system_managed = RunConfiguration()\n",
"\n",
"run_config_system_managed.environment.python.user_managed_dependencies = False\n",
"run_config_system_managed.prepare_environment = True\n",
"\n",
"# Specify conda dependencies with scikit-learn\n",
"cd = CondaDependencies.create(conda_packages=['scikit-learn'])\n",
"run_config_system_managed.environment.python.conda_dependencies = cd"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Submit script to run in the system-managed environment\n",
"A new conda environment is built based on the conda dependencies object. If you are running this for the first time, this might take up to 5 mninutes. But this conda environment is reused so long as you don't change the conda dependencies."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"src = ScriptRunConfig(source_directory=\"./\", script='train.py', run_config=run_config_system_managed)\n",
"run = exp.submit(src)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Get run history details"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Block and wait till run finishes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run.wait_for_completion(show_output = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Docker-based execution\n",
"**IMPORTANT**: You must have Docker engine installed locally in order to use this execution mode. If your kernel is already running in a Docker container, such as **Azure Notebooks**, this mode will **NOT** work.\n",
"\n",
"You can also ask the system to pull down a Docker image and execute your scripts in it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"\n",
"run_config_docker = RunConfiguration()\n",
"\n",
"run_config_docker.environment.python.user_managed_dependencies = False\n",
"run_config_docker.prepare_environment = True\n",
"run_config_docker.environment.docker.enabled = True\n",
"run_config_docker.environment.docker.base_image = azureml.core.runconfig.DEFAULT_CPU_IMAGE\n",
"\n",
"# Specify conda dependencies with scikit-learn\n",
"cd = CondaDependencies.create(conda_packages=['scikit-learn'])\n",
"run_config_docker.environment.python.conda_dependencies = cd"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Submit script to run in the system-managed environment\n",
"A new conda environment is built based on the conda dependencies object. If you are running this for the first time, this might take up to 5 mninutes. But this conda environment is reused so long as you don't change the conda dependencies.\n",
"\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"src = ScriptRunConfig(source_directory=\"./\", script='train.py', run_config=run_config_docker)\n",
"run = exp.submit(src)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#Get run history details\n",
"run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Query run metrics"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"query history",
"get metrics"
]
},
"outputs": [],
"source": [
"# get all metris logged in the run\n",
"run.get_metrics()\n",
"metrics = run.get_metrics()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's find the model that has the lowest MSE value logged."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"best_alpha = metrics['alpha'][np.argmin(metrics['mse'])]\n",
"\n",
"print('When alpha is {1:0.2f}, we have min MSE {0:0.2f}.'.format(\n",
" min(metrics['mse']), \n",
" best_alpha\n",
"))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also list all the files that are associated with this run record"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run.get_file_names()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We know the model `ridge_0.40.pkl` is the best performing model from the eariler queries. So let's register it with the workspace."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# supply a model name, and the full path to the serialized model file.\n",
"model = run.register_model(model_name='best_ridge_model', model_path='./outputs/ridge_0.40.pkl')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(model.name, model.version, model.url)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now you can deploy this model following the example in the 01 notebook."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
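
The train-locally notebook above submits a `train.py` (and `mylib.py`) that are referenced but not shown in this diff. Purely as an illustration of the pattern the notebook describes (logging `alpha` and `mse` per run and saving pickled models into `./outputs` so they appear in `run.get_file_names()`), here is a hypothetical sketch of such a script. It assumes an azureml-core version that provides `Run.get_context()`; the actual file in the repository may differ.

```python
# Hypothetical sketch of a train.py used with ScriptRunConfig; not the repository's actual file.
import os
import numpy as np
from azureml.core import Run
from sklearn.datasets import load_diabetes
from sklearn.externals import joblib  # on newer scikit-learn, use "import joblib" instead
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

run = Run.get_context()  # returns an offline no-op run when executed outside Azure ML

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

os.makedirs("outputs", exist_ok=True)

for alpha in np.arange(0.0, 1.0, 0.05):
    reg = Ridge(alpha=alpha).fit(X_train, y_train)
    mse = mean_squared_error(y_test, reg.predict(X_test))

    # log metrics into the run record
    run.log("alpha", alpha)
    run.log("mse", mse)

    # e.g. outputs/ridge_0.40.pkl, matching the file name referenced in the notebook
    joblib.dump(reg, os.path.join("outputs", "ridge_{0:.2f}.pkl".format(alpha)))
```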

View File

@@ -1,284 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 03. Train on Azure Container Instance\n",
"\n",
"* Create Workspace\n",
"* Create `train.py` in the project folder.\n",
"* Configure an ACI (Azure Container Instance) run\n",
"* Execute in ACI"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites\n",
"Make sure you go through the [00. Installation and Configuration](00.configuration.ipynb) Notebook first if you haven't."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Check core SDK version number\n",
"import azureml.core\n",
"\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize Workspace\n",
"\n",
"Initialize a workspace object from persisted configuration"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"create workspace"
]
},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create An Experiment\n",
"\n",
"**Experiment** is a logical container in an Azure ML Workspace. It hosts run records which can include run metrics and output artifacts from your experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Experiment\n",
"experiment_name = 'train-on-aci'\n",
"experiment = Experiment(workspace = ws, name = experiment_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Remote execution on ACI\n",
"\n",
"The training script `train.py` is already created for you. Let's have a look."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"with open('./train.py', 'r') as f:\n",
" print(f.read())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure for using ACI\n",
"Linux-based ACI is available in `West US`, `East US`, `West Europe`, `North Europe`, `West US 2`, `Southeast Asia`, `Australia East`, `East US 2`, and `Central US` regions. See details [here](https://docs.microsoft.com/en-us/azure/container-instances/container-instances-quotas#region-availability)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"configure run"
]
},
"outputs": [],
"source": [
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"\n",
"# create a new runconfig object\n",
"run_config = RunConfiguration()\n",
"\n",
"# signal that you want to use ACI to execute script.\n",
"run_config.target = \"containerinstance\"\n",
"\n",
"# ACI container group is only supported in certain regions, which can be different than the region the Workspace is in.\n",
"run_config.container_instance.region = 'eastus2'\n",
"\n",
"# set the ACI CPU and Memory \n",
"run_config.container_instance.cpu_cores = 1\n",
"run_config.container_instance.memory_gb = 2\n",
"\n",
"# enable Docker \n",
"run_config.environment.docker.enabled = True\n",
"\n",
"# set Docker base image to the default CPU-based image\n",
"run_config.environment.docker.base_image = azureml.core.runconfig.DEFAULT_CPU_IMAGE\n",
"\n",
"# use conda_dependencies.yml to create a conda environment in the Docker image for execution\n",
"run_config.environment.python.user_managed_dependencies = False\n",
"\n",
"# auto-prepare the Docker image when used for execution (if it is not already prepared)\n",
"run_config.auto_prepare_environment = True\n",
"\n",
"# specify CondaDependencies obj\n",
"run_config.environment.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn'])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Submit the Experiment\n",
"Finally, run the training job on the ACI"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"remote run",
"aci"
]
},
"outputs": [],
"source": [
"%%time \n",
"from azureml.core.script_run_config import ScriptRunConfig\n",
"\n",
"script_run_config = ScriptRunConfig(source_directory='./',\n",
" script='train.py',\n",
" run_config=run_config)\n",
"\n",
"run = experiment.submit(script_run_config)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"query history"
]
},
"outputs": [],
"source": [
"# Show run details\n",
"run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"remote run",
"aci"
]
},
"outputs": [],
"source": [
"%%time\n",
"# Shows output of the run on stdout.\n",
"run.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"get metrics"
]
},
"outputs": [],
"source": [
"# get all metris logged in the run\n",
"run.get_metrics()\n",
"metrics = run.get_metrics()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"print('When alpha is {1:0.2f}, we have min MSE {0:0.2f}.'.format(\n",
" min(metrics['mse']), \n",
" metrics['alpha'][np.argmin(metrics['mse'])]\n",
"))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# show all the files stored within the run record\n",
"run.get_file_names()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now you can take a model produced here, register it and then deploy as a web service."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
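
The ACI training notebook above blocks on `run.wait_for_completion(show_output=True)`. If you prefer to keep the kernel responsive, a simple polling loop over `run.get_status()` is an alternative; the sketch below is an assumption-based illustration (it presumes the `run` object from the submission cell is still in scope, and that the terminal state names are the commonly reported 'Completed', 'Failed', and 'Canceled').

```python
# Sketch: poll the submitted run instead of blocking on wait_for_completion().
import time

terminal_states = {"Completed", "Failed", "Canceled"}

status = run.get_status()
while status not in terminal_states:
    print("Run status:", status)
    time.sleep(30)            # poll every 30 seconds
    status = run.get_status()

print("Final status:", status)
```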

View File

@@ -1,321 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 04. Train in a remote VM (MLC managed DSVM)\n",
"* Create Workspace\n",
"* Create Project\n",
"* Create `train.py` file\n",
"* Create DSVM as Machine Learning Compute (MLC) resource\n",
"* Configure & execute a run in a conda environment in the default miniconda Docker container on DSVM"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites\n",
"Make sure you go through the [00. Installation and Configuration](00.configuration.ipynb) Notebook first if you haven't."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Check core SDK version number\n",
"import azureml.core\n",
"\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize Workspace\n",
"\n",
"Initialize a workspace object from persisted configuration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Experiment\n",
"\n",
"**Experiment** is a logical container in an Azure ML Workspace. It hosts run records which can include run metrics and output artifacts from your experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"experiment_name = 'train-on-remote-vm'\n",
"\n",
"from azureml.core import Experiment\n",
"\n",
"exp = Experiment(workspace = ws, name = experiment_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## View `train.py`\n",
"\n",
"For convenience, we created a training script for you. It is printed below as a text, but you can also run `%pfile ./train.py` in a cell to show the file."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"with open('./train.py', 'r') as training_script:\n",
" print(training_script.read())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Linux DSVM as a compute target\n",
"\n",
"**Note**: If creation fails with a message about Marketplace purchase eligibilty, go to portal.azure.com, start creating DSVM there, and select \"Want to create programmatically\" to enable programmatic creation. Once you've enabled it, you can exit without actually creating VM.\n",
" \n",
"**Note**: By default SSH runs on port 22 and you don't need to specify it. But if for security reasons you switch to a different port (such as 5022), you can append the port number to the address like the example below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import DsvmCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"compute_target_name = 'mydsvm'\n",
"\n",
"try:\n",
" dsvm_compute = DsvmCompute(workspace = ws, name = compute_target_name)\n",
" print('found existing:', dsvm_compute.name)\n",
"except ComputeTargetException:\n",
" print('creating new.')\n",
" dsvm_config = DsvmCompute.provisioning_configuration(vm_size = \"Standard_D2_v2\")\n",
" dsvm_compute = DsvmCompute.create(ws, name = compute_target_name, provisioning_configuration = dsvm_config)\n",
" dsvm_compute.wait_for_completion(show_output = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Attach an existing Linux DSVM as a compute target\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"'''\n",
" from azureml.core.compute import RemoteCompute \n",
" # if you want to connect using SSH key instead of username/password you can provide parameters private_key_file and private_key_passphrase \n",
" dsvm_compute = RemoteCompute.attach(ws,name=\"attach-from-sdk6\",username=<username>,address=<ipaddress>,ssh_port=22,password=<password>)\n",
"'''"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure & Run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Configure a Docker run with new conda environment on the VM\n",
"You can execute in a Docker container in the VM. If you choose this route, you don't need to install anything on the VM yourself. Azure ML execution service will take care of it for you."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"\n",
"\n",
"# Load the \"cpu-dsvm.runconfig\" file (created by the above attach operation) in memory\n",
"run_config = RunConfiguration(framework = \"python\")\n",
"\n",
"# Set compute target to the Linux DSVM\n",
"run_config.target = compute_target_name\n",
"\n",
"# Use Docker in the remote VM\n",
"run_config.environment.docker.enabled = True\n",
"\n",
"# Use CPU base image from DockerHub\n",
"run_config.environment.docker.base_image = azureml.core.runconfig.DEFAULT_CPU_IMAGE\n",
"print('Base Docker image is:', run_config.environment.docker.base_image)\n",
"\n",
"# Ask system to provision a new one based on the conda_dependencies.yml file\n",
"run_config.environment.python.user_managed_dependencies = False\n",
"\n",
"# Prepare the Docker and conda environment automatically when executingfor the first time.\n",
"run_config.prepare_environment = True\n",
"\n",
"# specify CondaDependencies obj\n",
"run_config.environment.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn'])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Submit the Experiment\n",
"Submit script to run in the Docker image in the remote VM. If you run this for the first time, the system will download the base image, layer in packages specified in the `conda_dependencies.yml` file on top of the base image, create a container and then execute the script in the container."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Run\n",
"from azureml.core import ScriptRunConfig\n",
"\n",
"src = ScriptRunConfig(source_directory = '.', script = 'train.py', run_config = run_config)\n",
"run = exp.submit(src)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### View run history details"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run.wait_for_completion(show_output = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Find the best run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# get all metris logged in the run\n",
"run.get_metrics()\n",
"metrics = run.get_metrics()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"print('When alpha is {1:0.2f}, we have min MSE {0:0.2f}.'.format(\n",
" min(metrics['mse']), \n",
" metrics['alpha'][np.argmin(metrics['mse'])]\n",
"))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Clean up compute resource"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"dsvm_compute.delete()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,257 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 05. Train in Spark\n",
"* Create Workspace\n",
"* Create Experiment\n",
"* Copy relevant files to the script folder\n",
"* Configure and Run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites\n",
"Make sure you go through the [00. Installation and Configuration](00.configuration.ipynb) Notebook first if you haven't."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Check core SDK version number\n",
"import azureml.core\n",
"\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize Workspace\n",
"\n",
"Initialize a workspace object from persisted configuration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Experiment\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"experiment_name = 'train-on-spark'\n",
"\n",
"from azureml.core import Experiment\n",
"\n",
"exp = Experiment(workspace = ws, name = experiment_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## View `train-spark.py`\n",
"\n",
"For convenience, we created a training script for you. It is printed below as a text, but you can also run `%pfile ./train-spark.py` in a cell to show the file."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"with open('train-spark.py', 'r') as training_script:\n",
" print(training_script.read())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure & Run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Attach an HDI cluster\n",
"To use HDI commpute target:\n",
" 1. Create an Spark for HDI cluster in Azure. Here is some [quick instructions](https://docs.microsoft.com/en-us/azure/machine-learning/desktop-workbench/how-to-create-dsvm-hdi). Make sure you use the Ubuntu flavor, NOT CentOS.\n",
" 2. Enter the IP address, username and password below"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import HDInsightCompute\n",
"\n",
"try:\n",
" # if you want to connect using SSH key instead of username/password you can provide parameters private_key_file and private_key_passphrase\n",
" hdi_compute_new = HDInsightCompute.attach(ws, \n",
" name=\"hdi-attach\", \n",
" address=\"hdi-ignite-demo-ssh.azurehdinsight.net\", \n",
" ssh_port=22, \n",
" username='<username>', \n",
" password='<password>')\n",
"\n",
"except UserErrorException as e:\n",
" print(\"Caught = {}\".format(e.message))\n",
" print(\"Compute config already attached.\")\n",
" \n",
" \n",
"hdi_compute_new.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Configure HDI run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"\n",
"\n",
"# Load the \"cpu-dsvm.runconfig\" file (created by the above attach operation) in memory\n",
"run_config = RunConfiguration(framework = \"python\")\n",
"\n",
"# Set compute target to the Linux DSVM\n",
"run_config.target = hdi_compute.name\n",
"\n",
"# Use Docker in the remote VM\n",
"# run_config.environment.docker.enabled = True\n",
"\n",
"# Use CPU base image from DockerHub\n",
"# run_config.environment.docker.base_image = azureml.core.runconfig.DEFAULT_CPU_IMAGE\n",
"# print('Base Docker image is:', run_config.environment.docker.base_image)\n",
"\n",
"# Ask system to provision a new one based on the conda_dependencies.yml file\n",
"run_config.environment.python.user_managed_dependencies = False\n",
"\n",
"# Prepare the Docker and conda environment automatically when executingfor the first time.\n",
"# run_config.prepare_environment = True\n",
"\n",
"# specify CondaDependencies obj\n",
"# run_config.environment.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn'])\n",
"# load the runconfig object from the \"myhdi.runconfig\" file generated by the attach operaton above."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Submit the script to HDI"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"script_run_config = ScriptRunConfig(source_directory = '.',\n",
" script= 'train-spark.py',\n",
" run_config = run_config)\n",
"run = experiment.submit(script_run_config)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# get the URL of the run history web page\n",
"run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run.wait_for_completion(show_output = True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# get all metris logged in the run\n",
"metrics = run.get_metrics()\n",
"print(metrics)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,420 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 10. Register Model, Create Image and Deploy Service\n",
"\n",
"This example shows how to deploy a web service in step-by-step fashion:\n",
"\n",
" 1. Register model\n",
" 2. Query versions of models and select one to deploy\n",
" 3. Create Docker image\n",
" 4. Query versions of images\n",
" 5. Deploy the image as web service\n",
" \n",
"**IMPORTANT**:\n",
" * This notebook requires you to first complete \"01.SDK-101-Train-and-Deploy-to-ACI.ipynb\" Notebook\n",
" \n",
"The 101 Notebook taught you how to deploy a web service directly from model in one step. This Notebook shows a more advanced approach that gives you more control over model versions and Docker image versions. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites\n",
"Make sure you go through the [00. Installation and Configuration](00.configuration.ipynb) Notebook first if you haven't."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Check core SDK version number\n",
"import azureml.core\n",
"\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize Workspace\n",
"\n",
"Initialize a workspace object from persisted configuration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"create workspace"
]
},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Register Model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can add tags and descriptions to your models. Note you need to have a `sklearn_linreg_model.pkl` file in the current directory. This file is generated by the 01 notebook. The below call registers that file as a model with the same name `sklearn_linreg_model.pkl` in the workspace.\n",
"\n",
"Using tags, you can track useful information such as the name and version of the machine learning library used to train the model. Note that tags must be alphanumeric."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"register model from file"
]
},
"outputs": [],
"source": [
"from azureml.core.model import Model\n",
"import sklearn\n",
"\n",
"library_version = \"sklearn\"+sklearn.__version__.replace(\".\",\"x\")\n",
"\n",
"model = Model.register(model_path = \"sklearn_regression_model.pkl\",\n",
" model_name = \"sklearn_regression_model.pkl\",\n",
" tags = {'area': \"diabetes\", 'type': \"regression\", 'version': library_version},\n",
" description = \"Ridge regression model to predict diabetes\",\n",
" workspace = ws)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can explore the registered models within your workspace and query by tag. Models are versioned. If you call the register_model command many times with same model name, you will get multiple versions of the model with increasing version numbers."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"register model from file"
]
},
"outputs": [],
"source": [
"regression_models = ws.models(tags=['area'])\n",
"for name, m in regression_models.items():\n",
" print(\"Name:\", name,\"\\tVersion:\", m.version, \"\\tDescription:\", m.description, m.tags)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can pick a specific model to deploy"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(model.name, model.description, model.version, sep = '\\t')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create Docker Image"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Show `score.py`. Note that the `sklearn_regression_model.pkl` in the `get_model_path` call is referring to a model named `sklearn_linreg_model.pkl` registered under the workspace. It is NOT referenceing the local file."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile score.py\n",
"import pickle\n",
"import json\n",
"import numpy\n",
"from sklearn.externals import joblib\n",
"from sklearn.linear_model import Ridge\n",
"from azureml.core.model import Model\n",
"\n",
"def init():\n",
" global model\n",
" # note here \"sklearn_regression_model.pkl\" is the name of the model registered under\n",
" # this is a different behavior than before when the code is run locally, even though the code is the same.\n",
" model_path = Model.get_model_path('sklearn_regression_model.pkl')\n",
" # deserialize the model file back into a sklearn model\n",
" model = joblib.load(model_path)\n",
"\n",
"# note you can pass in multiple rows for scoring\n",
"def run(raw_data):\n",
" try:\n",
" data = json.loads(raw_data)['data']\n",
" data = numpy.array(data)\n",
" result = model.predict(data)\n",
" except Exception as e:\n",
" result = str(e)\n",
" return json.dumps({\"result\": result.tolist()})"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.conda_dependencies import CondaDependencies \n",
"\n",
"myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn'])\n",
"\n",
"with open(\"myenv.yml\",\"w\") as f:\n",
" f.write(myenv.serialize_to_string())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that following command can take few minutes. \n",
"\n",
"You can add tags and descriptions to images. Also, an image can contain multiple models."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"create image"
]
},
"outputs": [],
"source": [
"from azureml.core.image import Image, ContainerImage\n",
"\n",
"image_config = ContainerImage.image_configuration(runtime= \"python\",\n",
" execution_script=\"score.py\",\n",
" conda_file=\"myenv.yml\",\n",
" tags = {'area': \"diabetes\", 'type': \"regression\"},\n",
" description = \"Image with ridge regression model\")\n",
"\n",
"image = Image.create(name = \"myimage1\",\n",
" # this is the model object \n",
" models = [model],\n",
" image_config = image_config, \n",
" workspace = ws)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"create image"
]
},
"outputs": [],
"source": [
"image.wait_for_creation(show_output = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"List images by tag and find out the detailed build log for debugging."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"create image"
]
},
"outputs": [],
"source": [
"for i in Image.list(workspace = ws,tags = [\"area\"]):\n",
" print('{}(v.{} [{}]) stored at {} with build log {}'.format(i.name, i.version, i.creation_state, i.image_location, i.image_build_log_uri))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Deploy image as web service on Azure Container Instance\n",
"\n",
"Note that the service creation can take few minutes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"deploy service",
"aci"
]
},
"outputs": [],
"source": [
"from azureml.core.webservice import AciWebservice\n",
"\n",
"aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1, \n",
" memory_gb = 1, \n",
" tags = {'area': \"diabetes\", 'type': \"regression\"}, \n",
" description = 'Predict diabetes using regression model')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"deploy service",
"aci"
]
},
"outputs": [],
"source": [
"from azureml.core.webservice import Webservice\n",
"\n",
"aci_service_name = 'my-aci-service-2'\n",
"print(aci_service_name)\n",
"aci_service = Webservice.deploy_from_image(deployment_config = aciconfig,\n",
" image = image,\n",
" name = aci_service_name,\n",
" workspace = ws)\n",
"aci_service.wait_for_deployment(True)\n",
"print(aci_service.state)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Test web service"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Call the web service with some dummy input data to get a prediction."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"deploy service",
"aci"
]
},
"outputs": [],
"source": [
"import json\n",
"\n",
"test_sample = json.dumps({'data': [\n",
" [1,2,3,4,5,6,7,8,9,10], \n",
" [10,9,8,7,6,5,4,3,2,1]\n",
"]})\n",
"test_sample = bytes(test_sample,encoding = 'utf8')\n",
"\n",
"prediction = aci_service.run(input_data = test_sample)\n",
"print(prediction)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Delete ACI to clean up"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"deploy service",
"aci"
]
},
"outputs": [],
"source": [
"aci_service.delete()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
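
For reference, the ACI service tested above can also be called over plain HTTP instead of through `Webservice.run()`. A minimal sketch, assuming `aci_service` is the deployed service from the cells above; `scoring_uri` is the service's public scoring endpoint.

```python
# Minimal sketch: POST the same JSON payload directly to the scoring endpoint.
import json
import requests

test_sample = json.dumps({'data': [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]]})
headers = {'Content-Type': 'application/json'}

response = requests.post(aci_service.scoring_uri, data=test_sample, headers=headers)
print(response.status_code)
print(response.json())
```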

View File

@@ -1,447 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Enabling Data Collection for Models in Production\n",
"With this notebook, you can learn how to collect input model data from your Azure Machine Learning service in an Azure Blob storage. Once enabled, this data collected gives you the opportunity:\n",
"\n",
"* Monitor data drifts as production data enters your model\n",
"* Make better decisions on when to retrain or optimize your model\n",
"* Retrain your model with the data collected\n",
"\n",
"## What data is collected?\n",
"* Model input data (voice, images, and video are not supported) from services deployed in Azure Kubernetes Cluster (AKS)\n",
"* Model predictions using production input data.\n",
"\n",
"**Note:** pre-aggregation or pre-calculations on this data are done by user and not included in this version of the product.\n",
"\n",
"## What is different compared to standard production deployment process?\n",
"1. Update scoring file.\n",
"2. Update yml file with new dependency.\n",
"3. Update aks configuration.\n",
"4. Build new image and deploy it. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1. Import your dependencies"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace, Run\n",
"from azureml.core.compute import AksCompute, ComputeTarget\n",
"from azureml.core.webservice import Webservice, AksWebservice\n",
"from azureml.core.image import Image\n",
"from azureml.core.model import Model\n",
"\n",
"import azureml.core\n",
"print(azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2. Set up your configuration and create a workspace\n",
"Follow Notebook 00 instructions to do this.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 3. Register Model\n",
"Register an existing trained model, add descirption and tags."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#Register the model\n",
"from azureml.core.model import Model\n",
"model = Model.register(model_path = \"sklearn_regression_model.pkl\", # this points to a local file\n",
" model_name = \"sklearn_regression_model.pkl\", # this is the name the model is registered as\n",
" tags = {'area': \"diabetes\", 'type': \"regression\"},\n",
" description = \"Ridge regression model to predict diabetes\",\n",
" workspace = ws)\n",
"\n",
"print(model.name, model.description, model.version)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 4. *Update your scoring file with Data Collection*\n",
"The file below, compared to the file used in notebook 11, has the following changes:\n",
"### a. Import the module\n",
"```python \n",
"from azureml.monitoring import ModelDataCollector```\n",
"### b. In your init function add:\n",
"```python \n",
"global inputs_dc, prediction_d\n",
"inputs_dc = ModelDataCollector(\"best_model\", identifier=\"inputs\", feature_names=[\"feat1\", \"feat2\", \"feat3\". \"feat4\", \"feat5\", \"Feat6\"])\n",
"prediction_dc = ModelDataCollector(\"best_model\", identifier=\"predictions\", feature_names=[\"prediction1\", \"prediction2\"])```\n",
" \n",
"* Identifier: Identifier is later used for building the folder structure in your Blob, it can be used to divide \"raw\" data versus \"processed\".\n",
"* CorrelationId: is an optional parameter, you do not need to set it up if your model doesn't require it. Having a correlationId in place does help you for easier mapping with other data. (Examples include: LoanNumber, CustomerId, etc.)\n",
"* Feature Names: These need to be set up in the order of your features in order for them to have column names when the .csv is created.\n",
"\n",
"### c. In your run function add:\n",
"```python\n",
"inputs_dc.collect(data)\n",
"prediction_dc.collect(result)```"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile score.py\n",
"import pickle\n",
"import json\n",
"import numpy \n",
"from sklearn.externals import joblib\n",
"from sklearn.linear_model import Ridge\n",
"from azureml.core.model import Model\n",
"from azureml.monitoring import ModelDataCollector\n",
"import time\n",
"\n",
"def init():\n",
" global model\n",
" print (\"model initialized\" + time.strftime(\"%H:%M:%S\"))\n",
" # note here \"sklearn_regression_model.pkl\" is the name of the model registered under the workspace\n",
" # this call should return the path to the model.pkl file on the local disk.\n",
" model_path = Model.get_model_path(model_name = 'sklearn_regression_model.pkl')\n",
" # deserialize the model file back into a sklearn model\n",
" model = joblib.load(model_path)\n",
" global inputs_dc, prediction_dc\n",
" # this setup will help us save our inputs under the \"inputs\" path in our Azure Blob\n",
" inputs_dc = ModelDataCollector(model_name=\"sklearn_regression_model\", identifier=\"inputs\", feature_names=[\"feat1\", \"feat2\"]) \n",
" # this setup will help us save our ipredictions under the \"predictions\" path in our Azure Blob\n",
" prediction_dc = ModelDataCollector(\"sklearn_regression_model\", identifier=\"predictions\", feature_names=[\"prediction1\", \"prediction2\"]) \n",
" \n",
"# note you can pass in multiple rows for scoring\n",
"def run(raw_data):\n",
" global inputs_dc, prediction_dc\n",
" try:\n",
" data = json.loads(raw_data)['data']\n",
" data = numpy.array(data)\n",
" result = model.predict(data)\n",
" print (\"saving input data\" + time.strftime(\"%H:%M:%S\"))\n",
" inputs_dc.collect(data) #this call is saving our input data into our blob\n",
" prediction_dc.collect(result)#this call is saving our prediction data into our blob\n",
" print (\"saving prediction data\" + time.strftime(\"%H:%M:%S\"))\n",
" return json.dumps({\"result\": result.tolist()})\n",
" except Exception as e:\n",
" result = str(e)\n",
" print (result + time.strftime(\"%H:%M:%S\"))\n",
" return json.dumps({\"error\": result})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 5. *Update your myenv.yml file with the required module*"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.conda_dependencies import CondaDependencies \n",
"\n",
"myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn'])\n",
"myenv.add_pip_package(\"azureml-monitoring\")\n",
"\n",
"with open(\"myenv.yml\",\"w\") as f:\n",
" f.write(myenv.serialize_to_string())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 6. Create your new Image"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.image import ContainerImage\n",
"\n",
"image_config = ContainerImage.image_configuration(execution_script = \"score.py\",\n",
" runtime = \"python\",\n",
" conda_file = \"myenv.yml\",\n",
" description = \"Image with ridge regression model\",\n",
" tags = {'area': \"diabetes\", 'type': \"regression\"}\n",
" )\n",
"\n",
"image = ContainerImage.create(name = \"myimage1\",\n",
" # this is the model object\n",
" models = [model],\n",
" image_config = image_config,\n",
" workspace = ws)\n",
"\n",
"image.wait_for_creation(show_output = True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(model.name, model.description, model.version)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 7. Deploy to AKS service"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create AKS compute if you haven't done so (Notebook 11)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Use the default configuration (can also provide parameters to customize)\n",
"prov_config = AksCompute.provisioning_configuration()\n",
"\n",
"aks_name = 'my-aks-test1' \n",
"# Create the cluster\n",
"aks_target = ComputeTarget.create(workspace = ws, \n",
" name = aks_name, \n",
" provisioning_configuration = prov_config)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"aks_target.wait_for_completion(show_output = True)\n",
"print(aks_target.provisioning_state)\n",
"print(aks_target.provisioning_errors)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you already have a cluster you can attach the service to it:"
]
},
{
"cell_type": "markdown",
"metadata": {
"scrolled": true
},
"source": [
"```python \n",
" %%time\n",
" resource_id = '/subscriptions/<subscriptionid>/resourcegroups/<resourcegroupname>/providers/Microsoft.ContainerService/managedClusters/<aksservername>'\n",
" create_name= 'myaks4'\n",
" aks_target = AksCompute.attach(workspace = ws, \n",
" name = create_name, \n",
" #esource_id=resource_id)\n",
" ## Wait for the operation to complete\n",
" aks_target.wait_for_provisioning(True)```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### a. *Activate Data Collection and App Insights through updating AKS Webservice configuration*\n",
"In order to enable Data Collection and App Insights in your service you will need to update your AKS configuration file:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#Set the web service configuration\n",
"aks_config = AksWebservice.deploy_configuration(collect_model_data=True, enable_app_insights=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### b. Deploy your service"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"aks_service_name ='aks-w-dc2'\n",
"\n",
"aks_service = Webservice.deploy_from_image(workspace = ws, \n",
" name = aks_service_name,\n",
" image = image,\n",
" deployment_config = aks_config,\n",
" deployment_target = aks_target\n",
" )\n",
"aks_service.wait_for_deployment(show_output = True)\n",
"print(aks_service.state)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 8. Test your service and send some data\n",
"**Note**: It will take around 15 mins for your data to appear in your blob.\n",
"The data will appear in your Azure Blob following this format:\n",
"\n",
"/modeldata/subscriptionid/resourcegroupname/workspacename/webservicename/modelname/modelversion/identifier/year/month/day/data.csv "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"import json\n",
"\n",
"test_sample = json.dumps({'data': [\n",
" [1,2,3,4,54,6,7,8,88,10], \n",
" [10,9,8,37,36,45,4,33,2,1]\n",
"]})\n",
"test_sample = bytes(test_sample,encoding = 'utf8')\n",
"\n",
"prediction = aks_service.run(input_data = test_sample)\n",
"print(prediction)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 9. Validate you data and analyze it\n",
"You can look into your data following this path format in your Azure Blob (it takes up to 15 minutes for the data to appear):\n",
"\n",
"/modeldata/**subscriptionid>**/**resourcegroupname>**/**workspacename>**/**webservicename>**/**modelname>**/**modelversion>>**/**identifier>**/*year/month/day*/data.csv \n",
"\n",
"For doing further analysis you have multiple options:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### a. Create DataBricks cluter and connect it to your blob\n",
"https://docs.microsoft.com/en-us/azure/azure-databricks/quickstart-create-databricks-workspace-portal or in your databricks workspace you can look for the template \"Azure Blob Storage Import Example Notebook\".\n",
"\n",
"\n",
"Here is an example for setting up the file location to extract the relevant data:\n",
"\n",
"<code> file_location = \"wasbs://mycontainer@storageaccountname.blob.core.windows.net/unknown/unknown/unknown-bigdataset-unknown/my_iterate_parking_inputs/2018/&deg;/&deg;/data.csv\" \n",
"file_type = \"csv\"</code>\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### b. Connect Blob to Power Bi (Small Data only)\n",
"1. Download and Open PowerBi Desktop\n",
"2. Select “Get Data” and click on “Azure Blob Storage” >> Connect\n",
"3. Add your storage account and enter your storage key.\n",
"4. Select the container where your Data Collection is stored and click on Edit. \n",
"5. In the query editor, click under “Name” column and add your Storage account Model path into the filter. Note: if you want to only look into files from a specific year or month, just expand the filter path. For example, just look into March data: /modeldata/subscriptionid>/resourcegroupname>/workspacename>/webservicename>/modelname>/modelversion>/identifier>/year>/3\n",
"6. Click on the double arrow aside the “Content” column to combine the files. \n",
"7. Click OK and the data will preload.\n",
"8. You can now click Close and Apply and start building your custom reports on your Model Input data."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Disable Data Collection"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"aks_service.update(collect_model_data=False)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
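
Once collected files start landing in blob storage under the `/modeldata/...` layout described above, each `data.csv` can be inspected with pandas after downloading it (for example with Azure Storage Explorer). A minimal sketch; the local file name below is a placeholder for a downloaded inputs file.

```python
# Minimal sketch: load one downloaded collection file and inspect it.
# Column names follow the feature_names passed to ModelDataCollector in score.py.
import pandas as pd

inputs = pd.read_csv('data.csv')   # placeholder path for a downloaded inputs file
print(inputs.head())
print(inputs.describe())
```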

View File

@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.10"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.10" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for a production environment
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"

View File

@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.15"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.15" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for a production environment
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"

View File

@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.17"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.17" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for a production environment
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"

View File

@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.18"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.18" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for a production environment
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"

View File

@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.2"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.2" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for a production environment
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"

View File

@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.21"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.21" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for a production environment
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"

View File

@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.23"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.23" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for a production environment
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"

View File

@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.30"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.30" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for a production environment
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"

View File

@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.33"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.33" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for a production environment
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"

View File

@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.41"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.41" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for a production environment
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"

View File

@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.43"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.43" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for a production environment
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"

View File

@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.6"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.6" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for a production environment
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"

View File

@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.8"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.8" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for a production environment
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"

LICENSE
View File

@@ -1,21 +0,0 @@
MIT License
Copyright (c) Microsoft Corporation. All rights reserved.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@@ -1,45 +0,0 @@
# Sample notebooks for Azure Machine Learning service
To run the notebooks in this repository use one of these methods:
## Use Azure Notebooks - Jupyter based notebooks in the Azure cloud
1. [![Azure Notebooks](https://notebooks.azure.com/launch.png)](https://aka.ms/aml-clone-azure-notebooks)
[Import sample notebooks ](https://aka.ms/aml-clone-azure-notebooks) into Azure Notebooks if they are not already there.
1. Create a workspace and its configuration file (**config.json**) using [these instructions](https://aka.ms/aml-how-to-configure-environment).
1. Select `+New` in the Azure Notebook toolbar to add your **config.json** file to the imported folder.
![upload config file to notebook folder](images/additems.png)
1. Open the notebook.
**Make sure the Azure Notebook kernel is set to `Python 3.6`** when you open a notebook.
![set kernel to Python 3.6](images/python36.png)
## **Use your own notebook server**
1. Use [these instructions](https://aka.ms/aml-how-to-configure-environment) to:
* Create a workspace and its configuration file (**config.json**).
* Configure your notebook server.
1. Clone [this repository](https://aka.ms/aml-notebooks).
1. Add your **config.json** file to the cloned folder
1. You may need to install other packages for specific notebooks
1. Start your notebook server.
1. Open the notebook you want to run.
> Note: **Looking for automated machine learning samples?**
> For your convenience, you can use an installation script instead of the steps below for the automated ML notebooks. Go to the [automl folder README](automl/README.md) and follow the instructions. The script installs all packages needed for notebooks in that folder.
# Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit https://cla.microsoft.com.
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide
a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions
provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.

View File

@@ -1,265 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AutoML 00. configuration\n",
"\n",
"In this example you will create an Azure Machine Learning Workspace and initialize your notebook directory to easily use this workspace. Typically you will only need to run this once per notebook directory, and all other notebooks in this directory or any sub-directories will automatically use the settings you indicate here.\n",
"\n",
"\n",
"## Prerequisites:\n",
"\n",
"Before running this notebook, run the automl_setup script described in README.md.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Connect to your Azure Subscription\n",
"\n",
"In order to use an AML Workspace, first you need access to an Azure Subscription. You can [create your own](https://azure.microsoft.com/en-us/free/) or get your existing subscription information from the [Azure portal](https://portal.azure.com).\n",
"\n",
"First login to azure and follow prompts to authenticate. Then check that your subscription is correct"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!az login"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!az account show"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you have multiple subscriptions and need to change the active one, you can use a command\n",
"```shell\n",
"az account set -s <subscription-id>\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Register Machine Learning Services Resource Provider\n",
"\n",
"This step is required to use the Azure ML services backing the SDK."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# register the new RP\n",
"!az provider register -n Microsoft.MachineLearningServices\n",
"\n",
"# check the registration status\n",
"!az provider show -n Microsoft.MachineLearningServices"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Check core SDK version number for validate your installation and for debugging purposes"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"\n",
"print(\"SDK Version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize an Azure ML Workspace\n",
"### What is an Azure ML Workspace and why do I need one?\n",
"\n",
"An AML Workspace is an Azure resource that organaizes and coordinates the actions of many other Azure resources to assist in executing and sharing machine learning workflows. In particular, an AML Workspace coordinates storage, databases, and compute resources providing added functionality for machine learning experimentation, operationalization, and the monitoring of operationalized models.\n",
"\n",
"\n",
"### What do I need\n",
"\n",
"To create or access an Azure ML Workspace, you will need to import the AML library and specify following information:\n",
"* A name for your workspace. You can choose one.\n",
"* Your subscription id. Use *id* value from *az account show* output above. \n",
"* The resource group name. Resource group organizes Azure resources and provides default region for the resources in the group. You can either specify a new one, in which case it gets created for your Workspace, or use an existing one or create a new one from [Azure portal](https://portal.azure.com)\n",
"* Supported regions include `eastus2`, `eastus`,`westcentralus`, `southeastasia`, `westeurope`, `australiaeast`, `westus2`, `southcentralus`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"subscription_id = \"<subscription_id>\"\n",
"resource_group = \"myrg\"\n",
"workspace_name = \"myws\"\n",
"workspace_region = \"eastus2\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Creating a workspace\n",
"If you already have access to an AML Workspace you want to use, you can skip this cell. Otherwise, this cell will create an AML workspace for you in a subscription provided you have the correct permissions for the given `subscription_id`.\n",
"\n",
"This will fail when:\n",
"1. The workspace already exists\n",
"2. You do not have permission to create a workspace in the resource group\n",
"3. You are not a subscription owner or contributor and no Azure ML workspaces have ever been created in this subscription\n",
"\n",
"If workspace creation fails for any reason other than already existing, please work with your IT admin to provide you with the appropriate permissions or to provision the required resources.\n",
"\n",
"**Note** The workspace creation can take several minutes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# import the Workspace class and check the azureml SDK version\n",
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.create(name = workspace_name,\n",
" subscription_id = subscription_id,\n",
" resource_group = resource_group, \n",
" location = workspace_region)\n",
"ws.get_details()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configuring your local environment\n",
"You can validate that you have access to the specified workspace and write a configuration file to the default configuration location, `./aml_config/config.json`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace(workspace_name = workspace_name,\n",
" subscription_id = subscription_id,\n",
" resource_group = resource_group)\n",
"\n",
"# persist the subscription id, resource group name, and workspace name in aml_config/config.json.\n",
"ws.write_config()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can then load the workspace from this config file from any notebook in the current directory."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# load workspace configuratio from ./aml_config/config.json file.\n",
"my_workspace = Workspace.from_config()\n",
"my_workspace.get_details()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a folder to host all sample projects\n",
"Lastly, create a folder where all the sample projects will be hosted."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"sample_projects_folder = './sample_projects'\n",
"\n",
"if not os.path.isdir(sample_projects_folder):\n",
" os.mkdir(sample_projects_folder)\n",
" \n",
"print('Sample projects will be created in {}.'.format(sample_projects_folder))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Success!\n",
"Great, you are ready to move on to the rest of the sample notebooks."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
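
As a side note (not part of the original notebook), later SDK releases allow the create-then-load pattern above to be collapsed into a single idempotent call by passing `exist_ok=True`; a minimal sketch using the placeholder values from the notebook:

```python
# Minimal sketch: idempotent workspace setup; exist_ok=True reuses an existing workspace
# instead of failing when it has already been created (available in later SDK releases).
from azureml.core import Workspace

ws = Workspace.create(name='myws',
                      subscription_id='<subscription_id>',   # placeholder
                      resource_group='myrg',
                      location='eastus2',
                      exist_ok=True)
ws.write_config()   # persists aml_config/config.json for Workspace.from_config()
print(ws.name, ws.location)
```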

View File

@@ -1,399 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AutoML 01: Classification with local compute\n",
"\n",
"In this example we use the scikit learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) to showcase how you can use AutoML for a simple classification problem.\n",
"\n",
"Make sure you have executed the [00.configuration](00.configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you would see\n",
"1. Creating an Experiment in an existing Workspace\n",
"2. Instantiating AutoMLConfig\n",
"3. Training the Model using local compute\n",
"4. Exploring the results\n",
"5. Testing the fitted model\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Experiment\n",
"\n",
"As part of the setup you have already created a <b>Workspace</b>. For AutoML you would need to create an <b>Experiment</b>. An <b>Experiment</b> is a named object in a <b>Workspace</b>, which is used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import random\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# choose a name for experiment\n",
"experiment_name = 'automl-local-classification'\n",
"# project folder\n",
"project_folder = './sample_projects/automl-local-classification'\n",
"\n",
"experiment=Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace Name'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data = output, index = ['']).T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"set_diagnostics_collection(send_diagnostics=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load Digits Dataset"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn import datasets\n",
"\n",
"digits = datasets.load_digits()\n",
"\n",
"# Exclude the first 100 rows from training so that they can be used for test.\n",
"X_digits = digits.data[100:,:]\n",
"y_digits = digits.target[100:]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiate Auto ML Config\n",
"\n",
"Instantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**task**|classification or regression|\n",
"|**primary_metric**|This is the metric that you want to optimize.<br> Classification supports the following primary metrics <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>balanced_accuracy</i><br><i>average_precision_score_weighted</i><br><i>precision_score_weighted</i>|\n",
"|**max_time_sec**|Time limit in seconds for each iteration|\n",
"|**iterations**|Number of iterations. In each iteration Auto ML trains a specific pipeline with the data |\n",
"|**n_cross_validations**|Number of cross validation splits|\n",
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
"|**y**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers. |\n",
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder. |"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" primary_metric = 'AUC_weighted',\n",
" max_time_sec = 3600,\n",
" iterations = 50,\n",
" n_cross_validations = 3,\n",
" verbosity = logging.INFO,\n",
" X = X_digits, \n",
" y = y_digits,\n",
" path=project_folder)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Training the Model\n",
"\n",
"You can call the submit method on the experiment object and pass the run configuration. For Local runs the execution is synchronous. Depending on the data and number of iterations this can run for while.\n",
"You will see the currently running iterations printing to the console."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = experiment.submit(automl_config, show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, you can continue an interrupted local run by calling continue_experiment without the <b>iterations</b> parameter, or run more iterations to a completed run by specifying the <b>iterations</b> parameter:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = local_run.continue_experiment(X = X_digits, \n",
" y = y_digits, \n",
" show_output = True,\n",
" iterations = 5)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exploring the results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Widget for monitoring runs\n",
"\n",
"The widget will sit on \"loading\" until the first iteration completed, then you will see an auto-updating graph and table show up. It refreshed once per minute, so you should see the graph update as child runs complete.\n",
"\n",
"NOTE: The widget displays a link at the bottom. This links to a web-ui to explore the individual run details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.train.widgets import RunDetails\n",
"RunDetails(local_run).show() "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"#### Retrieve All Child Runs\n",
"You can also use sdk methods to fetch all the child runs and see individual metrics that we log. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"children = list(local_run.get_children())\n",
"metricslist = {}\n",
"for run in children:\n",
" properties = run.get_properties()\n",
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)} \n",
" metricslist[int(properties['iteration'])] = metrics\n",
"\n",
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
"rundata"
]
},
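{
"cell_type": "markdown",
"metadata": {},
"source": [
"The table above has one column per iteration and one row per logged metric. As a small illustrative sketch (not part of the original walkthrough), you can use it to look up which iteration scored best on a given metric, for example the primary metric configured for this run:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sketch: pick the iteration with the highest value of a logged metric.\n",
"# 'AUC_weighted' is the primary metric configured above; any metric name that appears\n",
"# in the rundata index works here.\n",
"best_iteration_for_metric = rundata.loc['AUC_weighted'].idxmax()\n",
"best_iteration_for_metric"
]
},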
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model\n",
"\n",
"Below we select the best pipeline from our iterations. The *get_output* method on automl_classifier returns the best run and the fitted model for the last *fit* invocation. There are overloads on *get_output* that allow you to retrieve the best run and fitted model for *any* logged metric or a particular *iteration*."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = local_run.get_output()\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Best Model based on any other metric\n",
"Give me the run and the model that has the smallest `log_loss`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"lookup_metric = \"log_loss\"\n",
"best_run, fitted_model = local_run.get_output(metric = lookup_metric)\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Model from a specific iteration\n",
"Give me the run and the model from the 3rd iteration:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"iteration = 3\n",
"third_run, third_model = local_run.get_output(iteration = iteration)\n",
"print(third_run)\n",
"print(third_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Testing the Fitted Model \n",
"\n",
"#### Load Test Data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"digits = datasets.load_digits()\n",
"X_digits = digits.data[:10, :]\n",
"y_digits = digits.target[:10]\n",
"images = digits.images[:10]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Testing our best pipeline\n",
"We will try to predict 2 digits and see how our model works."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#Randomly select digits and test\n",
"for index in np.random.choice(len(y_digits), 2):\n",
" print(index)\n",
" predicted = fitted_model.predict(X_digits[index:index + 1])[0]\n",
" label = y_digits[index]\n",
" title = \"Label value = %d Predicted value = %d \" % ( label,predicted)\n",
" fig = plt.figure(1, figsize=(3,3))\n",
" ax1 = fig.add_axes((0,0,.8,.8))\n",
" ax1.set_title(title)\n",
" plt.imshow(images[index], cmap=plt.cm.gray_r, interpolation='nearest')\n",
" plt.show()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,409 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AutoML 02: Regression with local compute\n",
"\n",
"In this example we use the scikit learn's [diabetes dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_diabetes.html) to showcase how you can use AutoML for a simple regression problem.\n",
"\n",
"Make sure you have executed the [00.configuration](00.configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you would see\n",
"1. Creating an Experiment using an existing Workspace\n",
"2. Instantiating AutoMLConfig\n",
"3. Training the Model using local compute\n",
"4. Exploring the results\n",
"5. Testing the fitted model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Experiment\n",
"\n",
"As part of the setup you have already created a <b>Workspace</b>. For AutoML you would need to create an <b>Experiment</b>. An <b>Experiment</b> is a named object in a <b>Workspace</b>, which is used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import random\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# choose a name for the experiment\n",
"experiment_name = 'automl-local-regression'\n",
"# project folder\n",
"project_folder = './sample_projects/automl-local-regression'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace Name'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data = output, index = ['']).T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"set_diagnostics_collection(send_diagnostics=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Read Data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# load diabetes dataset, a well-known built-in small dataset that comes with scikit-learn\n",
"from sklearn.datasets import load_diabetes\n",
"from sklearn.linear_model import Ridge\n",
"from sklearn.metrics import mean_squared_error\n",
"from sklearn.model_selection import train_test_split\n",
"\n",
"X, y = load_diabetes(return_X_y = True)\n",
"\n",
"columns = ['age', 'gender', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6']\n",
"\n",
"X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiate Auto ML Config\n",
"\n",
"Instantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**task**|classification or regression|\n",
"|**primary_metric**|This is the metric that you want to optimize.<br> Regression supports the following primary metrics <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i><br><i>normalized_root_mean_squared_log_error</i>|\n",
"|**max_time_sec**|Time limit in seconds for each iteration|\n",
"|**iterations**|Number of iterations. In each iteration Auto ML trains a specific pipeline with the data|\n",
"|**n_cross_validations**|Number of cross validation splits|\n",
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
"|**y**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers. |\n",
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_config = AutoMLConfig(task='regression',\n",
" max_time_sec = 600,\n",
" iterations = 10,\n",
" primary_metric = 'spearman_correlation', \n",
" n_cross_validations = 5,\n",
" debug_log = 'automl.log',\n",
" verbosity = logging.INFO,\n",
" X = X_train, \n",
" y = y_train,\n",
" path=project_folder)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Training the Model\n",
"\n",
"You can call the submit method on the experiment object and pass the run configuration. For Local runs the execution is synchronous. Depending on the data and number of iterations this can run for while.\n",
"You will see the currently running iterations printing to the console."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = experiment.submit(automl_config, show_output=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exploring the results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Widget for monitoring runs\n",
"\n",
"The widget will sit on \"loading\" until the first iteration completed, then you will see an auto-updating graph and table show up. It refreshed once per minute, so you should see the graph update as child runs complete.\n",
"\n",
"NOTE: The widget displays a link at the bottom. This links to a web-ui to explore the individual run details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.train.widgets import RunDetails\n",
"RunDetails(local_run).show() "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"#### Retrieve All Child Runs\n",
"You can also use sdk methods to fetch all the child runs and see individual metrics that we log. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"children = list(local_run.get_children())\n",
"metricslist = {}\n",
"for run in children:\n",
" properties = run.get_properties()\n",
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)} \n",
" metricslist[int(properties['iteration'])] = metrics\n",
" \n",
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
"rundata"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model\n",
"\n",
"Below we select the best pipeline from our iterations. The *get_output* method on automl_classifier returns the best run and the fitted model for the last *fit* invocation. There are overloads on *get_output* that allow you to retrieve the best run and fitted model for *any* logged metric or a particular *iteration*."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = local_run.get_output()\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Best Model based on any other metric\n",
"Show the run and model that has the smallest `root_mean_squared_error` (which turned out to be the same as the one with largest `spearman_correlation` value):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"lookup_metric = \"root_mean_squared_error\"\n",
"best_run, fitted_model = local_run.get_output(metric=lookup_metric)\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Model from a specific iteration\n",
"\n",
"Simply show the run and model from the 3rd iteration:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"iteration = 3\n",
"third_run, third_model = local_run.get_output(iteration = iteration)\n",
"print(third_run)\n",
"print(third_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Testing the Fitted Model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Predict on training and test set, and calculate residual values."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"y_pred_train = fitted_model.predict(X_train)\n",
"y_residual_train = y_train - y_pred_train\n",
"\n",
"y_pred_test = fitted_model.predict(X_test)\n",
"y_residual_test = y_test - y_pred_test"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%matplotlib inline\n",
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"from sklearn import datasets\n",
"from sklearn.metrics import mean_squared_error, r2_score\n",
"\n",
"# set up a multi-plot chart\n",
"f, (a0, a1) = plt.subplots(1, 2, gridspec_kw = {'width_ratios':[1, 1], 'wspace':0, 'hspace': 0})\n",
"f.suptitle('Regression Residual Values', fontsize = 18)\n",
"f.set_figheight(6)\n",
"f.set_figwidth(16)\n",
"\n",
"# plot residual values of training set\n",
"a0.axis([0, 360, -200, 200])\n",
"a0.plot(y_residual_train, 'bo', alpha = 0.5)\n",
"a0.plot([-10,360],[0,0], 'r-', lw = 3)\n",
"a0.text(16,170,'RMSE = {0:.2f}'.format(np.sqrt(mean_squared_error(y_train, y_pred_train))), fontsize = 12)\n",
"a0.text(16,140,'R2 score = {0:.2f}'.format(r2_score(y_train, y_pred_train)), fontsize = 12)\n",
"a0.set_xlabel('Training samples', fontsize = 12)\n",
"a0.set_ylabel('Residual Values', fontsize = 12)\n",
"# plot histogram\n",
"a0.hist(y_residual_train, orientation = 'horizontal', color = 'b', bins = 10, histtype = 'step');\n",
"a0.hist(y_residual_train, orientation = 'horizontal', color = 'b', alpha = 0.2, bins = 10);\n",
"\n",
"# plot residual values of test set\n",
"a1.axis([0, 90, -200, 200])\n",
"a1.plot(y_residual_test, 'bo', alpha = 0.5)\n",
"a1.plot([-10,360],[0,0], 'r-', lw = 3)\n",
"a1.text(5,170,'RMSE = {0:.2f}'.format(np.sqrt(mean_squared_error(y_test, y_pred_test))), fontsize = 12)\n",
"a1.text(5,140,'R2 score = {0:.2f}'.format(r2_score(y_test, y_pred_test)), fontsize = 12)\n",
"a1.set_xlabel('Test samples', fontsize = 12)\n",
"a1.set_yticklabels([])\n",
"# plot histogram\n",
"a1.hist(y_residual_test, orientation = 'horizontal', color = 'b', bins = 10, histtype = 'step');\n",
"a1.hist(y_residual_test, orientation = 'horizontal', color = 'b', alpha = 0.2, bins = 10);\n",
"\n",
"plt.show()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,471 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AutoML 03: Remote Execution using DSVM (Ubuntu)\n",
"\n",
"In this example we use the scikit learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) to showcase how you can use AutoML for a simple classification problem.\n",
"\n",
"Make sure you have executed the [00.configuration](00.configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you would see\n",
"1. Creating an Experiment using an existing Workspace\n",
"2. Attaching an existing DSVM to a workspace\n",
"3. Instantiating AutoMLConfig \n",
"4. Training the Model using the DSVM\n",
"5. Exploring the results\n",
"6. Testing the fitted model\n",
"\n",
"In addition this notebook showcases the following features\n",
"- **Parallel** Executions for iterations\n",
"- Asyncronous tracking of progress\n",
"- **Cancelling** individual iterations or the entire run\n",
"- Retrieving models for any iteration or logged metric\n",
"- specify automl settings as **kwargs**\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Experiment\n",
"\n",
"As part of the setup you have already created a workspace. For AutoML you would need to create a <b>Experiment</b>. An <b>Experiment</b> is a named object in a <b>Workspace</b>, which is used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import random\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# choose a name for the run history container in the workspace\n",
"experiment_name = 'automl-remote-dsvm4'\n",
"# project folder\n",
"project_folder = './sample_projects/automl-remote-dsvm4'\n",
"\n",
"experiment=Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace Name'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data = output, index = ['']).T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"set_diagnostics_collection(send_diagnostics=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a Remote Linux DSVM\n",
"Note: If creation fails with a message about Marketplace purchase eligibilty, go to portal.azure.com, start creating DSVM there, and select \"Want to create programmatically\" to enable programmatic creation. Once you've enabled it, you can exit without actually creating VM.\n",
"\n",
"**Note**: By default SSH runs on port 22 and you don't need to specify it. But if for security reasons you can switch to a different port (such as 5022), you can append the port number to the address. [Read more](https://render.githubusercontent.com/documentation/sdk/ssh-issue.md) on this."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import DsvmCompute\n",
"\n",
"dsvm_name = 'mydsvm'\n",
"try:\n",
" dsvm_compute = DsvmCompute(ws, dsvm_name)\n",
" print('found existing dsvm.')\n",
"except:\n",
" print('creating new dsvm.')\n",
" dsvm_config = DsvmCompute.provisioning_configuration(vm_size = \"Standard_D2_v2\")\n",
" dsvm_compute = DsvmCompute.create(ws, name = dsvm_name, provisioning_configuration = dsvm_config)\n",
" dsvm_compute.wait_for_completion(show_output = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Get Data File\n",
"For remote executions you should author a get_data.py file containing a get_data() function. This file should be in the root directory of the project. You can encapsulate code to read data either from a blob storage or local disk in this file."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if not os.path.exists(project_folder):\n",
" os.makedirs(project_folder)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile $project_folder/get_data.py\n",
"\n",
"from sklearn import datasets\n",
"from scipy import sparse\n",
"import numpy as np\n",
"\n",
"def get_data():\n",
" \n",
" digits = datasets.load_digits()\n",
" X_digits = digits.data[100:,:]\n",
" y_digits = digits.target[100:]\n",
"\n",
" return { \"X\" : X_digits, \"y\" : y_digits }"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiate AutoML <a class=\"anchor\" id=\"Instatiate-AutoML-Remote-DSVM\"></a>\n",
"\n",
"You can specify automl_settings as **kwargs** as well. Also note that you can use the get_data() symantic for local excutions too. \n",
"\n",
"<i>Note: For Remote DSVM and Batch AI you cannot pass Numpy arrays directly to the fit method.</i>\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**primary_metric**|This is the metric that you want to optimize.<br> Classification supports the following primary metrics <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>balanced_accuracy</i><br><i>average_precision_score_weighted</i><br><i>precision_score_weighted</i>|\n",
"|**max_time_sec**|Time limit in seconds for each iteration|\n",
"|**iterations**|Number of iterations. In each iteration Auto ML trains a specific pipeline with the data|\n",
"|**n_cross_validations**|Number of cross validation splits|\n",
"|**concurrent_iterations**|Max number of iterations that would be executed in parallel. This should be less than the number of cores on the DSVM."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_settings = {\n",
" \"max_time_sec\": 600,\n",
" \"iterations\": 20,\n",
" \"n_cross_validations\": 5,\n",
" \"primary_metric\": 'AUC_weighted',\n",
" \"preprocess\": False,\n",
" \"concurrent_iterations\": 2,\n",
" \"verbosity\": logging.INFO\n",
"}\n",
"\n",
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" path=project_folder, \n",
" compute_target = dsvm_compute,\n",
" data_script = project_folder + \"/get_data.py\",\n",
" **automl_settings\n",
" )\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<b>Note</b> that the first run on a new DSVM may take a several minutes to preparing the environment."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"remote_run = experiment.submit(automl_config, show_output=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exploring the Results\n",
"\n",
"#### Loading executed runs\n",
"In case you need to load a previously executed run given a run id please enable the below cell"
]
},
{
"cell_type": "raw",
"metadata": {},
"source": [
"remote_run = AutoMLRun(experiment=experiment, run_id='AutoML_480d3ed6-fc94-44aa-8f4e-0b945db9d3ef')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Widget for monitoring runs\n",
"\n",
"The widget will sit on \"loading\" until the first iteration completed, then you will see an auto-updating graph and table show up. It refreshed once per minute, so you should see the graph update as child runs complete.\n",
"\n",
"You can click on a pipeline to see run properties and output logs. Logs are also available on the DSVM under /tmp/azureml_run/{iterationid}/azureml-logs\n",
"\n",
"NOTE: The widget displays a link at the bottom. This links to a web-ui to explore the individual run details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.train.widgets import RunDetails\n",
"RunDetails(remote_run).show() "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# wait till the run finishes\n",
"remote_run.wait_for_completion(show_output = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"#### Retrieve All Child Runs\n",
"You can also use sdk methods to fetch all the child runs and see individual metrics that we log. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"children = list(remote_run.get_children())\n",
"metricslist = {}\n",
"for run in children:\n",
" properties = run.get_properties()\n",
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)} \n",
" metricslist[int(properties['iteration'])] = metrics\n",
"\n",
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
"rundata"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Canceling runs\n",
"\n",
"You can cancel ongoing remote runs using the *cancel()* and *cancel_iteration()* functions"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Cancel the ongoing experiment and stop scheduling new iterations\n",
"# remote_run.cancel()\n",
"\n",
"# Cancel iteration 1 and move onto iteration 2\n",
"# remote_run.cancel_iteration(1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model\n",
"\n",
"Below we select the best pipeline from our iterations. The *get_output* method on automl_classifier returns the best run and the fitted model for the last *fit* invocation. There are overloads on *get_output* that allow you to retrieve the best run and fitted model for *any* logged metric or a particular *iteration*."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = remote_run.get_output()\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Best Model based on any other metric\n",
"Show the run/model which has the smallest `log_loss` value."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"lookup_metric = \"log_loss\"\n",
"best_run, fitted_model = remote_run.get_output(metric = lookup_metric)\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Model from a specific iteration\n",
"Show the run and model from the 3rd iteration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"iteration = 3\n",
"third_run, third_model = remote_run.get_output(iteration=iteration)\n",
"print(third_run)\n",
"print(third_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Testing the Fitted Model <a class=\"anchor\" id=\"Testing-the-Fitted-Model-Remote-DSVM\"></a>\n",
"\n",
"#### Load Test Data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"digits = datasets.load_digits()\n",
"X_digits = digits.data[:10, :]\n",
"y_digits = digits.target[:10]\n",
"images = digits.images[:10]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Testing our best pipeline"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#Randomly select digits and test\n",
"for index in np.random.choice(len(y_digits), 2):\n",
" print(index)\n",
" predicted = fitted_model.predict(X_digits[index:index + 1])[0]\n",
" label = y_digits[index]\n",
" title = \"Label value = %d Predicted value = %d \" % ( label,predicted)\n",
" fig = plt.figure(1, figsize=(3,3))\n",
" ax1 = fig.add_axes((0,0,.8,.8))\n",
" ax1.set_title(title)\n",
" plt.imshow(images[index], cmap=plt.cm.gray_r, interpolation='nearest')\n",
" plt.show()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,522 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AutoML 03: Remote Execution using Batch AI\n",
"\n",
"In this example we use the scikit learn's [diabetes dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_diabetes.html) to showcase how you can use AutoML for a simple classification problem.\n",
"\n",
"Make sure you have executed the [setup](setup.ipynb) before running this notebook.\n",
"\n",
"In this notebook you would see\n",
"1. Creating an Experiment using an existing Workspace\n",
"2. Attaching an existing Batch AI compute to a workspace\n",
"3. Instantiating AutoMLConfig \n",
"4. Training the Model using the Batch AI\n",
"5. Exploring the results\n",
"6. Testing the fitted model\n",
"\n",
"In addition this notebook showcases the following features\n",
"- **Parallel** Executions for iterations\n",
"- Asyncronous tracking of progress\n",
"- **Cancelling** individual iterations or the entire run\n",
"- Retrieving models for any iteration or logged metric\n",
"- specify automl settings as **kwargs**\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Experiment\n",
"\n",
"As part of the setup you have already created a workspace. For AutoML you would need to create a <b>Experiment</b>. An <b>Experiment</b> is a named object in a <b>Workspace</b>, which is used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import random\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# choose a name for the run history container in the workspace\n",
"experiment_name = 'automl-remote-batchai'\n",
"# project folder\n",
"project_folder = './sample_projects/automl-remote-batchai'\n",
"\n",
"experiment=Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace Name'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data = output, index = ['']).T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"set_diagnostics_collection(send_diagnostics=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Batch AI Cluster\n",
"The cluster is created as Machine Learning Compute and will appear under your workspace.\n",
"\n",
"<b>Note</b>: The cluster creation can take over 10 minutes, please be patient.\n",
"\n",
"As with other Azure services, there are limits on certain resources (for eg. BatchAI cluster size) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import BatchAiCompute\n",
"from azureml.core.compute import ComputeTarget\n",
"\n",
"# choose a name for your cluster\n",
"batchai_cluster_name = ws.name + \"cpu\"\n",
"\n",
"found = False\n",
"# see if this compute target already exists in the workspace\n",
"for ct in ws.compute_targets():\n",
" print(ct.name, ct.type)\n",
" if (ct.name == batchai_cluster_name and ct.type == 'BatchAI'):\n",
" found = True\n",
" print('found compute target. just use it.')\n",
" compute_target = ct\n",
" break\n",
" \n",
"if not found:\n",
" print('creating a new compute target...')\n",
" provisioning_config = BatchAiCompute.provisioning_configuration(vm_size = \"STANDARD_D2_V2\", # for GPU, use \"STANDARD_NC6\"\n",
" #vm_priority = 'lowpriority', # optional\n",
" autoscale_enabled = True,\n",
" cluster_min_nodes = 1, \n",
" cluster_max_nodes = 4)\n",
"\n",
" # create the cluster\n",
" compute_target = ComputeTarget.create(ws,batchai_cluster_name, provisioning_config)\n",
" \n",
" # can poll for a minimum number of nodes and for a specific timeout. \n",
" # if no min node count is provided it will use the scale settings for the cluster\n",
" compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
" \n",
" # For a more detailed view of current BatchAI cluster status, use the 'status' property "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Get Data File\n",
"For remote executions you should author a get_data.py file containing a get_data() function. This file should be in the root directory of the project. You can encapsulate code to read data either from a blob storage or local disk in this file."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if not os.path.exists(project_folder):\n",
" os.makedirs(project_folder)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile $project_folder/get_data.py\n",
"\n",
"from sklearn import datasets\n",
"from scipy import sparse\n",
"import numpy as np\n",
"\n",
"def get_data():\n",
" \n",
" digits = datasets.load_digits()\n",
" X_digits = digits.data\n",
" y_digits = digits.target\n",
"\n",
" return { \"X\" : X_digits, \"y\" : y_digits }"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiate AutoML <a class=\"anchor\" id=\"Instatiate-AutoML-Remote-DSVM\"></a>\n",
"\n",
"You can specify automl_settings as **kwargs** as well. Also note that you can use the get_data() symantic for local excutions too. \n",
"\n",
"<i>Note: For Remote DSVM and Batch AI you cannot pass Numpy arrays directly to the fit method.</i>\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**primary_metric**|This is the metric that you want to optimize.<br> Classification supports the following primary metrics <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>balanced_accuracy</i><br><i>average_precision_score_weighted</i><br><i>precision_score_weighted</i>|\n",
"|**max_time_sec**|Time limit in seconds for each iteration|\n",
"|**iterations**|Number of iterations. In each iteration Auto ML trains a specific pipeline with the data|\n",
"|**n_cross_validations**|Number of cross validation splits|\n",
"|**concurrent_iterations**|Max number of iterations that would be executed in parallel. This should be less than the number of cores on the DSVM."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_settings = {\n",
" \"max_time_sec\": 120,\n",
" \"iterations\": 20,\n",
" \"n_cross_validations\": 5,\n",
" \"primary_metric\": 'AUC_weighted',\n",
" \"preprocess\": False,\n",
" \"concurrent_iterations\": 5,\n",
" \"verbosity\": logging.INFO\n",
"}\n",
"\n",
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" path=project_folder,\n",
" compute_target = compute_target,\n",
" data_script = project_folder + \"/get_data.py\",\n",
" **automl_settings\n",
" )\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"remote_run = experiment.submit(automl_config, show_output=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exploring the Results\n",
"\n",
"#### Loading executed runs\n",
"In case you need to load a previously executed run given a run id please enable the below cell"
]
},
{
"cell_type": "raw",
"metadata": {},
"source": [
"remote_run = AutoMLRun(experiment=experiment, run_id='AutoML_5db13491-c92a-4f1d-b622-8ab8d973a058')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Widget for monitoring runs\n",
"\n",
"The widget will sit on \"loading\" until the first iteration completed, then you will see an auto-updating graph and table show up. It refreshed once per minute, so you should see the graph update as child runs complete.\n",
"\n",
"You can click on a pipeline to see run properties and output logs. Logs are also available on the DSVM under /tmp/azureml_run/{iterationid}/azureml-logs\n",
"\n",
"NOTE: The widget displays a link at the bottom. This links to a web-ui to explore the individual run details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"remote_run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.train.widgets import RunDetails\n",
"RunDetails(remote_run).show() "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# wait till the run finishes\n",
"remote_run.wait_for_completion(show_output = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"#### Retrieve All Child Runs\n",
"You can also use sdk methods to fetch all the child runs and see individual metrics that we log. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"children = list(remote_run.get_children())\n",
"metricslist = {}\n",
"for run in children:\n",
" properties = run.get_properties()\n",
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)} \n",
" metricslist[int(properties['iteration'])] = metrics\n",
"\n",
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
"rundata"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Canceling runs\n",
"\n",
"You can cancel ongoing remote runs using the *cancel()* and *cancel_iteration()* functions"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Cancel the ongoing experiment and stop scheduling new iterations\n",
"# remote_run.cancel()\n",
"\n",
"# Cancel iteration 1 and move onto iteration 2\n",
"# remote_run.cancel_iteration(1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model\n",
"\n",
"Below we select the best pipeline from our iterations. The *get_output* method on automl_classifier returns the best run and the fitted model for the last *fit* invocation. There are overloads on *get_output* that allow you to retrieve the best run and fitted model for *any* logged metric or a particular *iteration*."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = remote_run.get_output()\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Best Model based on any other metric\n",
"Show the run/model which has the smallest `log_loss` value."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"lookup_metric = \"log_loss\"\n",
"best_run, fitted_model = remote_run.get_output(metric = lookup_metric)\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Model from a specific iteration\n",
"Show the run and model from the 3rd iteration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"iteration = 3\n",
"third_run, third_model = remote_run.get_output(iteration=iteration)\n",
"print(third_run)\n",
"print(third_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Register fitted model for deployment"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"description = 'AutoML Model'\n",
"tags = None\n",
"remote_run.register_model(description=description, tags=tags)\n",
"remote_run.model_id # Use this id to deploy the model as a web service in Azure"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Testing the Fitted Model <a class=\"anchor\" id=\"Testing-the-Fitted-Model-Remote-DSVM\"></a>\n",
"\n",
"#### Load Test Data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"digits = datasets.load_digits()\n",
"X_digits = digits.data[:10, :]\n",
"y_digits = digits.target[:10]\n",
"images = digits.images[:10]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Testing our best pipeline"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#Randomly select digits and test\n",
"for index in np.random.choice(len(y_digits), 2):\n",
" print(index)\n",
" predicted = fitted_model.predict(X_digits[index:index + 1])[0]\n",
" label = y_digits[index]\n",
" title = \"Label value = %d Predicted value = %d \" % ( label,predicted)\n",
" fig = plt.figure(1, figsize=(3,3))\n",
" ax1 = fig.add_axes((0,0,.8,.8))\n",
" ax1.set_title(title)\n",
" plt.imshow(images[index], cmap=plt.cm.gray_r, interpolation='nearest')\n",
" plt.show()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,495 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Auto ML : Remote Execution with Text data from Blobstorage\n",
"\n",
"In this example we use the [Burning Man 2016 dataset](https://innovate.burningman.org/datasets-page/) to showcase how you can use AutoML to handle text data from a Azure blobstorage.\n",
"\n",
"Make sure you have executed the [00.configuration](00.configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you would see\n",
"1. Creating an Experiment using an existing Workspace\n",
"2. Attaching an existing DSVM to a workspace\n",
"3. Instantiating AutoMLConfig \n",
"4. Training the Model using the DSVM\n",
"5. Exploring the results\n",
"6. Testing the fitted model\n",
"\n",
"In addition this notebook showcases the following features\n",
"- **Parallel** Executions for iterations\n",
"- Asyncronous tracking of progress\n",
"- **Cancelling** individual iterations or the entire run\n",
"- Retrieving models for any iteration or logged metric\n",
"- specify automl settings as **kwargs**\n",
"- handling **text** data with **preprocess** flag\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Experiment\n",
"\n",
"As part of the setup you have already created a <b>Workspace</b>. For AutoML you would need to create an <b>Experiment</b>. An <b>Experiment</b> is a named object in a <b>Workspace</b>, which is used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import random\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# choose a name for the run history container in the workspace\n",
"experiment_name = 'automl-remote-dsvm-blobstore'\n",
"# project folder\n",
"project_folder = './sample_projects/automl-remote-dsvm-blobstore'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data=output, index=['']).T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"set_diagnostics_collection(send_diagnostics=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Attach a Remote Linux DSVM\n",
"To use remote docker commpute target:\n",
"1. Create a Linux DSVM in Azure. Here is some [quick instructions](https://docs.microsoft.com/en-us/azure/machine-learning/desktop-workbench/how-to-create-dsvm-hdi). Make sure you use the Ubuntu flavor, NOT CentOS. Make sure that disk space is available under /tmp because AutoML creates files under /tmp/azureml_runs. The DSVM should have more cores than the number of parallel runs that you plan to enable. It should also have at least 4Gb per core.\n",
"2. Enter the IP address, username and password below\n",
"\n",
"**Note**: By default SSH runs on port 22 and you don't need to specify it. But if for security reasons you can switch to a different port (such as 5022), you can append the port number to the address. [Read more](https://render.githubusercontent.com/documentation/sdk/ssh-issue.md) on this."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import RemoteCompute\n",
"\n",
"# Add your VM information below\n",
"dsvm_name = 'mydsvm1'\n",
"dsvm_ip_addr = '<<ip_addr>>'\n",
"dsvm_username = '<<username>>'\n",
"dsvm_password = '<<password>>'\n",
"\n",
"dsvm_compute = RemoteCompute.attach(workspace=ws, name=dsvm_name, address=dsvm_ip_addr, username=dsvm_username, password=dsvm_password, ssh_port=22)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Get Data File\n",
"For remote executions you should author a get_data.py file containing a get_data() function. This file should be in the root directory of the project. You can encapsulate code to read data either from a blob storage or local disk in this file.\n",
"\n",
"The *get_data()* function returns a [dictionary](README.md#getdata)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if not os.path.exists(project_folder):\n",
" os.makedirs(project_folder)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile $project_folder/get_data.py\n",
"\n",
"import pandas as pd\n",
"from sklearn.model_selection import train_test_split\n",
"from sklearn.preprocessing import LabelEncoder\n",
"\n",
"def get_data():\n",
" # Burning man 2016 data\n",
" df = pd.read_csv(\"https://automldemods.blob.core.windows.net/datasets/PlayaEvents2016,_1.6MB,_3.4k-rows.cleaned.2.tsv\",\n",
" delimiter=\"\\t\", quotechar='\"')\n",
" # get integer labels\n",
" le = LabelEncoder()\n",
" le.fit(df[\"Label\"].values)\n",
" y = le.transform(df[\"Label\"].values)\n",
" df = df.drop([\"Label\"], axis=1)\n",
"\n",
" df_train, _, y_train, _ = train_test_split(df, y, test_size=0.1, random_state=42)\n",
"\n",
" return { \"X\" : df, \"y\" : y }"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### View data\n",
"\n",
"You can execute the *get_data()* function locally to view the *train* data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%run $project_folder/get_data.py\n",
"data_dict = get_data()\n",
"df = data_dict[\"X\"]\n",
"y = data_dict[\"y\"]\n",
"pd.set_option('display.max_colwidth', 15)\n",
"df['Label'] = pd.Series(y, index=df.index)\n",
"df.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiate AutoML <a class=\"anchor\" id=\"Instatiate-AutoML-Remote-DSVM\"></a>\n",
"\n",
"You can specify automl_settings as **kwargs** as well. Also note that you can use the get_data() symantic for local excutions too. \n",
"\n",
"<i>Note: For Remote DSVM and Batch AI you cannot pass Numpy arrays directly to the fit method.</i>\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**primary_metric**|This is the metric that you want to optimize.<br> Classification supports the following primary metrics <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>balanced_accuracy</i><br><i>average_precision_score_weighted</i><br><i>precision_score_weighted</i>|\n",
"|**max_time_sec**|Time limit in seconds for each iteration|\n",
"|**iterations**|Number of iterations. In each iteration Auto ML trains a specific pipeline with the data|\n",
"|**n_cross_validations**|Number of cross validation splits|\n",
"|**concurrent_iterations**|Max number of iterations that would be executed in parallel. This should be less than the number of cores on the DSVM\n",
"|**preprocess**| *True/False* <br>Setting this to *True* enables AutoML to perform preprocessing <br>on the input to handle *missing data*, and perform some common *feature extraction*|\n",
"|**max_cores_per_iteration**| Indicates how many cores on the compute target would be used to train a single pipeline.<br> Default is *1*, you can set it to *-1* to use all cores|"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_settings = {\n",
" \"max_time_sec\": 3600,\n",
" \"iterations\": 10,\n",
" \"n_cross_validations\": 5,\n",
" \"primary_metric\": 'AUC_weighted',\n",
" \"preprocess\": True,\n",
" \"max_cores_per_iteration\": 2\n",
"}\n",
"\n",
"automl_config = AutoMLConfig(task = 'classification',\n",
" path=project_folder,\n",
" compute_target = dsvm_compute,\n",
" data_script = project_folder + \"/get_data.py\",\n",
" **automl_settings\n",
" )\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Training the Model <a class=\"anchor\" id=\"Training-the-model-Remote-DSVM\"></a>\n",
"\n",
"For remote runs the execution is asynchronous, so you will see the iterations get populated as they complete. You can interact with the widgets/models even when the experiment is running to retreive the best model up to that point. Once you are satisfied with the model you can cancel a particular iteration or the whole run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"remote_run = experiment.submit(automl_config)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exploring the Results <a class=\"anchor\" id=\"Exploring-the-Results-Remote-DSVM\"></a>\n",
"#### Widget for monitoring runs\n",
"\n",
"The widget will sit on \"loading\" until the first iteration completed, then you will see an auto-updating graph and table show up. It refreshed once per minute, so you should see the graph update as child runs complete.\n",
"\n",
"You can click on a pipeline to see run properties and output logs. Logs are also available on the DSVM under /tmp/azureml_run/{iterationid}/azureml-logs\n",
"\n",
"NOTE: The widget displays a link at the bottom. This links to a web-ui to explore the individual run details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.train.widgets import RunDetails\n",
"RunDetails(remote_run).show() "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"#### Retrieve All Child Runs\n",
"You can also use sdk methods to fetch all the child runs and see individual metrics that we log. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"children = list(remote_run.get_children())\n",
"metricslist = {}\n",
"for run in children:\n",
" properties = run.get_properties()\n",
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)} \n",
" metricslist[int(properties['iteration'])] = metrics\n",
"\n",
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
"rundata"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Canceling runs\n",
"You can cancel ongoing remote runs using the *cancel()* and *cancel_iteration()* functions"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Cancel the ongoing experiment and stop scheduling new iterations\n",
"remote_run.cancel()\n",
"\n",
"# Cancel iteration 1 and move onto iteration 2\n",
"# remote_run.cancel_iteration(1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model\n",
"\n",
"Below we select the best pipeline from our iterations. The *get_output* method on automl_classifier returns the best run and the fitted model for the last *fit* invocation. There are overloads on *get_output* that allow you to retrieve the best run and fitted model for *any* logged metric or a particular *iteration*."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = remote_run.get_output()\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Best Model based on any other metric"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# lookup_metric = \"accuracy\"\n",
"# best_run, fitted_model = remote_run.get_output(metric=lookup_metric)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Model from a specific iteration"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"iteration = 0\n",
"zero_run, zero_model = remote_run.get_output(iteration=iteration)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Register fitted model for deployment"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"description = 'AutoML Model'\n",
"tags = None\n",
"remote_run.register_model(description=description, tags=tags)\n",
"remote_run.model_id # Use this id to deploy the model as a web service in Azure"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Testing the Fitted Model <a class=\"anchor\" id=\"Testing-the-Fitted-Model-Remote-DSVM\"></a>\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import sklearn\n",
"from sklearn.model_selection import train_test_split\n",
"from sklearn.preprocessing import LabelEncoder\n",
"from pandas_ml import ConfusionMatrix\n",
"\n",
"df = pd.read_csv(\"https://automldemods.blob.core.windows.net/datasets/PlayaEvents2016,_1.6MB,_3.4k-rows.cleaned.2.tsv\",\n",
" delimiter=\"\\t\", quotechar='\"')\n",
"\n",
"# get integer labels\n",
"le = LabelEncoder()\n",
"le.fit(df[\"Label\"].values)\n",
"y = le.transform(df[\"Label\"].values)\n",
"df = df.drop([\"Label\"], axis=1)\n",
"\n",
"_, df_test, _, y_test = train_test_split(df, y, test_size=0.1, random_state=42)\n",
"\n",
"\n",
"ypred = fitted_model.predict(df_test.values)\n",
"\n",
"\n",
"ypred_strings = le.inverse_transform(ypred)\n",
"ytest_strings = le.inverse_transform(y_test)\n",
"\n",
"cm = ConfusionMatrix(ytest_strings, ypred_strings)\n",
"\n",
"print(cm)\n",
"\n",
"cm.plot()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -1,396 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AutoML 05 : Blacklisting models, Early termination and handling missing data\n",
"\n",
"In this example we use the scikit learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) to showcase how you can use AutoML for handling missing values in data. We also provide a stopping metric indicating a target for the primary metric so that AutoML can terminate the run without necessarly going through all the iterations. Finally, if you want to avoid a certain pipeline, we allow you to specify a black list of algos that AutoML will ignore for this run.\n",
"\n",
"Make sure you have executed the [00.configuration](00.configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you would see\n",
"1. Creating an Experiment using an existing Workspace\n",
"2. Instantiating AutoMLConfig\n",
"4. Training the Model\n",
"5. Exploring the results\n",
"6. Testing the fitted model\n",
"\n",
"In addition this notebook showcases the following features\n",
"- **Blacklist** certain pipelines\n",
"- Specify a **target metrics** to indicate stopping criteria\n",
"- Handling **Missing Data** in the input\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"## Create Experiment\n",
"\n",
"As part of the setup you have already created a <b>Workspace</b>. For AutoML you would need to create an <b>Experiment</b>. An <b>Experiment</b> is a named object in a <b>Workspace</b>, which is used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import random\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# choose a name for the experiment\n",
"experiment_name = 'automl-local-missing-data'\n",
"# project folder\n",
"project_folder = './sample_projects/automl-local-missing-data'\n",
"\n",
"experiment=Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data=output, index=['']).T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"set_diagnostics_collection(send_diagnostics=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Creating Missing Data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from scipy import sparse\n",
"\n",
"digits = datasets.load_digits()\n",
"X_digits = digits.data[10:,:]\n",
"y_digits = digits.target[10:]\n",
"\n",
"# Add missing values in 75% of the lines\n",
"missing_rate = 0.75\n",
"n_missing_samples = int(np.floor(X_digits.shape[0] * missing_rate))\n",
"missing_samples = np.hstack((np.zeros(X_digits.shape[0] - n_missing_samples, dtype=np.bool), np.ones(n_missing_samples, dtype=np.bool)))\n",
"rng = np.random.RandomState(0)\n",
"rng.shuffle(missing_samples)\n",
"missing_features = rng.randint(0, X_digits.shape[1], n_missing_samples)\n",
"X_digits[np.where(missing_samples)[0], missing_features] = np.nan"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df = pd.DataFrame(data=X_digits)\n",
"df['Label'] = pd.Series(y_digits, index=df.index)\n",
"df.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiate Auto ML Config\n",
"\n",
"\n",
"This defines the settings and data used to run the experiment.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**task**|classification or regression|\n",
"|**primary_metric**|This is the metric that you want to optimize.<br> Classification supports the following primary metrics <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>balanced_accuracy</i><br><i>average_precision_score_weighted</i><br><i>precision_score_weighted</i>|\n",
"|**max_time_sec**|Time limit in seconds for each iteration|\n",
"|**iterations**|Number of iterations. In each iteration Auto ML trains the data with a specific pipeline|\n",
"|**n_cross_validations**|Number of cross validation splits|\n",
"|**preprocess**| *True/False* <br>Setting this to *True* enables Auto ML to perform preprocessing <br>on the input to handle *missing data*, and perform some common *feature extraction*|\n",
"|**exit_score**|*double* value indicating the target for *primary_metric*. <br> Once the target is surpassed the run terminates|\n",
"|**blacklist_algos**|*Array* of *strings* indicating pipelines to ignore for Auto ML.<br><br> Allowed values for **Classification**<br><i>LogisticRegression</i><br><i>SGDClassifierWrapper</i><br><i>NBWrapper</i><br><i>BernoulliNB</i><br><i>SVCWrapper</i><br><i>LinearSVMWrapper</i><br><i>KNeighborsClassifier</i><br><i>DecisionTreeClassifier</i><br><i>RandomForestClassifier</i><br><i>ExtraTreesClassifier</i><br><i>LightGBMClassifier</i><br><br>Allowed values for **Regression**<br><i>ElasticNet<i><br><i>GradientBoostingRegressor<i><br><i>DecisionTreeRegressor<i><br><i>KNeighborsRegressor<i><br><i>LassoLars<i><br><i>SGDRegressor<i><br><i>RandomForestRegressor<i><br><i>ExtraTreesRegressor<i>|\n",
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
"|**y**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers. |\n",
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder. |"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" primary_metric = 'AUC_weighted',\n",
" max_time_sec = 3600,\n",
" iterations = 20,\n",
" n_cross_validations = 5,\n",
" preprocess = True,\n",
" exit_score = 0.994,\n",
" blacklist_algos = ['KNeighborsClassifier','LinearSVMWrapper'],\n",
" verbosity = logging.INFO,\n",
" X = X_digits, \n",
" y = y_digits,\n",
" path=project_folder)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Training the Model\n",
"\n",
"You can call the submit method on the experiment object and pass the run configuration. For Local runs the execution is synchronous. Depending on the data and number of iterations this can run for while.\n",
"You will see the currently running iterations printing to the console."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = experiment.submit(automl_config, show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exploring the results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Widget for monitoring runs\n",
"\n",
"The widget will sit on \"loading\" until the first iteration completed, then you will see an auto-updating graph and table show up. It refreshed once per minute, so you should see the graph update as child runs complete.\n",
"\n",
"NOTE: The widget will display a link at the bottom. This will not currently work, but will eventually link to a web-ui to explore the individual run details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.train.widgets import RunDetails\n",
"RunDetails(local_run).show() "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"#### Retrieve All Child Runs\n",
"You can also use sdk methods to fetch all the child runs and see individual metrics that we log. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"children = list(local_run.get_children())\n",
"metricslist = {}\n",
"for run in children:\n",
" properties = run.get_properties()\n",
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)} \n",
" metricslist[int(properties['iteration'])] = metrics\n",
"\n",
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
"rundata"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model\n",
"\n",
"Below we select the best pipeline from our iterations. Each pipeline is a tuple of three elements. The first element is the score for the pipeline the second element is the string description of the pipeline and the last element are the pipeline objects used for each fold in the cross-validation."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = local_run.get_output()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Best Model based on any other metric"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# lookup_metric = \"accuracy\"\n",
"# best_run, fitted_model = local_run.get_output(metric=lookup_metric)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Model from a specific iteration"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# iteration = 3\n",
"# best_run, fitted_model = local_run.get_output(iteration=iteration)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Register fitted model for deployment"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"description = 'AutoML Model'\n",
"tags = None\n",
"local_run.register_model(description=description, tags=tags)\n",
"local_run.model_id # Use this id to deploy the model as a web service in Azure"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Testing the Fitted Model "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"digits = datasets.load_digits()\n",
"X_digits = digits.data[:10, :]\n",
"y_digits = digits.target[:10]\n",
"images = digits.images[:10]\n",
"\n",
"#Randomly select digits and test\n",
"for index in np.random.choice(len(y_digits), 2):\n",
" print(index)\n",
" predicted = fitted_model.predict(X_digits[index:index + 1])[0]\n",
" label = y_digits[index]\n",
" title = \"Label value = %d Predicted value = %d \" % ( label,predicted)\n",
" fig = plt.figure(1, figsize=(3,3))\n",
" ax1 = fig.add_axes((0,0,.8,.8))\n",
" ax1.set_title(title)\n",
" plt.imshow(images[index], cmap=plt.cm.gray_r, interpolation='nearest')\n",
" plt.show()\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -1,418 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AutoML 06: Custom CV splits, handling sparse data\n",
"\n",
"In this example we use the scikit learn's [20newsgroup](In this example we use the scikit learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) to showcase how you can use AutoML for handling sparse data and specify custom cross validation splits.\n",
"\n",
"Make sure you have executed the [00.configuration](00.configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you would see\n",
"1. Creating an Experiment using an existing Workspace\n",
"2. Instantiating AutoMLConfig\n",
"4. Training the Model\n",
"5. Exploring the results\n",
"6. Testing the fitted model\n",
"\n",
"In addition this notebook showcases the following features\n",
"- **Custom CV** splits \n",
"- Handling **Sparse Data** in the input"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Experiment\n",
"\n",
"As part of the setup you have already created a <b>Workspace</b>. For AutoML you would need to create an <b>Experiment</b>. An <b>Experiment</b> is a named object in a <b>Workspace</b>, which is used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import random\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# choose a name for the experiment\n",
"experiment_name = 'automl-local-missing-data'\n",
"# project folder\n",
"project_folder = './sample_projects/automl-local-missing-data'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data=output, index=['']).T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"set_diagnostics_collection(send_diagnostics=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Creating Sparse Data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.datasets import fetch_20newsgroups\n",
"from sklearn.feature_extraction.text import HashingVectorizer\n",
"from sklearn.model_selection import train_test_split\n",
"\n",
"remove = ('headers', 'footers', 'quotes')\n",
"categories = [\n",
" 'alt.atheism',\n",
" 'talk.religion.misc',\n",
" 'comp.graphics',\n",
" 'sci.space',\n",
"]\n",
"data_train = fetch_20newsgroups(subset='train', categories=categories,\n",
" shuffle=True, random_state=42,\n",
" remove=remove)\n",
"\n",
"X_train, X_validation, y_train, y_validation = train_test_split(data_train.data, data_train.target, test_size=0.33, random_state=42)\n",
"\n",
"\n",
"vectorizer = HashingVectorizer(stop_words='english', alternate_sign=False,\n",
" n_features=2**16)\n",
"X_train = vectorizer.transform(X_train)\n",
"X_validation = vectorizer.transform(X_validation)\n",
"\n",
"summary_df = pd.DataFrame(index = ['No of Samples', 'No of Features'])\n",
"summary_df['Train Set'] = [X_train.shape[0], X_train.shape[1]]\n",
"summary_df['Validation Set'] = [X_validation.shape[0], X_validation.shape[1]]\n",
"summary_df"
]
},
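{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since this notebook is about sparse input, the minimal sketch below simply confirms that the hashed features are SciPy sparse matrices; as the configuration table below notes, *preprocess* must stay *False* for sparse input."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from scipy import sparse\n",
"\n",
"# Confirm the vectorized features are sparse matrices\n",
"print(type(X_train), sparse.issparse(X_train))\n",
"print('Non-zero entries in the training set:', X_train.nnz)"
]
},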
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiate Auto ML Config\n",
"\n",
"This defines the settings and data used to run the experiment.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**task**|classification or regression|\n",
"|**primary_metric**|This is the metric that you want to optimize.<br> Classification supports the following primary metrics <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>balanced_accuracy</i><br><i>average_precision_score_weighted</i><br><i>precision_score_weighted</i>|\n",
"|**max_time_sec**|Time limit in seconds for each iteration|\n",
"|**iterations**|Number of iterations. In each iteration Auto ML trains a specific pipeline with the data|\n",
"|**preprocess**| *True/False* <br>Setting this to *True* enables Auto ML to perform preprocessing <br>on the input to handle *missing data*, and perform some common *feature extraction*<br>*Note: If input data is Sparse you cannot use preprocess=True*|\n",
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
"|**y**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers. |\n",
"|**X_valid**|(sparse) array-like, shape = [n_samples, n_features] for the custom Validation set|\n",
"|**y_valid**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification. for the custom Validation set|\n",
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log='automl_errors.log',\n",
" primary_metric='AUC_weighted',\n",
" max_time_sec=3600,\n",
" iterations=5,\n",
" preprocess=False,\n",
" verbosity=logging.INFO,\n",
" X = X_train, \n",
" y = y_train,\n",
" X_valid = X_validation, \n",
" y_valid = y_validation, \n",
" path=project_folder)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Training the Model\n",
"\n",
"You can call the submit method on the experiment object and pass the run configuration. For Local runs the execution is synchronous. Depending on the data and number of iterations this can run for while.\n",
"You will see the currently running iterations printing to the console."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = experiment.submit(automl_config, show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exploring the results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Widget for monitoring runs\n",
"\n",
"The widget will sit on \"loading\" until the first iteration completed, then you will see an auto-updating graph and table show up. It refreshed once per minute, so you should see the graph update as child runs complete.\n",
"\n",
"NOTE: The widget displays a link at the bottom. This links to a web-ui to explore the individual run details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.train.widgets import RunDetails\n",
"RunDetails(local_run).show() "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"#### Retrieve All Child Runs\n",
"You can also use sdk methods to fetch all the child runs and see individual metrics that we log. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"children = list(local_run.get_children())\n",
"metricslist = {}\n",
"for run in children:\n",
" properties = run.get_properties()\n",
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)} \n",
" metricslist[int(properties['iteration'])] = metrics\n",
" \n",
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
"rundata"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model\n",
"\n",
"Below we select the best pipeline from our iterations. The *get_output* method on automl_classifier returns the best run and the fitted model for the last *fit* invocation. There are overloads on *get_output* that allow you to retrieve the best run and fitted model for *any* logged metric or a particular *iteration*."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = local_run.get_output()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Best Model based on any other metric"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# lookup_metric = \"accuracy\"\n",
"# best_run, fitted_model = local_run.get_output(metric=lookup_metric)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Model from a specific iteration"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# iteration = 3\n",
"# best_run, fitted_model = local_run.get_output(iteration=iteration)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Register fitted model for deployment"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"description = 'AutoML Model'\n",
"tags = None\n",
"local_run.register_model(description=description, tags=tags)\n",
"local_run.model_id # Use this id to deploy the model as a web service in Azure"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Testing the Fitted Model "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"digits = datasets.load_digits()### Testing the Fitted Model\n",
"\n",
"#### Load Test Data\n",
"import sklearn\n",
"from pandas_ml import ConfusionMatrix\n",
"\n",
"remove = ('headers', 'footers', 'quotes')\n",
"categories = [\n",
" 'alt.atheism',\n",
" 'talk.religion.misc',\n",
" 'comp.graphics',\n",
" 'sci.space',\n",
"]\n",
"\n",
"\n",
"data_test = fetch_20newsgroups(subset='test', categories=categories,\n",
" shuffle=True, random_state=42,\n",
" remove=remove)\n",
"\n",
"vectorizer = HashingVectorizer(stop_words='english', alternate_sign=False,\n",
" n_features=2**16)\n",
"\n",
"X_test = vectorizer.transform(data_test.data)\n",
"y_test = data_test.target\n",
"\n",
"#### Testing our best pipeline\n",
"\n",
"ypred = fitted_model.predict(X_test)\n",
"ypred_strings = [categories[i] for i in ypred]\n",
"ytest_strings = [categories[i] for i in y_test]\n",
"\n",
"cm = ConfusionMatrix(ytest_strings, ypred_strings)\n",
"print(cm)\n",
"cm.plot()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -1,326 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AutoML 07: Exploring previous runs\n",
"\n",
"In this example we present some examples on navigating previously executed runs. We also show how you can download a fitted model for any previous run.\n",
"\n",
"Make sure you have executed the [00.configuration](00.configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you would see\n",
"1. List all Experiments for the workspace\n",
"2. List AutoML runs for an Experiment\n",
"3. Get details for a AutoML Run. (Automl settings, run widget & all metrics)\n",
"4. Download fitted pipeline for any iteration\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# List all AutoML Experiments in a Workspace"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import random\n",
"import re\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.run import Run\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"experiment_list = Experiment.list(workspace=ws)\n",
"\n",
"summary_df = pd.DataFrame(index = ['No of Runs'])\n",
"pattern = re.compile('^AutoML_[^_]*$')\n",
"for experiment in experiment_list:\n",
" all_runs = list(experiment.get_runs())\n",
" automl_runs = []\n",
" for run in all_runs:\n",
" if(pattern.match(run.id)):\n",
" automl_runs.append(run) \n",
" summary_df[experiment.name] = [len(automl_runs)]\n",
" \n",
"pd.set_option('display.max_colwidth', -1)\n",
"summary_df.T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"set_diagnostics_collection(send_diagnostics=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# List AutoML runs for an Experiment\n",
"You can set <i>Experiment</i> name with any experiment name from the result of the Experiment.list cell to load the AutoML runs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"experiment_name = 'automl-local-classification' # Replace this with any project name from previous cell\n",
"\n",
"proj = ws.experiments()[experiment_name]\n",
"summary_df = pd.DataFrame(index = ['Type', 'Status', 'Primary Metric', 'Iterations', 'Compute', 'Name'])\n",
"pattern = re.compile('^AutoML_[^_]*$')\n",
"all_runs = list(proj.get_runs(properties={'azureml.runsource': 'automl'}))\n",
"for run in all_runs:\n",
" if(pattern.match(run.id)):\n",
" properties = run.get_properties()\n",
" tags = run.get_tags()\n",
" amlsettings = eval(properties['RawAMLSettingsString'])\n",
" if 'iterations' in tags:\n",
" iterations = tags['iterations']\n",
" else:\n",
" iterations = properties['num_iterations']\n",
" summary_df[run.id] = [amlsettings['task_type'], run.get_details()['status'], properties['primary_metric'], iterations, properties['target'], amlsettings['name']]\n",
" \n",
"from IPython.display import HTML\n",
"projname_html = HTML(\"<h3>{}</h3>\".format(proj.name))\n",
"\n",
"from IPython.display import display\n",
"display(projname_html)\n",
"display(summary_df.T)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Get Details for a Auto ML Run\n",
"\n",
"Copy the project name and run id from the previous cell output to find more details on a particular run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run_id = '' # Filling your own run_id\n",
"\n",
"from azureml.train.widgets import RunDetails\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"ml_run = AutoMLRun(experiment=experiment, run_id=run_id)\n",
"\n",
"summary_df = pd.DataFrame(index = ['Type', 'Status', 'Primary Metric', 'Iterations', 'Compute', 'Name', 'Start Time', 'End Time'])\n",
"properties = ml_run.get_properties()\n",
"tags = ml_run.get_tags()\n",
"status = ml_run.get_details()\n",
"amlsettings = eval(properties['RawAMLSettingsString'])\n",
"if 'iterations' in tags:\n",
" iterations = tags['iterations']\n",
"else:\n",
" iterations = properties['num_iterations']\n",
"start_time = None\n",
"if 'startTimeUtc' in status:\n",
" start_time = status['startTimeUtc']\n",
"end_time = None\n",
"if 'endTimeUtc' in status:\n",
" end_time = status['endTimeUtc']\n",
"summary_df[ml_run.id] = [amlsettings['task_type'], status['status'], properties['primary_metric'], iterations, properties['target'], amlsettings['name'], start_time, end_time]\n",
"display(HTML('<h3>Runtime Details</h3>'))\n",
"display(summary_df)\n",
"\n",
"#settings_df = pd.DataFrame(data=amlsettings, index=[''])\n",
"display(HTML('<h3>AutoML Settings</h3>'))\n",
"display(amlsettings)\n",
"\n",
"display(HTML('<h3>Iterations</h3>'))\n",
"RunDetails(ml_run).show() \n",
"\n",
"children = list(ml_run.get_children())\n",
"metricslist = {}\n",
"for run in children:\n",
" properties = run.get_properties()\n",
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)} \n",
" metricslist[int(properties['iteration'])] = metrics\n",
"\n",
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
"display(HTML('<h3>Metrics</h3>'))\n",
"display(rundata)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Download fitted models"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Download best model for any given metric"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"metric = 'AUC_weighted' # Replace with a metric name\n",
"best_run, fitted_model = ml_run.get_output(metric=metric)\n",
"fitted_model"
]
},
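{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you also want the fitted pipeline as a local file rather than only in memory, a minimal sketch is to serialize it with joblib; the file name used here is just an example."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.externals import joblib\n",
"\n",
"# Persist the downloaded pipeline to disk so it can be reloaded later without the workspace\n",
"model_file_name = 'best_model_{}.pkl'.format(metric)\n",
"joblib.dump(fitted_model, model_file_name)\n",
"print('Saved', model_file_name)"
]
},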
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Download model for any given iteration"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"iteration = 4 # Replace with an interation number\n",
"best_run, fitted_model = ml_run.get_output(iteration=iteration)\n",
"fitted_model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Register fitted model for deployment"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"description = 'AutoML Model'\n",
"tags = None\n",
"ml_run.register_model(description=description, tags=tags)\n",
"ml_run.model_id # Use this id to deploy the model as a web service in Azure"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Register best model for any given metric"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"metric = 'AUC_weighted' # Replace with a metric name\n",
"description = 'AutoML Model'\n",
"tags = None\n",
"ml_run.register_model(description=description, tags=tags, metric=metric)\n",
"ml_run.model_id # Use this id to deploy the model as a web service in Azure"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Register model for any given iteration"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"iteration = 4 # Replace with an interation number\n",
"description = 'AutoML Model'\n",
"tags = None\n",
"ml_run.register_model(description=description, tags=tags, iteration=iteration)\n",
"ml_run.model_id # Use this id to deploy the model as a web service in Azure"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -1,480 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AutoML 08: Remote Execution with Text file\n",
"\n",
"In this sample accesses a data file on a remote DSVM. This is more efficient than reading the file from Blob storage in the get_data method.\n",
"\n",
"Make sure you have executed the [00.configuration](00.configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you would see\n",
"1. Configuring the DSVM to allow files to be access directly by the get_data method.\n",
"2. get_data returning data from a local file.\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Experiment\n",
"\n",
"As part of the setup you have already created a <b>Workspace</b>. For AutoML you would need to create an <b>Experiment</b>. An <b>Experiment</b> is a named object in a <b>Workspace</b>, which is used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import random\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# choose a name for experiment\n",
"experiment_name = 'automl-remote-dsvm-file'\n",
"# project folder\n",
"project_folder = './sample_projects/automl-remote-dsvm-file'\n",
"\n",
"experiment=Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data=output, index=['']).T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"set_diagnostics_collection(send_diagnostics=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a Remote Linux DSVM\n",
"Note: If creation fails with a message about Marketplace purchase eligibilty, go to portal.azure.com, start creating DSVM there, and select \"Want to create programmatically\" to enable programmatic creation. Once you've enabled it, you can exit without actually creating VM.\n",
"\n",
"**Note**: By default SSH runs on port 22 and you don't need to specify it. But if for security reasons you can switch to a different port (such as 5022), you can append the port number to the address. [Read more](https://render.githubusercontent.com/documentation/sdk/ssh-issue.md) on this."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import DsvmCompute\n",
"\n",
"dsvm_name = 'mydsvm'\n",
"try:\n",
" dsvm_compute = DsvmCompute(ws, dsvm_name)\n",
" print('found existing dsvm.')\n",
"except:\n",
" print('creating new dsvm.')\n",
" dsvm_config = DsvmCompute.provisioning_configuration(vm_size = \"Standard_D2_v2\")\n",
" dsvm_compute = DsvmCompute.create(ws, name = dsvm_name, provisioning_configuration = dsvm_config)\n",
" dsvm_compute.wait_for_completion(show_output = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Copy data file to the DSVM\n",
"Download the data file.\n",
"Copy the data file to the DSVM under the folder: /tmp/data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df = pd.read_csv(\"https://automldemods.blob.core.windows.net/datasets/PlayaEvents2016,_1.6MB,_3.4k-rows.cleaned.2.tsv\",\n",
" delimiter=\"\\t\", quotechar='\"')\n",
"df.to_csv(\"data.tsv\", sep=\"\\t\", quotechar='\"', index=False)\n",
"\n",
"# Now copy the file data.tsv to the folder /tmp/data on the DSVM"
]
},
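{
"cell_type": "markdown",
"metadata": {},
"source": [
"One way to do the copy, sketched below, is with *scp* from this machine; the DSVM address and user name are placeholders, and key-based SSH access to the DSVM is assumed, so replace these values with your own."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import subprocess\n",
"\n",
"# Placeholder values - replace with the address and admin user of your DSVM\n",
"dsvm_user = 'azureuser'\n",
"dsvm_address = 'mydsvm.westus2.cloudapp.azure.com'\n",
"\n",
"# Create the target folder on the DSVM and copy the file (assumes SSH key authentication)\n",
"subprocess.check_call(['ssh', '{0}@{1}'.format(dsvm_user, dsvm_address), 'mkdir', '-p', '/tmp/data'])\n",
"subprocess.check_call(['scp', 'data.tsv', '{0}@{1}:/tmp/data/data.tsv'.format(dsvm_user, dsvm_address)])"
]
},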
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Get Data File\n",
"For remote executions you should author a get_data.py file containing a get_data() function. This file should be in the root directory of the project. You can encapsulate code to read data either from a blob storage or local disk in this file.\n",
"\n",
"The *get_data()* function returns a [dictionary](README.md#getdata)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if not os.path.exists(project_folder):\n",
" os.makedirs(project_folder)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile $project_folder/get_data.py\n",
"\n",
"import pandas as pd\n",
"from sklearn.model_selection import train_test_split\n",
"from sklearn.preprocessing import LabelEncoder\n",
"import os\n",
"\n",
"def get_data():\n",
" # Burning man 2016 data\n",
" df = pd.read_csv('/tmp/data/data.tsv',\n",
" delimiter=\"\\t\", quotechar='\"')\n",
" # get integer labels\n",
" le = LabelEncoder()\n",
" le.fit(df[\"Label\"].values)\n",
" y = le.transform(df[\"Label\"].values)\n",
" df = df.drop([\"Label\"], axis=1)\n",
"\n",
" df_train, _, y_train, _ = train_test_split(df, y, test_size=0.1, random_state=42)\n",
"\n",
" return { \"X\" : df.values, \"y\" : y }"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiate AutoML <a class=\"anchor\" id=\"Instatiate-AutoML-Remote-DSVM\"></a>\n",
"\n",
"You can specify automl_settings as **kwargs** as well. Also note that you can use the get_data() symantic for local excutions too. \n",
"\n",
"<i>Note: For Remote DSVM and Batch AI you cannot pass Numpy arrays directly to the fit method.</i>\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**primary_metric**|This is the metric that you want to optimize.<br> Classification supports the following primary metrics <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>balanced_accuracy</i><br><i>average_precision_score_weighted</i><br><i>precision_score_weighted</i>|\n",
"|**max_time_sec**|Time limit in seconds for each iteration|\n",
"|**iterations**|Number of iterations. In each iteration Auto ML trains a specific pipeline with the data|\n",
"|**n_cross_validations**|Number of cross validation splits|\n",
"|**concurrent_iterations**|Max number of iterations that would be executed in parallel. This should be less than the number of cores on the DSVM\n",
"|**preprocess**| *True/False* <br>Setting this to *True* enables Auto ML to perform preprocessing <br>on the input to handle *missing data*, and perform some common *feature extraction*|\n",
"|**max_cores_per_iteration**| Indicates how many cores on the compute target would be used to train a single pipeline.<br> Default is *1*, you can set it to *-1* to use all cores|"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_settings = {\n",
" \"max_time_sec\": 3600,\n",
" \"iterations\": 10,\n",
" \"n_cross_validations\": 5,\n",
" \"primary_metric\": 'AUC_weighted',\n",
" \"preprocess\": True,\n",
" \"max_cores_per_iteration\": 2,\n",
" \"verbosity\": logging.INFO\n",
"}\n",
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" path=project_folder,\n",
" compute_target = dsvm_compute,\n",
" data_script = project_folder + \"/get_data.py\",\n",
" **automl_settings\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Training the Model <a class=\"anchor\" id=\"Training-the-model-Remote-DSVM\"></a>\n",
"\n",
"For remote runs the execution is asynchronous, so you will see the iterations get populated as they complete. You can interact with the widgets/models even when the experiment is running to retreive the best model up to that point. Once you are satisfied with the model you can cancel a particular iteration or the whole run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"remote_run = experiment.submit(automl_config, show_output=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exploring the Results <a class=\"anchor\" id=\"Exploring-the-Results-Remote-DSVM\"></a>\n",
"#### Widget for monitoring runs\n",
"\n",
"The widget will sit on \"loading\" until the first iteration completed, then you will see an auto-updating graph and table show up. It refreshed once per minute, so you should see the graph update as child runs complete.\n",
"\n",
"You can click on a pipeline to see run properties and output logs. Logs are also available on the DSVM under /tmp/azureml_run/{iterationid}/azureml-logs\n",
"\n",
"NOTE: The widget displays a link at the bottom. This links to a web-ui to explore the individual run details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.train.widgets import RunDetails\n",
"RunDetails(remote_run).show() "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"#### Retrieve All Child Runs\n",
"You can also use sdk methods to fetch all the child runs and see individual metrics that we log. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"children = list(remote_run.get_children())\n",
"metricslist = {}\n",
"for run in children:\n",
" properties = run.get_properties()\n",
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)} \n",
" metricslist[int(properties['iteration'])] = metrics\n",
"\n",
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
"rundata"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Canceling runs\n",
"You can cancel ongoing remote runs using the *cancel()* and *cancel_iteration()* functions"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Cancel the ongoing experiment and stop scheduling new iterations\n",
"# remote_run.cancel()\n",
"\n",
"# Cancel iteration 1 and move onto iteration 2\n",
"# remote_run.cancel_iteration(1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model\n",
"\n",
"Below we select the best pipeline from our iterations. The *get_output* method on automl_classifier returns the best run and the fitted model for the last *fit* invocation. There are overloads on *get_output* that allow you to retrieve the best run and fitted model for *any* logged metric or a particular *iteration*."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = remote_run.get_output()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Best Model based on any other metric"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# lookup_metric = \"accuracy\"\n",
"# best_run, fitted_model = remote_run.get_output(metric=lookup_metric)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Model from a specific iteration"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# iteration = 1\n",
"# best_run, fitted_model = remote_run.get_output(iteration=iteration)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Register fitted model for deployment"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"description = 'AutoML Model'\n",
"tags = None\n",
"remote_run.register_model(description=description, tags=tags)\n",
"remote_run.model_id # Use this id to deploy the model as a web service in Azure"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Testing the Fitted Model <a class=\"anchor\" id=\"Testing-the-Fitted-Model-Remote-DSVM\"></a>\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import sklearn\n",
"from sklearn.model_selection import train_test_split\n",
"from sklearn.preprocessing import LabelEncoder\n",
"from pandas_ml import ConfusionMatrix\n",
"\n",
"df = pd.read_csv(\"https://automldemods.blob.core.windows.net/datasets/PlayaEvents2016,_1.6MB,_3.4k-rows.cleaned.2.tsv\",\n",
" delimiter=\"\\t\", quotechar='\"')\n",
"\n",
"# get integer labels\n",
"le = LabelEncoder()\n",
"le.fit(df[\"Label\"].values)\n",
"y = le.transform(df[\"Label\"].values)\n",
"df = df.drop([\"Label\"], axis=1)\n",
"\n",
"_, df_test, _, y_test = train_test_split(df, y, test_size=0.1, random_state=42)\n",
"\n",
"ypred = fitted_model.predict(df_test.values)\n",
"\n",
"ypred_strings = le.inverse_transform(ypred)\n",
"ytest_strings = le.inverse_transform(y_test)\n",
"\n",
"cm = ConfusionMatrix(ytest_strings, ypred_strings)\n",
"\n",
"print(cm)\n",
"\n",
"cm.plot()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -1,500 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AutoML 09: Classification with deployment\n",
"\n",
"In this example we use the scikit learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) to showcase how you can use AutoML for a simple classification problem.\n",
"\n",
"Make sure you have executed the [00.configuration](00.configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you would see\n",
"1. Creating an Experiment using an existing Workspace\n",
"2. Instantiating AutoMLConfig\n",
"3. Training the Model using local compute\n",
"4. Exploring the results\n",
"5. Registering the model\n",
"6. Creating Image and creating aci service\n",
"7. Testing the aci service\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Experiment\n",
"\n",
"As part of the setup you have already created a <b>Workspace</b>. For AutoML you would need to create an <b>Experiment</b>. An <b>Experiment</b> is a named object in a <b>Workspace</b>, which is used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"import logging\n",
"import os\n",
"import random\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# choose a name for experiment\n",
"experiment_name = 'automl-local-classification'\n",
"# project folder\n",
"project_folder = './sample_projects/automl-local-classification'\n",
"\n",
"experiment=Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data=output, index=['']).T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"set_diagnostics_collection(send_diagnostics=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiate Auto ML Config\n",
"\n",
"Instantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**task**|classification or regression|\n",
"|**primary_metric**|This is the metric that you want to optimize.<br> Classification supports the following primary metrics <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>balanced_accuracy</i><br><i>average_precision_score_weighted</i><br><i>precision_score_weighted</i>|\n",
"|**max_time_sec**|Time limit in seconds for each iteration|\n",
"|**iterations**|Number of iterations. In each iteration Auto ML trains a specific pipeline with the data|\n",
"|**n_cross_validations**|Number of cross validation splits|\n",
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
"|**y**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers. |\n",
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder. |"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"digits = datasets.load_digits()\n",
"X_digits = digits.data[10:,:]\n",
"y_digits = digits.target[10:]\n",
"\n",
"automl_config = AutoMLConfig(task = 'classification',\n",
" name=experiment_name,\n",
" debug_log='automl_errors.log',\n",
" primary_metric='AUC_weighted',\n",
" max_time_sec=1200,\n",
" iterations=10,\n",
" n_cross_validations=2,\n",
" verbosity=logging.INFO,\n",
" X = X_digits, \n",
" y = y_digits,\n",
" path=project_folder)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Training the Model\n",
"\n",
"You can call the submit method on the experiment object and pass the run configuration. For Local runs the execution is synchronous. Depending on the data and number of iterations this can run for while.\n",
"You will see the currently running iterations printing to the console."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = experiment.submit(automl_config, show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model\n",
"\n",
"Below we select the best pipeline from our iterations. The *get_output* method on automl_classifier returns the best run and the fitted model for the last *fit* invocation. There are overloads on *get_output* that allow you to retrieve the best run and fitted model for *any* logged metric or a particular *iteration*."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = local_run.get_output()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Register fitted model for deployment"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"description = 'AutoML Model'\n",
"tags = None\n",
"model = local_run.register_model(description=description, tags=tags, iteration=8)\n",
"local_run.model_id # This will be written to the script file later in the notebook."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create Scoring script ###"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile score.py\n",
"import pickle\n",
"import json\n",
"import numpy\n",
"from sklearn.externals import joblib\n",
"from azureml.core.model import Model\n",
"\n",
"\n",
"def init():\n",
" global model\n",
" model_path = Model.get_model_path(model_name = '<<modelid>>') # this name is model.id of model that we want to deploy\n",
" # deserialize the model file back into a sklearn model\n",
" model = joblib.load(model_path)\n",
"\n",
"def run(rawdata):\n",
" try:\n",
" data = json.loads(rawdata)['data']\n",
" data = numpy.array(data)\n",
" result = model.predict(data)\n",
" except Exception as e:\n",
" result = str(e)\n",
" return json.dumps({\"error\": result})\n",
" return json.dumps({\"result\":result.tolist()})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create yml file for env"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To ensure the consistence the fit results with the training results, the sdk dependence versions need to be the same as the environment that trains the model. Details about retrieving the versions can be found in notebook 12.auto-ml-retrieve-the-training-sdk-versions.ipynb."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"experiment_name = 'automl-local-classification'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"ml_run = AutoMLRun(experiment=experiment, run_id=local_run.id)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"dependencies = ml_run.get_run_sdk_dependencies(iteration=7)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"for p in ['azureml-train-automl', 'azureml-sdk', 'azureml-core']:\n",
" print('{}\\t{}'.format(p, dependencies[p]))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile myenv.yml\n",
"name: myenv\n",
"channels:\n",
" - defaults\n",
"dependencies:\n",
" - pip:\n",
" - numpy==1.14.2\n",
" - scikit-learn==0.19.2\n",
" - azureml-sdk[notebooks,automl]==<<azureml-version>> "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Substitute the actual version number in the environment file.\n",
"\n",
"conda_env_file_name = 'myenv.yml'\n",
"\n",
"with open(conda_env_file_name, 'r') as cefr:\n",
" content = cefr.read()\n",
"\n",
"with open(conda_env_file_name, 'w') as cefw:\n",
" cefw.write(content.replace('<<azureml-version>>', dependencies['azureml-sdk']))\n",
"\n",
"# Substitute the actual model id in the script file.\n",
"\n",
"script_file_name = 'score.py'\n",
"\n",
"with open(script_file_name, 'r') as cefr:\n",
" content = cefr.read()\n",
"\n",
"with open(script_file_name, 'w') as cefw:\n",
" cefw.write(content.replace('<<modelid>>', local_run.model_id))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create Image ###"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.image import Image, ContainerImage\n",
"\n",
"image_config = ContainerImage.image_configuration(runtime= \"python\",\n",
" execution_script = script_file_name,\n",
" conda_file = conda_env_file_name,\n",
" tags = {'area': \"digits\", 'type': \"automl_classification\"},\n",
" description = \"Image for automl classification sample\")\n",
"\n",
"image = Image.create(name = \"automlsampleimage\",\n",
" # this is the model object \n",
" models = [model],\n",
" image_config = image_config, \n",
" workspace = ws)\n",
"\n",
"image.wait_for_creation(show_output = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Deploy Image as web service on Azure Container Instance ###"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.webservice import AciWebservice\n",
"\n",
"aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1, \n",
" memory_gb = 1, \n",
" tags = {'area': \"digits\", 'type': \"automl_classification\"}, \n",
" description = 'sample service for Automl Classification')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.webservice import Webservice\n",
"\n",
"aci_service_name = 'automl-sample-01'\n",
"print(aci_service_name)\n",
"aci_service = Webservice.deploy_from_image(deployment_config = aciconfig,\n",
" image = image,\n",
" name = aci_service_name,\n",
" workspace = ws)\n",
"aci_service.wait_for_deployment(True)\n",
"print(aci_service.state)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### To delete a service ##"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#aci_service.delete()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### To get logs from deployed service ###"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#aci_service.get_logs()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Test Web Service ###"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#Randomly select digits and test\n",
"digits = datasets.load_digits()\n",
"X_digits = digits.data[:10, :]\n",
"y_digits = digits.target[:10]\n",
"images = digits.images[:10]\n",
"\n",
"for index in np.random.choice(len(y_digits), 3):\n",
" print(index)\n",
" test_sample = json.dumps({'data':X_digits[index:index + 1].tolist()})\n",
" predicted = aci_service.run(input_data = test_sample)\n",
" label = y_digits[index]\n",
" predictedDict = json.loads(predicted)\n",
" title = \"Label value = %d Predicted value = %s \" % ( label,predictedDict['result'][0])\n",
" fig = plt.figure(1, figsize=(3,3))\n",
" ax1 = fig.add_axes((0,0,.8,.8))\n",
" ax1.set_title(title)\n",
" plt.imshow(images[index], cmap=plt.cm.gray_r, interpolation='nearest')\n",
" plt.show()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -1,292 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AutoML 10: Multi output Example for AutoML"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This notebook shows an example to use AutoML to train the multi output problems by leveraging the correlation between the outputs using indicator vectors."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import random\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"set_diagnostics_collection(send_diagnostics=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Transformer functions\n",
"The transformation of the input are happening for input X and Y as following, e.g. Y = {y_1, y_2}, then X becomes\n",
" \n",
"X 1 0\n",
" \n",
"X 0 1\n",
"\n",
"and Y becomes,\n",
"\n",
"y_1\n",
"\n",
"y_2"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from scipy import sparse\n",
"from scipy import linalg\n",
"\n",
"#Transformer functions\n",
"def multi_output_transform_x_y(X, Y):\n",
" X_new = multi_output_transformer_x(X, Y.shape[1])\n",
" y_new = multi_output_transform_y(Y)\n",
" return X_new, y_new\n",
"\n",
"def multi_output_transformer_x(X, number_of_columns_Y):\n",
" indicator_vecs = linalg.block_diag(*([np.ones((X.shape[0], 1))] * number_of_columns_Y))\n",
" if sparse.issparse(X):\n",
" X_new = sparse.vstack(np.tile(X, number_of_columns_Y))\n",
" indicator_vecs = sparse.coo_matrix(indicator_vecs)\n",
" X_new = sparse.hstack((X_new, indicator_vecs))\n",
" else:\n",
" X_new = np.tile(X, (number_of_columns_Y, 1))\n",
" X_new = np.hstack((X_new, indicator_vecs))\n",
" return X_new\n",
"\n",
"def multi_output_transform_y(Y):\n",
" return Y.reshape(-1, order=\"F\")\n",
" \n",
"def multi_output_inverse_transform_y(y, number_of_columns_y):\n",
" return y.reshape((-1, number_of_columns_y), order=\"F\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## AutoML experiment set up"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# choose a name for experiment\n",
"experiment_name = 'automl-local-multi-output'\n",
"# project folder\n",
"project_folder = './sample_projects/automl-local-multi-output'\n",
"\n",
"experiment=Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data=output, index=['']).T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a random dataset for the test purpose "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"rng = np.random.RandomState(1)\n",
"X_train = np.sort(200 * rng.rand(600, 1) - 100, axis=0)\n",
"Y_train = np.array([np.pi * np.sin(X_train).ravel(), np.pi * np.cos(X_train).ravel()]).T\n",
"Y_train += (0.5 - rng.rand(*Y_train.shape))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Perform X and Y transformation using transformer function"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"X_train_transformed, y_train_transformed = multi_output_transform_x_y(X_train, Y_train)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_config = AutoMLConfig(task = 'regression',\n",
" debug_log='automl_errors_multi.log',\n",
" primary_metric='r2_score',\n",
" iterations=10,\n",
" n_cross_validations=2,\n",
" verbosity=logging.INFO,\n",
" X=X_train_transformed,\n",
" y=y_train_transformed,\n",
" path=project_folder)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Fit the transformed data "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = experiment.submit(automl_config, show_output=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Get the best fit model\n",
"best_run, fitted_model = local_run.get_output()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Generate random data set for predicting\n",
"X_predict = np.sort(200 * rng.rand(200, 1) - 100, axis=0)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Transform predict data\n",
"X_predict_transformed = multi_output_transformer_x(X_predict, Y_train.shape[1])\n",
"# Predict and inverse transform the prediction\n",
"y_predict = fitted_model.predict(X_predict_transformed)\n",
"Y_predict = multi_output_inverse_transform_y(y_predict, Y_train.shape[1])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(Y_predict)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -1,251 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AutoML 11: Sample weight\n",
"\n",
"In this example we use the scikit learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) to showcase how you can use sample weight with the AutoML Classifier.\n",
"Sample weight is used where some sample values are more important than others.\n",
"\n",
"Make sure you have executed the [00.configuration](00.configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you would see\n",
"1. How to specifying sample_weight\n",
"2. The difference that it makes to test results\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Experiment\n",
"\n",
"As part of the setup you have already created a <b>Workspace</b>. For AutoML you would need to create an <b>Experiment</b>. An <b>Experiment</b> is a named object in a <b>Workspace</b>, which is used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import random\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# choose a name for experiment\n",
"experiment_name = 'non_sample_weight_experiment'\n",
"sample_weight_experiment_name = 'sample_weight_experiment'\n",
"\n",
"# project folder\n",
"project_folder = './sample_projects/automl-local-classification'\n",
"\n",
"experiment=Experiment(ws, experiment_name)\n",
"sample_weight_experiment=Experiment(ws, sample_weight_experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace Name'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data = output, index = ['']).T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"set_diagnostics_collection(send_diagnostics=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiate Auto ML Config\n",
"\n",
"Instantiate two AutoMLConfig Objects. One will be used with sample_weight and one without."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"digits = datasets.load_digits()\n",
"X_digits = digits.data[100:,:]\n",
"y_digits = digits.target[100:]\n",
"\n",
"# The example makes the sample weight 0.9 for the digit 4 and 0.1 for all other digits.\n",
"# This makes the model more likely to classify as 4 if the image it not clear.\n",
"sample_weight = np.array([(0.9 if x == 4 else 0.01) for x in y_digits])\n",
"\n",
"automl_classifier = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" primary_metric = 'AUC_weighted',\n",
" max_time_sec = 3600,\n",
" iterations = 10,\n",
" n_cross_validations = 2,\n",
" verbosity = logging.INFO,\n",
" X = X_digits, \n",
" y = y_digits,\n",
" path=project_folder)\n",
"\n",
"automl_sample_weight = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" primary_metric = 'AUC_weighted',\n",
" max_time_sec = 3600,\n",
" iterations = 10,\n",
" n_cross_validations = 2,\n",
" verbosity = logging.INFO,\n",
" X = X_digits, \n",
" y = y_digits,\n",
" sample_weight = sample_weight,\n",
" path=project_folder)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Training the Models\n",
"\n",
"Call the submit method on the experiment and pass the configuration. For Local runs the execution is synchronous. Depending on the data and number of iterations this can run for while.\n",
"You will see the currently running iterations printing to the console."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = experiment.submit(automl_classifier, show_output=True)\n",
"sample_weight_run = sample_weight_experiment.submit(automl_sample_weight, show_output=True)\n",
"\n",
"best_run, fitted_model = local_run.get_output()\n",
"best_run_sample_weight, fitted_model_sample_weight = sample_weight_run.get_output()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Testing the Fitted Models\n",
"\n",
"#### Load Test Data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"digits = datasets.load_digits()\n",
"X_digits = digits.data[:100, :]\n",
"y_digits = digits.target[:100]\n",
"images = digits.images[:100]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Compare the pipelines\n",
"The prediction from the sample weight model is more likely to correctly predict 4's. However, it is also more likely to predict 4 for some images that are not labelled as 4."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#Randomly select digits and test\n",
"for index in range(0,len(y_digits)):\n",
" predicted = fitted_model.predict(X_digits[index:index + 1])[0]\n",
" predicted_sample_weight = fitted_model_sample_weight.predict(X_digits[index:index + 1])[0]\n",
" label = y_digits[index]\n",
" if predicted == 4 or predicted_sample_weight == 4 or label == 4:\n",
" title = \"Label value = %d Predicted value = %d Prediced with sample weight = %d\" % ( label,predicted,predicted_sample_weight)\n",
" fig = plt.figure(1, figsize=(3,3))\n",
" ax1 = fig.add_axes((0,0,.8,.8))\n",
" ax1.set_title(title)\n",
" plt.imshow(images[index], cmap=plt.cm.gray_r, interpolation='nearest')\n",
" plt.show()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -1,240 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AutoML 12: Retrieving Training SDK Versions"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import random\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun\n",
"from azureml.train.automl.utilities import get_sdk_dependencies"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"set_diagnostics_collection(send_diagnostics=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 1. Retrieve the SDK versions in the current env"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To retrieve the SDK versions in the current env, simple running get_sdk_dependencies()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"get_sdk_dependencies()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 2. Training Model Using AutoML"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# choose a name for experiment\n",
"experiment_name = 'automl-local-classification'\n",
"# project folder\n",
"project_folder = './sample_projects/automl-local-classification'\n",
"\n",
"experiment=Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data=output, index=['']).T"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"digits = datasets.load_digits()\n",
"X_digits = digits.data[10:,:]\n",
"y_digits = digits.target[10:]\n",
"\n",
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log='automl_errors.log',\n",
" primary_metric='AUC_weighted',\n",
" iterations=3,\n",
" n_cross_validations=2,\n",
" verbosity=logging.INFO,\n",
" X = X_digits, \n",
" y = y_digits,\n",
" path=project_folder)\n",
"\n",
"local_run = experiment.submit(automl_config, show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 3. Retrieve the SDK versions from RunHistory"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To get the SDK versions from RunHistory, first the RunId need to be recorded. This can either be done by copy it from the output message or retieve if after each run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run_id = local_run.id\n",
"print(run_id)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Initialize a new AutoMLRunClass."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"experiment_name = 'automl-local-classification'\n",
"#run_id = 'AutoML_c0585b1f-a0e6-490b-84c7-3a099468b28e'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"ml_run = AutoMLRun(experiment=experiment, run_id=run_id)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Get parent training SDK versions."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ml_run.get_run_sdk_dependencies()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Get the traning SDK versions of a specific run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ml_run.get_run_sdk_dependencies(iteration=2)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -1,567 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AutoML 13: Prepare Data using `azureml.dataprep`\n",
"In this example we showcase how you can use `azureml.dataprep` SDK to load and prepare data for AutoML. `azureml.dataprep` can also be used standalone - full documentation can be found [here](https://github.com/Microsoft/PendletonDocs).\n",
"\n",
"Make sure you have executed the [setup](00.configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you would see\n",
"1. Defining data loading and preparation steps in a `Dataflow` using `azureml.dataprep`\n",
"2. Passing the `Dataflow` to AutoML for local run\n",
"3. Passing the `Dataflow` to AutoML for remote run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Install `azureml.dataprep` SDK"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Please restart your kernel after the below installs.\n",
"\n",
"Tornado must be downgraded to a pre-5 version due to a known Tornado x Jupyter event loop bug."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install azureml-dataprep\n",
"!pip install tornado==4.5.1"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"set_diagnostics_collection(send_diagnostics = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Experiment\n",
"\n",
"As part of the setup you have already created a <b>Workspace</b>. For AutoML you would need to create an <b>Experiment</b>. An <b>Experiment</b> is a named object in a <b>Workspace</b>, which is used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"\n",
"import pandas as pd\n",
"\n",
"import azureml.core\n",
"from azureml.core.compute import DsvmCompute\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.runconfig import CondaDependencies\n",
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core.workspace import Workspace\n",
"import azureml.dataprep as dprep\n",
"from azureml.train.automl import AutoMLConfig"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
" \n",
"# choose a name for experiment\n",
"experiment_name = 'automl-dataprep-classification'\n",
"# project folder\n",
"project_folder = './sample_projects/automl-dataprep-classification'\n",
" \n",
"experiment = Experiment(ws, experiment_name)\n",
" \n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace Name'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data = output, index = ['']).T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Loading Data using DataPrep"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# You can use `smart_read_file` which intelligently figures out delimiters and datatypes of a file\n",
"# data pulled from sklearn.datasets.load_digits()\n",
"simple_example_data_root = 'https://dprepdata.blob.core.windows.net/automl-notebook-data/'\n",
"X = dprep.smart_read_file(simple_example_data_root + 'X.csv').skip(1) # remove header\n",
"\n",
"# You can also use `read_csv` and `to_*` transformations to read (with overridable delimiter).\n",
"# and convert column types manually.\n",
"# Here we read a comma delimited file and convert all columns to integers.\n",
"y = dprep.read_csv(simple_example_data_root + 'y.csv').to_long(dprep.ColumnSelector(term='.*', use_regex = True))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Review the Data Preparation Result\n",
"\n",
"You can peek the result of a Dataflow at any range using `skip(i)` and `head(j)`. Doing so evaluates only `j` records for all the steps in the Dataflow, which makes it fast even against large dataset."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"X.skip(1).head(5)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiate AutoML Settings\n",
"\n",
"This creates a general Auto ML Settings applicable for both Local and Remote runs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_settings = {\n",
" \"max_time_sec\": 600,\n",
" \"iterations\": 2,\n",
" \"primary_metric\": 'AUC_weighted',\n",
" \"preprocess\": False,\n",
" \"verbosity\": logging.INFO,\n",
" \"n_cross_validations\" : 3\n",
"}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Local Run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Pass data with Dataflows\n",
"\n",
"The `Dataflow` objects captured above can be passed to `submit` method for local run. AutoML will retrieve the results from the `Dataflow` for model training."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" X = X,\n",
" y = y,\n",
" **automl_settings)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = experiment.submit(automl_config, show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Remote Run\n",
"*Note: This feature might not work properly in your workspace region before the October update. You may jump to the \"Exploring the results\" section below to explore other features AutoML and DataPrep has to offer.*"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create or Attach a Remote Linux DSVM"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"dsvm_name = 'mydsvm'\n",
"try:\n",
" dsvm_compute = DsvmCompute(ws, dsvm_name)\n",
" print('found existing dsvm.')\n",
"except:\n",
" print('creating new dsvm.')\n",
" dsvm_config = DsvmCompute.provisioning_configuration(vm_size = \"Standard_D2_v2\")\n",
" dsvm_compute = DsvmCompute.create(ws, name = dsvm_name, provisioning_configuration = dsvm_config)\n",
" dsvm_compute.wait_for_completion(show_output = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Update Conda Dependency file to have AutoML and DataPrep SDK\n",
"\n",
"Currently AutoML and DataPrep SDK is not installed with Azure ML SDK by default. Due to this we update the conda dependency file to add such dependencies."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"cd = CondaDependencies()\n",
"cd.add_pip_package(pip_package='azureml-dataprep')\n",
"cd.add_pip_package(pip_package='tornado==4.5.1')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create a RunConfiguration with DSVM name"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run_config = RunConfiguration(conda_dependencies=cd)\n",
"run_config.target = dsvm_compute\n",
"run_config.auto_prepare_environment = True"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Pass data with Dataflows\n",
"\n",
"The `Dataflow` objects captured above can also be passed to `submit` method for remote run. AutoML will serialize the `Dataflow` and send to remote compute target. The `Dataflow` will not be evaluated locally."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" path = project_folder,\n",
" run_configuration = run_config,\n",
" X = X,\n",
" y = y,\n",
" **automl_settings)\n",
"# Please uncomment the line below to try out remote run with dataprep. \n",
"# This feature might not work properly in your workspace region before the October update.\n",
"# remote_run = experiment.submit(automl_config, show_output = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exploring the results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Widget for monitoring runs\n",
"\n",
"The widget will sit on \"loading\" until the first iteration completed, then you will see an auto-updating graph and table show up. It refreshed once per minute, so you should see the graph update as child runs complete.\n",
"\n",
"NOTE: The widget displays a link at the bottom. This links to a web-ui to explore the individual run details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.train.widgets import RunDetails\n",
"RunDetails(local_run).show() "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Retrieve all child runs\n",
"You can also use SDK methods to fetch all the child runs and see individual metrics that we log."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"children = list(local_run.get_children())\n",
"metricslist = {}\n",
"for run in children:\n",
" properties = run.get_properties()\n",
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}\n",
" metricslist[int(properties['iteration'])] = metrics\n",
" \n",
"import pandas as pd\n",
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
"rundata"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model\n",
"\n",
"Below we select the best pipeline from our iterations. The *get_output* method on automl_classifier returns the best run and the fitted model for the last *fit* invocation. There are overloads on *get_output* that allow you to retrieve the best run and fitted model for *any* logged metric or a particular *iteration*."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = local_run.get_output()\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Best Model based on any other metric\n",
"Give me the run and the model that has the smallest `log_loss`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"lookup_metric = \"log_loss\"\n",
"best_run, fitted_model = local_run.get_output(metric = lookup_metric)\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Best Model based on any iteration\n",
"Give me the run and the model from the 1st iteration:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"iteration = 0\n",
"best_run, fitted_model = local_run.get_output(iteration = iteration)\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Testing the Fitted Model \n",
"\n",
"#### Load Test Data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn import datasets\n",
"\n",
"digits = datasets.load_digits()\n",
"X_digits = digits.data[:10, :]\n",
"y_digits = digits.target[:10]\n",
"images = digits.images[:10]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Testing our best pipeline\n",
"We will try to predict 2 digits and see how our model works."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#Randomly select digits and test\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import random\n",
"import numpy as np\n",
"\n",
"for index in np.random.choice(len(y_digits), 2):\n",
" print(index)\n",
" predicted = fitted_model.predict(X_digits[index:index + 1])[0]\n",
" label = y_digits[index]\n",
" title = \"Label value = %d Predicted value = %d \" % ( label,predicted)\n",
" fig = plt.figure(1, figsize=(3,3))\n",
" ax1 = fig.add_axes((0,0,.8,.8))\n",
" ax1.set_title(title)\n",
" plt.imshow(images[index], cmap=plt.cm.gray_r, interpolation='nearest')\n",
" plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Appendix"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Capture the Dataflows to use for AutoML later\n",
"\n",
"`Dataflow` objects are immutable. Each of them is composed of a list of data preparation steps. A `Dataflow` can be branched at any point for further usage."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# sklearn.digits.data + target\n",
"digits_complete = dprep.smart_read_file('https://dprepdata.blob.core.windows.net/automl-notebook-data/digits-complete.csv')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"`digits_complete` (sourced from `sklearn.datasets.load_digits()`)is forked into `dflow_X` to capture all the feature columns and `dflow_y` to capture the label column."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"digits_complete.to_pandas_dataframe().shape\n",
"labels_column = 'Column64'\n",
"dflow_X = digits_complete.drop_columns(columns = [labels_column])\n",
"dflow_y = digits_complete.keep_columns(columns = [labels_column])"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -1,264 +0,0 @@
# Table of Contents
1. [Automated ML Introduction](#introduction)
1. [Running samples in Azure Notebooks](#jupyter)
1. [Running samples in a Local Conda environment](#localconda)
1. [Automated ML SDK Sample Notebooks](#samples)
1. [Documentation](#documentation)
1. [Running using python command](#pythoncommand)
1. [Troubleshooting](#troubleshooting)
<a name="introduction"></a>
# Automated ML introduction
Automated machine learning (automated ML) builds high quality machine learning models for you by automating model and hyperparameter selection. Bring a labelled dataset that you want to build a model for, and automated ML will give you a high quality machine learning model that you can use for predictions.
If you are new to data science, AutoML will help you get started by simplifying machine learning model building. It abstracts away model selection and hyperparameter selection, and in one step creates a high quality trained model for you to use.
If you are an experienced data scientist, AutoML will help increase your productivity by intelligently performing the model and hyperparameter selection for your training, generating high quality models much more quickly than manually specifying several combinations of parameters and running training jobs. AutoML provides visibility and access to all the training jobs and the performance characteristics of the models, to help you further tune the pipeline if you desire.
<a name="jupyter"></a>
## Running samples in Azure Notebooks - Jupyter based notebooks in the Azure cloud
1. [![Azure Notebooks](https://notebooks.azure.com/launch.png)](https://aka.ms/aml-clone-azure-notebooks)
[Import sample notebooks ](https://aka.ms/aml-clone-azure-notebooks) into Azure Notebooks if they are not already there.
1. Create a workspace and its configuration file (**config.json**) using [these instructions](https://aka.ms/aml-how-to-configure-environment).
1. Select `+New` in the Azure Notebook toolbar to add your **config.json** file to the imported folder.
![upload config file to notebook folder](../images/additems.png)
1. Open the notebook.
**Make sure the Azure Notebook kernel is set to `Python 3.6`** when you open a notebook.
![set kernel to Python 3.6](../images/python36.png)
<a name="localconda"></a>
## Running samples in a Local Conda environment
To run these notebooks on your own notebook server, use these installation instructions.
The instructions below will install everything you need and then start a Jupyter notebook. To start your Jupyter notebook manually, use:
```
conda activate azure_automl
jupyter notebook
```
or on Mac:
```
source activate azure_automl
jupyter notebook
```
### 1. Install mini-conda from [here](https://conda.io/miniconda.html), choose Python 3.7 or higher.
- **Note**: if you already have conda installed, you can keep using it but it should be version 4.4.10 or later (as shown by: conda -V). If you have a previous version installed, you can update it using the command: conda update conda.
There's no need to install mini-conda specifically.
### 2. Downloading the sample notebooks
- Download the sample notebooks from [GitHub](https://github.com/Azure/MachineLearningNotebooks) as zip and extract the contents to a local directory. The AutoML sample notebooks are in the "automl" folder.
### 3. Setup a new conda environment
The **automl/automl_setup** script creates a new conda environment, installs the necessary packages, configures the widget and starts a jupyter notebook.
It takes the conda environment name as an optional parameter. The default conda environment name is azure_automl. The exact command depends on the operating system. It can take about 30 minutes to execute.
## Windows
Start a conda command window, cd to the **automl** folder where the sample notebooks were extracted, and then run:
```
automl_setup
```
## Mac
Install "Command line developer tools" if it is not already installed (you can use the command: `xcode-select --install`).
Start a Terminal window, cd to the **automl** folder where the sample notebooks were extracted, and then run:
```
bash automl_setup_mac.sh
```
## Linux
cd to the **automl** folder where the sample notebooks were extracted and then run:
```
bash automl_setup_linux.sh
```
### 4. Running configuration.ipynb
- Before running any samples, you first need to run the configuration notebook. Click on the 00.configuration.ipynb notebook.
- Execute the cells in the notebook to register the Machine Learning Services Resource Provider and create a workspace. (*instructions in the notebook*)
### 5. Running Samples
- Please make sure you use the Python [conda env:azure_automl] kernel when trying the sample Notebooks.
- Follow the instructions in the individual notebooks to explore various features in AutoML
<a name="samples"></a>
# Automated ML SDK Sample Notebooks
- [00.configuration.ipynb](00.configuration.ipynb)
- Register Machine Learning Services Resource Provider
- Create new Azure ML Workspace
- Save Workspace configuration file
- [01.auto-ml-classification.ipynb](01.auto-ml-classification.ipynb)
- Dataset: scikit learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
- Simple example of using Auto ML for classification
- Uses local compute for training
- [02.auto-ml-regression.ipynb](02.auto-ml-regression.ipynb)
- Dataset: scikit learn's [diabetes dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_diabetes.html)
- Simple example of using Auto ML for regression
- Uses local compute for training
- [03.auto-ml-remote-execution.ipynb](03.auto-ml-remote-execution.ipynb)
- Dataset: scikit learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
- Example of using Auto ML for classification using a remote linux DSVM for training
- Parallel execution of iterations
- Async tracking of progress
- Cancelling individual iterations or entire run
- Retrieving models for any iteration or logged metric
- Specify automl settings as kwargs
- [03b.auto-ml-remote-batchai.ipynb](03b.auto-ml-remote-batchai.ipynb)
- Dataset: scikit learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
- Example of using automated ML for classification using a remote Batch AI compute for training
- Parallel execution of iterations
- Async tracking of progress
- Cancelling individual iterations or entire run
- Retrieving models for any iteration or logged metric
- Specify automl settings as kwargs
- [04.auto-ml-remote-execution-text-data-blob-store.ipynb](04.auto-ml-remote-execution-text-data-blob-store.ipynb)
- Dataset: [Burning Man 2016 dataset](https://innovate.burningman.org/datasets-page/)
- handling text data with preprocess flag
- Reading data from a blob store for remote executions
- using pandas dataframes for reading data
- [05.auto-ml-missing-data-blacklist-early-termination.ipynb](05.auto-ml-missing-data-blacklist-early-termination.ipynb)
- Dataset: scikit learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
- Blacklist certain pipelines
- Specify a target metric to indicate stopping criteria
- Handling Missing Data in the input
- [06.auto-ml-sparse-data-custom-cv-split.ipynb](06.auto-ml-sparse-data-custom-cv-split.ipynb)
- Dataset: Scikit learn's [20newsgroup](http://scikit-learn.org/stable/datasets/twenty_newsgroups.html)
- Handle sparse datasets
- Specify custom train and validation set
- [07.auto-ml-exploring-previous-runs.ipynb](07.auto-ml-exploring-previous-runs)
- List all projects for the workspace
- List all AutoML Runs for a given project
- Get details for an AutoML Run (AutoML settings, run widget & all metrics)
- Download the fitted pipeline for any iteration
- [08.auto-ml-remote-execution-with-text-file-on-DSVM](08.auto-ml-remote-execution-with-text-file-on-DSVM.ipynb)
- Dataset: scikit learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
- Download the data and store it in the DSVM to improve performance.
- [09.auto-ml-classification-with-deployment.ipynb](09.auto-ml-classification-with-deployment.ipynb)
- Dataset: scikit learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
- Simple example of using Auto ML for classification
- Registering the model
- Creating Image and creating aci service
- Testing the aci service
- [10.auto-ml-multi-output-example.ipynb](10.auto-ml-multi-output-example.ipynb)
- Dataset: scikit learn's [random multi-output regression example](http://scikit-learn.org/stable/auto_examples/ensemble/plot_random_forest_regression_multioutput.html#sphx-glr-auto-examples-ensemble-plot-random-forest-regression-multioutput-py)
- Simple example of using Auto ML for multi output regression
- Handles both dense and sparse matrices
- [11.auto-ml-sample-weight.ipynb](11.auto-ml-sample-weight.ipynb)
- How to specify sample_weight
- The difference that it makes to test results
- [12.auto-ml-retrieve-the-training-sdk-versions.ipynb](12.auto-ml-retrieve-the-training-sdk-versions.ipynb)
- How to get current and training env SDK versions
- [13.auto-ml-dataprep.ipynb](13.auto-ml-dataprep.ipynb)
- Using DataPrep for reading data
<a name="documentation"></a>
# Documentation
## Table of Contents
1. [Automated ML Settings ](#automlsettings)
1. [Cross validation split options](#cvsplits)
1. [Get Data Syntax](#getdata)
1. [Data pre-processing and featurization](#preprocessing)
<a name="automlsettings"></a>
## Automated ML Settings
|Property|Description|Default|
|-|-|-|
|**primary_metric**|This is the metric that you want to optimize.<br><br> Classification supports the following primary metrics <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>balanced_accuracy</i><br><i>average_precision_score_weighted</i><br><i>precision_score_weighted</i><br><br> Regression supports the following primary metrics <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i><br><i>normalized_root_mean_squared_log_error</i>| Classification: accuracy <br><br> Regression: spearman_correlation
|**max_time_sec**|Time limit in seconds for each iteration|None|
|**iterations**|Number of iterations. Each iteration trains the data with a specific pipeline. To get the best result, use at least 100.|100|
|**n_cross_validations**|Number of cross validation splits|None|
|**validation_size**|Size of validation set as percentage of all training samples|None|
|**concurrent_iterations**|Max number of iterations that would be executed in parallel|1|
|**preprocess**|*True/False* <br>Setting this to *True* enables preprocessing <br>on the input to handle missing data, and perform some common feature extraction<br>*Note: If input data is Sparse you cannot use preprocess=True*|False|
|**max_cores_per_iteration**| Indicates how many cores on the compute target would be used to train a single pipeline.<br> You can set it to *-1* to use all cores|1|
|**exit_score**|*double* value indicating the target for *primary_metric*. <br> Once the target is surpassed the run terminates|None|
|**blacklist_algos**|*Array* of *strings* indicating pipelines to ignore for Auto ML.<br><br> Allowed values for **Classification**<br><i>LogisticRegression</i><br><i>SGDClassifierWrapper</i><br><i>NBWrapper</i><br><i>BernoulliNB</i><br><i>SVCWrapper</i><br><i>LinearSVMWrapper</i><br><i>KNeighborsClassifier</i><br><i>DecisionTreeClassifier</i><br><i>RandomForestClassifier</i><br><i>ExtraTreesClassifier</i><br><i>gradient boosting</i><br><i>LightGBMClassifier</i><br><br>Allowed values for **Regression**<br><i>ElasticNet</i><br><i>GradientBoostingRegressor</i><br><i>DecisionTreeRegressor</i><br><i>KNeighborsRegressor</i><br><i>LassoLars</i><br><i>SGDRegressor</i><br><i>RandomForestRegressor</i><br><i>ExtraTreesRegressor</i>|None|
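As a quick illustration, here is a minimal sketch that passes a few of these settings to `AutoMLConfig`. The training data variables (`X_train`, `y_train`) are placeholders and the values are only examples, not recommendations:
```
automl_settings = {
    "max_time_sec": 600,            # time limit in seconds for each iteration
    "iterations": 20,               # number of pipelines to evaluate
    "primary_metric": 'AUC_weighted',
    "preprocess": True,             # enable automatic preprocessing (dense input only)
    "max_cores_per_iteration": -1,  # use all cores on the compute target
    "blacklist_algos": ['KNeighborsClassifier']
}

automl_config = AutoMLConfig(task = 'classification',
                             X = X_train,
                             y = y_train,
                             **automl_settings)
```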
<a name="cvsplits"></a>
## Cross validation split options
### K-Folds Cross Validation
Use the *n_cross_validations* setting to specify the number of cross validations. The training data set will be randomly split into *n_cross_validations* folds of equal size. During each cross validation round, one of the folds will be used for validation of the model trained on the remaining folds. This process repeats for *n_cross_validations* rounds until each fold has been used once as the validation set. Finally, the average scores across all *n_cross_validations* rounds will be reported, and the corresponding model will be retrained on the whole training data set.
### Monte Carlo Cross Validation (a.k.a. Repeated Random Sub-Sampling)
Use *validation_size* to specify the percentage of the training data set that should be used for validation, and use *n_cross_validations* to specify the number of cross validations. During each cross validation round, a subset of size *validation_size* will be randomly selected for validation of the model trained on the remaining data. Finally, the average scores across all *n_cross_validations* rounds will be reported, and the corresponding model will be retrained on the whole training data set.
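As a sketch, the two options above map to the following `AutoMLConfig` arguments. The values are illustrative, and *validation_size* is assumed here to be given as a fraction of the training samples:
```
# K-Folds: 5-fold cross validation
automl_config = AutoMLConfig(task = 'classification',
                             X = X_train,
                             y = y_train,
                             n_cross_validations = 5)

# Monte Carlo: 3 rounds, each holding out a random 20% of the data for validation
automl_config = AutoMLConfig(task = 'classification',
                             X = X_train,
                             y = y_train,
                             n_cross_validations = 3,
                             validation_size = 0.2)
```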
### Custom train and validation set
You can specify separate train and validation sets either through get_data() or directly in the fit method; see the get_data() sketch in the next section.
<a name="getdata"></a>
## get_data() syntax
The *get_data()* function can be used to return a dictionary with these values:
|Key|Type|Dependency|Mutually Exclusive with|Description|
|:-|:-|:-|:-|:-|
|X|Pandas Dataframe or Numpy Array|y|data_train, label, columns|All features to train with|
|y|Pandas Dataframe or Numpy Array|X|label|Label data to train with. For classification, this should be an array of integers. |
|X_valid|Pandas Dataframe or Numpy Array|X, y, y_valid|data_train, label|*Optional* All features to validate with. If this is not specified, X is split between train and validate|
|y_valid|Pandas Dataframe or Numpy Array|X, y, X_valid|data_train, label|*Optional* The label data to validate with. If this is not specified, y is split between train and validate|
|sample_weight|Pandas Dataframe or Numpy Array|y|data_train, label, columns|*Optional* A weight value for each label. Higher values indicate that the sample is more important.|
|sample_weight_valid|Pandas Dataframe or Numpy Array|y_valid|data_train, label, columns|*Optional* A weight value for each validation label. Higher values indicate that the sample is more important. If this is not specified, sample_weight is split between train and validate|
|data_train|Pandas Dataframe|label|X, y, X_valid, y_valid|All data (features+label) to train with|
|label|string|data_train|X, y, X_valid, y_valid|Which column in data_train represents the label|
|columns|Array of strings|data_train||*Optional* Whitelist of columns to use for features|
|cv_splits_indices|Array of integers|data_train||*Optional* List of indexes to split the data for cross validation|
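A minimal sketch of a get_data() script that returns training data plus a custom validation set. The digits dataset is just the one used throughout these samples, and the 100-row split is illustrative:
```
from sklearn import datasets

def get_data():
    digits = datasets.load_digits()
    # hold out the first 100 samples as a custom validation set
    return {
        "X": digits.data[100:, :],
        "y": digits.target[100:],
        "X_valid": digits.data[:100, :],
        "y_valid": digits.target[:100]
    }
```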
<a name="preprocessing"></a>
## Data pre-processing and featurization
If you use `preprocess=True`, the following data preprocessing steps are performed automatically for you:
1. Dropping high cardinality or no variance features
- Features with no useful information are dropped from training and validation sets. These include features with all values missing, same value across all rows or with extremely high cardinality (e.g., hashes, IDs or GUIDs).
2. Missing value imputation
- For numerical features, missing values are imputed with average of values in the column.
- For categorical features, missing values are imputed with most frequent value.
3. Generating additional features
- For DateTime features: Year, Month, Day, Day of week, Day of year, Quarter, Week of the year, Hour, Minute, Second.
- For Text features: Term frequency based on bi-grams and tri-grams, Count vectorizer.
4. Transformations and encodings
- Numeric features with very few unique values are transformed into categorical features.
<a name="pythoncommand"></a>
# Running using python command
Jupyter notebook provides a File / Download as / Python (.py) option for saving the notebook as a Python file.
You can then run this file using the python command.
However, on Windows the file needs to be modified before it can be run.
The following condition must be added to the main code in the file:
if __name__ == "__main__":
The main code of the file must be indented so that it is under this condition.
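For example, a notebook exported to a file such as automl_sample.py (the file name and the cells shown are illustrative) would be rearranged so that all of its top-level code sits under the condition:
```
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace

if __name__ == "__main__":
    # all of the notebook's main code is indented under this condition
    ws = Workspace.from_config()
    experiment = Experiment(ws, 'automl-local-classification')
    # ... remaining cells exported from the notebook ...
```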
<a name="troubleshooting"></a>
# Troubleshooting
## Iterations fail and the log contains "MemoryError"
This can be caused by insufficient memory on the DSVM. AutoML loads all training data into memory. So, the available memory should be more than the training data size.
If you are using a remote DSVM, memory is needed for each concurrent iteration. The concurrent_iterations setting specifies the maximum concurrent iterations. For example, if the training data size is 8 GB and concurrent_iterations is set to 10, the minimum memory required is at least 80 GB.
To resolve this issue, allocate a DSVM with more memory or reduce the value specified for concurrent_iterations.
## Iterations show as "Not Responding" in the RunDetails widget.
This can be caused by too many concurrent iterations for a remote DSVM. Each concurrent iteration usually takes 100% of a core when it is running. Some iterations can use multiple cores. So, the concurrent_iterations setting should always be less than the number of cores of the DSVM.
To resolve this issue, try reducing the value specified for the concurrent_iterations setting.
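Both issues come down to the same setting; a sketch of a more conservative settings dictionary (values illustrative):
```
automl_settings = {
    "iterations": 20,
    "primary_metric": 'AUC_weighted',
    "concurrent_iterations": 2   # keep this well below the DSVM's core count and memory budget
}
```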


@@ -1,20 +0,0 @@
name: azure_automl
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- python=3.6
- nb_conda
- matplotlib
- numpy>=1.11.0,<1.16.0
- scipy>=0.19.0,<0.20.0
- scikit-learn>=0.18.0,<=0.19.1
- pandas>=0.19.0,<0.23.0
- pip:
# Required packages for AzureML execution, history, and data preparation.
- --extra-index-url https://pypi.python.org/simple
- azureml-sdk[automl]
- azureml-train-widgets
- azure-cli
- pandas_ml


@@ -1,41 +0,0 @@
@echo off
set conda_env_name=%1
IF "%conda_env_name%"=="" SET conda_env_name="azure_automl"
call conda activate %conda_env_name% 2>nul:
if not errorlevel 1 (
call conda env update --file automl_env.yml -n %conda_env_name%
if errorlevel 1 goto ErrorExit
) else (
call conda env create -f automl_env.yml -n %conda_env_name%
)
call conda activate %conda_env_name% 2>nul:
if errorlevel 1 goto ErrorExit
call pip install psutil
call jupyter nbextension install --py azureml.train.widgets
if errorlevel 1 goto ErrorExit
call jupyter nbextension enable --py azureml.train.widgets
if errorlevel 1 goto ErrorExit
echo.
echo.
echo ***************************************
echo * AutoML setup completed successfully *
echo ***************************************
echo.
echo Starting jupyter notebook - please run notebook 00.configuration
echo.
jupyter notebook --log-level=50
goto End
:ErrorExit
echo Install failed
:End


@@ -1,34 +0,0 @@
#!/bin/bash
CONDA_ENV_NAME=$1
if [ "$CONDA_ENV_NAME" == "" ]
then
CONDA_ENV_NAME="azure_automl"
fi
if source activate $CONDA_ENV_NAME 2> /dev/null
then
conda env update --file automl_env.yml -n $CONDA_ENV_NAME
else
conda env create -f automl_env.yml -n $CONDA_ENV_NAME &&
source activate $CONDA_ENV_NAME &&
jupyter nbextension install --py azureml.train.widgets --user &&
jupyter nbextension enable --py azureml.train.widgets --user &&
echo "" &&
echo "" &&
echo "***************************************" &&
echo "* AutoML setup completed successfully *" &&
echo "***************************************" &&
echo "" &&
echo "Starting jupyter notebook - please run notebook 00.configuration" &&
echo "" &&
jupyter notebook --log-level=50
fi
if [ $? -gt 0 ]
then
echo "Installation failed"
fi


@@ -1,35 +0,0 @@
#!/bin/bash
CONDA_ENV_NAME=$1
if [ "$CONDA_ENV_NAME" == "" ]
then
CONDA_ENV_NAME="azure_automl"
fi
if source activate $CONDA_ENV_NAME 2> /dev/null
then
conda env update --file automl_env.yml -n $CONDA_ENV_NAME
else
conda env create -f automl_env.yml -n $CONDA_ENV_NAME &&
source activate $CONDA_ENV_NAME &&
conda install lightgbm -c conda-forge -y &&
jupyter nbextension install --py azureml.train.widgets --user &&
jupyter nbextension enable --py azureml.train.widgets --user &&
echo "" &&
echo "" &&
echo "***************************************" &&
echo "* AutoML setup completed successfully *" &&
echo "***************************************" &&
echo "" &&
echo "Starting jupyter notebook - please run notebook 00.configuration" &&
echo "" &&
jupyter notebook --log-level=50
fi
if [ $? -gt 0 ]
then
echo "Installation failed"
fi

configuration.ipynb Normal file

@@ -0,0 +1,389 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/configuration.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Configuration\n",
"\n",
"_**Setting up your Azure Machine Learning services workspace and configuring your notebook library**_\n",
"\n",
"---\n",
"---\n",
"\n",
"## Table of Contents\n",
"\n",
"1. [Introduction](#Introduction)\n",
" 1. What is an Azure Machine Learning workspace\n",
"1. [Setup](#Setup)\n",
" 1. Azure subscription\n",
" 1. Azure ML SDK and other library installation\n",
" 1. Azure Container Instance registration\n",
"1. [Configure your Azure ML Workspace](#Configure%20your%20Azure%20ML%20workspace)\n",
" 1. Workspace parameters\n",
" 1. Access your workspace\n",
" 1. Create a new workspace\n",
" 1. Create compute resources\n",
"1. [Next steps](#Next%20steps)\n",
"\n",
"---\n",
"\n",
"## Introduction\n",
"\n",
"This notebook configures your library of notebooks to connect to an Azure Machine Learning (ML) workspace. In this case, a library contains all of the notebooks in the current folder and any nested folders. You can configure this notebook library to use an existing workspace or create a new workspace.\n",
"\n",
"Typically you will need to run this notebook only once per notebook library as all other notebooks will use connection information that is written here. If you want to redirect your notebook library to work with a different workspace, then you should re-run this notebook.\n",
"\n",
"In this notebook you will\n",
"* Learn about getting an Azure subscription\n",
"* Specify your workspace parameters\n",
"* Access or create your workspace\n",
"* Add a default compute cluster for your workspace\n",
"\n",
"### What is an Azure Machine Learning workspace\n",
"\n",
"An Azure ML Workspace is an Azure resource that organizes and coordinates the actions of many other Azure resources to assist in executing and sharing machine learning workflows. In particular, an Azure ML Workspace coordinates storage, databases, and compute resources providing added functionality for machine learning experimentation, deployment, inference, and the monitoring of deployed models."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"This section describes activities required before you can access any Azure ML services functionality."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 1. Azure Subscription\n",
"\n",
"In order to create an Azure ML Workspace, first you need access to an Azure subscription. An Azure subscription allows you to manage storage, compute, and other assets in the Azure cloud. You can [create a new subscription](https://azure.microsoft.com/en-us/free/) or access existing subscription information from the [Azure portal](https://portal.azure.com). Later in this notebook you will need information such as your subscription ID in order to create and access AML workspaces.\n",
"\n",
"### 2. Azure ML SDK and other library installation\n",
"\n",
"If you are running in your own environment, follow [SDK installation instructions](https://docs.microsoft.com/azure/machine-learning/service/how-to-configure-environment). If you are running in Azure Notebooks or another Microsoft managed environment, the SDK is already installed.\n",
"\n",
"Also install following libraries to your environment. Many of the example notebooks depend on them\n",
"\n",
"```\n",
"(myenv) $ conda install -y matplotlib tqdm scikit-learn\n",
"```\n",
"\n",
"Once installation is complete, the following cell checks the Azure ML SDK version:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"install"
]
},
"outputs": [],
"source": [
"import azureml.core\n",
"\n",
"print(\"This notebook was created using version AZUREML-SDK-VERSION of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you are using an older version of the SDK then this notebook was created using, you should upgrade your SDK.\n",
"\n",
"### 3. Azure Container Instance registration\n",
"Azure Machine Learning uses of [Azure Container Instance (ACI)](https://azure.microsoft.com/services/container-instances) to deploy dev/test web services. An Azure subscription needs to be registered to use ACI. If you or the subscription owner have not yet registered ACI on your subscription, you will need to use the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) and execute the following commands. Note that if you ran through the AML [quickstart](https://docs.microsoft.com/en-us/azure/machine-learning/service/quickstart-get-started) you have already registered ACI. \n",
"\n",
"```shell\n",
"# check to see if ACI is already registered\n",
"(myenv) $ az provider show -n Microsoft.ContainerInstance -o table\n",
"\n",
"# if ACI is not registered, run this command.\n",
"# note you need to be the subscription owner in order to execute this command successfully.\n",
"(myenv) $ az provider register -n Microsoft.ContainerInstance\n",
"```\n",
"\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure your Azure ML workspace\n",
"\n",
"### Workspace parameters\n",
"\n",
"To use an AML Workspace, you will need to import the Azure ML SDK and supply the following information:\n",
"* Your subscription id\n",
"* A resource group name\n",
"* (optional) The region that will host your workspace\n",
"* A name for your workspace\n",
"\n",
"You can get your subscription ID from the [Azure portal](https://portal.azure.com).\n",
"\n",
"You will also need access to a [_resource group_](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-overview#resource-groups), which organizes Azure resources and provides a default region for the resources in a group. You can see what resource groups to which you have access, or create a new one in the [Azure portal](https://portal.azure.com). If you don't have a resource group, the create workspace command will create one for you using the name you provide.\n",
"\n",
"The region to host your workspace will be used if you are creating a new workspace. You do not need to specify this if you are using an existing workspace. You can find the list of supported regions [here](https://azure.microsoft.com/en-us/global-infrastructure/services/?products=machine-learning-service). You should pick a region that is close to your location or that contains your data.\n",
"\n",
"The name for your workspace is unique within the subscription and should be descriptive enough to discern among other AML Workspaces. The subscription may be used only by you, or it may be used by your department or your entire enterprise, so choose a name that makes sense for your situation.\n",
"\n",
"The following cell allows you to specify your workspace parameters. This cell uses the python method `os.getenv` to read values from environment variables which is useful for automation. If no environment variable exists, the parameters will be set to the specified default values. \n",
"\n",
"If you ran the Azure Machine Learning [quickstart](https://docs.microsoft.com/en-us/azure/machine-learning/service/quickstart-get-started) in Azure Notebooks, you already have a configured workspace! You can go to your Azure Machine Learning Getting Started library, view *config.json* file, and copy-paste the values for subscription ID, resource group and workspace name below.\n",
"\n",
"Replace the default values in the cell below with your workspace parameters"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"subscription_id = os.getenv(\"SUBSCRIPTION_ID\", default=\"<my-subscription-id>\")\n",
"resource_group = os.getenv(\"RESOURCE_GROUP\", default=\"<my-resource-group>\")\n",
"workspace_name = os.getenv(\"WORKSPACE_NAME\", default=\"<my-workspace-name>\")\n",
"workspace_region = os.getenv(\"WORKSPACE_REGION\", default=\"eastus2\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Access your workspace\n",
"\n",
"The following cell uses the Azure ML SDK to attempt to load the workspace specified by your parameters. If this cell succeeds, your notebook library will be configured to access the workspace from all notebooks using the `Workspace.from_config()` method. The cell can fail if the specified workspace doesn't exist or you don't have permissions to access it. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"try:\n",
" ws = Workspace(subscription_id = subscription_id, resource_group = resource_group, workspace_name = workspace_name)\n",
" # write the details of the workspace to a configuration file to the notebook library\n",
" ws.write_config()\n",
" print(\"Workspace configuration succeeded. Skip the workspace creation steps below\")\n",
"except:\n",
" print(\"Workspace not accessible. Change your parameters or create a new workspace below\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create a new workspace\n",
"\n",
"If you don't have an existing workspace and are the owner of the subscription or resource group, you can create a new workspace. If you don't have a resource group, the create workspace command will create one for you using the name you provide.\n",
"\n",
"**Note**: As with other Azure services, there are limits on certain resources (for example AmlCompute quota) associated with the Azure ML service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.\n",
"\n",
"This cell will create an Azure ML workspace for you in a subscription provided you have the correct permissions.\n",
"\n",
"This will fail if:\n",
"* You do not have permission to create a workspace in the resource group\n",
"* You do not have permission to create a resource group if it's non-existing.\n",
"* You are not a subscription owner or contributor and no Azure ML workspaces have ever been created in this subscription\n",
"\n",
"If workspace creation fails, please work with your IT admin to provide you with the appropriate permissions or to provision the required resources.\n",
"\n",
"**Note**: A Basic workspace is created by default. If you would like to create an Enterprise workspace, please specify sku = 'enterprise'.\n",
"Please visit our [pricing page](https://azure.microsoft.com/en-us/pricing/details/machine-learning/) for more details on our Enterprise edition.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"create workspace"
]
},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"# Create the workspace using the specified parameters\n",
"ws = Workspace.create(name = workspace_name,\n",
" subscription_id = subscription_id,\n",
" resource_group = resource_group, \n",
" location = workspace_region,\n",
" create_resource_group = True,\n",
" sku = 'basic',\n",
" exist_ok = True)\n",
"ws.get_details()\n",
"\n",
"# write the details of the workspace to a configuration file to the notebook library\n",
"ws.write_config()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create compute resources for your training experiments\n",
"\n",
"Many of the sample notebooks use Azure ML managed compute (AmlCompute) to train models using a dynamically scalable pool of compute. In this section you will create default compute clusters for use by the other notebooks and any other operations you choose.\n",
"\n",
"> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.\n",
"\n",
"To create a cluster, you need to specify a compute configuration that specifies the type of machine to be used and the scalability behaviors. Then you choose a name for the cluster that is unique within the workspace that can be used to address the cluster later.\n",
"\n",
"The cluster parameters are:\n",
"* vm_size - this describes the virtual machine type and size used in the cluster. All machines in the cluster are the same type. You can get the list of vm sizes available in your region by using the CLI command\n",
"\n",
"```shell\n",
"az vm list-skus -o tsv\n",
"```\n",
"* min_nodes - this sets the minimum size of the cluster. If you set the minimum to 0 the cluster will shut down all nodes while not in use. Setting this number to a value higher than 0 will allow for faster start-up times, but you will also be billed when the cluster is not in use.\n",
"* max_nodes - this sets the maximum size of the cluster. Setting this to a larger number allows for more concurrency and a greater distributed processing of scale-out jobs.\n",
"\n",
"\n",
"To create a **CPU** cluster now, run the cell below. The autoscale settings mean that the cluster will scale down to 0 nodes when inactive and up to 4 nodes when busy."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# Choose a name for your CPU cluster\n",
"cpu_cluster_name = \"cpu-cluster\"\n",
"\n",
"# Verify that cluster does not exist already\n",
"try:\n",
" cpu_cluster = ComputeTarget(workspace=ws, name=cpu_cluster_name)\n",
" print(\"Found existing cpu-cluster\")\n",
"except ComputeTargetException:\n",
" print(\"Creating new cpu-cluster\")\n",
" \n",
" # Specify the configuration for the new cluster\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size=\"STANDARD_D2_V2\",\n",
" min_nodes=0,\n",
" max_nodes=4)\n",
"\n",
" # Create the cluster with the specified name and configuration\n",
" cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config)\n",
" \n",
" # Wait for the cluster to complete, show the output log\n",
" cpu_cluster.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To create a **GPU** cluster, run the cell below. Note that your subscription must have sufficient quota for GPU VMs or the command will fail. To increase quota, see [these instructions](https://docs.microsoft.com/en-us/azure/azure-supportability/resource-manager-core-quotas-request). "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# Choose a name for your GPU cluster\n",
"gpu_cluster_name = \"gpu-cluster\"\n",
"\n",
"# Verify that cluster does not exist already\n",
"try:\n",
" gpu_cluster = ComputeTarget(workspace=ws, name=gpu_cluster_name)\n",
" print(\"Found existing gpu cluster\")\n",
"except ComputeTargetException:\n",
" print(\"Creating new gpu-cluster\")\n",
" \n",
" # Specify the configuration for the new cluster\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size=\"STANDARD_NC6\",\n",
" min_nodes=0,\n",
" max_nodes=4)\n",
" # Create the cluster with the specified name and configuration\n",
" gpu_cluster = ComputeTarget.create(ws, gpu_cluster_name, compute_config)\n",
"\n",
" # Wait for the cluster to complete, show the output log\n",
" gpu_cluster.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"## Next steps\n",
"\n",
"In this notebook you configured this notebook library to connect easily to an Azure ML workspace. You can copy this notebook to your own libraries to connect them to you workspace, or use it to bootstrap new workspaces completely.\n",
"\n",
"If you came here from another notebook, you can return there and complete that exercise, or you can try out the [Tutorials](./tutorials) or jump into \"how-to\" notebooks and start creating and deploying models. A good place to start is the [train within notebook](./how-to-use-azureml/training/train-within-notebook) example that walks through a simplified but complete end to end machine learning process."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"authors": [
{
"name": "ninhu"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

4
configuration.yml Normal file

@@ -0,0 +1,4 @@
name: configuration
dependencies:
- pip:
- azureml-sdk


@@ -1 +0,0 @@
{"cells":[{"cell_type":"markdown","source":["Azure ML & Azure Databricks notebooks by Parashar Shah.\n\nCopyright (c) Microsoft Corporation. All rights reserved.\n\nLicensed under the MIT License."],"metadata":{}},{"cell_type":"markdown","source":["Please ensure you have run this notebook before proceeding."],"metadata":{}},{"cell_type":"markdown","source":["Now we support installing AML SDK as library from GUI. When attaching a library follow this https://docs.databricks.com/user-guide/libraries.html and add the below string as your PyPi package (during private preview). You can select the option to attach the library to all clusters or just one cluster.\n\nProvide this full string to install the SDK:\n\nazureml-sdk[databricks]"],"metadata":{}},{"cell_type":"code","source":["import azureml.core\n\n# Check core SDK version number - based on build number of preview/master.\nprint(\"SDK version:\", azureml.core.VERSION)"],"metadata":{},"outputs":[],"execution_count":4},{"cell_type":"code","source":["subscription_id = \"<your-subscription-id>\"\nresource_group = \"<your-existing-resource-group>\"\nworkspace_name = \"<a-new-or-existing-workspace; it is unrelated to Databricks workspace>\"\nworkspace_region = \"<your-resource group-region>\""],"metadata":{},"outputs":[],"execution_count":5},{"cell_type":"code","source":["# import the Workspace class and check the azureml SDK version\n# exist_ok checks if workspace exists or not.\n\nfrom azureml.core import Workspace\n\nws = Workspace.create(name = workspace_name,\n subscription_id = subscription_id,\n resource_group = resource_group, \n location = workspace_region,\n exist_ok=True)\n\nws.get_details()"],"metadata":{},"outputs":[],"execution_count":6},{"cell_type":"code","source":["ws = Workspace(workspace_name = workspace_name,\n subscription_id = subscription_id,\n resource_group = resource_group)\n\n# persist the subscription id, resource group name, and workspace name in aml_config/config.json.\nws.write_config()"],"metadata":{},"outputs":[],"execution_count":7},{"cell_type":"code","source":["%sh\ncat /databricks/driver/aml_config/config.json"],"metadata":{},"outputs":[],"execution_count":8},{"cell_type":"code","source":["# import the Workspace class and check the azureml SDK version\nfrom azureml.core import Workspace\n\nws = Workspace.from_config()\nprint('Workspace name: ' + ws.name, \n 'Azure region: ' + ws.location, \n 'Subscription id: ' + ws.subscription_id, \n 'Resource group: ' + ws.resource_group, sep = '\\n')"],"metadata":{},"outputs":[],"execution_count":9},{"cell_type":"code","source":["dbutils.notebook.exit(\"success\")"],"metadata":{},"outputs":[],"execution_count":10},{"cell_type":"code","source":[""],"metadata":{},"outputs":[],"execution_count":11}],"metadata":{"name":"01.Installation_and_Configuration","notebookId":3874566296719377},"nbformat":4,"nbformat_minor":0}


@@ -1 +0,0 @@
{"cells":[{"cell_type":"markdown","source":["Azure ML & Azure Databricks notebooks by Parashar Shah.\n\nCopyright (c) Microsoft Corporation. All rights reserved.\n\nLicensed under the MIT License."],"metadata":{}},{"cell_type":"markdown","source":["Please ensure you have run all previous notebooks in sequence before running this."],"metadata":{}},{"cell_type":"markdown","source":["#Data Ingestion"],"metadata":{}},{"cell_type":"code","source":["import os\nimport urllib"],"metadata":{},"outputs":[],"execution_count":4},{"cell_type":"code","source":["# Download AdultCensusIncome.csv from Azure CDN. This file has 32,561 rows.\nbasedataurl = \"https://amldockerdatasets.azureedge.net\"\ndatafile = \"AdultCensusIncome.csv\"\ndatafile_dbfs = os.path.join(\"/dbfs\", datafile)\n\nif os.path.isfile(datafile_dbfs):\n print(\"found {} at {}\".format(datafile, datafile_dbfs))\nelse:\n print(\"downloading {} to {}\".format(datafile, datafile_dbfs))\n urllib.request.urlretrieve(os.path.join(basedataurl, datafile), datafile_dbfs)"],"metadata":{},"outputs":[],"execution_count":5},{"cell_type":"code","source":["# Create a Spark dataframe out of the csv file.\ndata_all = sqlContext.read.format('csv').options(header='true', inferSchema='true', ignoreLeadingWhiteSpace='true', ignoreTrailingWhiteSpace='true').load(datafile)\nprint(\"({}, {})\".format(data_all.count(), len(data_all.columns)))\ndata_all.printSchema()"],"metadata":{},"outputs":[],"execution_count":6},{"cell_type":"code","source":["#renaming columns\ncolumns_new = [col.replace(\"-\", \"_\") for col in data_all.columns]\ndata_all = data_all.toDF(*columns_new)\ndata_all.printSchema()"],"metadata":{},"outputs":[],"execution_count":7},{"cell_type":"code","source":["display(data_all.limit(5))"],"metadata":{},"outputs":[],"execution_count":8},{"cell_type":"markdown","source":["#Data Preparation"],"metadata":{}},{"cell_type":"code","source":["# Choose feature columns and the label column.\nlabel = \"income\"\nxvals_all = set(data_all.columns) - {label}\n\n#dbutils.widgets.remove(\"xvars_multiselect\")\ndbutils.widgets.removeAll()\n\ndbutils.widgets.multiselect('xvars_multiselect', 'hours_per_week', xvals_all)\nxvars_multiselect = dbutils.widgets.get(\"xvars_multiselect\")\nxvars = xvars_multiselect.split(\",\")\n\nprint(\"label = {}\".format(label))\nprint(\"features = {}\".format(xvars))\n\ndata = data_all.select([*xvars, label])\n\n# Split data into train and test.\ntrain, test = data.randomSplit([0.75, 0.25], seed=123)\n\nprint(\"train ({}, {})\".format(train.count(), len(train.columns)))\nprint(\"test ({}, {})\".format(test.count(), len(test.columns)))"],"metadata":{},"outputs":[],"execution_count":10},{"cell_type":"markdown","source":["#Data Persistence"],"metadata":{}},{"cell_type":"code","source":["# Write the train and test data sets to intermediate storage\ntrain_data_path = \"AdultCensusIncomeTrain\"\ntest_data_path = \"AdultCensusIncomeTest\"\n\ntrain_data_path_dbfs = os.path.join(\"/dbfs\", \"AdultCensusIncomeTrain\")\ntest_data_path_dbfs = os.path.join(\"/dbfs\", \"AdultCensusIncomeTest\")\n\ntrain.write.mode('overwrite').parquet(train_data_path)\ntest.write.mode('overwrite').parquet(test_data_path)\nprint(\"train and test datasets saved to {} and {}\".format(train_data_path_dbfs, 
test_data_path_dbfs))"],"metadata":{},"outputs":[],"execution_count":12},{"cell_type":"code","source":["dbutils.notebook.exit(\"success\")"],"metadata":{},"outputs":[],"execution_count":13},{"cell_type":"code","source":[""],"metadata":{},"outputs":[],"execution_count":14}],"metadata":{"name":"02.Ingest_data","notebookId":3874566296719393},"nbformat":4,"nbformat_minor":0}

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -1 +0,0 @@
{"cells":[{"cell_type":"markdown","source":["Azure ML & Azure Databricks notebooks by Parashar Shah.\n\nCopyright (c) Microsoft Corporation. All rights reserved.\n\nLicensed under the MIT License."],"metadata":{}},{"cell_type":"markdown","source":["Please ensure you have run all previous notebooks in sequence before running this. This notebook uses image from ACI notebook for deploying to AKS."],"metadata":{}},{"cell_type":"code","source":["from azureml.core import Workspace\nimport azureml.core\n\n# Check core SDK version number\nprint(\"SDK version:\", azureml.core.VERSION)\n\n#'''\nws = Workspace.from_config()\nprint('Workspace name: ' + ws.name, \n 'Azure region: ' + ws.location, \n 'Subscription id: ' + ws.subscription_id, \n 'Resource group: ' + ws.resource_group, sep = '\\n')\n#'''"],"metadata":{},"outputs":[],"execution_count":3},{"cell_type":"code","source":["# List images by ws\n\nfrom azureml.core.image import ContainerImage\nfor i in ContainerImage.list(workspace = ws):\n print('{}(v.{} [{}]) stored at {} with build log {}'.format(i.name, i.version, i.creation_state, i.image_location, i.image_build_log_uri))"],"metadata":{},"outputs":[],"execution_count":4},{"cell_type":"code","source":["from azureml.core.image import Image\nmyimage = Image(workspace=ws, id=\"aciws:25\")"],"metadata":{},"outputs":[],"execution_count":5},{"cell_type":"code","source":["#create AKS compute\n#it may take 20-25 minutes to create a new cluster\n\nfrom azureml.core.compute import AksCompute, ComputeTarget\n\n# Use the default configuration (can also provide parameters to customize)\nprov_config = AksCompute.provisioning_configuration()\n\naks_name = 'ps-aks-clus2' \n\n# Create the cluster\naks_target = ComputeTarget.create(workspace = ws, \n name = aks_name, \n provisioning_configuration = prov_config)\n\naks_target.wait_for_completion(show_output = True)\n\nprint(aks_target.provisioning_state)\nprint(aks_target.provisioning_errors)"],"metadata":{},"outputs":[],"execution_count":6},{"cell_type":"code","source":["from azureml.core.webservice import Webservice\nhelp( Webservice.deploy_from_image)"],"metadata":{},"outputs":[],"execution_count":7},{"cell_type":"code","source":["from azureml.core.webservice import Webservice, AksWebservice\nfrom azureml.core.image import ContainerImage\n\n#Set the web service configuration (using default here)\naks_config = AksWebservice.deploy_configuration()\n\n#unique service name\nservice_name ='ps-aks-service'\n\n# Webservice creation using single command, there is a variant to use image directly as well.\naks_service = Webservice.deploy_from_image(\n workspace=ws, \n name=service_name,\n deployment_config = aks_config,\n image = myimage,\n deployment_target = aks_target\n )\n\naks_service.wait_for_deployment(show_output=True)"],"metadata":{},"outputs":[],"execution_count":8},{"cell_type":"code","source":["#for using the Web HTTP API \nprint(aks_service.scoring_uri)\nprint(aks_service.get_keys())"],"metadata":{},"outputs":[],"execution_count":9},{"cell_type":"code","source":["import json\n\n#get the some sample data\ntest_data_path = \"AdultCensusIncomeTest\"\ntest = spark.read.parquet(test_data_path).limit(5)\n\ntest_json = json.dumps(test.toJSON().collect())\n\nprint(test_json)"],"metadata":{},"outputs":[],"execution_count":10},{"cell_type":"code","source":["#using data defined above predict if income is >50K (1) or <=50K (0)\naks_service.run(input_data=test_json)"],"metadata":{},"outputs":[],"execution_count":11},{"cell_type":"code","source":["#comment to not 
delete the web service\naks_service.delete()\n#image.delete()\n#model.delete()\n#aks_target.delete()"],"metadata":{},"outputs":[],"execution_count":12},{"cell_type":"code","source":[""],"metadata":{},"outputs":[],"execution_count":13}],"metadata":{"name":"04.DeploytoACI","notebookId":3874566296719318},"nbformat":4,"nbformat_minor":0}


@@ -1,26 +0,0 @@
# Azure Databricks - Azure ML SDK Sample Notebooks
**NOTE**: With the latest version of our AML SDK, there are some API changes due to which previous versions of the notebooks will not work.
Kindly use these v4 notebooks (updated Sep 18). If you had installed the AML SDK in your Databricks cluster, please update to the latest SDK version by installing azureml-sdk[databricks] as a library from the GUI.
**NOTE**: Please create your Azure Databricks cluster as v4.x (high concurrency preferred) with **Python 3** (dropdown). We are extending it to more runtimes as soon as possible.
**NOTE**: Some packages, such as psutil, upgrade libraries in a way that can cause a conflict; please install such packages with frozen library versions, e.g. "psutil **cryptography==1.5 pyopenssl==16.0.0 ipython==2.2.0**", to avoid install errors. This issue is related to Databricks and not to the AML SDK.
**NOTE**: You should have at least contributor access to your Azure subscription to run some of the notebooks.
The iPython Notebooks have to be run sequentially after making changes based on your subscription. The corresponding DBC archive contains all the notebooks and can be imported into your Databricks workspace. You can then run the notebooks after importing the .dbc instead of downloading them individually.
This set of notebooks relates to an income prediction experiment based on this [dataset](https://archive.ics.uci.edu/ml/datasets/adult) and demonstrates how to prep data, train, and operationalize a Spark ML model with the Azure ML Python SDK from within Azure Databricks. For details on SDK concepts, please refer to the [Private preview notebooks](https://github.com/Azure/ViennaDocs/tree/master/PrivatePreview/notebooks).
(Recommended) [Azure Databricks AML SDK notebooks](Databricks_AMLSDK_github.dbc) A single DBC package to import all notebooks in your Databricks workspace.
01. [Installation and Configuration](01.Installation_and_Configuration.ipynb): Install the Azure ML Python SDK and Initialize an Azure ML Workspace and save the Workspace configuration file.
02. [Ingest data](02.Ingest_data.ipynb): Download the Adult Census Income dataset and split it into train and test sets.
03. [Build model](03a.Build_model.ipynb): Train a binary classification model in Azure Databricks with a Spark ML Pipeline.
04. [Build model with Run History](03b.Build_model_runHistory.ipynb): Train model and also capture run history (tracking) with Azure ML Python SDK.
05. [Deploy to ACI](04.Deploy_to_ACI.ipynb): Deploy model to Azure Container Instance (ACI) with Azure ML Python SDK.
06. [Deploy to AKS](04.Deploy_to_AKS_existingImage.ipynb): Deploy model to Azure Kubernetes Service (AKS) with Azure ML Python SDK from an existing image with model, conda and score file.
Copyright (c) Microsoft Corporation. All rights reserved.
All notebooks in this folder are licensed under the MIT License.


@@ -0,0 +1,217 @@
NOTICES AND INFORMATION
Do Not Translate or Localize
This Azure Machine Learning service example notebooks repository includes material from the projects listed below.
1. SSD-Tensorflow (https://github.com/balancap/ssd-tensorflow)
%% SSD-Tensorflow NOTICES AND INFORMATION BEGIN HERE
=========================================
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
=========================================
END OF SSD-Tensorflow NOTICES AND INFORMATION


@@ -0,0 +1,104 @@
# Notebooks for Microsoft Azure Machine Learning Hardware Accelerated Models SDK
Easily create and train a model using various deep neural networks (DNNs) as a featurizer for deployment to Azure or a Data Box Edge device for ultra-low latency inferencing using FPGAs. These models are currently available:
* ResNet 50
* ResNet 152
* DenseNet-121
* VGG-16
* SSD-VGG
To learn more about the azureml-accel-model classes, see the section [Model Classes](#model-classes) below or the [Azure ML Accel Models SDK documentation](https://docs.microsoft.com/en-us/python/api/azureml-accel-models/azureml.accel?view=azure-ml-py).
### Step 1: Create an Azure ML workspace
Follow [these instructions](https://docs.microsoft.com/en-us/azure/machine-learning/service/setup-create-workspace) to install the Azure ML SDK on your local machine, create an Azure ML workspace, and set up your notebook environment, which is required for the next step.
### Step 2: Check your FPGA quota
Use the Azure CLI to check whether you have quota.
```shell
az vm list-usage --location "eastus" -o table
```
The other locations are ``southeastasia``, ``westeurope``, and ``westus2``.
Under the "Name" column, look for "Standard PBS Family vCPUs" and ensure you have at least 6 vCPUs under "CurrentValue."
If you do not have quota, then submit a request form [here](https://aka.ms/accelerateAI).
### Step 3: Install the Azure ML Accelerated Models SDK
Once you have set up your environment, install the Azure ML Accel Models SDK. This package requires tensorflow >= 1.6,<2.0 to be installed.
If you already have tensorflow >= 1.6,<2.0 installed in your development environment, you can install the SDK package using:
```
pip install azureml-accel-models
```
If you do not have tensorflow >= 1.6,<2.0 and are using a CPU-only development environment, our SDK with tensorflow can be installed using:
```
pip install azureml-accel-models[cpu]
```
If your machine supports GPU (for example, on an [Azure DSVM](https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/overview)), then you can leverage the tensorflow-gpu functionality using:
```
pip install azureml-accel-models[gpu]
```
### Step 4: Follow our notebooks
We provide notebooks to walk through the following scenarios, linked below:
* [Quickstart](https://github.com/Azure/MachineLearningNotebooks/blob/33d6def8c30d3dd3a5bfbea50b9c727788185faf/how-to-use-azureml/deployment/accelerated-models/accelerated-models-quickstart.ipynb), deploy and inference a ResNet50 model trained on ImageNet
* [Object Detection](https://github.com/Azure/MachineLearningNotebooks/blob/33d6def8c30d3dd3a5bfbea50b9c727788185faf/how-to-use-azureml/deployment/accelerated-models/accelerated-models-object-detection.ipynb), deploy and inference an SSD-VGG model that can do object detection
* [Training models](https://github.com/Azure/MachineLearningNotebooks/blob/33d6def8c30d3dd3a5bfbea50b9c727788185faf/how-to-use-azureml/deployment/accelerated-models/accelerated-models-training.ipynb), train one of our accelerated models on the Kaggle Cats and Dogs dataset to see how to improve accuracy on custom datasets
**Note**: the above notebooks work only for tensorflow >= 1.6,<2.0.
<a name="model-classes"></a>
## Model Classes
As stated above, we support 5 Accelerated Models. Here's more information on their input and output tensors.
**Available models and output tensors**
The available models and the corresponding default classifier output tensors are below. This is the value that you would use during inferencing if you used the default classifier.
* Resnet50, QuantizedResnet50
``
output_tensors = "classifier_1/resnet_v1_50/predictions/Softmax:0"
``
* Resnet152, QuantizedResnet152
``
output_tensors = "classifier/resnet_v1_152/predictions/Softmax:0"
``
* Densenet121, QuantizedDensenet121
``
output_tensors = "classifier/densenet121/predictions/Softmax:0"
``
* Vgg16, QuantizedVgg16
``
output_tensors = "classifier/vgg_16/fc8/squeezed:0"
``
* SsdVgg, QuantizedSsdVgg
``
output_tensors = ['ssd_300_vgg/block4_box/Reshape_1:0', 'ssd_300_vgg/block7_box/Reshape_1:0', 'ssd_300_vgg/block8_box/Reshape_1:0', 'ssd_300_vgg/block9_box/Reshape_1:0', 'ssd_300_vgg/block10_box/Reshape_1:0', 'ssd_300_vgg/block11_box/Reshape_1:0', 'ssd_300_vgg/block4_box/Reshape:0', 'ssd_300_vgg/block7_box/Reshape:0', 'ssd_300_vgg/block8_box/Reshape:0', 'ssd_300_vgg/block9_box/Reshape:0', 'ssd_300_vgg/block10_box/Reshape:0', 'ssd_300_vgg/block11_box/Reshape:0']
``
For more information, please reference the azureml.accel.models package in the [Azure ML Python SDK documentation](https://docs.microsoft.com/en-us/python/api/azureml-accel-models/azureml.accel.models?view=azure-ml-py).
**Input tensors**
The input_tensors value defaults to "Placeholder:0" and is created in the [Image Preprocessing](#construct-model) step in the line:
``
in_images = tf.placeholder(tf.string)
``
You can change the input_tensors name by doing this:
``
in_images = tf.placeholder(tf.string, name="images")
``
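Below is a rough, illustrative sketch (TensorFlow 1.x only; building the full accelerated-model graph with an azureml.accel.models featurizer is omitted here) of how a renamed input placeholder and one of the documented output tensor names would be addressed:
```python
import tensorflow as tf

# Illustrative only: the tensor names are the documented defaults listed above.
in_images = tf.placeholder(tf.string, name="images")   # overrides the default "Placeholder:0"

graph = tf.get_default_graph()
input_tensor = graph.get_tensor_by_name("images:0")    # look up the placeholder by name

# With a full accelerated-model graph loaded, the documented output tensor would be
# fetched the same way and evaluated in a session, for example:
# output_tensor = graph.get_tensor_by_name("classifier_1/resnet_v1_50/predictions/Softmax:0")
# with tf.Session() as sess:
#     predictions = sess.run(output_tensor, feed_dict={input_tensor: [image_bytes]})
```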
## Resources
* [Read more about FPGAs](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-accelerate-with-fpgas)


@@ -0,0 +1,14 @@
# Model Deployment with Azure ML service
You can use Azure Machine Learning to package, debug, validate and deploy inference containers to a variety of compute targets. This process is known as "MLOps" (ML operationalization).
For more information please check out this article: https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-deploy-and-where
## Get Started
To begin, you will need an ML workspace.
For more information please check out this article: https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-workspace
## Deploy to the cloud
You can deploy to the cloud using the Azure ML CLI or the Azure ML SDK.
- CLI example: https://aka.ms/azmlcli
- Notebook example: [model-register-and-deploy](./model-register-and-deploy.ipynb).
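For orientation, the snippet below is a minimal, illustrative sketch of the register-and-deploy flow with the SDK (names such as model.pkl, score.py, and my-aci-service are placeholders; see the notebook linked above for the complete walkthrough):

```python
from azureml.core import Workspace, Environment
from azureml.core.model import Model, InferenceConfig
from azureml.core.webservice import AciWebservice

ws = Workspace.from_config()  # reads the config.json written by the configuration notebook

# Register a serialized model file (path and name are placeholders)
model = Model.register(workspace=ws, model_path="model.pkl", model_name="my-model")

# Environment and entry script used for inference
env = Environment("deploy-env")
env.python.conda_dependencies.add_pip_package("azureml-defaults")
inference_config = InferenceConfig(entry_script="score.py", environment=env)

# Deploy the model as a web service on Azure Container Instances
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)
service = Model.deploy(ws, "my-aci-service", [model], inference_config, deployment_config)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)
```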
![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/deployment/deploy-multi-model/README.png)


@@ -0,0 +1,395 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/deployment/deploy-multi-model/multi-model-register-and-deploy.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Deploy Multiple Models as Webservice\n",
"\n",
"This example shows how to deploy a Webservice with multiple models in step-by-step fashion:\n",
"\n",
" 1. Register Models\n",
" 2. Deploy Models as Webservice"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites\n",
"If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you go through the [configuration](../../../configuration.ipynb) Notebook first if you haven't."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Check core SDK version number\n",
"import azureml.core\n",
"\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize Workspace\n",
"\n",
"Initialize a workspace object from persisted configuration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"create workspace"
]
},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Register Models"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this example, we will be using and registering two models. \n",
"\n",
"First we will train two simple models on the [diabetes dataset](https://scikit-learn.org/stable/datasets/index.html#diabetes-dataset) included with scikit-learn, serializing them to files in the current directory."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import joblib\n",
"import sklearn\n",
"\n",
"from sklearn.datasets import load_diabetes\n",
"from sklearn.linear_model import BayesianRidge, Ridge\n",
"\n",
"x, y = load_diabetes(return_X_y=True)\n",
"\n",
"first_model = Ridge().fit(x, y)\n",
"second_model = BayesianRidge().fit(x, y)\n",
"\n",
"joblib.dump(first_model, \"first_model.pkl\")\n",
"joblib.dump(second_model, \"second_model.pkl\")\n",
"\n",
"print(\"Trained models using scikit-learn {}.\".format(sklearn.__version__))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that we have our trained models locally, we will register them as Models with the names `my_first_model` and `my_second_model` in the workspace."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"register model from file"
]
},
"outputs": [],
"source": [
"from azureml.core.model import Model\n",
"\n",
"my_model_1 = Model.register(model_path=\"first_model.pkl\",\n",
" model_name=\"my_first_model\",\n",
" workspace=ws)\n",
"\n",
"my_model_2 = Model.register(model_path=\"second_model.pkl\",\n",
" model_name=\"my_second_model\",\n",
" workspace=ws)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Write the Entry Script\n",
"Write the script that will be used to predict on your models"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Model.get_model_path()\n",
"\n",
"To get the paths of your models, use `Model.get_model_path(model_name, version=None, _workspace=None)` method. This method will find the path to a model using the name of the model registered under the workspace.\n",
"\n",
"In this example, we do not use the optional arguments `version` and `_workspace`.\n",
"\n",
"#### Using environment variable AZUREML_MODEL_DIR\n",
"\n",
"In other [examples](../deploy-to-cloud/score.py) with a single model deployment, we use the environment variable `AZUREML_MODEL_DIR` and model file name to get the model path. \n",
"\n",
"For single model deployments, this environment variable is the path to the model folder (`./azureml-models/$MODEL_NAME/$VERSION`). When we deploy multiple models, the environment variable is set to the folder containing all models (./azureml-models).\n",
"\n",
"If you're using multiple models and you know the versions of the models you deploy, you can use this method to get the model path:\n",
"\n",
"```python\n",
"# Construct the model path using the registered model name, version, and model file name\n",
"model_1_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'my_first_model', '1', 'first_model.pkl')\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile score.py\n",
"import joblib\n",
"import json\n",
"import numpy as np\n",
"\n",
"from azureml.core.model import Model\n",
"\n",
"def init():\n",
" global model_1, model_2\n",
" # Here \"my_first_model\" is the name of the model registered under the workspace.\n",
" # This call will return the path to the .pkl file on the local disk.\n",
" model_1_path = Model.get_model_path(model_name='my_first_model')\n",
" model_2_path = Model.get_model_path(model_name='my_second_model')\n",
" \n",
" # Deserialize the model files back into scikit-learn models.\n",
" model_1 = joblib.load(model_1_path)\n",
" model_2 = joblib.load(model_2_path)\n",
"\n",
"# Note you can pass in multiple rows for scoring.\n",
"def run(raw_data):\n",
" try:\n",
" data = json.loads(raw_data)['data']\n",
" data = np.array(data)\n",
" \n",
" # Call predict() on each model\n",
" result_1 = model_1.predict(data)\n",
" result_2 = model_2.predict(data)\n",
"\n",
" # You can return any JSON-serializable value.\n",
" return {\"prediction1\": result_1.tolist(), \"prediction2\": result_2.tolist()}\n",
" except Exception as e:\n",
" result = str(e)\n",
" return result"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Environment"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can now create and/or use an Environment object when deploying a Webservice. The Environment can have been previously registered with your Workspace, or it will be registered with it as a part of the Webservice deployment. Please note that your environment must include azureml-defaults with verion >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service.\n",
"\n",
"More information can be found in our [using environments notebook](../training/using-environments/using-environments.ipynb)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Environment\n",
"\n",
"env = Environment(\"deploytocloudenv\")\n",
"env.python.conda_dependencies.add_pip_package(\"joblib\")\n",
"env.python.conda_dependencies.add_pip_package(\"numpy\")\n",
"env.python.conda_dependencies.add_pip_package(\"scikit-learn=={}\".format(sklearn.__version__))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Inference Configuration\n",
"\n",
"There is now support for a source directory, you can upload an entire folder from your local machine as dependencies for the Webservice.\n",
"Note: in that case, environments's entry_script and file_path are relative paths to the source_directory path; myenv.docker.base_dockerfile is a string containing extra docker steps or contents of the docker file.\n",
"\n",
"Sample code for using a source directory:\n",
"\n",
"```python\n",
"from azureml.core.environment import Environment\n",
"from azureml.core.model import InferenceConfig\n",
"\n",
"myenv = Environment.from_conda_specification(name='myenv', file_path='env/myenv.yml')\n",
"\n",
"# explicitly set base_image to None when setting base_dockerfile\n",
"myenv.docker.base_image = None\n",
"# add extra docker commends to execute\n",
"myenv.docker.base_dockerfile = \"FROM ubuntu\\n RUN echo \\\"hello\\\"\"\n",
"\n",
"inference_config = InferenceConfig(source_directory=\"C:/abc\",\n",
" entry_script=\"x/y/score.py\",\n",
" environment=myenv)\n",
"```\n",
"\n",
" - file_path: input parameter to Environment constructor. Manages conda and python package dependencies.\n",
" - env.docker.base_dockerfile: any extra steps you want to inject into docker file\n",
" - source_directory: holds source path as string, this entire folder gets added in image so its really easy to access any files within this folder or subfolder\n",
" - entry_script: contains logic specific to initializing your model and running predictions"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"create image"
]
},
"outputs": [],
"source": [
"from azureml.core.model import InferenceConfig\n",
"\n",
"inference_config = InferenceConfig(entry_script=\"score.py\", environment=env)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Deploy Model as Webservice on Azure Container Instance\n",
"\n",
"Note that the service creation can take few minutes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"azuremlexception-remarks-sample"
]
},
"outputs": [],
"source": [
"from azureml.core.webservice import AciWebservice\n",
"\n",
"aci_service_name = \"aciservice-multimodel\"\n",
"\n",
"deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)\n",
"\n",
"service = Model.deploy(ws, aci_service_name, [my_model_1, my_model_2], inference_config, deployment_config, overwrite=True)\n",
"service.wait_for_deployment(True)\n",
"\n",
"print(service.state)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Test web service"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"\n",
"test_sample = json.dumps({'data': x[0:2].tolist()})\n",
"\n",
"prediction = service.run(test_sample)\n",
"\n",
"print(prediction)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Delete ACI to clean up"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"deploy service",
"aci"
]
},
"outputs": [],
"source": [
"service.delete()"
]
}
],
"metadata": {
"authors": [
{
"name": "jenns"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.8"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -0,0 +1,6 @@
name: multi-model-register-and-deploy
dependencies:
- pip:
- azureml-sdk
- numpy
- scikit-learn

View File

@@ -0,0 +1,12 @@
# Model Deployment with Azure ML service
You can use Azure Machine Learning to package, debug, validate and deploy inference containers to a variety of compute targets. This process is known as "MLOps" (ML operationalization).
For more information please check out this article: https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-deploy-and-where
## Get Started
To begin, you will need an ML workspace.
For more information please check out this article: https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-workspace
## Deploy to the cloud
You can deploy to the cloud using the Azure ML CLI or the Azure ML SDK.
- CLI example: https://aka.ms/azmlcli
- Notebook example: [model-register-and-deploy](./model-register-and-deploy.ipynb).
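
A minimal SDK sketch of a cloud deployment (assuming a workspace config file is present and a scikit-learn model named `my-sklearn-model` has already been registered with framework metadata, as in the notebook above):

```python
from azureml.core import Workspace
from azureml.core.model import Model

ws = Workspace.from_config()
model = Model(ws, name="my-sklearn-model")  # a previously registered model

# No-code deployment using the default environment for supported frameworks.
service = Model.deploy(ws, "my-sklearn-service", [model])
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)
```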

View File

@@ -0,0 +1,593 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/deployment/deploy-to-cloud/model-register-and-deploy.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Register model and deploy as webservice in ACI\n",
"\n",
"Following this notebook, you will:\n",
"\n",
" - Learn how to register a model in your Azure Machine Learning Workspace.\n",
" - Deploy your model as a web service in an Azure Container Instance."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites\n",
"\n",
"If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you go through the [configuration notebook](../../../configuration.ipynb) to install the Azure Machine Learning Python SDK and create a workspace."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"\n",
"\n",
"# Check core SDK version number.\n",
"print('SDK version:', azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize workspace\n",
"\n",
"Create a [Workspace](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.workspace%28class%29?view=azure-ml-py) object from your persisted configuration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"create workspace"
]
},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"\n",
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create trained model\n",
"\n",
"For this example, we will train a small model on scikit-learn's [diabetes dataset](https://scikit-learn.org/stable/datasets/index.html#diabetes-dataset). "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import joblib\n",
"\n",
"from sklearn.datasets import load_diabetes\n",
"from sklearn.linear_model import Ridge\n",
"\n",
"\n",
"dataset_x, dataset_y = load_diabetes(return_X_y=True)\n",
"\n",
"model = Ridge().fit(dataset_x, dataset_y)\n",
"\n",
"joblib.dump(model, 'sklearn_regression_model.pkl')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Register input and output datasets\n",
"\n",
"Here, you will register the data used to create the model in your workspace."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"from azureml.core import Dataset\n",
"\n",
"\n",
"np.savetxt('features.csv', dataset_x, delimiter=',')\n",
"np.savetxt('labels.csv', dataset_y, delimiter=',')\n",
"\n",
"datastore = ws.get_default_datastore()\n",
"datastore.upload_files(files=['./features.csv', './labels.csv'],\n",
" target_path='sklearn_regression/',\n",
" overwrite=True)\n",
"\n",
"input_dataset = Dataset.Tabular.from_delimited_files(path=[(datastore, 'sklearn_regression/features.csv')])\n",
"output_dataset = Dataset.Tabular.from_delimited_files(path=[(datastore, 'sklearn_regression/labels.csv')])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Register model\n",
"\n",
"Register a file or folder as a model by calling [Model.register()](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.model.model?view=azure-ml-py#register-workspace--model-path--model-name--tags-none--properties-none--description-none--datasets-none--model-framework-none--model-framework-version-none--child-paths-none-).\n",
"\n",
"In addition to the content of the model file itself, your registered model will also store model metadata -- model description, tags, and framework information -- that will be useful when managing and deploying models in your workspace. Using tags, for instance, you can categorize your models and apply filters when listing models in your workspace. Also, marking this model with the scikit-learn framework will simplify deploying it as a web service, as we'll see later."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"register model from file",
"sample-model-register"
]
},
"outputs": [],
"source": [
"import sklearn\n",
"\n",
"from azureml.core import Model\n",
"from azureml.core.resource_configuration import ResourceConfiguration\n",
"\n",
"\n",
"model = Model.register(workspace=ws,\n",
" model_name='my-sklearn-model', # Name of the registered model in your workspace.\n",
" model_path='./sklearn_regression_model.pkl', # Local file to upload and register as a model.\n",
" model_framework=Model.Framework.SCIKITLEARN, # Framework used to create the model.\n",
" model_framework_version=sklearn.__version__, # Version of scikit-learn used to create the model.\n",
" sample_input_dataset=input_dataset,\n",
" sample_output_dataset=output_dataset,\n",
" resource_configuration=ResourceConfiguration(cpu=1, memory_in_gb=0.5),\n",
" description='Ridge regression model to predict diabetes progression.',\n",
" tags={'area': 'diabetes', 'type': 'regression'})\n",
"\n",
"print('Name:', model.name)\n",
"print('Version:', model.version)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deploy model\n",
"\n",
"Deploy your model as a web service using [Model.deploy()](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.model.model?view=azure-ml-py#deploy-workspace--name--models--inference-config--deployment-config-none--deployment-target-none-). Web services take one or more models, load them in an environment, and run them on one of several supported deployment targets. For more information on all your options when deploying models, see the [next steps](#Next-steps) section at the end of this notebook.\n",
"\n",
"For this example, we will deploy your scikit-learn model to an Azure Container Instance (ACI)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Use a default environment (for supported models)\n",
"\n",
"The Azure Machine Learning service provides a default environment for supported model frameworks, including scikit-learn, based on the metadata you provided when registering your model. This is the easiest way to deploy your model.\n",
"\n",
"Even when you deploy your model to ACI with a default environment you can still customize the deploy configuration (i.e. the number of cores and amount of memory made available for the deployment) using the [AciWebservice.deploy_configuration()](https://docs.microsoft.com/python/api/azureml-core/azureml.core.webservice.aci.aciwebservice#deploy-configuration-cpu-cores-none--memory-gb-none--tags-none--properties-none--description-none--location-none--auth-enabled-none--ssl-enabled-none--enable-app-insights-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--ssl-cname-none--dns-name-label-none--). Look at the \"Use a custom environment\" section of this notebook for more information on deploy configuration.\n",
"\n",
"**Note**: This step can take several minutes."
]
},
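{
"cell_type": "markdown",
"metadata": {},
"source": [
"The next cell uses the simplest form of `Model.deploy()`. As a minimal sketch, if you also wanted to size the ACI container yourself while keeping the default environment, you could pass a deployment configuration (the values below are only illustrative):\n",
"\n",
"```python\n",
"from azureml.core.webservice import AciWebservice\n",
"\n",
"aci_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)\n",
"service = Model.deploy(ws, service_name, [model], deployment_config=aci_config, overwrite=True)\n",
"service.wait_for_deployment(show_output=True)\n",
"```"
]
},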
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"service_name = 'my-sklearn-service'\n",
"\n",
"service = Model.deploy(ws, service_name, [model], overwrite=True)\n",
"service.wait_for_deployment(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"After your model is deployed, perform a call to the web service using [service.run()](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.webservice%28class%29?view=azure-ml-py#run-input-)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"\n",
"\n",
"input_payload = json.dumps({\n",
" 'data': dataset_x[0:2].tolist(),\n",
" 'method': 'predict' # If you have a classification model, you can get probabilities by changing this to 'predict_proba'.\n",
"})\n",
"\n",
"output = service.run(input_payload)\n",
"\n",
"print(output)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When you are finished testing your service, clean up the deployment with [service.delete()](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.webservice%28class%29?view=azure-ml-py#delete--)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"service.delete()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Use a custom environment\n",
"\n",
"If you want more control over how your model is run, if it uses another framework, or if it has special runtime requirements, you can instead specify your own environment and scoring method. Custom environments can be used for any model you want to deploy.\n",
"\n",
"Specify the model's runtime environment by creating an [Environment](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.environment%28class%29?view=azure-ml-py) object and providing the [CondaDependencies](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.conda_dependencies.condadependencies?view=azure-ml-py) needed by your model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Environment\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"\n",
"\n",
"environment = Environment('my-sklearn-environment')\n",
"environment.python.conda_dependencies = CondaDependencies.create(pip_packages=[\n",
" 'azureml-defaults',\n",
" 'inference-schema[numpy-support]',\n",
" 'joblib',\n",
" 'numpy',\n",
" 'scikit-learn=={}'.format(sklearn.__version__)\n",
"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When using a custom environment, you must also provide Python code for initializing and running your model. An example script is included with this notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"with open('score.py') as f:\n",
" print(f.read())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Deploy your model in the custom environment by providing an [InferenceConfig](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.model.inferenceconfig?view=azure-ml-py) object to [Model.deploy()](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.model.model?view=azure-ml-py#deploy-workspace--name--models--inference-config--deployment-config-none--deployment-target-none-). In this case we are also using the [AciWebservice.deploy_configuration()](https://docs.microsoft.com/python/api/azureml-core/azureml.core.webservice.aci.aciwebservice#deploy-configuration-cpu-cores-none--memory-gb-none--tags-none--properties-none--description-none--location-none--auth-enabled-none--ssl-enabled-none--enable-app-insights-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--ssl-cname-none--dns-name-label-none--) method to generate a custom deploy configuration.\n",
"\n",
"**Note**: This step can take several minutes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"azuremlexception-remarks-sample",
"sample-aciwebservice-deploy-config"
]
},
"outputs": [],
"source": [
"from azureml.core.model import InferenceConfig\n",
"from azureml.core.webservice import AciWebservice\n",
"\n",
"\n",
"service_name = 'my-custom-env-service'\n",
"\n",
"inference_config = InferenceConfig(entry_script='score.py', environment=environment)\n",
"aci_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)\n",
"\n",
"service = Model.deploy(workspace=ws,\n",
" name=service_name,\n",
" models=[model],\n",
" inference_config=inference_config,\n",
" deployment_config=aci_config,\n",
" overwrite=True)\n",
"service.wait_for_deployment(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"After your model is deployed, make a call to the web service using [service.run()](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.webservice%28class%29?view=azure-ml-py#run-input-)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"input_payload = json.dumps({\n",
" 'data': dataset_x[0:2].tolist()\n",
"})\n",
"\n",
"output = service.run(input_payload)\n",
"\n",
"print(output)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When you are finished testing your service, clean up the deployment with [service.delete()](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.webservice%28class%29?view=azure-ml-py#delete--)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"service.delete()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Model Profiling\n",
"\n",
"Profile your model to understand how much CPU and memory the service, created as a result of its deployment, will need. Profiling returns information such as CPU usage, memory usage, and response latency. It also provides a CPU and memory recommendation based on the resource usage. You can profile your model (or more precisely the service built based on your model) on any CPU and/or memory combination where 0.1 <= CPU <= 3.5 and 0.1GB <= memory <= 15GB. If you do not provide a CPU and/or memory requirement, we will test it on the default configuration of 3.5 CPU and 15GB memory.\n",
"\n",
"In order to profile your model you will need:\n",
"- a registered model\n",
"- an entry script\n",
"- an inference configuration\n",
"- a single column tabular dataset, where each row contains a string representing sample request data sent to the service.\n",
"\n",
"Please, note that profiling is a long running operation and can take up to 25 minutes depending on the size of the dataset.\n",
"\n",
"At this point we only support profiling of services that expect their request data to be a string, for example: string serialized json, text, string serialized image, etc. The content of each row of the dataset (string) will be put into the body of the HTTP request and sent to the service encapsulating the model for scoring.\n",
"\n",
"Below is an example of how you can construct an input dataset to profile a service which expects its incoming requests to contain serialized json. In this case we created a dataset based one hundred instances of the same request data. In real world scenarios however, we suggest that you use larger datasets with various inputs, especially if your model resource usage/behavior is input dependent."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You may want to register datasets using the register() method to your workspace so they can be shared with others, reused and referred to by name in your script.\n",
"You can try get the dataset first to see if it's already registered."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Datastore\n",
"from azureml.core.dataset import Dataset\n",
"from azureml.data import dataset_type_definitions\n",
"\n",
"dataset_name='diabetes_sample_request_data'\n",
"\n",
"dataset_registered = False\n",
"try:\n",
" sample_request_data = Dataset.get_by_name(workspace = ws, name = dataset_name)\n",
" dataset_registered = True\n",
"except:\n",
" print(\"The dataset {} is not registered in workspace yet.\".format(dataset_name))\n",
"\n",
"if not dataset_registered:\n",
" # create a string that can be utf-8 encoded and\n",
" # put in the body of the request\n",
" serialized_input_json = json.dumps({\n",
" 'data': [\n",
" [ 0.03807591, 0.05068012, 0.06169621, 0.02187235, -0.0442235,\n",
" -0.03482076, -0.04340085, -0.00259226, 0.01990842, -0.01764613]\n",
" ]\n",
" })\n",
" dataset_content = []\n",
" for i in range(100):\n",
" dataset_content.append(serialized_input_json)\n",
" dataset_content = '\\n'.join(dataset_content)\n",
" file_name = \"{}.txt\".format(dataset_name)\n",
" f = open(file_name, 'w')\n",
" f.write(dataset_content)\n",
" f.close()\n",
"\n",
" # upload the txt file created above to the Datastore and create a dataset from it\n",
" data_store = Datastore.get_default(ws)\n",
" data_store.upload_files(['./' + file_name], target_path='sample_request_data')\n",
" datastore_path = [(data_store, 'sample_request_data' +'/' + file_name)]\n",
" sample_request_data = Dataset.Tabular.from_delimited_files(\n",
" datastore_path,\n",
" separator='\\n',\n",
" infer_column_types=True,\n",
" header=dataset_type_definitions.PromoteHeadersBehavior.NO_HEADERS)\n",
" sample_request_data = sample_request_data.register(workspace=ws,\n",
" name=dataset_name,\n",
" create_new_version=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that we have an input dataset we are ready to go ahead with profiling. In this case we are testing the previously introduced sklearn regression model on 1 CPU and 0.5 GB memory. The memory usage and recommendation presented in the result is measured in Gigabytes. The CPU usage and recommendation is measured in CPU cores."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from datetime import datetime\n",
"\n",
"\n",
"environment = Environment('my-sklearn-environment')\n",
"environment.python.conda_dependencies = CondaDependencies.create(pip_packages=[\n",
" 'azureml-defaults',\n",
" 'inference-schema[numpy-support]',\n",
" 'joblib',\n",
" 'numpy',\n",
" 'scikit-learn=={}'.format(sklearn.__version__)\n",
"])\n",
"inference_config = InferenceConfig(entry_script='score.py', environment=environment)\n",
"# if cpu and memory_in_gb parameters are not provided\n",
"# the model will be profiled on default configuration of\n",
"# 3.5CPU and 15GB memory\n",
"profile = Model.profile(ws,\n",
" 'rgrsn-%s' % datetime.now().strftime('%m%d%Y-%H%M%S'),\n",
" [model],\n",
" inference_config,\n",
" input_dataset=sample_request_data,\n",
" cpu=1.0,\n",
" memory_in_gb=0.5)\n",
"\n",
"# profiling is a long running operation and may take up to 25 min\n",
"profile.wait_for_completion(True)\n",
"details = profile.get_details()"
]
},
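{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can then feed the recommendation back into a deployment configuration. The sketch below is only illustrative: the exact keys returned by `get_details()` (such as `recommendedCpu` and `recommendedMemoryInGB`) are assumptions, so inspect `details` first to confirm what your SDK version returns.\n",
"\n",
"```python\n",
"from azureml.core.webservice import AciWebservice\n",
"\n",
"print(details)  # inspect the full profiling result first\n",
"\n",
"# Assumed key names; fall back to the profiled values if they are absent.\n",
"recommended_cpu = details.get('recommendedCpu', 1.0)\n",
"recommended_memory = details.get('recommendedMemoryInGB', 0.5)\n",
"\n",
"deploy_config = AciWebservice.deploy_configuration(cpu_cores=recommended_cpu,\n",
"                                                   memory_gb=recommended_memory)\n",
"```"
]
},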
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Model packaging\n",
"\n",
"If you want to build a Docker image that encapsulates your model and its dependencies, you can use the model packaging option. The output image will be pushed to your workspace's ACR.\n",
"\n",
"You must include an Environment object in your inference configuration to use `Model.package()`.\n",
"\n",
"```python\n",
"package = Model.package(ws, [model], inference_config)\n",
"package.wait_for_creation(show_output=True) # Or show_output=False to hide the Docker build logs.\n",
"package.pull()\n",
"```\n",
"\n",
"Instead of a fully-built image, you can also generate a Dockerfile and download all the assets needed to build an image on top of your Environment.\n",
"\n",
"```python\n",
"package = Model.package(ws, [model], inference_config, generate_dockerfile=True)\n",
"package.wait_for_creation(show_output=True)\n",
"package.save(\"./local_context_dir\")\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Next steps\n",
"\n",
" - To run a production-ready web service, see the [notebook on deployment to Azure Kubernetes Service](../production-deploy-to-aks/production-deploy-to-aks.ipynb).\n",
" - To run a local web service, see the [notebook on deployment to a local Docker container](../deploy-to-local/register-model-deploy-local.ipynb).\n",
" - For more information on datasets, see the [notebook on training with datasets](../../work-with-data/datasets-tutorial/train-with-datasets/train-with-datasets.ipynb).\n",
" - For more information on environments, see the [notebook on using environments](../../training/using-environments/using-environments.ipynb).\n",
" - For information on all the available deployment targets, see [&ldquo;How and where to deploy models&rdquo;](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-deploy-and-where#choose-a-compute-target)."
]
}
],
"metadata": {
"authors": [
{
"name": "vaidyas"
}
],
"category": "deployment",
"compute": [
"None"
],
"datasets": [
"Diabetes"
],
"deployment": [
"Azure Container Instance"
],
"exclude_from_index": false,
"framework": [
"Scikit-learn"
],
"friendly_name": "Register model and deploy as webservice",
"index_order": 3,
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.0"
},
"star_tag": [
"featured"
],
"tags": [
"None"
],
"task": "Deploy a model with Azure Machine Learning"
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -0,0 +1,6 @@
name: model-register-and-deploy
dependencies:
- pip:
- azureml-sdk
- numpy
- scikit-learn

View File

@@ -0,0 +1,38 @@
import joblib
import numpy as np
import os
from inference_schema.schema_decorators import input_schema, output_schema
from inference_schema.parameter_types.numpy_parameter_type import NumpyParameterType


# The init() method is called once, when the web service starts up.
#
# Typically you would deserialize the model file, as shown here using joblib,
# and store it in a global variable so your run() method can access it later.
def init():
    global model
    # The AZUREML_MODEL_DIR environment variable indicates
    # a directory containing the model file you registered.
    model_filename = 'sklearn_regression_model.pkl'
    model_path = os.path.join(os.environ['AZUREML_MODEL_DIR'], model_filename)
    model = joblib.load(model_path)


# The run() method is called each time a request is made to the scoring API.
#
# Shown here are the optional input_schema and output_schema decorators
# from the inference-schema pip package. Using these decorators on your
# run() method parses and validates the incoming payload against
# the example input you provide here. This will also generate a Swagger
# API document for your web service.
@input_schema('data', NumpyParameterType(np.array([[0.1, 1.2, 2.3, 3.4, 4.5, 5.6, 6.7, 7.8, 8.9, 9.0]])))
@output_schema(NumpyParameterType(np.array([4429.929236457418])))
def run(data):
    # Use the model object loaded by init().
    result = model.predict(data)
    # You can return any JSON-serializable object.
    return result.tolist()

View File

@@ -0,0 +1,12 @@
# Model Deployment with Azure ML service
You can use Azure Machine Learning to package, debug, validate and deploy inference containers to a variety of compute targets. This process is known as "MLOps" (ML operationalization).
For more information please check out this article: https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-deploy-and-where
## Get Started
To begin, you will need an ML workspace.
For more information please check out this article: https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-workspace
## Deploy locally
You can deploy a model locally for testing & debugging using the Azure ML CLI or the Azure ML SDK.
- CLI example: https://aka.ms/azmlcli
- Notebook example: [register-model-deploy-local](./register-model-deploy-local.ipynb).
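
A minimal SDK sketch of a local deployment (assuming Docker is running and that a workspace `ws`, a registered `model`, and an `inference_config` have already been created, as in the notebook above):

```python
from azureml.core.model import Model
from azureml.core.webservice import LocalWebservice

# The port is optional; if omitted, Docker picks a random unused port.
deployment_config = LocalWebservice.deploy_configuration(port=6789)

local_service = Model.deploy(ws, "test", [model], inference_config, deployment_config)
local_service.wait_for_deployment()
print(local_service.port)
```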

View File

@@ -0,0 +1 @@
RUN echo "this is test"

View File

@@ -0,0 +1,495 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/deployment/deploy-to-local/register-model-deploy-local-advanced.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Register model and deploy locally with advanced usages\n",
"\n",
"This example shows how to deploy a web service in step-by-step fashion:\n",
"\n",
" 1. Register model\n",
" 2. Deploy the image as a web service in a local Docker container.\n",
" 3. Quickly test changes to your entry script by reloading the local service.\n",
" 4. Optionally, you can also make changes to model, conda or extra_docker_file_steps and update local service"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites\n",
"If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you go through the [configuration](../../../configuration.ipynb) Notebook first if you haven't."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Check core SDK version number\n",
"import azureml.core\n",
"\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize Workspace\n",
"\n",
"Initialize a workspace object from persisted configuration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"create workspace"
]
},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create trained model\n",
"\n",
"For this example, we will train a small model on scikit-learn's [diabetes dataset](https://scikit-learn.org/stable/datasets/toy_dataset.html#diabetes-dataset). "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import joblib\n",
"\n",
"from sklearn.datasets import load_diabetes\n",
"from sklearn.linear_model import Ridge\n",
"\n",
"dataset_x, dataset_y = load_diabetes(return_X_y=True)\n",
"\n",
"sk_model = Ridge().fit(dataset_x, dataset_y)\n",
"\n",
"joblib.dump(sk_model, \"sklearn_regression_model.pkl\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Register Model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can add tags and descriptions to your models. we are using `sklearn_regression_model.pkl` file in the current directory as a model with the name `sklearn_regression_model` in the workspace.\n",
"\n",
"Using tags, you can track useful information such as the name and version of the machine learning library used to train the model, framework, category, target customer etc. Note that tags must be alphanumeric."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"register model from file",
"sample-model-register"
]
},
"outputs": [],
"source": [
"from azureml.core.model import Model\n",
"\n",
"model = Model.register(model_path=\"sklearn_regression_model.pkl\",\n",
" model_name=\"sklearn_regression_model\",\n",
" tags={'area': \"diabetes\", 'type': \"regression\"},\n",
" description=\"Ridge regression model to predict diabetes\",\n",
" workspace=ws)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Manage your dependencies in a folder"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"source_directory = \"source_directory\"\n",
"\n",
"os.makedirs(source_directory, exist_ok=True)\n",
"os.makedirs(os.path.join(source_directory, \"x/y\"), exist_ok=True)\n",
"os.makedirs(os.path.join(source_directory, \"env\"), exist_ok=True)\n",
"os.makedirs(os.path.join(source_directory, \"dockerstep\"), exist_ok=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Show `score.py`. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile source_directory/x/y/score.py\n",
"import joblib\n",
"import json\n",
"import numpy as np\n",
"import os\n",
"\n",
"from inference_schema.schema_decorators import input_schema, output_schema\n",
"from inference_schema.parameter_types.numpy_parameter_type import NumpyParameterType\n",
"\n",
"def init():\n",
" global model\n",
" # AZUREML_MODEL_DIR is an environment variable created during deployment. Join this path with the filename of the model file.\n",
" # It holds the path to the directory that contains the deployed model (./azureml-models/$MODEL_NAME/$VERSION)\n",
" # If there are multiple models, this value is the path to the directory containing all deployed models (./azureml-models)\n",
" model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_regression_model.pkl')\n",
" # Deserialize the model file back into a sklearn model.\n",
" model = joblib.load(model_path)\n",
"\n",
" global name\n",
" # Note here, the entire source directory from inference config gets added into image.\n",
" # Below is an example of how you can use any extra files in image.\n",
" with open('./source_directory/extradata.json') as json_file:\n",
" data = json.load(json_file)\n",
" name = data[\"people\"][0][\"name\"]\n",
"\n",
"input_sample = np.array([[10.0, 9.0, 8.0, 7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0]])\n",
"output_sample = np.array([3726.995])\n",
"\n",
"@input_schema('data', NumpyParameterType(input_sample))\n",
"@output_schema(NumpyParameterType(output_sample))\n",
"def run(data):\n",
" try:\n",
" result = model.predict(data)\n",
" # You can return any JSON-serializable object.\n",
" return \"Hello \" + name + \" here is your result = \" + str(result)\n",
" except Exception as e:\n",
" error = str(e)\n",
" return error"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile source_directory/extradata.json\n",
"{\n",
" \"people\": [\n",
" {\n",
" \"website\": \"microsoft.com\", \n",
" \"from\": \"Seattle\", \n",
" \"name\": \"Mrudula\"\n",
" }\n",
" ]\n",
"}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Inference Configuration\n",
"\n",
" - file_path: input parameter to Environment constructor. Manages conda and python package dependencies.\n",
" - env.docker.base_dockerfile: any extra steps you want to inject into docker file\n",
" - source_directory: holds source path as string, this entire folder gets added in image so its really easy to access any files within this folder or subfolder\n",
" - entry_script: contains logic specific to initializing your model and running predictions"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import sklearn\n",
"\n",
"from azureml.core.environment import Environment\n",
"from azureml.core.model import InferenceConfig\n",
"\n",
"\n",
"myenv = Environment('myenv')\n",
"myenv.python.conda_dependencies.add_pip_package(\"inference-schema[numpy-support]\")\n",
"myenv.python.conda_dependencies.add_pip_package(\"joblib\")\n",
"myenv.python.conda_dependencies.add_pip_package(\"scikit-learn=={}\".format(sklearn.__version__))\n",
"\n",
"# explicitly set base_image to None when setting base_dockerfile\n",
"myenv.docker.base_image = None\n",
"myenv.docker.base_dockerfile = \"FROM mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04\\nRUN echo \\\"this is test\\\"\"\n",
"myenv.inferencing_stack_version = \"latest\"\n",
"\n",
"inference_config = InferenceConfig(source_directory=source_directory,\n",
" entry_script=\"x/y/score.py\",\n",
" environment=myenv)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deploy Model as a Local Docker Web Service\n",
"\n",
"*Make sure you have Docker installed and running.*\n",
"\n",
"Note that the service creation can take few minutes.\n",
"\n",
"NOTE:\n",
"\n",
"The Docker image runs as a Linux container. If you are running Docker for Windows, you need to ensure the Linux Engine is running:\n",
"\n",
" # PowerShell command to switch to Linux engine\n",
" & 'C:\\Program Files\\Docker\\Docker\\DockerCli.exe' -SwitchLinuxEngine"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"deploy service",
"aci"
]
},
"outputs": [],
"source": [
"from azureml.core.webservice import LocalWebservice\n",
"\n",
"# This is optional, if not provided Docker will choose a random unused port.\n",
"deployment_config = LocalWebservice.deploy_configuration(port=6789)\n",
"\n",
"local_service = Model.deploy(ws, \"test\", [model], inference_config, deployment_config)\n",
"\n",
"local_service.wait_for_deployment()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print('Local service port: {}'.format(local_service.port))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Check Status and Get Container Logs\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(local_service.get_logs())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Test Web Service"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Call the web service with some input data to get a prediction."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"\n",
"sample_input = json.dumps({\n",
" 'data': dataset_x[0:2].tolist()\n",
"})\n",
"\n",
"print(local_service.run(sample_input))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Reload Service\n",
"\n",
"You can update your score.py file and then call `reload()` to quickly restart the service. This will only reload your execution script and dependency files, it will not rebuild the underlying Docker image. As a result, `reload()` is fast, but if you do need to rebuild the image -- to add a new Conda or pip package, for instance -- you will have to call `update()`, instead (see below)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile source_directory/x/y/score.py\n",
"import joblib\n",
"import json\n",
"import numpy as np\n",
"import os\n",
"\n",
"from inference_schema.schema_decorators import input_schema, output_schema\n",
"from inference_schema.parameter_types.numpy_parameter_type import NumpyParameterType\n",
"\n",
"def init():\n",
" global model\n",
" # AZUREML_MODEL_DIR is an environment variable created during deployment.\n",
" # It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)\n",
" # For multiple models, it points to the folder containing all deployed models (./azureml-models)\n",
" model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_regression_model.pkl')\n",
" # Deserialize the model file back into a sklearn model.\n",
" model = joblib.load(model_path)\n",
"\n",
" global name, from_location\n",
" # Note here, the entire source directory from inference config gets added into image.\n",
" # Below is an example of how you can use any extra files in image.\n",
" with open('source_directory/extradata.json') as json_file: \n",
" data = json.load(json_file)\n",
" name = data[\"people\"][0][\"name\"]\n",
" from_location = data[\"people\"][0][\"from\"]\n",
"\n",
"input_sample = np.array([[10.0, 9.0, 8.0, 7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0]])\n",
"output_sample = np.array([3726.995])\n",
"\n",
"@input_schema('data', NumpyParameterType(input_sample))\n",
"@output_schema(NumpyParameterType(output_sample))\n",
"def run(data):\n",
" try:\n",
" result = model.predict(data)\n",
" # You can return any JSON-serializable object.\n",
" return \"Hello \" + name + \" from \" + from_location + \" here is your result = \" + str(result)\n",
" except Exception as e:\n",
" error = str(e)\n",
" return error"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_service.reload()\n",
"print(\"--------------------------------------------------------------\")\n",
"\n",
"# After calling reload(), run() will return the updated message.\n",
"local_service.run(sample_input)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Update Service\n",
"\n",
"If you want to change your model(s), Conda dependencies, or deployment configuration, call `update()` to rebuild the Docker image.\n",
"\n",
"```python\n",
"\n",
"local_service.update(models=[SomeOtherModelObject],\n",
" deployment_config=local_config,\n",
" inference_config=inference_config)\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Delete Service"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_service.delete()"
]
}
],
"metadata": {
"authors": [
{
"name": "keriehm"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.8"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -0,0 +1,556 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/deployment/deploy-to-local/register-model-deploy-local.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Register model and deploy locally\n",
"\n",
"This example shows how to deploy a web service in step-by-step fashion:\n",
"\n",
" 1. Register model\n",
" 2. Deploy the image as a web service in a local Docker container.\n",
" 3. Quickly test changes to your entry script by reloading the local service.\n",
" 4. Optionally, you can also make changes to model, conda or extra_docker_file_steps and update local service"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites\n",
"If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you go through the [configuration](../../../configuration.ipynb) Notebook first if you haven't."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Check core SDK version number\n",
"import azureml.core\n",
"\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize Workspace\n",
"\n",
"Initialize a workspace object from persisted configuration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create trained model\n",
"\n",
"For this example, we will train a small model on scikit-learn's [diabetes dataset](https://scikit-learn.org/stable/datasets/toy_dataset.html#diabetes-dataset). "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import joblib\n",
"\n",
"from sklearn.datasets import load_diabetes\n",
"from sklearn.linear_model import Ridge\n",
"\n",
"dataset_x, dataset_y = load_diabetes(return_X_y=True)\n",
"\n",
"sk_model = Ridge().fit(dataset_x, dataset_y)\n",
"\n",
"joblib.dump(sk_model, \"sklearn_regression_model.pkl\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Register Model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here we are registering the serialized file `sklearn_regression_model.pkl` in the current directory as a model with the name `sklearn_regression_model` in the workspace.\n",
"\n",
"You can add tags and descriptions to your models. Using tags, you can track useful information such as the name and version of the machine learning library used to train the model, framework, category, target customer etc. Note that tags must be alphanumeric."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"register model from file"
]
},
"outputs": [],
"source": [
"from azureml.core.model import Model\n",
"\n",
"model = Model.register(model_path=\"sklearn_regression_model.pkl\",\n",
" model_name=\"sklearn_regression_model\",\n",
" tags={'area': \"diabetes\", 'type': \"regression\"},\n",
" description=\"Ridge regression model to predict diabetes\",\n",
" workspace=ws)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Environment"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import sklearn\n",
"\n",
"from azureml.core.environment import Environment\n",
"\n",
"environment = Environment(\"LocalDeploy\")\n",
"environment.python.conda_dependencies.add_pip_package(\"inference-schema[numpy-support]\")\n",
"environment.python.conda_dependencies.add_pip_package(\"joblib\")\n",
"environment.python.conda_dependencies.add_pip_package(\"scikit-learn=={}\".format(sklearn.__version__))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Provide the Scoring Script\n",
"\n",
"This Python script handles the model execution inside the service container. The `init()` method loads the model file, and `run(data)` is called for every input to the service."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile score.py\n",
"import joblib\n",
"import json\n",
"import numpy as np\n",
"import os\n",
"\n",
"from inference_schema.schema_decorators import input_schema, output_schema\n",
"from inference_schema.parameter_types.numpy_parameter_type import NumpyParameterType\n",
"\n",
"def init():\n",
" global model\n",
" # AZUREML_MODEL_DIR is an environment variable created during deployment.\n",
" # It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)\n",
" # For multiple models, it points to the folder containing all deployed models (./azureml-models)\n",
" model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_regression_model.pkl')\n",
" # Deserialize the model file back into a sklearn model.\n",
" model = joblib.load(model_path)\n",
"\n",
"input_sample = np.array([[10.0, 9.0, 8.0, 7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0]])\n",
"output_sample = np.array([3726.995])\n",
"\n",
"@input_schema('data', NumpyParameterType(input_sample))\n",
"@output_schema(NumpyParameterType(output_sample))\n",
"def run(data):\n",
" try:\n",
" result = model.predict(data)\n",
" # You can return any JSON-serializable object.\n",
" return result.tolist()\n",
" except Exception as e:\n",
" error = str(e)\n",
" return error"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Inference Configuration"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.model import InferenceConfig\n",
"\n",
"inference_config = InferenceConfig(entry_script=\"score.py\",\n",
" environment=environment)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deploy Model as a Local Docker Web Service\n",
"\n",
"*Make sure you have Docker installed and running.*\n",
"\n",
"Note that the service creation can take few minutes.\n",
"\n",
"NOTE:\n",
"\n",
"The Docker image runs as a Linux container. If you are running Docker for Windows, you need to ensure the Linux Engine is running:\n",
"\n",
" # PowerShell command to switch to Linux engine\n",
" & 'C:\\Program Files\\Docker\\Docker\\DockerCli.exe' -SwitchLinuxEngine"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"sample-localwebservice-deploy"
]
},
"outputs": [],
"source": [
"from azureml.core.webservice import LocalWebservice\n",
"\n",
"# This is optional, if not provided Docker will choose a random unused port.\n",
"deployment_config = LocalWebservice.deploy_configuration(port=6789)\n",
"\n",
"local_service = Model.deploy(ws, \"test\", [model], inference_config, deployment_config)\n",
"\n",
"local_service.wait_for_deployment()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print('Local service port: {}'.format(local_service.port))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Check Status and Get Container Logs\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(local_service.get_logs())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Test Web Service"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Call the web service with some input data to get a prediction."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"\n",
"sample_input = json.dumps({\n",
" 'data': dataset_x[0:2].tolist()\n",
"})\n",
"\n",
"local_service.run(sample_input)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Reload Service\n",
"\n",
"You can update your score.py file and then call `reload()` to quickly restart the service. This will only reload your execution script and dependency files, it will not rebuild the underlying Docker image. As a result, `reload()` is fast, but if you do need to rebuild the image -- to add a new Conda or pip package, for instance -- you will have to call `update()`, instead (see below)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile score.py\n",
"import joblib\n",
"import json\n",
"import numpy as np\n",
"import os\n",
"\n",
"from inference_schema.schema_decorators import input_schema, output_schema\n",
"from inference_schema.parameter_types.numpy_parameter_type import NumpyParameterType\n",
"\n",
"def init():\n",
" global model\n",
" # AZUREML_MODEL_DIR is an environment variable created during deployment.\n",
" # It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)\n",
" # For multiple models, it points to the folder containing all deployed models (./azureml-models)\n",
" model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_regression_model.pkl')\n",
" # Deserialize the model file back into a sklearn model.\n",
" model = joblib.load(model_path)\n",
"\n",
"input_sample = np.array([[10.0, 9.0, 8.0, 7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0]])\n",
"output_sample = np.array([3726.995])\n",
"\n",
"@input_schema('data', NumpyParameterType(input_sample))\n",
"@output_schema(NumpyParameterType(output_sample))\n",
"def run(data):\n",
" try:\n",
" result = model.predict(data)\n",
" # You can return any JSON-serializable object.\n",
" return 'Hello from the updated score.py: ' + str(result.tolist())\n",
" except Exception as e:\n",
" error = str(e)\n",
" return error"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_service.reload()\n",
"print(\"--------------------------------------------------------------\")\n",
"\n",
"# After calling reload(), run() will return the updated message.\n",
"local_service.run(sample_input)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Update Service\n",
"\n",
"If you want to change your model(s), Conda dependencies or deployment configuration, call `update()` to rebuild the Docker image.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_service.update(models=[model],\n",
" inference_config=inference_config,\n",
" deployment_config=deployment_config)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deploy model to AKS cluster based on the LocalWebservice's configuration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# This is a one time setup for AKS Cluster. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it.\n",
"from azureml.core.compute import AksCompute, ComputeTarget\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# Choose a name for your AKS cluster\n",
"aks_name = 'my-aks-9' \n",
"\n",
"# Verify the cluster does not exist already\n",
"try:\n",
" aks_target = ComputeTarget(workspace=ws, name=aks_name)\n",
" print('Found existing cluster, use it.')\n",
"except ComputeTargetException:\n",
" # Use the default configuration (can also provide parameters to customize)\n",
" prov_config = AksCompute.provisioning_configuration()\n",
"\n",
" # Create the cluster\n",
" aks_target = ComputeTarget.create(workspace = ws, \n",
" name = aks_name, \n",
" provisioning_configuration = prov_config)\n",
"\n",
"if aks_target.get_status() != \"Succeeded\":\n",
" aks_target.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.webservice import AksWebservice\n",
"# Set the web service configuration (using default here)\n",
"aks_config = AksWebservice.deploy_configuration()\n",
"\n",
"# # Enable token auth and disable (key) auth on the webservice\n",
"# aks_config = AksWebservice.deploy_configuration(token_auth_enabled=True, auth_enabled=False)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"aks_service_name ='aks-service-1'\n",
"\n",
"aks_service = local_service.deploy_to_cloud(name=aks_service_name,\n",
" deployment_config=aks_config,\n",
" deployment_target=aks_target)\n",
"\n",
"aks_service.wait_for_deployment(show_output = True)\n",
"print(aks_service.state)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Test aks service\n",
"\n",
"sample_input = json.dumps({\n",
" 'data': dataset_x[0:2].tolist()\n",
"})\n",
"\n",
"aks_service.run(sample_input)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Delete the service if not needed.\n",
"aks_service.delete()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Delete Service"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_service.delete()"
]
}
],
"metadata": {
"authors": [
{
"name": "keriehm"
}
],
"category": "tutorial",
"compute": [
"Local"
],
"datasets": [
"None"
],
"deployment": [
"Local"
],
"exclude_from_index": false,
"framework": [
"None"
],
"friendly_name": "Register a model and deploy locally",
"index_order": 1,
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.8"
},
"star_tag": [],
"tags": [
"None"
],
"task": "Deployment"
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -0,0 +1,371 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Deploy models to Azure Kubernetes Service (AKS) using controlled roll out\n",
"This notebook will show you how to deploy mulitple AKS webservices with the same scoring endpoint and how to roll out your models in a controlled manner by configuring % of scoring traffic going to each webservice. If you are using a Notebook VM, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) to install the Azure Machine Learning Python SDK and create an Azure ML Workspace."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Check for latest version\n",
"import azureml.core\n",
"print(azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize workspace\n",
"Create a [Workspace](https://docs.microsoft.com/python/api/azureml-core/azureml.core.workspace%28class%29?view=azure-ml-py) object from your persisted configuration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.workspace import Workspace\n",
"\n",
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Register the model\n",
"Register a file or folder as a model by calling [Model.register()](https://docs.microsoft.com/python/api/azureml-core/azureml.core.model.model?view=azure-ml-py#register-workspace--model-path--model-name--tags-none--properties-none--description-none--datasets-none--model-framework-none--model-framework-version-none--child-paths-none-).\n",
"In addition to the content of the model file itself, your registered model will also store model metadata -- model description, tags, and framework information -- that will be useful when managing and deploying models in your workspace. Using tags, for instance, you can categorize your models and apply filters when listing models in your workspace."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Model\n",
"\n",
"model = Model.register(workspace=ws,\n",
" model_name='sklearn_regression_model.pkl', # Name of the registered model in your workspace.\n",
" model_path='./sklearn_regression_model.pkl', # Local file to upload and register as a model.\n",
" model_framework=Model.Framework.SCIKITLEARN, # Framework used to create the model.\n",
" model_framework_version='0.19.1', # Version of scikit-learn used to create the model.\n",
" description='Ridge regression model to predict diabetes progression.',\n",
" tags={'area': 'diabetes', 'type': 'regression'})\n",
"\n",
"print('Name:', model.name)\n",
"print('Version:', model.version)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Register an environment (for all models)\n",
"\n",
"If you control over how your model is run, or if it has special runtime requirements, you can specify your own environment and scoring method.\n",
"\n",
"Specify the model's runtime environment by creating an [Environment](https://docs.microsoft.com/python/api/azureml-core/azureml.core.environment%28class%29?view=azure-ml-py) object and providing the [CondaDependencies](https://docs.microsoft.com/python/api/azureml-core/azureml.core.conda_dependencies.condadependencies?view=azure-ml-py) needed by your model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Environment\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"\n",
"environment=Environment('my-sklearn-environment')\n",
"environment.python.conda_dependencies = CondaDependencies.create(pip_packages=[\n",
" 'azureml-defaults',\n",
" 'inference-schema[numpy-support]',\n",
" 'numpy',\n",
" 'scikit-learn==0.19.1',\n",
" 'scipy'\n",
"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When using a custom environment, you must also provide Python code for initializing and running your model. An example script is included with this notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"with open('score.py') as f:\n",
" print(f.read())"
]
},
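{
"cell_type": "markdown",
"metadata": {},
"source": [
"An entry script defines two functions: `init()`, which loads the model once when the service container starts, and `run(raw_data)`, which is invoked for every scoring request. The `score.py` printed above is the authoritative version; the sketch below only restates its structure:\n",
"```python\n",
"import json\n",
"import numpy\n",
"from sklearn.externals import joblib\n",
"from azureml.core.model import Model\n",
"\n",
"def init():\n",
"    # Called once at container startup: locate and deserialize the registered model.\n",
"    global model\n",
"    model_path = Model.get_model_path('sklearn_regression_model.pkl')\n",
"    model = joblib.load(model_path)\n",
"\n",
"def run(raw_data):\n",
"    # Called per request: parse the JSON payload, score it, return a JSON-serializable result.\n",
"    data = numpy.array(json.loads(raw_data)['data'])\n",
"    return model.predict(data).tolist()\n",
"```"
]
},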
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create the InferenceConfig\n",
"Create the inference configuration to reference your environment and entry script during deployment"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.model import InferenceConfig\n",
"\n",
"inference_config = InferenceConfig(entry_script='score.py', \n",
" source_directory='.',\n",
" environment=environment)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Provision the AKS Cluster\n",
"If you already have an AKS cluster attached to this workspace, skip the step below and provide the name of the cluster.\n",
"\n",
"> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist."
]
},
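{
"cell_type": "markdown",
"metadata": {},
"source": [
"If the cluster is already attached to this workspace, a minimal sketch for retrieving it by name (here assumed to be `my-aks`) instead of creating a new one:\n",
"```python\n",
"from azureml.core.compute import ComputeTarget\n",
"\n",
"# Look up an AKS compute target that already exists in the workspace.\n",
"aks_target = ComputeTarget(ws, 'my-aks')\n",
"```"
]
},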
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import AksCompute\n",
"from azureml.core.compute import ComputeTarget\n",
"# Use the default configuration (can also provide parameters to customize)\n",
"prov_config = AksCompute.provisioning_configuration()\n",
"\n",
"aks_name = 'my-aks' \n",
"# Create the cluster\n",
"aks_target = ComputeTarget.create(workspace = ws, \n",
" name = aks_name, \n",
" provisioning_configuration = prov_config) \n",
"aks_target.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create an Endpoint and add a version (AKS service)\n",
"This creates a new endpoint and adds a version behind it. By default the first version added is the default version. You can specify the traffic percentile a version takes behind an endpoint. \n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# deploying the model and create a new endpoint\n",
"from azureml.core.webservice import AksEndpoint\n",
"# from azureml.core.compute import ComputeTarget\n",
"\n",
"#select a created compute\n",
"compute = ComputeTarget(ws, 'my-aks')\n",
"namespace_name=\"endpointnamespace\"\n",
"# define the endpoint name\n",
"endpoint_name = \"myendpoint1\"\n",
"# define the service name\n",
"version_name= \"versiona\"\n",
"\n",
"endpoint_deployment_config = AksEndpoint.deploy_configuration(tags = {'modelVersion':'firstversion', 'department':'finance'}, \n",
" description = \"my first version\", namespace = namespace_name, \n",
" version_name = version_name, traffic_percentile = 40)\n",
"\n",
"endpoint = Model.deploy(ws, endpoint_name, [model], inference_config, endpoint_deployment_config, compute)\n",
"endpoint.wait_for_deployment(True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"endpoint.get_logs()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Add another version of the service to an existing endpoint\n",
"This adds another version behind an existing endpoint. You can specify the traffic percentile the new version takes. If no traffic_percentile is specified then it defaults to 0. All the unspecified traffic percentile (in this example 50) across all versions goes to default version."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Adding a new version to an existing Endpoint.\n",
"version_name_add=\"versionb\" \n",
"\n",
"endpoint.create_version(version_name = version_name_add, inference_config=inference_config, models=[model], tags = {'modelVersion':'secondversion', 'department':'finance'}, \n",
" description = \"my second version\", traffic_percentile = 10)\n",
"endpoint.wait_for_deployment(True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Update an existing version in an endpoint\n",
"There are two types of versions: control and treatment. An endpoint contains one or more treatment versions but only one control version. This categorization helps compare the different versions against the defined control version."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"endpoint.update_version(version_name=endpoint.versions[version_name_add].name, description=\"my second version update\", traffic_percentile=40, is_default=True, is_control_version_type=True)\n",
"endpoint.wait_for_deployment(True)"
]
},
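{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `versions` property used above is a dictionary keyed by version name, so you can quickly confirm which versions sit behind the endpoint after the update:\n",
"```python\n",
"# List the names of all versions currently deployed behind the endpoint.\n",
"print(list(endpoint.versions.keys()))\n",
"```"
]
},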
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Test the web service using run method\n",
"Test the web sevice by passing in data. Run() method retrieves API keys behind the scenes to make sure that call is authenticated."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Scoring on endpoint\n",
"import json\n",
"test_sample = json.dumps({'data': [\n",
" [1,2,3,4,5,6,7,8,9,10], \n",
" [10,9,8,7,6,5,4,3,2,1]\n",
"]})\n",
"\n",
"test_sample_encoded = bytes(test_sample, encoding='utf8')\n",
"prediction = endpoint.run(input_data=test_sample_encoded)\n",
"print(prediction)"
]
},
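{
"cell_type": "markdown",
"metadata": {},
"source": [
"The same request can also be sent over plain HTTP. A sketch assuming the default key-based authentication, using the `scoring_uri` and `get_keys()` members of the deployed web service:\n",
"```python\n",
"import requests\n",
"\n",
"# Retrieve the primary authentication key and call the scoring URI directly.\n",
"primary_key, secondary_key = endpoint.get_keys()\n",
"headers = {'Content-Type': 'application/json',\n",
"           'Authorization': 'Bearer ' + primary_key}\n",
"\n",
"response = requests.post(endpoint.scoring_uri, data=test_sample, headers=headers)\n",
"print(response.json())\n",
"```"
]
},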
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Delete Resources"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# deleting a version in an endpoint\n",
"endpoint.delete_version(version_name=version_name)\n",
"endpoint.wait_for_deployment(True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# deleting an endpoint, this will delete all versions in the endpoint and the endpoint itself\n",
"endpoint.delete()"
]
}
],
"metadata": {
"authors": [
{
"name": "shipatel"
}
],
"category": "deployment",
"compute": [
"None"
],
"datasets": [
"Diabetes"
],
"deployment": [
"Azure Kubernetes Service"
],
"exclude_from_index": false,
"framework": [
"Scikit-learn"
],
"friendly_name": "Deploy models to AKS using controlled roll out",
"index_order": 3,
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.0"
},
"star_tag": [
"featured"
],
"tags": [
"None"
],
"task": "Deploy a model with Azure Machine Learning"
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -0,0 +1,4 @@
name: deploy-aks-with-controlled-rollout
dependencies:
- pip:
- azureml-sdk


@@ -0,0 +1,28 @@
import pickle
import json
import numpy
from sklearn.externals import joblib
from sklearn.linear_model import Ridge
from azureml.core.model import Model
def init():
global model
# note here "sklearn_regression_model.pkl" is the name of the model registered under
# this is a different behavior than before when the code is run locally, even though the code is the same.
model_path = Model.get_model_path('sklearn_regression_model.pkl')
# deserialize the model file back into a sklearn model
model = joblib.load(model_path)
# note you can pass in multiple rows for scoring
def run(raw_data):
try:
data = json.loads(raw_data)['data']
data = numpy.array(data)
result = model.predict(data)
# you can return any data type as long as it is JSON-serializable
return result.tolist()
except Exception as e:
error = str(e)
return error


@@ -0,0 +1,498 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Enabling App Insights for Services in Production\n",
"With this notebook, you can learn how to enable App Insights for standard service monitoring, plus, we provide examples for doing custom logging within a scoring files in a model.\n",
"\n",
"\n",
"## What does Application Insights monitor?\n",
"It monitors request rates, response times, failure rates, etc. For more information visit [App Insights docs.](https://docs.microsoft.com/en-us/azure/application-insights/app-insights-overview)\n",
"\n",
"\n",
"## What is different compared to standard production deployment process?\n",
"If you want to enable generic App Insights for a service run:\n",
"```python\n",
"aks_service= Webservice(ws, \"aks-w-dc2\")\n",
"aks_service.update(enable_app_insights=True)```\n",
"Where \"aks-w-dc2\" is your service name. You can also do this from the Azure Portal under your Workspace--> deployments--> Select deployment--> Edit--> Advanced Settings--> Select \"Enable AppInsights diagnostics\"\n",
"\n",
"If you want to log custom traces, you will follow the standard deplyment process for AKS and you will:\n",
"1. Update scoring file.\n",
"2. Update aks configuration.\n",
"3. Deploy the model with this new configuration. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/deployment/enable-app-insights-in-production-service/enable-app-insights-in-production-service.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1. Import your dependencies"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"import json\n",
"\n",
"from azureml.core import Workspace\n",
"from azureml.core.compute import AksCompute, ComputeTarget\n",
"from azureml.core.webservice import AksWebservice\n",
"\n",
"print(azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2. Set up your configuration and create a workspace\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 3. Register Model\n",
"Register an existing trained model, add descirption and tags."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Model\n",
"\n",
"model = Model.register(model_path=\"sklearn_regression_model.pkl\", # This points to a local file.\n",
" model_name=\"sklearn_regression_model.pkl\", # This is the name the model is registered as.\n",
" tags={'area': \"diabetes\", 'type': \"regression\"},\n",
" description=\"Ridge regression model to predict diabetes\",\n",
" workspace=ws)\n",
"\n",
"print(model.name, model.description, model.version)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 4. *Update your scoring file with custom print statements*\n",
"Here is an example:\n",
"### a. In your init function add:\n",
"```python\n",
"print (\"model initialized\" + time.strftime(\"%H:%M:%S\"))```\n",
"\n",
"### b. In your run function add:\n",
"```python\n",
"print (\"Prediction created\" + time.strftime(\"%H:%M:%S\"))```"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile score.py\n",
"import os\n",
"import pickle\n",
"import json\n",
"import numpy\n",
"from sklearn.externals import joblib\n",
"from sklearn.linear_model import Ridge\n",
"import time\n",
"\n",
"def init():\n",
" global model\n",
" #Print statement for appinsights custom traces:\n",
" print (\"model initialized\" + time.strftime(\"%H:%M:%S\"))\n",
"\n",
" # AZUREML_MODEL_DIR is an environment variable created during deployment.\n",
" # It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)\n",
" # For multiple models, it points to the folder containing all deployed models (./azureml-models)\n",
" model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_regression_model.pkl')\n",
"\n",
" # deserialize the model file back into a sklearn model\n",
" model = joblib.load(model_path)\n",
"\n",
"\n",
"# note you can pass in multiple rows for scoring\n",
"def run(raw_data):\n",
" try:\n",
" data = json.loads(raw_data)['data']\n",
" data = numpy.array(data)\n",
" result = model.predict(data)\n",
" print (\"Prediction created\" + time.strftime(\"%H:%M:%S\"))\n",
" # you can return any datatype as long as it is JSON-serializable\n",
" return result.tolist()\n",
" except Exception as e:\n",
" error = str(e)\n",
" print (error + time.strftime(\"%H:%M:%S\"))\n",
" return error"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 5. *Create myenv.yml file*\n",
"Please note that you must indicate azureml-defaults with verion >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.conda_dependencies import CondaDependencies\n",
"\n",
"myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn==0.20.3'],\n",
" pip_packages=['azureml-defaults'])\n",
"\n",
"with open(\"myenv.yml\",\"w\") as f:\n",
" f.write(myenv.serialize_to_string())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 6. Create Inference Configuration"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.environment import Environment\n",
"from azureml.core.model import InferenceConfig\n",
"\n",
"myenv = Environment.from_conda_specification(name=\"myenv\", file_path=\"myenv.yml\")\n",
"inference_config = InferenceConfig(entry_script=\"score.py\", environment=myenv)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deploy to ACI (Optional)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.webservice import AciWebservice\n",
"\n",
"aci_deployment_config = AciWebservice.deploy_configuration(cpu_cores=1,\n",
" memory_gb=1,\n",
" tags={'area': \"diabetes\", 'type': \"regression\"},\n",
" description=\"Predict diabetes using regression model\",\n",
" enable_app_insights=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"aci_service_name = \"aci-service-appinsights\"\n",
"\n",
"aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aci_deployment_config, overwrite=True)\n",
"aci_service.wait_for_deployment(show_output=True)\n",
"\n",
"print(aci_service.state)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if aci_service.state == \"Healthy\":\n",
" test_sample = json.dumps({\n",
" \"data\": [\n",
" [1,28,13,45,54,6,57,8,8,10],\n",
" [101,9,8,37,6,45,4,3,2,41]\n",
" ]\n",
" })\n",
"\n",
" prediction = aci_service.run(test_sample)\n",
"\n",
" print(prediction)\n",
"else:\n",
" raise ValueError(\"Service deployment isn't healthy, can't call the service. Error: \", aci_service.error)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 7. Deploy to AKS service"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create AKS compute if you haven't done so.\n",
"\n",
"> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AksCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"aks_name = \"my-aks-insights\"\n",
"\n",
"creating_compute = False\n",
"try:\n",
" aks_target = ComputeTarget(ws, aks_name)\n",
" print(\"Using existing AKS compute target {}.\".format(aks_name))\n",
"except ComputeTargetException:\n",
" print(\"Creating a new AKS compute target {}.\".format(aks_name))\n",
"\n",
" # Use the default configuration (can also provide parameters to customize).\n",
" prov_config = AksCompute.provisioning_configuration()\n",
" aks_target = ComputeTarget.create(workspace=ws,\n",
" name=aks_name,\n",
" provisioning_configuration=prov_config)\n",
" creating_compute = True"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"if creating_compute and aks_target.provisioning_state != \"Succeeded\":\n",
" aks_target.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(aks_target.provisioning_state)\n",
"print(aks_target.provisioning_errors)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you already have a cluster you can attach the service to it:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```python\n",
"%%time\n",
"resource_id = '/subscriptions/<subscriptionid>/resourcegroups/<resourcegroupname>/providers/Microsoft.ContainerService/managedClusters/<aksservername>'\n",
"create_name= 'myaks4'\n",
"attach_config = AksCompute.attach_configuration(resource_id=resource_id)\n",
"aks_target = ComputeTarget.attach(workspace=ws,\n",
" name=create_name,\n",
" attach_configuration=attach_config)\n",
"## Wait for the operation to complete\n",
"aks_target.wait_for_provisioning(True)```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### a. *Activate App Insights through updating AKS Webservice configuration*\n",
"In order to enable App Insights in your service you will need to update your AKS configuration file:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Set the web service configuration.\n",
"aks_deployment_config = AksWebservice.deploy_configuration(enable_app_insights=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### b. Deploy your service"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if aks_target.provisioning_state == \"Succeeded\":\n",
" aks_service_name = \"aks-service-appinsights\"\n",
" aks_service = Model.deploy(ws,\n",
" aks_service_name,\n",
" [model],\n",
" inference_config,\n",
" aks_deployment_config,\n",
" deployment_target=aks_target,\n",
" overwrite=True)\n",
" aks_service.wait_for_deployment(show_output=True)\n",
" print(aks_service.state)\n",
"else:\n",
" raise ValueError(\"AKS cluster provisioning failed. Error: \", aks_target.provisioning_errors)"
]
},
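{
"cell_type": "markdown",
"metadata": {},
"source": [
"If the deployment does not reach the `Healthy` state, the scoring container logs usually explain why. A quick check:\n",
"```python\n",
"# Retrieve the container logs to debug a failed or unhealthy deployment.\n",
"print(aks_service.get_logs())\n",
"```"
]
},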
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 8. Test your service "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"\n",
"if aks_service.state == \"Healthy\":\n",
" test_sample = json.dumps({\n",
" \"data\": [\n",
" [1,28,13,45,54,6,57,8,8,10],\n",
" [101,9,8,37,6,45,4,3,2,41]\n",
" ]\n",
" })\n",
"\n",
" prediction = aks_service.run(input_data=test_sample)\n",
" print(prediction)\n",
"else:\n",
" raise ValueError(\"Service deployment isn't healthy, can't call the service. Error: \", aks_service.error)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 9. See your service telemetry in App Insights\n",
"1. Go to the [Azure Portal](https://portal.azure.com/)\n",
"2. All resources--> Select the subscription/resource group where you created your Workspace--> Select the App Insights type\n",
"3. Click on the AppInsights resource. You'll see a highlevel dashboard with information on Requests, Server response time and availability.\n",
"4. Click on the top banner \"Analytics\"\n",
"5. In the \"Schema\" section select \"traces\" and run your query.\n",
"6. Voila! All your custom traces should be there."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Disable App Insights"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"aks_service.update(enable_app_insights=False)\n",
"aks_service.wait_for_deployment(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Clean up"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"aks_service.delete()\n",
"aci_service.delete()\n",
"model.delete()\n",
"if creating_compute:\n",
" aks_target.delete()"
]
}
],
"metadata": {
"authors": [
{
"name": "gopalv"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.3"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -0,0 +1,4 @@
name: enable-app-insights-in-production-service
dependencies:
- pip:
- azureml-sdk


@@ -0,0 +1,2 @@
RUN apt-get update
RUN apt-get install -y libgomp1


@@ -0,0 +1,39 @@
# ONNX on Azure Machine Learning
These tutorials show how to create and deploy Open Neural Network eXchange ([ONNX](http://onnx.ai)) models in Azure Machine Learning environments using [ONNX Runtime](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-build-deploy-onnx) for inference. Once deployed as a web service, you can ping the model with your own set of images to be analyzed!
## Tutorials
0. If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, [Configure your Azure Machine Learning Workspace](../../../configuration.ipynb)
#### Obtain pretrained models from the [ONNX Model Zoo](https://github.com/onnx/models) and deploy with ONNX Runtime
1. [MNIST - Handwritten Digit Classification with ONNX Runtime](onnx-inference-mnist-deploy.ipynb)
2. [Emotion FER+ - Facial Expression Recognition with ONNX Runtime](onnx-inference-facial-expression-recognition-deploy.ipynb)
#### Train model on Azure ML, convert to ONNX, and deploy with ONNX Runtime
3. [MNIST - Train using PyTorch and deploy with ONNX Runtime](onnx-train-pytorch-aml-deploy-mnist.ipynb)
#### Demo Notebooks from Microsoft Ignite 2018
Note that the following notebooks do not have evaluation sections for the models since they were deployed as part of a live demo. You can find the respective pre-processing and post-processing code linked from the ONNX Model Zoo Github pages ([ResNet](https://github.com/onnx/models/tree/master/models/image_classification/resnet), [TinyYoloV2](https://github.com/onnx/models/tree/master/tiny_yolov2)), or experiment with the ONNX models by [running them in the browser](https://microsoft.github.io/onnxjs-demo/#/).
4. [ResNet50 - Image Recognition with ONNX Runtime](onnx-modelzoo-aml-deploy-resnet50.ipynb)
5. [TinyYoloV2 - Convert from CoreML and deploy with ONNX Runtime](onnx-convert-aml-deploy-tinyyolo.ipynb)
## Documentation
- [ONNX Runtime Python API Documentation](http://aka.ms/onnxruntime-python)
- [Azure Machine Learning API Documentation](http://aka.ms/aml-docs)
## Related Articles
- [Building and Deploying ONNX Runtime Models](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-build-deploy-onnx)
- [Azure AI Making AI Real for Business](https://aka.ms/aml-blog-overview)
- [What's new in Azure Machine Learning](https://aka.ms/aml-blog-whats-new)
## License
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.
## Acknowledgements
These tutorials were developed by Vinitra Swamy and Prasanth Pulavarthi of the Microsoft AI Frameworks team and adapted for presentation at Microsoft Ignite 2018.
![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/deployment/onnx/README.png)

Binary file not shown.


@@ -0,0 +1,135 @@
# This is a modified version of https://github.com/pytorch/examples/blob/master/mnist/main.py which is
# licensed under BSD 3-Clause (https://github.com/pytorch/examples/blob/master/LICENSE)
from __future__ import print_function
import argparse
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
import os
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
self.conv2_drop = nn.Dropout2d()
self.fc1 = nn.Linear(320, 50)
self.fc2 = nn.Linear(50, 10)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), 2))
x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
x = x.view(-1, 320)
x = F.relu(self.fc1(x))
x = F.dropout(x, training=self.training)
x = self.fc2(x)
return F.log_softmax(x, dim=1)
def train(args, model, device, train_loader, optimizer, epoch, output_dir):
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
data, target = data.to(device), target.to(device)
optimizer.zero_grad()
output = model(data)
loss = F.nll_loss(output, target)
loss.backward()
optimizer.step()
if batch_idx % args.log_interval == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.item()))
def test(args, model, device, test_loader):
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
data, target = data.to(device), target.to(device)
output = model(data)
test_loss += F.nll_loss(output, target, size_average=False, reduce=True).item() # sum up batch loss
pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
def main():
# Training settings
parser = argparse.ArgumentParser(description='PyTorch MNIST Example')
parser.add_argument('--batch-size', type=int, default=64, metavar='N',
help='input batch size for training (default: 64)')
parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',
help='input batch size for testing (default: 1000)')
parser.add_argument('--epochs', type=int, default=5, metavar='N',
help='number of epochs to train (default: 5)')
parser.add_argument('--lr', type=float, default=0.01, metavar='LR',
help='learning rate (default: 0.01)')
parser.add_argument('--momentum', type=float, default=0.5, metavar='M',
help='SGD momentum (default: 0.5)')
parser.add_argument('--no-cuda', action='store_true', default=False,
help='disables CUDA training')
parser.add_argument('--seed', type=int, default=1, metavar='S',
help='random seed (default: 1)')
parser.add_argument('--log-interval', type=int, default=10, metavar='N',
help='how many batches to wait before logging training status')
parser.add_argument('--output-dir', type=str, default='outputs')
args = parser.parse_args()
use_cuda = not args.no_cuda and torch.cuda.is_available()
torch.manual_seed(args.seed)
device = torch.device("cuda" if use_cuda else "cpu")
output_dir = args.output_dir
os.makedirs(output_dir, exist_ok=True)
kwargs = {'num_workers': 1, 'pin_memory': True} if use_cuda else {}
# Use Azure Open Datasets for MNIST dataset
datasets.MNIST.resources = [
("https://azureopendatastorage.azurefd.net/mnist/train-images-idx3-ubyte.gz",
"f68b3c2dcbeaaa9fbdd348bbdeb94873"),
("https://azureopendatastorage.azurefd.net/mnist/train-labels-idx1-ubyte.gz",
"d53e105ee54ea40749a09fcbcd1e9432"),
("https://azureopendatastorage.azurefd.net/mnist/t10k-images-idx3-ubyte.gz",
"9fb629c4189551a2d022fa330f9573f3"),
("https://azureopendatastorage.azurefd.net/mnist/t10k-labels-idx1-ubyte.gz",
"ec29112dd5afa0611ce80d1b7f02629c")
]
train_loader = torch.utils.data.DataLoader(
datasets.MNIST('data', train=True, download=True,
transform=transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))])
),
batch_size=args.batch_size, shuffle=True, **kwargs)
test_loader = torch.utils.data.DataLoader(
datasets.MNIST('data', train=False,
transform=transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))])
),
batch_size=args.test_batch_size, shuffle=True, **kwargs)
model = Net().to(device)
optimizer = optim.SGD(model.parameters(), lr=args.lr, momentum=args.momentum)
for epoch in range(1, args.epochs + 1):
train(args, model, device, train_loader, optimizer, epoch, output_dir)
test(args, model, device, test_loader)
# save model
dummy_input = torch.randn(1, 1, 28, 28, device=device)
model_path = os.path.join(output_dir, 'mnist.onnx')
torch.onnx.export(model, dummy_input, model_path)
if __name__ == '__main__':
main()


@@ -0,0 +1,434 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved. \n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/deployment/onnx/onnx-convert-aml-deploy-tinyyolo.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# YOLO Real-time Object Detection using ONNX on AzureML\n",
"\n",
"This example shows how to convert the TinyYOLO model from CoreML to ONNX and operationalize it as a web service using Azure Machine Learning services and the ONNX Runtime.\n",
"\n",
"## What is ONNX\n",
"ONNX is an open format for representing machine learning and deep learning models. ONNX enables open and interoperable AI by enabling data scientists and developers to use the tools of their choice without worrying about lock-in and flexibility to deploy to a variety of platforms. ONNX is developed and supported by a community of partners including Microsoft, Facebook, and Amazon. For more information, explore the [ONNX website](http://onnx.ai).\n",
"\n",
"## YOLO Details\n",
"You Only Look Once (YOLO) is a state-of-the-art, real-time object detection system. For more information about YOLO, please visit the [YOLO website](https://pjreddie.com/darknet/yolo/)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites\n",
"\n",
"To make the best use of your time, make sure you have done the following:\n",
"\n",
"* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning\n",
"* If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook to:\n",
" * install the AML SDK\n",
" * create a workspace and its configuration file (config.json)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Check core SDK version number\n",
"import azureml.core\n",
"\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Install necessary packages\n",
"\n",
"You'll need to run the following commands to use this tutorial:\n",
"\n",
"```sh\n",
"pip install onnxmltools\n",
"pip install coremltools # use this on Linux and Mac\n",
"pip install git+https://github.com/apple/coremltools # use this on Windows\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Convert model to ONNX\n",
"\n",
"First we download the CoreML model. We use the CoreML model from [Matthijs Hollemans's tutorial](https://github.com/hollance/YOLO-CoreML-MPSNNGraph). This may take a few minutes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import urllib.request\n",
"\n",
"coreml_model_url = \"https://github.com/hollance/YOLO-CoreML-MPSNNGraph/raw/master/TinyYOLO-CoreML/TinyYOLO-CoreML/TinyYOLO.mlmodel\"\n",
"urllib.request.urlretrieve(coreml_model_url, filename=\"TinyYOLO.mlmodel\")\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then we use ONNXMLTools to convert the model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import onnxmltools\n",
"import coremltools\n",
"\n",
"# Load a CoreML model\n",
"coreml_model = coremltools.utils.load_spec('TinyYOLO.mlmodel')\n",
"\n",
"# Convert from CoreML into ONNX\n",
"onnx_model = onnxmltools.convert_coreml(coreml_model, 'TinyYOLOv2')\n",
"\n",
"# Fix the preprocessor bias in the ImageScaler\n",
"for init in onnx_model.graph.initializer:\n",
" if init.name == 'scalerPreprocessor_bias':\n",
" init.dims[1] = 1\n",
"\n",
"# Save ONNX model\n",
"onnxmltools.utils.save_model(onnx_model, 'tinyyolov2.onnx')\n",
"\n",
"import os\n",
"print(os.path.getsize('tinyyolov2.onnx'))"
]
},
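{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before registering the converted model, you can optionally sanity-check it with the `onnx` package, assuming it is installed in your environment:\n",
"```python\n",
"import onnx\n",
"\n",
"# Load the converted model and run the structural validator.\n",
"converted_model = onnx.load('tinyyolov2.onnx')\n",
"onnx.checker.check_model(converted_model)\n",
"print('tinyyolov2.onnx passed the ONNX checker')\n",
"```"
]
},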
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deploying as a web service with Azure ML\n",
"\n",
"### Load Azure ML workspace\n",
"\n",
"We begin by instantiating a workspace object from the existing workspace created earlier in the configuration notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.from_config()\n",
"print(ws.name, ws.location, ws.resource_group, sep = '\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Registering your model with Azure ML\n",
"\n",
"Now we upload the model and register it in the workspace."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.model import Model\n",
"\n",
"model = Model.register(model_path = \"tinyyolov2.onnx\",\n",
" model_name = \"tinyyolov2\",\n",
" tags = {\"onnx\": \"demo\"},\n",
" description = \"TinyYOLO\",\n",
" workspace = ws)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Displaying your registered models\n",
"\n",
"You can optionally list out all the models that you have registered in this workspace."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"models = ws.models\n",
"for name, m in models.items():\n",
" print(\"Name:\", name,\"\\tVersion:\", m.version, \"\\tDescription:\", m.description, m.tags)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Write scoring file\n",
"\n",
"We are now going to deploy our ONNX model on Azure ML using the ONNX Runtime. We begin by writing a score.py file that will be invoked by the web service call. The `init()` function is called once when the container is started so we load the model using the ONNX Runtime into a global session object."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile score.py\n",
"import json\n",
"import time\n",
"import sys\n",
"import os\n",
"from azureml.core.model import Model\n",
"import numpy as np # we're going to use numpy to process input and output data\n",
"import onnxruntime # to inference ONNX models, we use the ONNX Runtime\n",
"\n",
"def init():\n",
" global session\n",
" model = Model.get_model_path(model_name = 'tinyyolov2')\n",
" session = onnxruntime.InferenceSession(model)\n",
"\n",
"def preprocess(input_data_json):\n",
" # convert the JSON data into the tensor input\n",
" return np.array(json.loads(input_data_json)['data']).astype('float32')\n",
"\n",
"def postprocess(result):\n",
" return np.array(result).tolist()\n",
"\n",
"def run(input_data_json):\n",
" try:\n",
" start = time.time() # start timer\n",
" input_data = preprocess(input_data_json)\n",
" input_name = session.get_inputs()[0].name # get the id of the first input of the model \n",
" result = session.run([], {input_name: input_data})\n",
" end = time.time() # stop timer\n",
" return {\"result\": postprocess(result),\n",
" \"time\": end - start}\n",
" except Exception as e:\n",
" result = str(e)\n",
" return {\"error\": result}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Setting up inference configuration\n",
"First we create a YAML file that specifies which dependencies we would like to see in our container. Please note that you must include azureml-defaults with verion >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.conda_dependencies import CondaDependencies \n",
"\n",
"myenv = CondaDependencies.create(pip_packages=[\"numpy\", \"onnxruntime\", \"azureml-core\", \"azureml-defaults\"])\n",
"\n",
"with open(\"myenv.yml\",\"w\") as f:\n",
" f.write(myenv.serialize_to_string())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then we create the inference configuration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.model import InferenceConfig\n",
"from azureml.core.environment import Environment\n",
"\n",
"\n",
"myenv = Environment.from_conda_specification(name=\"myenv\", file_path=\"myenv.yml\")\n",
"inference_config = InferenceConfig(entry_script=\"score.py\", environment=myenv)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Deploy the model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.webservice import AciWebservice\n",
"\n",
"aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1, \n",
" memory_gb = 1, \n",
" tags = {'demo': 'onnx'}, \n",
" description = 'web service for TinyYOLO ONNX model')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following cell will take a few minutes to run as the model gets packaged up and deployed to ACI."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"aci_service_name = 'my-aci-service-tiny-yolo'\n",
"print(\"Service\", aci_service_name)\n",
"aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)\n",
"aci_service.wait_for_deployment(True)\n",
"print(aci_service.state)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In case the deployment fails, you can check the logs. Make sure to delete your aci_service before trying again."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if aci_service.state != 'Healthy':\n",
" # run this command for debugging.\n",
" print(aci_service.get_logs())\n",
" aci_service.delete()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Success!\n",
"\n",
"If you've made it this far, you've deployed a working web service that does object detection using an ONNX model. You can get the URL for the webservice with the code below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(aci_service.scoring_uri)"
]
},
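{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick smoke test you can post random data to the service. The sketch below assumes TinyYOLOv2's usual (1, 3, 416, 416) float32 input shape and only inspects the shape of the returned tensor:\n",
"```python\n",
"import json\n",
"import numpy as np\n",
"\n",
"# Random image-shaped tensor, purely to exercise the scoring path.\n",
"dummy_image = np.random.rand(1, 3, 416, 416).astype('float32')\n",
"payload = json.dumps({'data': dummy_image.tolist()})\n",
"\n",
"response = aci_service.run(input_data=payload)\n",
"if 'result' in response:\n",
"    print('inference time (s):', response['time'])\n",
"    print('output shape:', np.array(response['result']).shape)\n",
"else:\n",
"    print(response)\n",
"```"
]
},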
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When you are eventually done using the web service, remember to delete it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"aci_service.delete()"
]
}
],
"metadata": {
"authors": [
{
"name": "viswamy"
}
],
"category": "deployment",
"compute": [
"local"
],
"datasets": [
"PASCAL VOC"
],
"deployment": [
"Azure Container Instance"
],
"exclude_from_index": false,
"framework": [
"ONNX"
],
"friendly_name": "Convert and deploy TinyYolo with ONNX Runtime",
"index_order": 5,
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
},
"star_tag": [
"featured"
],
"tags": [
"ONNX Converter"
],
"task": "Object Detection"
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -0,0 +1,8 @@
name: onnx-convert-aml-deploy-tinyyolo
dependencies:
- pip:
- azureml-sdk
- numpy
- git+https://github.com/apple/coremltools@v2.1
- onnx<1.7.0
- onnxmltools


@@ -12,7 +12,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Facial Expression Recognition using ONNX Runtime on AzureML\n",
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/deployment/onnx/onnx-inference-facial-expression-recognition-deploy.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Facial Expression Recognition (FER+) using ONNX Runtime on Azure ML\n",
"\n",
"This example shows how to deploy an image classification neural network using the Facial Expression Recognition ([FER](https://www.kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge/data)) dataset and Open Neural Network eXchange format ([ONNX](http://aka.ms/onnxdocarticle)) on the Azure Machine Learning platform. This tutorial will show you how to deploy a FER+ model from the [ONNX model zoo](https://github.com/onnx/models), use it to make predictions using ONNX Runtime Inference, and deploy it as a web service in Azure.\n",
"\n",
@@ -34,32 +41,54 @@
"## Prerequisites\n",
"\n",
"### 1. Install Azure ML SDK and create a new workspace\n",
"Please follow [00.configuration.ipynb](https://github.com/Azure/MachineLearningNotebooks/blob/master/00.configuration.ipynb) notebook.\n",
"\n",
"If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, please follow [Azure ML configuration notebook](../../../configuration.ipynb) to set up your environment.\n",
"\n",
"### 2. Install additional packages needed for this Notebook\n",
"You need to install the popular plotting library `matplotlib`, the image manipulation library `PIL`, and the `onnx` library in the conda environment where Azure Maching Learning SDK is installed.\n",
"You need to install the popular plotting library `matplotlib`, the image manipulation library `opencv`, and the `onnx` library in the conda environment where Azure Maching Learning SDK is installed.\n",
"\n",
"```sh\n",
"(myenv) $ pip install matplotlib onnx Pillow\n",
"(myenv) $ pip install matplotlib onnx opencv-python\n",
"```\n",
"\n",
"**Debugging tip**: Make sure that to activate your virtual environment (myenv) before you re-launch this notebook using the `jupyter notebook` comand. Choose the respective Python kernel for your new virtual environment using the `Kernel > Change Kernel` menu above. If you have completed the steps correctly, the upper right corner of your screen should state `Python [conda env:myenv]` instead of `Python [default]`.\n",
"\n",
"### 3. Download sample data and pre-trained ONNX model from ONNX Model Zoo.\n",
"\n",
"[Download the ONNX Emotion FER+ model and corresponding test data](https://www.cntk.ai/OnnxModels/emotion_ferplus/opset_7/emotion_ferplus.tar.gz) and place them in the same folder as this tutorial notebook. You can unzip the file through the following line of code.\n",
"In the following lines of code, we download [the trained ONNX Emotion FER+ model and corresponding test data](https://github.com/onnx/models/tree/master/vision/body_analysis/emotion_ferplus) and place them in the same folder as this tutorial notebook. For more information about the FER+ dataset, please visit Microsoft Researcher Emad Barsoum's [FER+ source data repository](https://github.com/ebarsoum/FERPlus)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# urllib is a built-in Python library to download files from URLs\n",
"\n",
"```sh\n",
"(myenv) $ tar xvzf emotion_ferplus.tar.gz\n",
"```\n",
"# Objective: retrieve the latest version of the ONNX Emotion FER+ model files from the\n",
"# ONNX Model Zoo and save it in the same folder as this tutorial\n",
"\n",
"More information can be found about the ONNX FER+ model on [github](https://github.com/onnx/models/tree/master/emotion_ferplus). For more information about the FER+ dataset, please visit Microsoft Researcher Emad Barsoum's [FER+ source data repository](https://github.com/ebarsoum/FERPlus)."
"import urllib.request\n",
"\n",
"onnx_model_url = \"https://github.com/onnx/models/blob/main/vision/body_analysis/emotion_ferplus/model/emotion-ferplus-7.tar.gz?raw=true\"\n",
"\n",
"urllib.request.urlretrieve(onnx_model_url, filename=\"emotion-ferplus-7.tar.gz\")\n",
"\n",
"# the ! magic command tells our jupyter notebook kernel to run the following line of \n",
"# code from the command line instead of the notebook kernel\n",
"\n",
"# We use tar and xvcf to unzip the files we just retrieved from the ONNX model zoo\n",
"\n",
"!tar xvzf emotion-ferplus-7.tar.gz"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load Azure ML workspace\n",
"## Deploy a VM with your ONNX model in the Cloud\n",
"\n",
"### Load Azure ML workspace\n",
"\n",
"We begin by instantiating a workspace object from the existing workspace created earlier in the configuration notebook."
]
@@ -136,9 +165,9 @@
"metadata": {},
"outputs": [],
"source": [
"models = ws.models()\n",
"for m in models:\n",
" print(\"Name:\", m.name,\"\\tVersion:\", m.version, \"\\tDescription:\", m.description, m.tags)"
"models = ws.models\n",
"for name, m in models.items():\n",
" print(\"Name:\", name,\"\\tVersion:\", m.version, \"\\tDescription:\", m.description, m.tags)"
]
},
{
@@ -147,9 +176,9 @@
"source": [
"### ONNX FER+ Model Methodology\n",
"\n",
"The image classification model we are using is pre-trained using Microsoft's deep learning cognitive toolkit, [CNTK](https://github.com/Microsoft/CNTK), from the [ONNX model zoo](http://github.com/onnx/models). The model zoo has many other models that can be deployed on cloud providers like AzureML without any additional training. To ensure that our cloud deployed model works, we use testing data from the famous FER+ data set, provided as part of the [trained Emotion Recognition model](https://github.com/onnx/models/tree/master/emotion_ferplus) in the ONNX model zoo.\n",
"The image classification model we are using is pre-trained using Microsoft's deep learning cognitive toolkit, [CNTK](https://github.com/Microsoft/CNTK), from the [ONNX model zoo](http://github.com/onnx/models). The model zoo has many other models that can be deployed on cloud providers like AzureML without any additional training. To ensure that our cloud deployed model works, we use testing data from the well-known FER+ data set, provided as part of the [trained Emotion Recognition model](https://github.com/onnx/models/tree/master/vision/body_analysis/emotion_ferplus) in the ONNX model zoo.\n",
"\n",
"The original Facial Emotion Recognition (FER) Dataset was released in 2013, but some of the labels are not entirely appropriate for the expression. In the FER+ Dataset, each photo was evaluated by at least 10 croud sourced reviewers, creating a better basis for ground truth. \n",
"The original Facial Emotion Recognition (FER) Dataset was released in 2013 by Pierre-Luc Carrier and Aaron Courville as part of a [Kaggle Competition](https://www.kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge/data), but some of the labels are not entirely appropriate for the expression. In the FER+ Dataset, each photo was evaluated by at least 10 croud sourced reviewers, creating a more accurate basis for ground truth. \n",
"\n",
"You can see the difference of label quality in the sample model input below. The FER labels are the first word below each image, and the FER+ labels are the second word below each image.\n",
"\n",
@@ -175,7 +204,6 @@
"source": [
"# for images and plots in this notebook\n",
"import matplotlib.pyplot as plt \n",
"from IPython.display import Image\n",
"\n",
"# display images inline\n",
"%matplotlib inline"
@@ -202,20 +230,18 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deploy our model on Azure ML"
"### Specify our Score and Environment Files"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We are now going to deploy our ONNX Model on AML with inference in ONNX Runtime. We begin by writing a score.py file, which will help us run the model in our Azure ML virtual machine (VM), and then specify our environment by writing a yml file.\n",
"\n",
"You will also notice that we import the onnxruntime library to do runtime inference on our ONNX models (passing in input and evaluating out model's predicted output). More information on the API and commands can be found in the [ONNX Runtime documentation](https://aka.ms/onnxruntime).\n",
"We are now going to deploy our ONNX Model on AML with inference in ONNX Runtime. We begin by writing a score.py file, which will help us run the model in our Azure ML virtual machine (VM), and then specify our environment by writing a yml file. You will also notice that we import the onnxruntime library to do runtime inference on our ONNX models (passing in input and evaluating out model's predicted output). More information on the API and commands can be found in the [ONNX Runtime documentation](https://aka.ms/onnxruntime).\n",
"\n",
"### Write Score File\n",
"\n",
"A score file is what tells our Azure cloud service what to do. After initializing our model using azureml.core.model, we start an ONNX Runtime GPU inference session to evaluate the data passed in on our function calls."
"A score file is what tells our Azure cloud service what to do. After initializing our model using azureml.core.model, we start an ONNX Runtime inference session to evaluate the data passed in on our function calls."
]
},
{
@@ -230,12 +256,11 @@
"import onnxruntime\n",
"import sys\n",
"import os\n",
"from azureml.core.model import Model\n",
"import time\n",
"\n",
"def init():\n",
" global session, input_name, output_name\n",
" model = Model.get_model_path(model_name = 'onnx_emotion')\n",
" model = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'model.onnx')\n",
" session = onnxruntime.InferenceSession(model, None)\n",
" input_name = session.get_inputs()[0].name\n",
" output_name = session.get_outputs()[0].name \n",
@@ -248,10 +273,13 @@
" try:\n",
" # load in our data, convert to readable format\n",
" data = np.array(json.loads(input_data)['data']).astype('float32')\n",
" \n",
" start = time.time()\n",
" r = session.run([output_name], {input_name : data})\n",
" end = time.time()\n",
" \n",
" result = emotion_map(postprocess(r[0]))\n",
" \n",
" result_dict = {\"result\": result,\n",
" \"time_in_sec\": [end - start]}\n",
" except Exception as e:\n",
@@ -260,9 +288,12 @@
" return json.dumps(result_dict)\n",
"\n",
"def emotion_map(classes, N=1):\n",
" \"\"\"Take the most probable labels (output of postprocess) and returns the top N emotional labels that fit the picture.\"\"\"\n",
" \"\"\"Take the most probable labels (output of postprocess) and returns the \n",
" top N emotional labels that fit the picture.\"\"\"\n",
" \n",
" emotion_table = {'neutral':0, 'happiness':1, 'surprise':2, 'sadness':3, \n",
" 'anger':4, 'disgust':5, 'fear':6, 'contempt':7}\n",
" \n",
" emotion_table = {'neutral':0, 'happiness':1, 'surprise':2, 'sadness':3, 'anger':4, 'disgust':5, 'fear':6, 'contempt':7}\n",
" emotion_keys = list(emotion_table.keys())\n",
" emotions = []\n",
" for i in range(N):\n",
@@ -276,8 +307,8 @@
" return e_x / e_x.sum(axis=0)\n",
"\n",
"def postprocess(scores):\n",
" \"\"\"This function takes the scores generated by the network and returns the class IDs in decreasing \n",
" order of probability.\"\"\"\n",
" \"\"\"This function takes the scores generated by the network and \n",
" returns the class IDs in decreasing order of probability.\"\"\"\n",
" prob = softmax(scores)\n",
" prob = np.squeeze(prob)\n",
" classes = np.argsort(prob)[::-1]\n",
@@ -288,7 +319,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Write Environment File"
"### Write Environment File\n",
"Please note that you must indicate azureml-defaults with verion >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service."
]
},
{
@@ -299,11 +331,8 @@
"source": [
"from azureml.core.conda_dependencies import CondaDependencies \n",
"\n",
"myenv = CondaDependencies()\n",
"myenv.add_pip_package(\"numpy\")\n",
"myenv.add_pip_package(\"azureml-core\")\n",
"myenv.add_pip_package(\"onnxruntime\")\n",
"\n",
"myenv = CondaDependencies.create(pip_packages=[\"numpy\", \"onnxruntime\", \"azureml-core\", \"azureml-defaults\"])\n",
"\n",
"with open(\"myenv.yml\",\"w\") as f:\n",
" f.write(myenv.serialize_to_string())"
@@ -313,9 +342,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create the Container Image\n",
"\n",
"This step will likely take a few minutes."
"### Setup inference configuration"
]
},
{
@@ -324,49 +351,19 @@
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.image import ContainerImage\n",
"\n",
"image_config = ContainerImage.image_configuration(execution_script = \"score.py\",\n",
" runtime = \"python\",\n",
" conda_file = \"myenv.yml\",\n",
" description = \"test\",\n",
" tags = {\"demo\": \"onnx\"})\n",
"from azureml.core.model import InferenceConfig\n",
"from azureml.core.environment import Environment\n",
"\n",
"\n",
"image = ContainerImage.create(name = \"onnxtest\",\n",
" # this is the model object\n",
" models = [model],\n",
" image_config = image_config,\n",
" workspace = ws)\n",
"\n",
"image.wait_for_creation(show_output = True)"
"myenv = Environment.from_conda_specification(name=\"myenv\", file_path=\"myenv.yml\")\n",
"inference_config = InferenceConfig(entry_script=\"score.py\", environment=myenv)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Debugging\n",
"\n",
"In case you need to debug your code, the next line of code accesses the log file."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(image.image_build_log_uri)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We're all set! Let's get our model chugging.\n",
"\n",
"## Deploy the container image"
"### Deploy the model"
]
},
{
@@ -383,33 +380,26 @@
" description = 'ONNX for emotion recognition model')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following cell will likely take a few minutes to run as well."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.webservice import Webservice\n",
"\n",
"aci_service_name = 'onnx-demo-emotion'\n",
"print(\"Service\", aci_service_name)\n",
"\n",
"aci_service = Webservice.deploy_from_image(deployment_config = aciconfig,\n",
" image = image,\n",
" name = aci_service_name,\n",
" workspace = ws)\n",
"\n",
"aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)\n",
"aci_service.wait_for_deployment(True)\n",
"print(aci_service.state)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following cell will likely take a few minutes to run as well."
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -439,23 +429,56 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Testing and Evaluation"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Useful Helper Functions\n",
"## Testing and Evaluation\n",
"\n",
"We preprocess and postprocess our data (see score.py file) using the helper functions specified in the [ONNX FER+ Model page in the Model Zoo repository](https://github.com/onnx/models/tree/master/emotion_ferplus)."
"### Useful Helper Functions\n",
"\n",
"We preprocess and postprocess our data (see score.py file) using the helper functions specified in the [ONNX FER+ Model page in the Model Zoo repository](https://github.com/onnx/models/tree/master/vision/body_analysis/emotion_ferplus)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def emotion_map(classes, N=1):\n",
" \"\"\"Take the most probable labels (output of postprocess) and returns the \n",
" top N emotional labels that fit the picture.\"\"\"\n",
" \n",
" emotion_table = {'neutral':0, 'happiness':1, 'surprise':2, 'sadness':3, \n",
" 'anger':4, 'disgust':5, 'fear':6, 'contempt':7}\n",
" \n",
" emotion_keys = list(emotion_table.keys())\n",
" emotions = []\n",
" for c in range(N):\n",
" emotions.append(emotion_keys[classes[c]])\n",
" return emotions\n",
"\n",
"def softmax(x):\n",
" \"\"\"Compute softmax values (probabilities from 0 to 1) for each possible label.\"\"\"\n",
" x = x.reshape(-1)\n",
" e_x = np.exp(x - np.max(x))\n",
" return e_x / e_x.sum(axis=0)\n",
"\n",
"def postprocess(scores):\n",
" \"\"\"This function takes the scores generated by the network and \n",
" returns the class IDs in decreasing order of probability.\"\"\"\n",
" prob = softmax(scores)\n",
" prob = np.squeeze(prob)\n",
" classes = np.argsort(prob)[::-1]\n",
" return classes"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load Test Data"
"### Load Test Data\n",
"\n",
"These are already in your directory from your ONNX model download (from the model zoo).\n",
"\n",
"Notice that our Model Zoo files have a .pb extension. This is because they are [protobuf files (Protocol Buffers)](https://developers.google.com/protocol-buffers/docs/pythontutorial), so we need to read in our data through our ONNX TensorProto reader into a format we can work with, like numerical arrays."
]
},
{
@@ -475,17 +498,15 @@
"import json\n",
"import os\n",
"\n",
"from score import emotion_map, softmax, postprocess\n",
"\n",
"test_inputs = []\n",
"test_outputs = []\n",
"\n",
"# read in 3 testing images from .pb files\n",
"test_data_size = 3\n",
"\n",
"for i in np.arange(test_data_size):\n",
" input_test_data = os.path.join(model_dir, 'test_data_set_{0}'.format(i), 'input_0.pb')\n",
" output_test_data = os.path.join(model_dir, 'test_data_set_{0}'.format(i), 'output_0.pb')\n",
"for num in np.arange(test_data_size):\n",
" input_test_data = os.path.join(model_dir, 'test_data_set_{0}'.format(num), 'input_0.pb')\n",
" output_test_data = os.path.join(model_dir, 'test_data_set_{0}'.format(num), 'output_0.pb')\n",
" \n",
" # convert protobuf tensors to np arrays using the TensorProto reader from ONNX\n",
" tensor = onnx.TensorProto()\n",
@@ -499,7 +520,7 @@
" tensor.ParseFromString(f.read())\n",
" \n",
" output_data = numpy_helper.to_array(tensor)\n",
" output_processed = emotion_map(postprocess(output_data))[0]\n",
" output_processed = emotion_map(postprocess(output_data[0]))[0]\n",
" test_outputs.append(output_processed)"
]
},
@@ -512,7 +533,7 @@
},
"source": [
"### Show some sample images\n",
"We use `matplotlib` to plot 3 test images from the model zoo with their labels over them."
"We use `matplotlib` to plot 3 test images from the dataset."
]
},
{
@@ -532,7 +553,7 @@
" plt.axhline('')\n",
" plt.axvline('')\n",
" plt.text(x = 10, y = -10, s = test_outputs[test_image], fontsize = 18)\n",
" plt.imshow(test_inputs[test_image].reshape(64, 64), cmap = plt.cm.Greys)\n",
" plt.imshow(test_inputs[test_image].reshape(64, 64), cmap = plt.cm.gray)\n",
"plt.show()"
]
},
@@ -549,7 +570,7 @@
"metadata": {},
"outputs": [],
"source": [
"plt.figure(figsize = (16, 6), frameon=False)\n",
"plt.figure(figsize = (16, 6))\n",
"plt.subplot(1, 8, 1)\n",
"\n",
"plt.text(x = 0, y = -30, s = \"True Label: \", fontsize = 13, color = 'black')\n",
@@ -571,7 +592,7 @@
" print(r['error'])\n",
" break\n",
" \n",
" result = r['result'][0][0]\n",
" result = r['result'][0]\n",
" time_ms = np.round(r['time_in_sec'][0] * 1000, 2)\n",
" \n",
" ground_truth = test_outputs[i]\n",
@@ -583,7 +604,7 @@
"\n",
" # use different color for misclassified sample\n",
" font_color = 'red' if ground_truth != result else 'black'\n",
" clr_map = plt.cm.gray if ground_truth != result else plt.cm.Greys\n",
" clr_map = plt.cm.Greys if ground_truth != result else plt.cm.gray\n",
"\n",
" # ground truth labels are in blue\n",
" plt.text(x = 10, y = -70, s = ground_truth, fontsize = 18, color = 'blue')\n",
@@ -611,15 +632,30 @@
"metadata": {},
"outputs": [],
"source": [
"from PIL import Image\n",
"# Preprocessing functions take your image and format it so it can be passed\n",
"# as input into our ONNX model\n",
"\n",
"def preprocess(image_path):\n",
" input_shape = (1, 1, 64, 64)\n",
" img = Image.open(image_path)\n",
" img = img.resize((64, 64), Image.ANTIALIAS)\n",
" img_data = np.array(img)\n",
" img_data = np.resize(img_data, input_shape)\n",
" return img_data"
"import cv2\n",
"\n",
"def rgb2gray(rgb):\n",
" \"\"\"Convert the input image into grayscale\"\"\"\n",
" return np.dot(rgb[...,:3], [0.299, 0.587, 0.114])\n",
"\n",
"def resize_img(img_to_resize):\n",
" \"\"\"Resize image to FER+ model input dimensions\"\"\"\n",
" r_img = cv2.resize(img_to_resize, dsize=(64, 64), interpolation=cv2.INTER_AREA)\n",
" r_img.resize((1, 1, 64, 64))\n",
" return r_img\n",
"\n",
"def preprocess(img_to_preprocess):\n",
" \"\"\"Resize input images and convert them to grayscale.\"\"\"\n",
" if img_to_preprocess.shape == (64, 64):\n",
" img_to_preprocess.resize((1, 1, 64, 64))\n",
" return img_to_preprocess\n",
" \n",
" grayscale = rgb2gray(img_to_preprocess)\n",
" processed_img = resize_img(grayscale)\n",
" return processed_img"
]
},
{
@@ -634,14 +670,19 @@
"# Any PNG or JPG image file should work\n",
"# Make sure to include the entire path with // instead of /\n",
"\n",
"# e.g. your_test_image = \"C://Users//vinitra.swamy//Pictures//emotion_test_images//img_1.png\"\n",
"# e.g. your_test_image = \"C:/Users/vinitra.swamy/Pictures/face.png\"\n",
"\n",
"your_test_image = \"<path to file>\"\n",
"\n",
"import matplotlib.image as mpimg\n",
"\n",
"if your_test_image != \"<path to file>\":\n",
" img = preprocess(your_test_image)\n",
" img = mpimg.imread(your_test_image)\n",
" plt.subplot(1,3,1)\n",
" plt.imshow(img.reshape((64,64)), cmap = plt.cm.gray)\n",
" plt.imshow(img, cmap = plt.cm.Greys)\n",
" print(\"Old Dimensions: \", img.shape)\n",
" img = preprocess(img)\n",
" print(\"New Dimensions: \", img.shape)\n",
"else:\n",
" img = None"
]
@@ -659,21 +700,22 @@
"\n",
" try:\n",
" r = json.loads(aci_service.run(input_data))\n",
" result = r['result'][0][0]\n",
" result = r['result'][0]\n",
" time_ms = np.round(r['time_in_sec'][0] * 1000, 2)\n",
" except Exception as e:\n",
" except KeyError as e:\n",
" print(str(e))\n",
"\n",
" plt.figure(figsize = (16, 6))\n",
" plt.subplot(1,8,1)\n",
" plt.axhline('')\n",
" plt.axvline('')\n",
" plt.text(x = -10, y = -35, s = \"Model prediction: \", fontsize = 14)\n",
" plt.text(x = -10, y = -20, s = \"Inference time: \", fontsize = 14)\n",
" plt.text(x = 100, y = -35, s = str(result), fontsize = 14)\n",
" plt.text(x = 100, y = -20, s = str(time_ms) + \" ms\", fontsize = 14)\n",
" plt.text(x = -10, y = -8, s = \"Input image: \", fontsize = 14)\n",
" plt.imshow(img.reshape(64, 64), cmap = plt.cm.gray) "
" plt.text(x = -10, y = -40, s = \"Model prediction: \", fontsize = 14)\n",
" plt.text(x = -10, y = -25, s = \"Inference time: \", fontsize = 14)\n",
" plt.text(x = 100, y = -40, s = str(result), fontsize = 14)\n",
" plt.text(x = 100, y = -25, s = str(time_ms) + \" ms\", fontsize = 14)\n",
" plt.text(x = -10, y = -10, s = \"Model Input image: \", fontsize = 14)\n",
" plt.imshow(img.reshape((64, 64)), cmap = plt.cm.gray) \n",
" "
]
},
{
@@ -684,7 +726,7 @@
"source": [
"# remember to delete your service after you are done using it!\n",
"\n",
"# aci_service.delete()"
"aci_service.delete()"
]
},
{
@@ -701,17 +743,38 @@
"- ensured that your deep learning model is working perfectly (in the cloud) on test data, and checked it against some of your own!\n",
"\n",
"Next steps:\n",
"- If you have not already, check out another interesting ONNX/AML application that lets you set up a state-of-the-art [handwritten image classification model (MNIST)](https://github.com/Azure/MachineLearningNotebooks/tree/master/onnx/onnx-inference-mnist.ipynb) in the cloud! This tutorial deploys a pre-trained ONNX Computer Vision model for handwritten digit classification in an Azure ML virtual machine.\n",
"- If you have not already, check out another interesting ONNX/AML application that lets you set up a state-of-the-art [handwritten image classification model (MNIST)](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/onnx/onnx-inference-mnist-deploy.ipynb) in the cloud! This tutorial deploys a pre-trained ONNX Computer Vision model for handwritten digit classification in an Azure ML virtual machine.\n",
"- Keep an eye out for an updated version of this tutorial that uses ONNX Runtime GPU.\n",
"- Contribute to our [open source ONNX repository on github](http://github.com/onnx/onnx) and/or add to our [ONNX model zoo](http://github.com/onnx/models)"
]
}
],
"metadata": {
"authors": [
{
"name": "viswamy"
}
],
"category": "deployment",
"compute": [
"Local"
],
"datasets": [
"Emotion FER"
],
"deployment": [
"Azure Container Instance"
],
"exclude_from_index": false,
"framework": [
"ONNX"
],
"friendly_name": "Deploy Facial Expression Recognition (FER+) with ONNX Runtime",
"index_order": 2,
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3.6",
"language": "python",
"name": "python3"
"name": "python36"
},
"language_info": {
"codemirror_mode": {
@@ -725,7 +788,12 @@
"pygments_lexer": "ipython3",
"version": "3.6.5"
},
"msauthor": "vinitra.swamy"
"msauthor": "vinitra.swamy",
"star_tag": [],
"tags": [
"ONNX Model Zoo"
],
"task": "Facial Expression Recognition"
},
"nbformat": 4,
"nbformat_minor": 2

View File

@@ -0,0 +1,9 @@
name: onnx-inference-facial-expression-recognition-deploy
dependencies:
- pip:
- azureml-sdk
- azureml-widgets
- matplotlib
- numpy
- onnx<1.7.0
- opencv-python-headless

View File

@@ -8,6 +8,13 @@
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/deployment/onnx/onnx-inference-mnist-deploy.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -22,9 +29,9 @@
"\n",
"#### Tutorial Objectives:\n",
"\n",
"1. Describe the MNIST dataset and pretrained Convolutional Neural Net ONNX model, stored in the ONNX model zoo.\n",
"2. Deploy and run the pretrained MNIST ONNX model on an Azure Machine Learning instance\n",
"3. Predict labels for test set data points in the cloud using ONNX Runtime and Azure ML"
"- Describe the MNIST dataset and pretrained Convolutional Neural Net ONNX model, stored in the ONNX model zoo.\n",
"- Deploy and run the pretrained MNIST ONNX model on an Azure Machine Learning instance\n",
"- Predict labels for test set data points in the cloud using ONNX Runtime and Azure ML"
]
},
{
@@ -34,31 +41,61 @@
"## Prerequisites\n",
"\n",
"### 1. Install Azure ML SDK and create a new workspace\n",
"Please follow [00.configuration.ipynb](https://github.com/Azure/MachineLearningNotebooks/blob/master/00.configuration.ipynb) notebook.\n",
"If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, please follow [Azure ML configuration notebook](../../../configuration.ipynb) to set up your environment.\n",
"\n",
"### 2. Install additional packages needed for this Notebook\n",
"### 2. Install additional packages needed for this tutorial notebook\n",
"You need to install the popular plotting library `matplotlib`, the image manipulation library `opencv`, and the `onnx` library in the conda environment where Azure Maching Learning SDK is installed. \n",
"\n",
"```sh\n",
"(myenv) $ pip install matplotlib onnx opencv-python\n",
"```\n",
"\n",
"**Debugging tip**: Make sure that you run the \"jupyter notebook\" command to launch this notebook after activating your virtual environment. Choose the respective Python kernel for your new virtual environment using the `Kernel > Change Kernel` menu above. If you have completed the steps correctly, the upper right corner of your screen should state `Python [conda env:myenv]` instead of `Python [default]`.\n",
"\n",
"### 3. Download sample data and pre-trained ONNX model from ONNX Model Zoo.\n",
"\n",
"[Download the ONNX MNIST model and corresponding test data](https://www.cntk.ai/OnnxModels/mnist/opset_7/mnist.tar.gz) and place them in the same folder as this tutorial notebook. You can unzip the file through the following line of code.\n",
"In the following lines of code, we download [the trained ONNX MNIST model and corresponding test data](https://github.com/onnx/models/tree/master/vision/classification/mnist) and place them in the same folder as this tutorial notebook. For more information about the MNIST dataset, please visit [Yan LeCun's website](http://yann.lecun.com/exdb/mnist/)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# urllib is a built-in Python library to download files from URLs\n",
"\n",
"```sh\n",
"(myenv) $ tar xvzf mnist.tar.gz\n",
"```\n",
"# Objective: retrieve the latest version of the ONNX MNIST model files from the\n",
"# ONNX Model Zoo and save it in the same folder as this tutorial\n",
"\n",
"More information can be found about the ONNX MNIST model on [github](https://github.com/onnx/models/tree/master/mnist). For more information about the MNIST dataset, please visit [Yan LeCun's website](http://yann.lecun.com/exdb/mnist/)."
"import urllib.request\n",
"\n",
"onnx_model_url = \"https://github.com/onnx/models/blob/main/vision/classification/mnist/model/mnist-7.tar.gz?raw=true\"\n",
"\n",
"urllib.request.urlretrieve(onnx_model_url, filename=\"mnist-7.tar.gz\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# the ! magic command tells our jupyter notebook kernel to run the following line of \n",
"# code from the command line instead of the notebook kernel\n",
"\n",
"# We use tar and xvcf to unzip the files we just retrieved from the ONNX model zoo\n",
"\n",
"!tar xvzf mnist-7.tar.gz"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load Azure ML workspace\n",
"## Deploy a VM with your ONNX model in the Cloud\n",
"\n",
"### Load Azure ML workspace\n",
"\n",
"We begin by instantiating a workspace object from the existing workspace created earlier in the configuration notebook."
]
@@ -113,11 +150,11 @@
"source": [
"from azureml.core.model import Model\n",
"\n",
"model = Model.register(model_path = model_dir + \"//model.onnx\",\n",
"model = Model.register(workspace = ws,\n",
" model_path = model_dir + \"/\" + \"model.onnx\",\n",
" model_name = \"mnist_1\",\n",
" tags = {\"onnx\": \"demo\"},\n",
" description = \"MNIST image classification CNN from ONNX Model Zoo\",\n",
" workspace = ws)"
" description = \"MNIST image classification CNN from ONNX Model Zoo\",)"
]
},
{
@@ -135,9 +172,9 @@
"metadata": {},
"outputs": [],
"source": [
"models = ws.models()\n",
"for m in models:\n",
" print(\"Name:\", m.name,\"\\tVersion:\", m.version, \"\\tDescription:\", m.description, m.tags)"
"models = ws.models\n",
"for name, m in models.items():\n",
" print(\"Name:\", name,\"\\tVersion:\", m.version, \"\\tDescription:\", m.description, m.tags)"
]
},
{
@@ -150,7 +187,7 @@
"source": [
"### ONNX MNIST Model Methodology\n",
"\n",
"The image classification model we are using is pre-trained using Microsoft's deep learning cognitive toolkit, [CNTK](https://github.com/Microsoft/CNTK), from the [ONNX model zoo](http://github.com/onnx/models). The model zoo has many other models that can be deployed on cloud providers like AzureML without any additional training. To ensure that our cloud deployed model works, we use testing data from the famous MNIST data set, provided as part of the [trained MNIST model](https://github.com/onnx/models/tree/master/mnist) in the ONNX model zoo.\n",
"The image classification model we are using is pre-trained using Microsoft's deep learning cognitive toolkit, [CNTK](https://github.com/Microsoft/CNTK), from the [ONNX model zoo](http://github.com/onnx/models). The model zoo has many other models that can be deployed on cloud providers like AzureML without any additional training. To ensure that our cloud deployed model works, we use testing data from the famous MNIST data set, provided as part of the [trained MNIST model](https://github.com/onnx/models/tree/master/vision/classification/mnist) in the ONNX model zoo.\n",
"\n",
"***Input: Handwritten Images from MNIST Dataset***\n",
"\n",
@@ -188,16 +225,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deploy our model on Azure ML"
"### Specify our Score and Environment Files"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We are now going to deploy our ONNX Model on AML with inference in ONNX Runtime. We begin by writing a score.py file, which will help us run the model in our Azure ML virtual machine (VM), and then specify our environment by writing a yml file.\n",
"\n",
"You will also notice that we import the onnxruntime library to do runtime inference on our ONNX models (passing in input and evaluating out model's predicted output). More information on the API and commands can be found in the [ONNX Runtime documentation](https://aka.ms/onnxruntime).\n",
"We are now going to deploy our ONNX Model on AML with inference in ONNX Runtime. We begin by writing a score.py file, which will help us run the model in our Azure ML virtual machine (VM), and then specify our environment by writing a yml file. You will also notice that we import the onnxruntime library to do runtime inference on our ONNX models (passing in input and evaluating out model's predicted output). More information on the API and commands can be found in the [ONNX Runtime documentation](https://aka.ms/onnxruntime).\n",
"\n",
"### Write Score File\n",
"\n",
@@ -216,39 +251,52 @@
"import onnxruntime\n",
"import sys\n",
"import os\n",
"from azureml.core.model import Model\n",
"import time\n",
"\n",
"\n",
"def init():\n",
" global session, input_name, output_name\n",
" model = Model.get_model_path(model_name = 'mnist_1')\n",
" # AZUREML_MODEL_DIR is an environment variable created during deployment.\n",
" # It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)\n",
" # For multiple models, it points to the folder containing all deployed models (./azureml-models)\n",
" model = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'model.onnx')\n",
" session = onnxruntime.InferenceSession(model, None)\n",
" input_name = session.get_inputs()[0].name\n",
" output_name = session.get_outputs()[0].name \n",
" \n",
"\n",
"def preprocess(input_data_json):\n",
" # convert the JSON data into the tensor input\n",
" return np.array(json.loads(input_data_json)['data']).astype('float32')\n",
"\n",
"def postprocess(result):\n",
" # We use argmax to pick the highest confidence label\n",
" return int(np.argmax(np.array(result).squeeze(), axis=0))\n",
" \n",
"def run(input_data):\n",
" '''Purpose: evaluate test input in Azure Cloud using onnxruntime.\n",
" We will call the run function later from our Jupyter Notebook \n",
" so our azure service can evaluate our model input in the cloud. '''\n",
"\n",
" try:\n",
" # load in our data, convert to readable format\n",
" data = np.array(json.loads(input_data)['data']).astype('float32')\n",
" data = preprocess(input_data)\n",
" \n",
" # start timer\n",
" start = time.time()\n",
" r = session.run([output_name], {input_name: data})[0]\n",
" \n",
" r = session.run([output_name], {input_name: data})\n",
" \n",
" #end timer\n",
" end = time.time()\n",
" result = choose_class(r[0])\n",
" result_dict = {\"result\": [result],\n",
" \"time_in_sec\": [end - start]}\n",
" \n",
" result = postprocess(r)\n",
" result_dict = {\"result\": result,\n",
" \"time_in_sec\": end - start}\n",
" except Exception as e:\n",
" result_dict = {\"error\": str(e)}\n",
" \n",
" return json.dumps(result_dict)\n",
" return result_dict\n",
"\n",
"def choose_class(result_prob):\n",
" \"\"\"We use argmax to determine the right label to choose from our output, after calling softmax on the 10 numbers we receive\"\"\"\n",
" \"\"\"We use argmax to determine the right label to choose from our output\"\"\"\n",
" return int(np.argmax(result_prob, axis=0))"
]
},
@@ -256,14 +304,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Write Environment File"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This step creates a YAML file that specifies which dependencies we would like to see in our Linux Virtual Machine."
"### Write Environment File\n",
"\n",
"This step creates a YAML environment file that specifies which dependencies we would like to see in our Linux Virtual Machine. Please note that you must indicate azureml-defaults with verion >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service."
]
},
{
@@ -274,11 +317,7 @@
"source": [
"from azureml.core.conda_dependencies import CondaDependencies \n",
"\n",
"myenv = CondaDependencies()\n",
"myenv.add_pip_package(\"numpy\")\n",
"myenv.add_pip_package(\"azureml-core\")\n",
"myenv.add_pip_package(\"onnxruntime\")\n",
"\n",
"myenv = CondaDependencies.create(pip_packages=[\"numpy\", \"onnxruntime\", \"azureml-core\", \"azureml-defaults\"])\n",
"\n",
"with open(\"myenv.yml\",\"w\") as f:\n",
" f.write(myenv.serialize_to_string())"
@@ -288,9 +327,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create the Container Image\n",
"\n",
"This step will likely take a few minutes."
"### Create Inference Configuration"
]
},
{
@@ -299,49 +336,19 @@
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.image import ContainerImage\n",
"\n",
"image_config = ContainerImage.image_configuration(execution_script = \"score.py\",\n",
" runtime = \"python\",\n",
" conda_file = \"myenv.yml\",\n",
" description = \"test\",\n",
" tags = {\"demo\": \"onnx\"}) )\n",
"from azureml.core.model import InferenceConfig\n",
"from azureml.core.environment import Environment\n",
"\n",
"\n",
"image = ContainerImage.create(name = \"onnxtest\",\n",
" # this is the model object\n",
" models = [model],\n",
" image_config = image_config,\n",
" workspace = ws)\n",
"\n",
"image.wait_for_creation(show_output = True)"
"myenv = Environment.from_conda_specification(name=\"myenv\", file_path=\"myenv.yml\")\n",
"inference_config = InferenceConfig(entry_script=\"score.py\", environment=myenv)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Debugging\n",
"\n",
"In case you need to debug your code, the next line of code accesses the log file."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(image.image_build_log_uri)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We're all set! Let's get our model chugging.\n",
"\n",
"## Deploy the container image"
"### Deploy the model"
]
},
{
@@ -362,7 +369,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The following cell will likely take a few minutes to run as well."
"The following cell will likely take a few minutes to run."
]
},
{
@@ -371,16 +378,9 @@
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.webservice import Webservice\n",
"\n",
"aci_service_name = 'onnx-demo-mnist'\n",
"print(\"Service\", aci_service_name)\n",
"\n",
"aci_service = Webservice.deploy_from_image(deployment_config = aciconfig,\n",
" image = image,\n",
" name = aci_service_name,\n",
" workspace = ws)\n",
"\n",
"aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)\n",
"aci_service.wait_for_deployment(True)\n",
"print(aci_service.state)"
]
@@ -407,23 +407,29 @@
"\n",
"If you've made it this far, you've deployed a working VM with a handwritten digit classifier running in the cloud using Azure ML. Congratulations!\n",
"\n",
"Let's see how well our model deals with our test images."
"You can get the URL for the webservice with the code below. Let's now see how well our model deals with our test images."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(aci_service.scoring_uri)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Testing and Evaluation"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load Test Data\n",
"## Testing and Evaluation\n",
"\n",
"These are already in your directory from your ONNX model download (from the model zoo). If you didn't place your model and test data in the same directory as this notebook, edit the \"model_dir\" filename below."
"### Load Test Data\n",
"\n",
"These are already in your directory from your ONNX model download (from the model zoo).\n",
"\n",
"Notice that our Model Zoo files have a .pb extension. This is because they are [protobuf files (Protocol Buffers)](https://developers.google.com/protocol-buffers/docs/pythontutorial), so we need to read in our data through our ONNX TensorProto reader into a format we can work with, like numerical arrays."
]
},
{
@@ -515,7 +521,7 @@
"metadata": {},
"outputs": [],
"source": [
"plt.figure(figsize = (16, 6), frameon=False)\n",
"plt.figure(figsize = (16, 6))\n",
"plt.subplot(1, 8, 1)\n",
"\n",
"plt.text(x = 0, y = -30, s = \"True Label: \", fontsize = 13, color = 'black')\n",
@@ -531,14 +537,14 @@
" input_data = json.dumps({'data': test_inputs[i].tolist()})\n",
" \n",
" # predict using the deployed model\n",
" r = json.loads(aci_service.run(input_data))\n",
" r = aci_service.run(input_data)\n",
" \n",
" if \"error\" in r:\n",
" print(r['error'])\n",
" break\n",
" \n",
" result = r['result'][0]\n",
" time_ms = np.round(r['time_in_sec'][0] * 1000, 2)\n",
" result = r['result']\n",
" time_ms = np.round(r['time_in_sec'] * 1000, 2)\n",
" \n",
" ground_truth = int(np.argmax(test_outputs[i]))\n",
" \n",
@@ -579,21 +585,28 @@
"metadata": {},
"outputs": [],
"source": [
"# Preprocessing functions\n",
"# Preprocessing functions take your image and format it so it can be passed\n",
"# as input into our ONNX model\n",
"\n",
"import cv2\n",
"\n",
"def rgb2gray(rgb):\n",
" \"\"\"Convert the input image into grayscale\"\"\"\n",
" return np.dot(rgb[...,:3], [0.299, 0.587, 0.114])\n",
"\n",
"def resize_img(img):\n",
" img = cv2.resize(img, dsize=(28, 28), interpolation=cv2.INTER_AREA)\n",
" img.resize((1, 1, 28, 28))\n",
" return img\n",
"def resize_img(img_to_resize):\n",
" \"\"\"Resize image to MNIST model input dimensions\"\"\"\n",
" r_img = cv2.resize(img_to_resize, dsize=(28, 28), interpolation=cv2.INTER_AREA)\n",
" r_img.resize((1, 1, 28, 28))\n",
" return r_img\n",
"\n",
"def preprocess(img):\n",
"def preprocess(img_to_preprocess):\n",
" \"\"\"Resize input images and convert them to grayscale.\"\"\"\n",
" grayscale = rgb2gray(img)\n",
" if img_to_preprocess.shape == (28, 28):\n",
" img_to_preprocess.resize((1, 1, 28, 28))\n",
" return img_to_preprocess\n",
" \n",
" grayscale = rgb2gray(img_to_preprocess)\n",
" processed_img = resize_img(grayscale)\n",
" return processed_img"
]
@@ -608,12 +621,11 @@
"# Make sure your image is square and the dimensions are equal (i.e. 100 * 100 pixels or 28 * 28 pixels)\n",
"\n",
"# Any PNG or JPG image file should work\n",
"# Make sure to include the entire path with // instead of /\n",
"\n",
"# e.g. your_test_image = \"C://Users//vinitra.swamy//Pictures//digit.png\"\n",
"\n",
"your_test_image = \"<path to file>\"\n",
"\n",
"# e.g. your_test_image = \"C:/Users/vinitra.swamy/Pictures/handwritten_digit.png\"\n",
"\n",
"import matplotlib.image as mpimg\n",
"\n",
"if your_test_image != \"<path to file>\":\n",
@@ -639,10 +651,10 @@
" input_data = json.dumps({'data': img.tolist()})\n",
"\n",
" try:\n",
" r = json.loads(aci_service.run(input_data))\n",
" result = r['result'][0]\n",
" time_ms = np.round(r['time_in_sec'][0] * 1000, 2)\n",
" except Exception as e:\n",
" r = aci_service.run(input_data)\n",
" result = r['result']\n",
" time_ms = np.round(r['time_in_sec'] * 1000, 2)\n",
" except KeyError as e:\n",
" print(str(e))\n",
"\n",
" plt.figure(figsize = (16, 6))\n",
@@ -672,18 +684,7 @@
"\n",
"A convolution layer is a set of filters. Each filter is defined by a weight (**W**) matrix, and bias ($b$).\n",
"\n",
"![](https://www.cntk.ai/jup/cntk103d_filterset_v2.png)\n",
"\n",
"These filters are scanned across the image performing the dot product between the weights and corresponding input value ($x$). The bias value is added to the output of the dot product and the resulting sum is optionally mapped through an activation function. This process is illustrated in the following animation."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"Image(url=\"https://www.cntk.ai/jup/cntk103d_conv2d_final.gif\", width= 200)"
"These filters are scanned across the image performing the dot product between the weights and corresponding input value ($x$). The bias value is added to the output of the dot product and the resulting sum is optionally mapped through an activation function."
]
},
{
@@ -695,24 +696,6 @@
"The MNIST model from the ONNX Model Zoo uses maxpooling to update the weights in its convolutions, summarized by the graphic below. You can see the entire workflow of our pre-trained model in the following image, with our input images and our output probabilities of each of our 10 labels. If you're interested in exploring the logic behind creating a Deep Learning model further, please look at the [training tutorial for our ONNX MNIST Convolutional Neural Network](https://github.com/Microsoft/CNTK/blob/master/Tutorials/CNTK_103D_MNIST_ConvolutionalNeuralNetwork.ipynb). "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Max-Pooling for Convolutional Neural Nets\n",
"\n",
"![](http://www.cntk.ai/jup/c103d_max_pooling.gif)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Pre-Trained Model Architecture\n",
"\n",
"![](http://www.cntk.ai/jup/conv103d_mnist-conv-mp.png)"
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -721,7 +704,7 @@
"source": [
"# remember to delete your service after you are done using it!\n",
"\n",
"# aci_service.delete()"
"aci_service.delete()"
]
},
{
@@ -738,16 +721,37 @@
"- ensured that your deep learning model is working perfectly (in the cloud) on test data, and checked it against some of your own!\n",
"\n",
"Next steps:\n",
"- Check out another interesting application based on a Microsoft Research computer vision paper that lets you set up a [facial emotion recognition model](https://github.com/Azure/MachineLearningNotebooks/tree/master/onnx/onnx-inference-emotion-recognition.ipynb) in the cloud! This tutorial deploys a pre-trained ONNX Computer Vision model in an Azure ML virtual machine with GPU support.\n",
"- Check out another interesting application based on a Microsoft Research computer vision paper that lets you set up a [facial emotion recognition model](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/onnx/onnx-inference-facial-expression-recognition-deploy.ipynb) in the cloud! This tutorial deploys a pre-trained ONNX Computer Vision model in an Azure ML virtual machine.\n",
"- Contribute to our [open source ONNX repository on github](http://github.com/onnx/onnx) and/or add to our [ONNX model zoo](http://github.com/onnx/models)"
]
}
],
"metadata": {
"authors": [
{
"name": "viswamy"
}
],
"category": "deployment",
"compute": [
"Local"
],
"datasets": [
"MNIST"
],
"deployment": [
"Azure Container Instance"
],
"exclude_from_index": false,
"framework": [
"ONNX"
],
"friendly_name": "Deploy MNIST digit recognition with ONNX Runtime",
"index_order": 1,
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3.6",
"language": "python",
"name": "python3"
"name": "python36"
},
"language_info": {
"codemirror_mode": {
@@ -761,7 +765,12 @@
"pygments_lexer": "ipython3",
"version": "3.6.5"
},
"msauthor": "vinitra.swamy"
"msauthor": "vinitra.swamy",
"star_tag": [],
"tags": [
"ONNX Model Zoo"
],
"task": "Image Classification"
},
"nbformat": 4,
"nbformat_minor": 2

View File

@@ -0,0 +1,9 @@
name: onnx-inference-mnist-deploy
dependencies:
- pip:
- azureml-sdk
- azureml-widgets
- matplotlib
- numpy
- onnx<1.7.0
- opencv-python-headless

View File

@@ -0,0 +1 @@
{"inputs": {"Input3": {"dims": ["1", "1", "28", "28"], "dataType": 1, "rawData": "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAPwAAQEAAAAAAAAAAAAAAgEAAAABAAAAAAAAAMEEAAAAAAAAAAAAAYEEAAIA/AAAAAAAAmEEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQEEAAAAAAAAAAAAA4EAAAAAAAACAPwAAIEEAAAAAAAAAQAAAAEAAAIBBAAAAAAAAQEAAAEBAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4EAAAABBAAAAAAAAAEEAAAAAAAAAAAAAAEEAAAAAAAAAAAAAmEEAAAAAAAAAAAAAgD8AAKhBAAAAAAAAgEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgD8AAAAAAAAAAAAAgD8AAAAAAAAAAAAAAAAAAAAAAAAAAAAAMEEAAAAAAAAAAAAAIEEAAEBAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQQQAAAAAAAHBBAAAgQQAA0EEAAAhCAACIQQAAmkIAADVDAAAyQwAADEIAAIBAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAWQwAAfkMAAHpDAAB7QwAAc0MAAHxDAAB8QwAAf0MAADRCAADAQAAAAAAAAKBAAAAAAAAAEEEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOBAAACQQgAATUMAAH9DAABuQwAAc0MAAH9DAAB+QwAAe0MAAHhDAABJQwAARkMAAGRCAAAAAAAAmEEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAWkMAAH9DAABxQwAAf0MAAHlDAAB6QwAAe0MAAHpDAAB/QwAAf0MAAHJDAABgQwAAREIAAAAAAABAQQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgD8AAABAAABAQAAAAEAAAABAAACAPwAAAAAAAIJCAABkQwAAf0MAAH5DAAB0QwAA7kIAAAhCAAAkQgAA3EIAAHpDAAB/QwAAeEMAAPhCAACgQQAAAAAAAAAAAAAAAAAAAAAAAAAAAACAPwAAgD8AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEBBAAAAAAAAeEIAAM5CAADiQgAA6kIAAAhCAAAAAAAAAAAAAAAAAABIQwAAdEMAAH9DAAB/QwAAAAAAAEBBAAAAAAAAAAAAAAAAAAAAAAAAAEAAAIA/AAAAAAAAAAAAAAAAAAAAAAAAgD8AAABAAAAAAAAAAAAAAABAAACAQAAAAAAAADBBAAAAAAAA4EAAAMBAAAAAAAAAlkIAAHRDAAB/QwAAf0MAAIBAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIA/AAAAQAAAQEAAAIBAAACAQAAAAAAAAGBBAAAAAAAAAAAAAAAAAAAQQQAAAAAAAABAAAAAAAAAAAAAAAhCAAB/QwAAf0MAAH1DAAAgQQAAIEEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIA/AAAAQAAAQEAAAABAAAAAAAAAAAAAAEBAAAAAQAAAAAAAAFBBAAAwQQAAAAAAAAAAAAAAAAAAwEAAAEBBAADGQgAAf0MAAH5DAAB4QwAAcEEAAEBBAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIA/AACAPwAAgD8AAAAAAAAAAAAAAAAAAAAAAACAPwAAgD8AAAAAAAAAAAAAoEAAAMBAAAAwQQAAAAAAAAAAAACIQQAAOEMAAHdDAAB/QwAAc0MAAFBBAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEBAAAAAQAAAAAAAAAAAAAAAAAAAAAAAAABAAACAQAAAgEAAAAAAAAAwQQAAAAAAAExCAAC8QgAAqkIAAKBAAACgQAAAyEEAAHZDAAB2QwAAf0MAAFBDAAAAAAAAEEEAAAAAAAAAAAAAAAAAAAAAAACAQAAAgD8AAAAAAAAAAAAAgD8AAOBAAABwQQAAmEEAAMZCAADOQgAANkMAAD1DAABtQwAAfUMAAHxDAAA/QwAAPkMAAGNDAABzQwAAfEMAAFJDAACQQQAA4EAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIBAAAAAAAAAAAAAAABCAADaQgAAOUMAAHdDAAB/QwAAckMAAH9DAAB0QwAAf0MAAH9DAAByQwAAe0MAAH9DAABwQwAAf0MAAH9DAABaQwAA+EIAABBBAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAAAAAAAAAAAAAAAAAAD+QgAAf0MAAGtDAAB/QwAAf0MAAHdDAABlQwAAVEMAAHJDAAB6QwAAf0MAAH9DAAB4QwAAf0MAAH1DAAB5QwAAf0MAAHNDAAAqQwAAQEEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAMEEAAAAAAAAQQQAAfUMAAH9DAAB/QwAAaUMAAEpDAACqQgAAAAAAAFRCAABEQwAAbkMAAH9DAABjQwAAbkMAAA5DAADaQgAAQUMAAH9DAABwQwAAf0MAADRDAAAAAAAAAAAAAAAAAAAAAAAAwEAAAAAAAACwQQAAgD8AAHVDAABzQwAAfkMAAH9DAABZQwAAa0MAAGJDAABVQwAAdEMAAHtDAAB/QwAAb0MAAJpCAAAAAAAAAAAAAKBBAAA2QwAAd0MAAG9DAABzQwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIBAAAAlQwAAe0MAAH9DAAB1QwAAf0MAAHJDAAB9QwAAekMAAH9DAABFQwAA1kIAAGxCAAAAAAAAkEEAAABAAADAQAAAAAAAAFhCAAB/QwAAHkMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAwEEAAAAAAAAAAAAAwEAAAAhCAAAnQwAAQkMAADBDAAA3QwAAJEMAADBCAAAAQAAAIEEAAMBAAADAQAAAAAAAAAAAAACgQAAAAAAAAIA/AAAAAAAAYEEAAABAAAAAAAAAAAAAAAAAAAAAAAAAIEEAAAAAAABgQQAAAAAAAEBBAAAAAAAAoEAAAAAAAACAPwAAAAAAAMBAAAAAAAAA4EAAAAAAAAAAAAAAAAAAAABBAAAAAAAAIEEAAAAAAACgQAAAAAAAAAAAAAAgQQAAAAAAAAAAAAAAAAAAAAAAAAAAAABgQQAAAAAAAIBAAAAAAAAAAAAAAMhBA
AAAAAAAAAAAABBBAAAAAAAAAAAAABBBAAAAAAAAMEEAAAAAAACAPwAAAAAAAAAAAAAAQAAAAAAAAAAAAADgQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=="}}, "outputFilter": ["Plus214_Output_0"]}
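The request body above stores the input tensor in a serialized TensorProto-like layout: `dims` of [1, 1, 28, 28], `dataType` 1 (FLOAT in the ONNX TensorProto enum), and base64-encoded `rawData`. A hedged sketch of how such a payload could be decoded back into a NumPy array for inspection, assuming the file is saved alongside the notebook under the name shown:

```python
# Hedged sketch: decode the base64 rawData field of the scoring payload above.
# dataType 1 is FLOAT, so the raw bytes are interpreted as little-endian float32.
import base64
import json
import numpy as np

with open("onnx-mnist-predict-input.json") as f:
    payload = json.load(f)

tensor = payload["inputs"]["Input3"]
dims = [int(d) for d in tensor["dims"]]                 # [1, 1, 28, 28]
raw = base64.b64decode(tensor["rawData"])
arr = np.frombuffer(raw, dtype=np.float32).reshape(dims)
print(arr.shape, arr.dtype)                              # (1, 1, 28, 28) float32
```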

View File

@@ -0,0 +1,228 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/deployment/onnx/onnx-model-register-and-deploy.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Register ONNX model and deploy as webservice\n",
"\n",
"Following this notebook, you will:\n",
"\n",
" - Learn how to register an ONNX in your Azure Machine Learning Workspace.\n",
" - Deploy your model as a web service in an Azure Container Instance."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites\n",
"\n",
"If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you go through the [configuration notebook](../../../configuration.ipynb) to install the Azure Machine Learning Python SDK and create a workspace."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"\n",
"# Check core SDK version number.\n",
"print('SDK version:', azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize workspace\n",
"\n",
"Create a [Workspace](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.workspace%28class%29?view=azure-ml-py) object from your persisted configuration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"create workspace"
]
},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Register model\n",
"\n",
"Register a file or folder as a model by calling [Model.register()](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.model.model?view=azure-ml-py#register-workspace--model-path--model-name--tags-none--properties-none--description-none--datasets-none--model-framework-none--model-framework-version-none--child-paths-none-). For this example, we have provided a trained ONNX MNIST model(`mnist-model.onnx` in the notebook's directory).\n",
"\n",
"In addition to the content of the model file itself, your registered model will also store model metadata -- model description, tags, and framework information -- that will be useful when managing and deploying models in your workspace. Using tags, for instance, you can categorize your models and apply filters when listing models in your workspace. Also, marking this model with the scikit-learn framework will simplify deploying it as a web service, as we'll see later."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"register model from file"
]
},
"outputs": [],
"source": [
"from azureml.core import Model\n",
"\n",
"model = Model.register(workspace=ws,\n",
" model_name='mnist-sample', # Name of the registered model in your workspace.\n",
" model_path='mnist-model.onnx', # Local ONNX model to upload and register as a model.\n",
" model_framework=Model.Framework.ONNX , # Framework used to create the model.\n",
" model_framework_version='1.3', # Version of ONNX used to create the model.\n",
" description='Onnx MNIST model')\n",
"\n",
"print('Name:', model.name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deploy model\n",
"\n",
"Deploy your model as a web service using [Model.deploy()](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.model.model?view=azure-ml-py#deploy-workspace--name--models--inference-config--deployment-config-none--deployment-target-none-). Web services take one or more models, load them in an environment, and run them on one of several supported deployment targets.\n",
"\n",
"For this example, we will deploy the ONNX model to an Azure Container Instance (ACI)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Use a default environment (for supported models)\n",
"\n",
"The Azure Machine Learning service provides a default environment for supported model frameworks, including ONNX, based on the metadata you provided when registering your model. This is the easiest way to deploy your model.\n",
"\n",
"**Note**: This step can take several minutes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Webservice\n",
"from azureml.exceptions import WebserviceException\n",
"\n",
"service_name = 'onnx-mnist-service'\n",
"\n",
"# Remove any existing service under the same name.\n",
"try:\n",
" Webservice(ws, service_name).delete()\n",
"except WebserviceException:\n",
" pass\n",
"\n",
"service = Model.deploy(ws, service_name, [model])\n",
"service.wait_for_deployment(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"After your model is deployed, perform a call to the web service."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import requests\n",
"\n",
"headers = {'Content-Type': 'application/json', 'Accept': 'application/json'}\n",
"\n",
"if service.auth_enabled:\n",
" headers['Authorization'] = 'Bearer '+ service.get_keys()[0]\n",
"elif service.token_auth_enabled:\n",
" headers['Authorization'] = 'Bearer '+ service.get_token()[0]\n",
"\n",
"scoring_uri = service.scoring_uri\n",
"print(scoring_uri)\n",
"with open('onnx-mnist-predict-input.json', 'rb') as data_file:\n",
" response = requests.post(\n",
" scoring_uri, data=data_file, headers=headers)\n",
"print(response.status_code)\n",
"print(response.elapsed)\n",
"print(response.json())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When you are finished testing your service, clean up the deployment."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"service.delete()"
]
}
],
"metadata": {
"authors": [
{
"name": "vaidyas"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.0"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
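The register-and-deploy notebook above relies on no-code deployment: because the model was registered with `Model.Framework.ONNX`, `Model.deploy` is called without an `InferenceConfig`. A hedged sketch of the more explicit alternative, reusing the `AciWebservice.deploy_configuration` pattern that appears in the other notebooks in this diff (the service name and sizing below are illustrative, not from the source):

```python
# Hedged sketch: deploy the registered ONNX model with an explicit ACI deployment
# configuration instead of the framework defaults. Names and sizes are examples.
from azureml.core import Workspace
from azureml.core.model import Model
from azureml.core.webservice import AciWebservice

ws = Workspace.from_config()
model = Model(ws, name='mnist-sample')                    # the model registered above

aci_config = AciWebservice.deploy_configuration(cpu_cores=1,
                                                memory_gb=1,
                                                description='ONNX MNIST sample service')

service = Model.deploy(ws, 'onnx-mnist-service-aci', [model],
                       deployment_config=aci_config)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)
```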

View File

@@ -0,0 +1,4 @@
name: onnx-model-register-and-deploy
dependencies:
- pip:
- azureml-sdk

View File

@@ -0,0 +1,416 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved. \n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/deployment/onnx/onnx-modelzoo-aml-deploy-resnet50.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# ResNet50 Image Classification using ONNX and AzureML\n",
"\n",
"This example shows how to deploy the ResNet50 ONNX model as a web service using Azure Machine Learning services and the ONNX Runtime.\n",
"\n",
"## What is ONNX\n",
"ONNX is an open format for representing machine learning and deep learning models. ONNX enables open and interoperable AI by enabling data scientists and developers to use the tools of their choice without worrying about lock-in and flexibility to deploy to a variety of platforms. ONNX is developed and supported by a community of partners including Microsoft, Facebook, and Amazon. For more information, explore the [ONNX website](http://onnx.ai).\n",
"\n",
"## ResNet50 Details\n",
"ResNet classifies the major object in an input image into a set of 1000 pre-defined classes. For more information about the ResNet50 model and how it was created can be found on the [ONNX Model Zoo github](https://github.com/onnx/models/tree/master/vision/classification/resnet). "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites\n",
"\n",
"To make the best use of your time, make sure you have done the following:\n",
"\n",
"* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning\n",
"* If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) to:\n",
" * install the AML SDK\n",
" * create a workspace and its configuration file (config.json)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Check core SDK version number\n",
"import azureml.core\n",
"\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Download pre-trained ONNX model from ONNX Model Zoo.\n",
"\n",
"Download the [ResNet50v2 model and test data](https://s3.amazonaws.com/onnx-model-zoo/resnet/resnet50v2/resnet50v2.tar.gz) and extract it in the same folder as this tutorial notebook.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import urllib.request\n",
"\n",
"onnx_model_url = \"https://s3.amazonaws.com/onnx-model-zoo/resnet/resnet50v2/resnet50v2.tar.gz\"\n",
"urllib.request.urlretrieve(onnx_model_url, filename=\"resnet50v2.tar.gz\")\n",
"\n",
"!tar xvzf resnet50v2.tar.gz"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deploying as a web service with Azure ML"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load your Azure ML workspace\n",
"\n",
"We begin by instantiating a workspace object from the existing workspace created earlier in the configuration notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.from_config()\n",
"print(ws.name, ws.location, ws.resource_group, sep = '\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Register your model with Azure ML\n",
"\n",
"Now we upload the model and register it in the workspace."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.model import Model\n",
"\n",
"model = Model.register(model_path = \"resnet50v2/resnet50v2.onnx\",\n",
" model_name = \"resnet50v2\",\n",
" tags = {\"onnx\": \"demo\"},\n",
" description = \"ResNet50v2 from ONNX Model Zoo\",\n",
" workspace = ws)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Displaying your registered models\n",
"\n",
"You can optionally list out all the models that you have registered in this workspace."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"models = ws.models\n",
"for name, m in models.items():\n",
" print(\"Name:\", name,\"\\tVersion:\", m.version, \"\\tDescription:\", m.description, m.tags)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Write scoring file\n",
"\n",
"We are now going to deploy our ONNX model on Azure ML using the ONNX Runtime. We begin by writing a score.py file that will be invoked by the web service call. The `init()` function is called once when the container is started so we load the model using the ONNX Runtime into a global session object."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile score.py\n",
"import json\n",
"import time\n",
"import sys\n",
"import os\n",
"import numpy as np # we're going to use numpy to process input and output data\n",
"import onnxruntime # to inference ONNX models, we use the ONNX Runtime\n",
"\n",
"def softmax(x):\n",
" x = x.reshape(-1)\n",
" e_x = np.exp(x - np.max(x))\n",
" return e_x / e_x.sum(axis=0)\n",
"\n",
"def init():\n",
" global session\n",
" # AZUREML_MODEL_DIR is an environment variable created during deployment.\n",
" # It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)\n",
" # For multiple models, it points to the folder containing all deployed models (./azureml-models)\n",
" model = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'resnet50v2.onnx')\n",
" session = onnxruntime.InferenceSession(model, None)\n",
"\n",
"def preprocess(input_data_json):\n",
" # convert the JSON data into the tensor input\n",
" img_data = np.array(json.loads(input_data_json)['data']).astype('float32')\n",
" \n",
" #normalize\n",
" mean_vec = np.array([0.485, 0.456, 0.406])\n",
" stddev_vec = np.array([0.229, 0.224, 0.225])\n",
" norm_img_data = np.zeros(img_data.shape).astype('float32')\n",
" for i in range(img_data.shape[0]):\n",
" norm_img_data[i,:,:] = (img_data[i,:,:]/255 - mean_vec[i]) / stddev_vec[i]\n",
"\n",
" return norm_img_data\n",
"\n",
"def postprocess(result):\n",
" return softmax(np.array(result)).tolist()\n",
"\n",
"def run(input_data_json):\n",
" try:\n",
" start = time.time()\n",
" # load in our data which is expected as NCHW 224x224 image\n",
" input_data = preprocess(input_data_json)\n",
" input_name = session.get_inputs()[0].name # get the id of the first input of the model \n",
" result = session.run([], {input_name: input_data})\n",
" end = time.time() # stop timer\n",
" return {\"result\": postprocess(result),\n",
" \"time\": end - start}\n",
" except Exception as e:\n",
" result = str(e)\n",
" return {\"error\": result}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create inference configuration"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First we create a YAML file that specifies which dependencies we would like to see in our container."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.conda_dependencies import CondaDependencies \n",
"\n",
"\n",
"myenv = CondaDependencies.create(pip_packages=[\"numpy\", \"onnxruntime\", \"azureml-core\", \"azureml-defaults\"])\n",
"\n",
"with open(\"myenv.yml\",\"w\") as f:\n",
" f.write(myenv.serialize_to_string())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Create the inference configuration object. Please note that you must indicate azureml-defaults with verion >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.model import InferenceConfig\n",
"from azureml.core.environment import Environment\n",
"\n",
"\n",
"myenv = Environment.from_conda_specification(name=\"myenv\", file_path=\"myenv.yml\")\n",
"inference_config = InferenceConfig(entry_script=\"score.py\", environment=myenv)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Deploy the model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.webservice import AciWebservice\n",
"\n",
"aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1, \n",
" memory_gb = 1, \n",
" tags = {'demo': 'onnx'}, \n",
" description = 'web service for ResNet50 ONNX model')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following cell will likely take a few minutes to run as well."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from random import randint\n",
"\n",
"aci_service_name = 'onnx-demo-resnet50'+str(randint(0,100))\n",
"print(\"Service\", aci_service_name)\n",
"aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)\n",
"aci_service.wait_for_deployment(True)\n",
"print(aci_service.state)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In case the deployment fails, you can check the logs. Make sure to delete your aci_service before trying again."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if aci_service.state != 'Healthy':\n",
" # run this command for debugging.\n",
" print(aci_service.get_logs())\n",
" aci_service.delete()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Success!\n",
"\n",
"If you've made it this far, you've deployed a working web service that does image classification using an ONNX model. You can get the URL for the webservice with the code below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(aci_service.scoring_uri)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When you are eventually done using the web service, remember to delete it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"aci_service.delete()"
]
}
],
"metadata": {
"authors": [
{
"name": "viswamy"
}
],
"category": "deployment",
"compute": [
"Local"
],
"datasets": [
"ImageNet"
],
"deployment": [
"Azure Container Instance"
],
"exclude_from_index": false,
"framework": [
"ONNX"
],
"friendly_name": "Deploy ResNet50 with ONNX Runtime",
"index_order": 4,
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
},
"star_tag": [],
"tags": [
"ONNX Model Zoo"
],
"task": "Image Classification"
},
"nbformat": 4,
"nbformat_minor": 2
}
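The ResNet50 score.py above expects a JSON body whose `data` field holds a 3x224x224 CHW image; normalization to ImageNet statistics happens server-side in `preprocess()`. A hedged sketch (not part of the diff) of building such a request from a local image with PIL, with the image path left as a placeholder:

```python
# Hedged sketch: build the {"data": ...} request body that the ResNet50 score.py
# above expects. The image path is a placeholder; PIL is assumed to be installed.
import json
import numpy as np
from PIL import Image

image_path = "<path to an RGB image>"                      # placeholder

if image_path != "<path to an RGB image>":
    img = Image.open(image_path).convert("RGB").resize((224, 224))
    chw = np.asarray(img, dtype=np.float32).transpose(2, 0, 1)   # HWC -> CHW
    payload = json.dumps({"data": chw.tolist()})
    # result = aci_service.run(payload)   # expected keys "result"/"time" per run() above
```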

View File

@@ -0,0 +1,4 @@
name: onnx-modelzoo-aml-deploy-resnet50
dependencies:
- pip:
- azureml-sdk

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,5 @@
name: onnx-train-pytorch-aml-deploy-mnist
dependencies:
- pip:
- azureml-sdk
- azureml-widgets

View File

@@ -0,0 +1,350 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/production-deploy-to-aks-gpu/production-deploy-to-aks-gpu.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Deploying a web service to Azure Kubernetes Service (AKS)\n",
"This notebook shows the steps for deploying a service: registering a model, creating an image, provisioning a cluster (one time action), and deploying a service to it. \n",
"We then test and delete the service, image and model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"print(azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Get workspace\n",
"Load existing workspace from the config file info."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.workspace import Workspace\n",
"\n",
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Download the model\n",
"\n",
"Prior to registering the model, you should have a TensorFlow [Saved Model](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md) in the `resnet50` directory. This cell will download a [pretrained resnet50](http://download.tensorflow.org/models/official/20181001_resnet/savedmodels/resnet_v1_fp32_savedmodel_NCHW_jpg.tar.gz) and unpack it to that directory."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import requests\n",
"import shutil\n",
"import tarfile\n",
"import tempfile\n",
"\n",
"from io import BytesIO\n",
"\n",
"model_url = \"http://download.tensorflow.org/models/official/20181001_resnet/savedmodels/resnet_v1_fp32_savedmodel_NCHW_jpg.tar.gz\"\n",
"\n",
"archive_prefix = \"./resnet_v1_fp32_savedmodel_NCHW_jpg/1538686758/\"\n",
"target_folder = \"resnet50\"\n",
"\n",
"if not os.path.exists(target_folder):\n",
" response = requests.get(model_url)\n",
" archive = tarfile.open(fileobj=BytesIO(response.content))\n",
" with tempfile.TemporaryDirectory() as temp_folder:\n",
" archive.extractall(temp_folder)\n",
" shutil.copytree(os.path.join(temp_folder, archive_prefix), target_folder)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Register the model\n",
"Register an existing trained model, add description and tags."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.model import Model\n",
"\n",
"model = Model.register(model_path=\"resnet50\", # This points to the local directory to upload.\n",
" model_name=\"resnet50\", # This is the name the model is registered as.\n",
" tags={'area': \"Image classification\", 'type': \"classification\"},\n",
" description=\"Image classification trained on Imagenet Dataset\",\n",
" workspace=ws)\n",
"\n",
"print(model.name, model.description, model.version)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Provision the AKS Cluster\n",
"This is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AksCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# Choose a name for your GPU cluster\n",
"gpu_cluster_name = \"aks-gpu-cluster\"\n",
"\n",
"# Verify that cluster does not exist already\n",
"try:\n",
" gpu_cluster = ComputeTarget(workspace=ws, name=gpu_cluster_name)\n",
" print(\"Found existing gpu cluster\")\n",
"except ComputeTargetException:\n",
" print(\"Creating new gpu-cluster\")\n",
" \n",
" # Specify the configuration for the new cluster\n",
" compute_config = AksCompute.provisioning_configuration(cluster_purpose=AksCompute.ClusterPurpose.DEV_TEST,\n",
" agent_count=1,\n",
" vm_size=\"Standard_NV6\")\n",
" # Create the cluster with the specified name and configuration\n",
" gpu_cluster = ComputeTarget.create(ws, gpu_cluster_name, compute_config)\n",
"\n",
" # Wait for the cluster to complete, show the output log\n",
" gpu_cluster.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Deploy the model as a web service to AKS\n",
"\n",
"First create a scoring script"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile score.py\n",
"import tensorflow as tf\n",
"import numpy as np\n",
"import json\n",
"import os\n",
"from azureml.contrib.services.aml_request import AMLRequest, rawhttp\n",
"from azureml.contrib.services.aml_response import AMLResponse\n",
"\n",
"def init():\n",
" global session\n",
" global input_name\n",
" global output_name\n",
" \n",
" session = tf.Session()\n",
"\n",
" # AZUREML_MODEL_DIR is an environment variable created during deployment.\n",
" # It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)\n",
" # For multiple models, it points to the folder containing all deployed models (./azureml-models)\n",
" model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'resnet50')\n",
" model = tf.saved_model.loader.load(session, ['serve'], model_path)\n",
" if len(model.signature_def['serving_default'].inputs) > 1:\n",
" raise ValueError(\"This score.py only supports one input\")\n",
" input_name = [tensor.name for tensor in model.signature_def['serving_default'].inputs.values()][0]\n",
" output_name = [tensor.name for tensor in model.signature_def['serving_default'].outputs.values()]\n",
" \n",
"\n",
"@rawhttp\n",
"def run(request):\n",
" if request.method == 'POST':\n",
" reqBody = request.get_data(False)\n",
" resp = score(reqBody)\n",
" return AMLResponse(resp, 200)\n",
" if request.method == 'GET':\n",
" respBody = str.encode(\"GET is not supported\")\n",
" return AMLResponse(respBody, 405)\n",
" return AMLResponse(\"bad request\", 500)\n",
"\n",
"def score(data):\n",
" result = session.run(output_name, {input_name: [data]})\n",
" return json.dumps(result[1].tolist())\n",
"\n",
"if __name__ == \"__main__\":\n",
" init()\n",
" with open(\"test_image.jpg\", 'rb') as f:\n",
" content = f.read()\n",
" print(score(content))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now create the deployment configuration objects and deploy the model as a webservice."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Set the web service configuration (using default here)\n",
"from azureml.core.model import InferenceConfig\n",
"from azureml.core.webservice import AksWebservice\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"from azureml.core.environment import Environment, DEFAULT_GPU_IMAGE\n",
"\n",
"env = Environment('deploytocloudenv')\n",
"# Please see [Azure ML Containers repository](https://github.com/Azure/AzureML-Containers#featured-tags)\n",
"# for open-sourced GPU base images.\n",
"env.docker.base_image = DEFAULT_GPU_IMAGE\n",
"env.python.conda_dependencies = CondaDependencies.create(conda_packages=['tensorflow-gpu==1.12.0','numpy'],\n",
" pip_packages=['azureml-contrib-services', 'azureml-defaults'])\n",
"\n",
"inference_config = InferenceConfig(entry_script=\"score.py\", environment=env)\n",
"aks_config = AksWebservice.deploy_configuration()\n",
"\n",
"# # Enable token auth and disable (key) auth on the webservice\n",
"# aks_config = AksWebservice.deploy_configuration(token_auth_enabled=True, auth_enabled=False)"
]
},
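{
"cell_type": "markdown",
"metadata": {},
"source": [
"The cell above uses the default `AksWebservice.deploy_configuration()`. If you need to size the service containers explicitly, a sketch along these lines should work; the parameter names follow the v1 SDK and the values are illustrative, not recommendations for this model.\n",
"```python\n",
"# Request explicit CPU, memory and GPU resources for each replica of the service.\n",
"aks_config = AksWebservice.deploy_configuration(cpu_cores=1,\n",
"                                                memory_gb=4,\n",
"                                                gpu_cores=1,\n",
"                                                enable_app_insights=True)\n",
"```"
]
},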
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"aks_service_name ='gpu-rn50'\n",
"\n",
"aks_service = Model.deploy(workspace=ws,\n",
" name=aks_service_name,\n",
" models=[model],\n",
" inference_config=inference_config,\n",
" deployment_config=aks_config,\n",
" deployment_target=gpu_cluster)\n",
"\n",
"aks_service.wait_for_deployment(show_output = True)\n",
"print(aks_service.state)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Test the web service\n",
"We test the web sevice by passing the test images content."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"import requests\n",
"\n",
"# if (key) auth is enabled, fetch keys and include in the request\n",
"key1, key2 = aks_service.get_keys()\n",
"\n",
"headers = {'Content-Type':'application/json', 'Authorization': 'Bearer ' + key1}\n",
"\n",
"# # if token auth is enabled, fetch token and include in the request\n",
"# access_token, fetch_after = aks_service.get_token()\n",
"# headers = {'Content-Type':'application/json', 'Authorization': 'Bearer ' + access_token}\n",
"\n",
"test_sample = open('snowleopardgaze.jpg', 'rb').read()\n",
"resp = requests.post(aks_service.scoring_uri, test_sample, headers=headers)"
]
},
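{
"cell_type": "markdown",
"metadata": {},
"source": [
"The cell above sends the request but does not print the response. Assuming `resp` from the previous cell, a small follow-up surfaces the HTTP status and the class scores returned by `score.py`:\n",
"```python\n",
"# Show the HTTP status code and the JSON body returned by the service.\n",
"print(resp.status_code)\n",
"print(resp.text)\n",
"```"
]
},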
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Clean up\n",
"Delete the service, image, model and compute target"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"aks_service.delete()\n",
"model.delete()\n",
"gpu_cluster.delete()\n"
]
}
],
"metadata": {
"authors": [
{
"name": "vaidyas"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -0,0 +1,5 @@
name: production-deploy-to-aks-gpu
dependencies:
- pip:
- azureml-sdk
- tensorflow


[Binary image file changed; 61 KiB before and after]


@@ -9,12 +9,19 @@
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Deploying a web service to Azure Kubernetes Service (AKS)\n",
"This notebook shows the steps for deploying a service: registering a model, creating an image, provisioning a cluster (one time action), and deploying a service to it. \n",
"This notebook shows the steps for deploying a service: registering a model, provisioning a cluster with ssl (one time action), and deploying a service to it. \n",
"We then test and delete the service, image and model."
]
},
@@ -27,7 +34,6 @@
"from azureml.core import Workspace\n",
"from azureml.core.compute import AksCompute, ComputeTarget\n",
"from azureml.core.webservice import Webservice, AksWebservice\n",
"from azureml.core.image import Image\n",
"from azureml.core.model import Model"
]
},
@@ -78,7 +84,7 @@
"#Register the model\n",
"from azureml.core.model import Model\n",
"model = Model.register(model_path = \"sklearn_regression_model.pkl\", # this points to a local file\n",
" model_name = \"sklearn_regression_model.pkl\", # this is the name the model is registered as\n",
" model_name = \"sklearn_model\", # this is the name the model is registered as\n",
" tags = {'area': \"diabetes\", 'type': \"regression\"},\n",
" description = \"Ridge regression model to predict diabetes\",\n",
" workspace = ws)\n",
@@ -90,41 +96,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Create an image\n",
"Create an image using the registered model the script that will load and run the model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile score.py\n",
"import pickle\n",
"import json\n",
"import numpy\n",
"from sklearn.externals import joblib\n",
"from sklearn.linear_model import Ridge\n",
"from azureml.core.model import Model\n",
"\n",
"def init():\n",
" global model\n",
" # note here \"sklearn_regression_model.pkl\" is the name of the model registered under\n",
" # this is a different behavior than before when the code is run locally, even though the code is the same.\n",
" model_path = Model.get_model_path('sklearn_regression_model.pkl')\n",
" # deserialize the model file back into a sklearn model\n",
" model = joblib.load(model_path)\n",
"\n",
"# note you can pass in multiple rows for scoring\n",
"def run(raw_data):\n",
" try:\n",
" data = json.loads(raw_data)['data']\n",
" data = numpy.array(data)\n",
" result = model.predict(data)\n",
" except Exception as e:\n",
" result = str(e)\n",
" return json.dumps({\"result\": result.tolist()})"
"# Create the Environment\n",
"Create an environment that the model will be deployed with"
]
},
{
@@ -133,44 +106,114 @@
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Environment\n",
"from azureml.core.conda_dependencies import CondaDependencies \n",
"\n",
"myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn'])\n",
"\n",
"with open(\"myenv.yml\",\"w\") as f:\n",
" f.write(myenv.serialize_to_string())"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.image import ContainerImage\n",
"\n",
"image_config = ContainerImage.image_configuration(execution_script = \"score.py\",\n",
" runtime = \"python\",\n",
" conda_file = \"myenv.yml\",\n",
" description = \"Image with ridge regression model\",\n",
" tags = {'area': \"diabetes\", 'type': \"regression\"}\n",
" )\n",
"\n",
"image = ContainerImage.create(name = \"myimage1\",\n",
" # this is the model object\n",
" models = [model],\n",
" image_config = image_config,\n",
" workspace = ws)\n",
"\n",
"image.wait_for_creation(show_output = True)"
"conda_deps = CondaDependencies.create(conda_packages=['numpy', 'scikit-learn==0.19.1', 'scipy'], pip_packages=['azureml-defaults', 'inference-schema'])\n",
"myenv = Environment(name='myenv')\n",
"myenv.python.conda_dependencies = conda_deps"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Provision the AKS Cluster\n",
"This is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it."
"#### Use a custom Docker image\n",
"\n",
"You can also specify a custom Docker image to be used as base image if you don't want to use the default base image provided by Azure ML. Please make sure the custom Docker image has Ubuntu >= 16.04, Conda >= 4.5.\\* and Python(3.5.\\* or 3.6.\\*).\n",
"\n",
"Only supported with `python` runtime.\n",
"```python\n",
"# use an image available in public Container Registry without authentication\n",
"myenv.docker.base_image = \"mcr.microsoft.com/azureml/o16n-sample-user-base/ubuntu-miniconda\"\n",
"\n",
"# or, use an image available in a private Container Registry\n",
"myenv.docker.base_image = \"myregistry.azurecr.io/mycustomimage:1.0\"\n",
"myenv.docker.base_image_registry.address = \"myregistry.azurecr.io\"\n",
"myenv.docker.base_image_registry.username = \"username\"\n",
"myenv.docker.base_image_registry.password = \"password\"\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Write the Entry Script\n",
"Write the script that will be used to predict on your model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile score_ssl.py\n",
"import os\n",
"import pickle\n",
"import json\n",
"import numpy\n",
"from sklearn.externals import joblib\n",
"from sklearn.linear_model import Ridge\n",
"from inference_schema.schema_decorators import input_schema, output_schema\n",
"from inference_schema.parameter_types.standard_py_parameter_type import StandardPythonParameterType\n",
"\n",
"def init():\n",
" global model\n",
" # AZUREML_MODEL_DIR is an environment variable created during deployment.\n",
" # It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)\n",
" # For multiple models, it points to the folder containing all deployed models (./azureml-models)\n",
" model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_regression_model.pkl')\n",
" # deserialize the model file back into a sklearn model\n",
" model = joblib.load(model_path)\n",
"\n",
"\n",
"standard_sample_input = {'a': 10, 'b': 9, 'c': 8, 'd': 7, 'e': 6, 'f': 5, 'g': 4, 'h': 3, 'i': 2, 'j': 1 }\n",
"standard_sample_output = {'outcome': 1}\n",
"\n",
"@input_schema('param', StandardPythonParameterType(standard_sample_input))\n",
"@output_schema(StandardPythonParameterType(standard_sample_output))\n",
"def run(param):\n",
" try:\n",
" raw_data = [param['a'], param['b'], param['c'], param['d'], param['e'], param['f'], param['g'], param['h'], param['i'], param['j']]\n",
" data = numpy.array([raw_data])\n",
" result = model.predict(data)\n",
" return { 'outcome' : result[0] }\n",
" except Exception as e:\n",
" error = str(e)\n",
" return error"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Create the InferenceConfig\n",
"Create the inference config that will be used when deploying the model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.model import InferenceConfig\n",
"\n",
"inf_config = InferenceConfig(entry_script='score_ssl.py', environment=myenv)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Provision the AKS Cluster with SSL\n",
"This is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it.\n",
"\n",
"> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.\n",
"\n",
"See code snippet below. Check the documentation [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-secure-web-service) for more details"
]
},
{
@@ -180,13 +223,18 @@
"outputs": [],
"source": [
"# Use the default configuration (can also provide parameters to customize)\n",
"prov_config = AksCompute.provisioning_configuration()\n",
"\n",
"aks_name = 'my-aks-9' \n",
"provisioning_config = AksCompute.provisioning_configuration()\n",
"# Leaf domain label generates a name using the formula\n",
"# \"<leaf-domain-label>######.<azure-region>.cloudapp.azure.net\"\n",
"# where \"######\" is a random series of characters\n",
"provisioning_config.enable_ssl(leaf_domain_label = \"contoso\", overwrite_existing_domain = True)\n",
"\n",
"aks_name = 'my-aks-ssl-1' \n",
"# Create the cluster\n",
"aks_target = ComputeTarget.create(workspace = ws, \n",
" name = aks_name, \n",
" provisioning_configuration = prov_config)"
" provisioning_configuration = provisioning_config)"
]
},
{
@@ -201,33 +249,6 @@
"print(aks_target.provisioning_errors)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Optional step: Attach existing AKS cluster\n",
"\n",
"If you have existing AKS cluster in your Azure subscription, you can attach it to the Workspace."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"'''\n",
"# Use the default configuration (can also provide parameters to customize)\n",
"resource_id = '/subscriptions/92c76a2f-0e1c-4216-b65e-abf7a3f34c1e/resourcegroups/raymondsdk0604/providers/Microsoft.ContainerService/managedClusters/my-aks-0605d37425356b7d01'\n",
"\n",
"create_name='my-existing-aks' \n",
"# Create the cluster\n",
"aks_target = AksCompute.attach(workspace=ws, name=create_name, resource_id=resource_id)\n",
"# Wait for the operation to complete\n",
"aks_target.wait_for_completion(True)\n",
"'''"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -238,27 +259,27 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#Set the web service configuration (using default here)\n",
"aks_config = AksWebservice.deploy_configuration()"
"metadata": {
"tags": [
"sample-deploy-to-aks"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"aks_service_name ='aks-service-1'\n",
"\n",
"aks_service = Webservice.deploy_from_image(workspace = ws, \n",
"aks_config = AksWebservice.deploy_configuration()\n",
"\n",
"aks_service_name ='aks-service-ssl-1'\n",
"\n",
"aks_service = Model.deploy(workspace=ws,\n",
" name=aks_service_name,\n",
" image = image,\n",
" models=[model],\n",
" inference_config=inf_config,\n",
" deployment_config=aks_config,\n",
" deployment_target = aks_target)\n",
" deployment_target=aks_target,\n",
" overwrite=True)\n",
"\n",
"aks_service.wait_for_deployment(show_output = True)\n",
"print(aks_service.state)"
]
@@ -267,8 +288,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Test the web service\n",
"We test the web sevice by passing data."
"# Test the web service using run method\n",
"We test the web sevice by passing data.\n",
"Run() method retrieves API keys behind the scenes to make sure that call is authenticated."
]
},
{
@@ -280,14 +302,9 @@
"%%time\n",
"import json\n",
"\n",
"test_sample = json.dumps({'data': [\n",
" [1,2,3,4,5,6,7,8,9,10], \n",
" [10,9,8,7,6,5,4,3,2,1]\n",
"]})\n",
"test_sample = bytes(test_sample,encoding = 'utf8')\n",
"standard_sample_input = json.dumps({'param': {'a': 10, 'b': 9, 'c': 8, 'd': 7, 'e': 6, 'f': 5, 'g': 4, 'h': 3, 'i': 2, 'j': 1 }})\n",
"\n",
"prediction = aks_service.run(input_data = test_sample)\n",
"print(prediction)"
"aks_service.run(input_data=standard_sample_input)"
]
},
{
@@ -306,12 +323,16 @@
"source": [
"%%time\n",
"aks_service.delete()\n",
"image.delete()\n",
"model.delete()"
]
}
],
"metadata": {
"authors": [
{
"name": "vaidyas"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
@@ -327,7 +348,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
"version": "3.6.6"
}
},
"nbformat": 4,


@@ -0,0 +1,8 @@
name: production-deploy-to-aks-ssl
dependencies:
- pip:
- azureml-sdk
- matplotlib
- tqdm
- scipy
- sklearn


@@ -0,0 +1,623 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Deploying a web service to Azure Kubernetes Service (AKS)\n",
"This notebook shows the steps for deploying a service: registering a model, creating an image, provisioning a cluster (one time action), and deploying a service to it. \n",
"We then test and delete the service, image and model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"from azureml.core.compute import AksCompute, ComputeTarget\n",
"from azureml.core.webservice import Webservice, AksWebservice\n",
"from azureml.core.model import Model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"print(azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Get workspace\n",
"Load existing workspace from the config file info."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.workspace import Workspace\n",
"\n",
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Register the model\n",
"Register an existing trained model, add descirption and tags."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#Register the model\n",
"from azureml.core.model import Model\n",
"model = Model.register(model_path = \"sklearn_regression_model.pkl\", # this points to a local file\n",
" model_name = \"sklearn_regression_model.pkl\", # this is the name the model is registered as\n",
" tags = {'area': \"diabetes\", 'type': \"regression\"},\n",
" description = \"Ridge regression model to predict diabetes\",\n",
" workspace = ws)\n",
"\n",
"print(model.name, model.description, model.version)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Create the Environment\n",
"Create an environment that the model will be deployed with"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Environment\n",
"from azureml.core.conda_dependencies import CondaDependencies \n",
"\n",
"conda_deps = CondaDependencies.create(conda_packages=['numpy','scikit-learn==0.19.1','scipy'], pip_packages=['azureml-defaults', 'inference-schema'])\n",
"myenv = Environment(name='myenv')\n",
"myenv.python.conda_dependencies = conda_deps"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Use a custom Docker image\n",
"\n",
"You can also specify a custom Docker image to be used as base image if you don't want to use the default base image provided by Azure ML. Please make sure the custom Docker image has Ubuntu >= 16.04, Conda >= 4.5.\\* and Python(3.5.\\* or 3.6.\\*).\n",
"\n",
"Only supported with `python` runtime.\n",
"```python\n",
"# use an image available in public Container Registry without authentication\n",
"myenv.docker.base_image = \"mcr.microsoft.com/azureml/o16n-sample-user-base/ubuntu-miniconda\"\n",
"\n",
"# or, use an image available in a private Container Registry\n",
"myenv.docker.base_image = \"myregistry.azurecr.io/mycustomimage:1.0\"\n",
"myenv.docker.base_image_registry.address = \"myregistry.azurecr.io\"\n",
"myenv.docker.base_image_registry.username = \"username\"\n",
"myenv.docker.base_image_registry.password = \"password\"\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Write the Entry Script\n",
"Write the script that will be used to predict on your model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile score.py\n",
"import os\n",
"import pickle\n",
"import json\n",
"import numpy\n",
"from sklearn.externals import joblib\n",
"from sklearn.linear_model import Ridge\n",
"\n",
"def init():\n",
" global model\n",
" # AZUREML_MODEL_DIR is an environment variable created during deployment.\n",
" # It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)\n",
" # For multiple models, it points to the folder containing all deployed models (./azureml-models)\n",
" model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_regression_model.pkl')\n",
" # deserialize the model file back into a sklearn model\n",
" model = joblib.load(model_path)\n",
"\n",
"# note you can pass in multiple rows for scoring\n",
"def run(raw_data):\n",
" try:\n",
" data = json.loads(raw_data)['data']\n",
" data = numpy.array(data)\n",
" result = model.predict(data)\n",
" # you can return any data type as long as it is JSON-serializable\n",
" return result.tolist()\n",
" except Exception as e:\n",
" error = str(e)\n",
" return error"
]
},
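{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before deploying, you can optionally smoke-test `score.py` locally. This sketch assumes `sklearn_regression_model.pkl` sits in the current working directory and that the locally installed scikit-learn still exposes `sklearn.externals.joblib` (as the pinned 0.19.1 does); it is not part of the deployment flow itself.\n",
"```python\n",
"import json\n",
"import os\n",
"\n",
"# Point AZUREML_MODEL_DIR at the folder that holds the pickled model, as the deployment would.\n",
"os.environ['AZUREML_MODEL_DIR'] = '.'\n",
"\n",
"import score  # the file written by the %%writefile cell above\n",
"\n",
"score.init()\n",
"print(score.run(json.dumps({'data': [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]]})))\n",
"```"
]
},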
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Create the InferenceConfig\n",
"Create the inference config that will be used when deploying the model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.model import InferenceConfig\n",
"\n",
"inf_config = InferenceConfig(entry_script='score.py', environment=myenv)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Model Profiling\n",
"\n",
"Profile your model to understand how much CPU and memory the service, created as a result of its deployment, will need. Profiling returns information such as CPU usage, memory usage, and response latency. It also provides a CPU and memory recommendation based on the resource usage. You can profile your model (or more precisely the service built based on your model) on any CPU and/or memory combination where 0.1 <= CPU <= 3.5 and 0.1GB <= memory <= 15GB. If you do not provide a CPU and/or memory requirement, we will test it on the default configuration of 3.5 CPU and 15GB memory.\n",
"\n",
"In order to profile your model you will need:\n",
"- a registered model\n",
"- an entry script\n",
"- an inference configuration\n",
"- a single column tabular dataset, where each row contains a string representing sample request data sent to the service.\n",
"\n",
"Please, note that profiling is a long running operation and can take up to 25 minutes depending on the size of the dataset.\n",
"\n",
"At this point we only support profiling of services that expect their request data to be a string, for example: string serialized json, text, string serialized image, etc. The content of each row of the dataset (string) will be put into the body of the HTTP request and sent to the service encapsulating the model for scoring.\n",
"\n",
"Below is an example of how you can construct an input dataset to profile a service which expects its incoming requests to contain serialized json. In this case we created a dataset based one hundred instances of the same request data. In real world scenarios however, we suggest that you use larger datasets with various inputs, especially if your model resource usage/behavior is input dependent."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You may want to register datasets using the register() method to your workspace so they can be shared with others, reused and referred to by name in your script.\n",
"You can try get the dataset first to see if it's already registered."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"from azureml.core import Datastore\n",
"from azureml.core.dataset import Dataset\n",
"from azureml.data import dataset_type_definitions\n",
"\n",
"dataset_name='sample_request_data'\n",
"\n",
"dataset_registered = False\n",
"try:\n",
" sample_request_data = Dataset.get_by_name(workspace = ws, name = dataset_name)\n",
" dataset_registered = True\n",
"except:\n",
" print(\"The dataset {} is not registered in workspace yet.\".format(dataset_name))\n",
"\n",
"if not dataset_registered:\n",
" input_json = {'data': [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10],\n",
" [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]]}\n",
" # create a string that can be put in the body of the request\n",
" serialized_input_json = json.dumps(input_json)\n",
" dataset_content = []\n",
" for i in range(100):\n",
" dataset_content.append(serialized_input_json)\n",
" sample_request_data = '\\n'.join(dataset_content)\n",
" file_name = \"{}.txt\".format(dataset_name)\n",
" f = open(file_name, 'w')\n",
" f.write(sample_request_data)\n",
" f.close()\n",
"\n",
" # upload the txt file created above to the Datastore and create a dataset from it\n",
" data_store = Datastore.get_default(ws)\n",
" data_store.upload_files(['./' + file_name], target_path='sample_request_data')\n",
" datastore_path = [(data_store, 'sample_request_data' +'/' + file_name)]\n",
" sample_request_data = Dataset.Tabular.from_delimited_files(\n",
" datastore_path,\n",
" separator='\\n',\n",
" infer_column_types=True,\n",
" header=dataset_type_definitions.PromoteHeadersBehavior.NO_HEADERS)\n",
" sample_request_data = sample_request_data.register(workspace=ws,\n",
" name=dataset_name,\n",
" create_new_version=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that we have an input dataset we are ready to go ahead with profiling. In this case we are testing the previously introduced sklearn regression model on 1 CPU and 0.5 GB memory. The memory usage and recommendation presented in the result is measured in Gigabytes. The CPU usage and recommendation is measured in CPU cores."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from datetime import datetime\n",
"from azureml.core import Environment\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"from azureml.core.model import Model, InferenceConfig\n",
"\n",
"\n",
"environment = Environment('my-sklearn-environment')\n",
"environment.python.conda_dependencies = CondaDependencies.create(pip_packages=[\n",
" 'azureml-defaults',\n",
" 'inference-schema[numpy-support]',\n",
" 'joblib',\n",
" 'numpy',\n",
" 'scikit-learn==0.19.1',\n",
" 'scipy'\n",
"])\n",
"inference_config = InferenceConfig(entry_script='score.py', environment=environment)\n",
"# if cpu and memory_in_gb parameters are not provided\n",
"# the model will be profiled on default configuration of\n",
"# 3.5CPU and 15GB memory\n",
"profile = Model.profile(ws,\n",
" 'sklearn-%s' % datetime.now().strftime('%m%d%Y-%H%M%S'),\n",
" [model],\n",
" inference_config,\n",
" input_dataset=sample_request_data,\n",
" cpu=1.0,\n",
" memory_in_gb=0.5)\n",
"\n",
"# profiling is a long running operation and may take up to 25 min\n",
"profile.wait_for_completion(True)\n",
"details = profile.get_details()"
]
},
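{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `details` dictionary captured above holds the measured resource usage and the CPU/memory recommendation. The exact keys can vary with the SDK version, so printing the whole dictionary is the simplest way to inspect them:\n",
"```python\n",
"# Inspect the observed usage and the recommended CPU/memory for this model.\n",
"print(details)\n",
"```"
]
},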
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Provision the AKS Cluster\n",
"This is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it.\n",
"\n",
"> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# Choose a name for your AKS cluster\n",
"aks_name = 'my-aks-9' \n",
"\n",
"# Verify that cluster does not exist already\n",
"try:\n",
" aks_target = ComputeTarget(workspace=ws, name=aks_name)\n",
" print('Found existing cluster, use it.')\n",
"except ComputeTargetException:\n",
" # Use the default configuration (can also provide parameters to customize)\n",
" prov_config = AksCompute.provisioning_configuration()\n",
"\n",
" # Create the cluster\n",
" aks_target = ComputeTarget.create(workspace = ws, \n",
" name = aks_name, \n",
" provisioning_configuration = prov_config)\n",
"\n",
"if aks_target.get_status() != \"Succeeded\":\n",
" aks_target.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Create AKS Cluster in an existing virtual network (optional)\n",
"See code snippet below. Check the documentation [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-enable-virtual-network#use-azure-kubernetes-service) for more details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# from azureml.core.compute import ComputeTarget, AksCompute\n",
"\n",
"# # Create the compute configuration and set virtual network information\n",
"# config = AksCompute.provisioning_configuration(location=\"eastus2\")\n",
"# config.vnet_resourcegroup_name = \"mygroup\"\n",
"# config.vnet_name = \"mynetwork\"\n",
"# config.subnet_name = \"default\"\n",
"# config.service_cidr = \"10.0.0.0/16\"\n",
"# config.dns_service_ip = \"10.0.0.10\"\n",
"# config.docker_bridge_cidr = \"172.17.0.1/16\"\n",
"\n",
"# # Create the compute target\n",
"# aks_target = ComputeTarget.create(workspace = ws,\n",
"# name = \"myaks\",\n",
"# provisioning_configuration = config)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Enable SSL on the AKS Cluster (optional)\n",
"See code snippet below. Check the documentation [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-secure-web-service) for more details"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# provisioning_config = AksCompute.provisioning_configuration(ssl_cert_pem_file=\"cert.pem\", ssl_key_pem_file=\"key.pem\", ssl_cname=\"www.contoso.com\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"aks_target.wait_for_completion(show_output = True)\n",
"print(aks_target.provisioning_state)\n",
"print(aks_target.provisioning_errors)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Optional step: Attach existing AKS cluster\n",
"\n",
"If you have existing AKS cluster in your Azure subscription, you can attach it to the Workspace."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# # Use the default configuration (can also provide parameters to customize)\n",
"# resource_id = '/subscriptions/92c76a2f-0e1c-4216-b65e-abf7a3f34c1e/resourcegroups/raymondsdk0604/providers/Microsoft.ContainerService/managedClusters/my-aks-0605d37425356b7d01'\n",
"\n",
"# create_name='my-existing-aks' \n",
"# # Create the cluster\n",
"# attach_config = AksCompute.attach_configuration(resource_id=resource_id)\n",
"# aks_target = ComputeTarget.attach(workspace=ws, name=create_name, attach_configuration=attach_config)\n",
"# # Wait for the operation to complete\n",
"# aks_target.wait_for_completion(True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Deploy web service to AKS"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"sample-deploy-to-aks"
]
},
"outputs": [],
"source": [
"# Set the web service configuration (using default here)\n",
"aks_config = AksWebservice.deploy_configuration()\n",
"\n",
"# # Enable token auth and disable (key) auth on the webservice\n",
"# aks_config = AksWebservice.deploy_configuration(token_auth_enabled=True, auth_enabled=False)\n"
]
},
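{
"cell_type": "markdown",
"metadata": {},
"source": [
"Instead of the defaults, the deployment configuration can also enable autoscaling. A sketch with illustrative values (v1 SDK parameter names; tune the replica counts and target utilization to your workload):\n",
"```python\n",
"# Scale between 1 and 4 replicas, targeting roughly 70% utilization per replica.\n",
"aks_config = AksWebservice.deploy_configuration(autoscale_enabled=True,\n",
"                                                autoscale_min_replicas=1,\n",
"                                                autoscale_max_replicas=4,\n",
"                                                autoscale_target_utilization=70)\n",
"```"
]
},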
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"sample-deploy-to-aks"
]
},
"outputs": [],
"source": [
"%%time\n",
"aks_service_name ='aks-service-1'\n",
"\n",
"aks_service = Model.deploy(workspace=ws,\n",
" name=aks_service_name,\n",
" models=[model],\n",
" inference_config=inf_config,\n",
" deployment_config=aks_config,\n",
" deployment_target=aks_target)\n",
"\n",
"aks_service.wait_for_deployment(show_output = True)\n",
"print(aks_service.state)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Test the web service using run method\n",
"We test the web sevice by passing data.\n",
"Run() method retrieves API keys behind the scenes to make sure that call is authenticated."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"import json\n",
"\n",
"test_sample = json.dumps({'data': [\n",
" [1,2,3,4,5,6,7,8,9,10], \n",
" [10,9,8,7,6,5,4,3,2,1]\n",
"]})\n",
"test_sample = bytes(test_sample,encoding = 'utf8')\n",
"\n",
"prediction = aks_service.run(input_data = test_sample)\n",
"print(prediction)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Test the web service using raw HTTP request (optional)\n",
"Alternatively you can construct a raw HTTP request and send it to the service. In this case you need to explicitly pass the HTTP header. This process is shown in the next 2 cells."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# # if (key) auth is enabled, retrieve the API keys. AML generates two keys.\n",
"# key1, Key2 = aks_service.get_keys()\n",
"# print(key1)\n",
"\n",
"# # if token auth is enabled, retrieve the token.\n",
"# access_token, refresh_after = aks_service.get_token()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# construct raw HTTP request and send to the service\n",
"# %%time\n",
"\n",
"# import requests\n",
"\n",
"# import json\n",
"\n",
"# test_sample = json.dumps({'data': [\n",
"# [1,2,3,4,5,6,7,8,9,10], \n",
"# [10,9,8,7,6,5,4,3,2,1]\n",
"# ]})\n",
"# test_sample = bytes(test_sample,encoding = 'utf8')\n",
"\n",
"# # If (key) auth is enabled, don't forget to add key to the HTTP header.\n",
"# headers = {'Content-Type':'application/json', 'Authorization': 'Bearer ' + key1}\n",
"\n",
"# # If token auth is enabled, don't forget to add token to the HTTP header.\n",
"# headers = {'Content-Type':'application/json', 'Authorization': 'Bearer ' + access_token}\n",
"\n",
"# resp = requests.post(aks_service.scoring_uri, test_sample, headers=headers)\n",
"\n",
"\n",
"# print(\"prediction:\", resp.text)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Clean up\n",
"Delete the service, image and model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"aks_service.delete()\n",
"model.delete()"
]
}
],
"metadata": {
"authors": [
{
"name": "vaidyas"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -0,0 +1,8 @@
name: production-deploy-to-aks
dependencies:
- pip:
- azureml-sdk
- matplotlib
- tqdm
- scipy
- sklearn

Some files were not shown because too many files have changed in this diff.