Compare commits

...

944 Commits

Author SHA1 Message Date
amlrelsa-ms
249fb6bbb5 update samples from Release-82 as a part of SDK release 2021-01-25 19:03:14 +00:00
Harneet Virk
cda1f3e4cf Merge pull request #1289 from Azure/release_update/Release-81
update samples from Release-81 as a part of SDK release
2021-01-11 12:52:48 -07:00
amlrelsa-ms
1d05efaac2 update samples from Release-81 as a part of SDK release 2021-01-11 19:35:54 +00:00
Harneet Virk
3adebd1127 Merge pull request #1262 from Azure/release_update/Release-80
update samples from Release-80 as a part of SDK release
2020-12-11 16:49:33 -08:00
amlrelsa-ms
a6817063df update samples from Release-80 as a part of SDK release 2020-12-12 00:45:42 +00:00
Harneet Virk
a79f8c254a Merge pull request #1255 from Azure/release_update/Release-79
update samples from Release-79 as a part of SDK release
2020-12-07 11:11:32 -08:00
amlrelsa-ms
fb4f287458 update samples from Release-79 as a part of SDK release 2020-12-07 19:09:59 +00:00
Harneet Virk
41366a4af0 Merge pull request #1238 from Azure/release_update/Release-78
update samples from Release-78 as a part of SDK release
2020-11-11 13:00:22 -08:00
amlrelsa-ms
74deb14fac update samples from Release-78 as a part of SDK release 2020-11-11 19:32:32 +00:00
Harneet Virk
4ed1d445ae Merge pull request #1236 from Azure/release_update/Release-77
update samples from Release-77 as a part of SDK release
2020-11-10 10:52:23 -08:00
amlrelsa-ms
b5c15db0b4 update samples from Release-77 as a part of SDK release 2020-11-10 18:46:23 +00:00
Harneet Virk
91d43bade6 Merge pull request #1235 from Azure/release_update_stablev2/Release-44
update samples from Release-44 as a part of 1.18.0 SDK stable release
2020-11-10 08:52:24 -08:00
amlrelsa-ms
bd750f5817 update samples from Release-44 as a part of 1.18.0 SDK stable release 2020-11-10 03:42:03 +00:00
mx-iao
637bcc5973 Merge pull request #1229 from Azure/lostmygithubaccount-patch-3
Update README.md
2020-11-03 15:18:37 -10:00
Cody
ba741fb18d Update README.md 2020-11-03 17:16:28 -08:00
Harneet Virk
ac0ad8d487 Merge pull request #1228 from Azure/release_update/Release-76
update samples from Release-76 as a part of SDK release
2020-11-03 16:12:15 -08:00
amlrelsa-ms
5019ad6c5a update samples from Release-76 as a part of SDK release 2020-11-03 22:31:02 +00:00
Cody
41a2ebd2b3 Merge pull request #1226 from Azure/lostmygithubaccount-patch-3
Update README.md
2020-11-03 11:25:10 -08:00
Cody
53e3283d1d Update README.md 2020-11-03 11:17:41 -08:00
Harneet Virk
ba9c4c5465 Merge pull request #1225 from Azure/release_update/Release-75
update samples from Release-75 as a part of SDK release
2020-11-03 11:11:11 -08:00
amlrelsa-ms
a6c65f00ec update samples from Release-75 as a part of SDK release 2020-11-03 19:07:12 +00:00
Cody
95072eabc2 Merge pull request #1221 from Azure/lostmygithubaccount-patch-2
Update README.md
2020-11-02 11:52:05 -08:00
Cody
12905ef254 Update README.md 2020-11-02 06:59:44 -08:00
Harneet Virk
4cf56eee91 Merge pull request #1217 from Azure/release_update/Release-74
update samples from Release-74 as a part of SDK release
2020-10-30 17:27:02 -07:00
amlrelsa-ms
d345ff6c37 update samples from Release-74 as a part of SDK release 2020-10-30 22:20:10 +00:00
Harneet Virk
560dcac0a0 Merge pull request #1214 from Azure/release_update/Release-73
update samples from Release-73 as a part of SDK release
2020-10-29 23:38:02 -07:00
amlrelsa-ms
322087a58c update samples from Release-73 as a part of SDK release 2020-10-30 06:37:05 +00:00
Harneet Virk
e255c000ab Merge pull request #1211 from Azure/release_update/Release-72
update samples from Release-72 as a part of SDK release
2020-10-28 14:30:50 -07:00
amlrelsa-ms
7871e37ec0 update samples from Release-72 as a part of SDK release 2020-10-28 21:24:40 +00:00
Cody
58e584e7eb Update README.md (#1209) 2020-10-27 21:00:38 -04:00
Harneet Virk
1b0d75cb45 Merge pull request #1206 from Azure/release_update/Release-71
update samples from Release-71 as a part of SDK 1.17.0 release
2020-10-26 22:29:48 -07:00
amlrelsa-ms
5c38272fb4 update samples from Release-71 as a part of SDK release 2020-10-27 04:11:39 +00:00
Harneet Virk
e026c56f19 Merge pull request #1200 from Azure/cody/add-new-repo-link
update readme
2020-10-22 10:50:03 -07:00
Cody
4aad830f1c update readme 2020-10-22 09:13:20 -07:00
Harneet Virk
c1b125025a Merge pull request #1198 from harneetvirk/master
Fixing/Removing broken links
2020-10-20 12:30:46 -07:00
Harneet Virk
9f364f7638 Update README.md 2020-10-20 12:30:03 -07:00
Harneet Virk
4beb749a76 Fixing/Removing the broken links 2020-10-20 12:28:45 -07:00
Harneet Virk
04fe8c4580 Merge pull request #1191 from savitamittal1/patch-4
Update README.md
2020-10-17 08:48:20 -07:00
Harneet Virk
498018451a Merge pull request #1193 from savitamittal1/patch-6
Update automl-databricks-local-with-deployment.ipynb
2020-10-17 08:47:54 -07:00
savitamittal1
04305e33f0 Update automl-databricks-local-with-deployment.ipynb 2020-10-16 23:58:12 -07:00
savitamittal1
d22e76d5e0 Update README.md 2020-10-16 23:53:41 -07:00
Harneet Virk
d71c482f75 Merge pull request #1184 from Azure/release_update/Release-70
update samples from Release-70 as a part of SDK 1.16.0 release
2020-10-12 22:24:25 -07:00
amlrelsa-ms
5775f8a78f update samples from Release-70 as a part of SDK release 2020-10-13 05:19:49 +00:00
Cody
aae823ecd8 Merge pull request #1181 from samuel100/quickstart-notebook
quickstart nb added
2020-10-09 10:54:32 -07:00
Sam Kemp
f1126e07f9 quickstart nb added 2020-10-09 10:35:19 +01:00
Harneet Virk
0e4b27a233 Merge pull request #1171 from savitamittal1/patch-2
Update automl-databricks-local-01.ipynb
2020-10-02 09:41:14 -07:00
Harneet Virk
0a3d5f68a1 Merge pull request #1172 from savitamittal1/patch-3
Update automl-databricks-local-with-deployment.ipynb
2020-10-02 09:41:02 -07:00
savitamittal1
a6fe2affcb Update automl-databricks-local-with-deployment.ipynb
fixed link to readme
2020-10-01 19:38:11 -07:00
savitamittal1
ce469ddf6a Update automl-databricks-local-01.ipynb
fixed link for readme
2020-10-01 19:36:06 -07:00
mx-iao
9fe459be79 Merge pull request #1166 from Azure/minxia/patch
patch for resume training notebook
2020-09-29 17:30:24 -07:00
mx-iao
89c35c8ed6 Update train-tensorflow-resume-training.ipynb 2020-09-29 17:28:17 -07:00
mx-iao
33168c7f5d Update train-tensorflow-resume-training.ipynb 2020-09-29 17:27:23 -07:00
Cody
1d0766bd46 Merge pull request #1165 from samuel100/quickstart-add
quickstart added
2020-09-29 13:13:36 -07:00
Sam Kemp
9903e56882 quickstart added 2020-09-29 21:09:55 +01:00
Harneet Virk
a039166b90 Merge pull request #1162 from Azure/release_update/Release-69
update samples from Release-69 as a part of SDK 1.15.0 release
2020-09-28 23:54:05 -07:00
amlrelsa-ms
4e4bf48013 update samples from Release-69 as a part of SDK release 2020-09-29 06:48:31 +00:00
Harneet Virk
0a2408300a Merge pull request #1158 from Azure/release_update/Release-68
update samples from Release-68 as a part of SDK release
2020-09-25 09:23:59 -07:00
amlrelsa-ms
d99c3f5470 update samples from Release-68 as a part of SDK release 2020-09-25 16:10:59 +00:00
Harneet Virk
3f62fe7d47 Merge pull request #1157 from Azure/release_update/Release-67
update samples from Release-67 as a part of SDK release
2020-09-23 15:51:20 -07:00
amlrelsa-ms
6059c1dc0c update samples from Release-67 as a part of SDK release 2020-09-23 22:48:56 +00:00
Harneet Virk
8e2032fcde Merge pull request #1153 from Azure/release_update/Release-66
update samples from Release-66 as a part of SDK release
2020-09-21 16:04:23 -07:00
amlrelsa-ms
824d844cd7 update samples from Release-66 as a part of SDK release 2020-09-21 23:02:01 +00:00
Harneet Virk
bb1c7db690 Merge pull request #1148 from Azure/release_update/Release-65
update samples from Release-65 as a part of SDK release
2020-09-16 18:23:12 -07:00
amlrelsa-ms
8dad09a42f update samples from Release-65 as a part of SDK release 2020-09-17 01:14:32 +00:00
Harneet Virk
db2bf8ae93 Merge pull request #1137 from Azure/release_update/Release-64
update samples from Release-64 as a part of SDK release
2020-09-09 15:31:51 -07:00
amlrelsa-ms
820c09734f update samples from Release-64 as a part of SDK release 2020-09-09 22:30:45 +00:00
Cody
a2a33c70a6 Merge pull request #1123 from oliverw1/patch-2
docs: bring docs in line with code
2020-09-02 11:12:31 -07:00
Cody
2ff791968a Merge pull request #1122 from oliverw1/patch-1
docs: Move unintended side columns below the main rows
2020-09-02 11:11:58 -07:00
Harneet Virk
7186127804 Merge pull request #1128 from Azure/release_update/Release-63
update samples from Release-63 as a part of SDK release
2020-08-31 13:23:08 -07:00
amlrelsa-ms
b01c52bfd6 update samples from Release-63 as a part of SDK release 2020-08-31 20:00:07 +00:00
Oliver W
28be7bcf58 docs: bring docs in line with code
A non-existent name was being referred to, which only causes confusion.
2020-08-28 10:24:24 +02:00
Oliver W
37a9350fde Properly format markdown table
Remove the unintended two columns that appeared on the right side
2020-08-28 09:29:46 +02:00
Harneet Virk
5080053a35 Merge pull request #1120 from Azure/release_update/Release-62
update samples from Release-62 as a part of SDK release
2020-08-27 17:12:05 -07:00
amlrelsa-ms
3c02102691 update samples from Release-62 as a part of SDK release 2020-08-27 23:28:05 +00:00
Sheri Gilley
07e1676762 Merge pull request #1010 from GinSiuCheng/patch-1
Include additional details on user authentication
2020-08-25 11:45:58 -05:00
Sheri Gilley
919a3c078f fix code blocks 2020-08-25 11:13:24 -05:00
Sheri Gilley
9b53c924ed add code block for better formatting 2020-08-25 11:09:56 -05:00
Sheri Gilley
04ad58056f fix quotes 2020-08-25 11:06:18 -05:00
Sheri Gilley
576bf386b5 fix quotes 2020-08-25 11:05:25 -05:00
Cody
7e62d1cfd6 Merge pull request #891 from Fokko/patch-1
Don't print the access token
2020-08-22 18:28:33 -07:00
Cody
ec67a569af Merge pull request #804 from omartin2010/patch-3
typo
2020-08-17 14:35:55 -07:00
Cody
6d1e80bcef Merge pull request #1031 from hyoshioka0128/patch-1
Typo "Mircosoft"→"Microsoft"
2020-08-17 14:32:44 -07:00
mx-iao
db00d9ad3c Merge pull request #1100 from Azure/lostmygithubaccount-patch-1
fix minor typo in how-to-use-azureml/README.md
2020-08-17 14:30:18 -07:00
Harneet Virk
d33c75abc3 Merge pull request #1104 from Azure/release_update/Release-61
update samples from Release-61 as a part of SDK release
2020-08-17 10:59:39 -07:00
amlrelsa-ms
d0dc4836ae update samples from Release-61 as a part of SDK release 2020-08-17 17:45:26 +00:00
Cody
982f8fcc1d Update README.md 2020-08-14 15:25:39 -07:00
Akshaya Annavajhala
79739b5e1b Remove broken links (#1095)
* Remove broken links

* Update README.md
2020-08-10 19:35:41 -04:00
Harneet Virk
aac4fa1fb9 Merge pull request #1081 from Azure/release_update/Release-60
update samples from Release-60 as a part of SDK 1.11.0 release
2020-08-04 00:04:38 -07:00
amlrelsa-ms
5b684070e1 update samples from Release-60 as a part of SDK release 2020-08-04 06:12:06 +00:00
Harneet Virk
0ab8b141ee Merge pull request #1078 from Azure/release_update/Release-59
update samples from Release-59 as a part of SDK release
2020-07-31 10:52:22 -07:00
amlrelsa-ms
b9ef23ad4b update samples from Release-59 as a part of SDK release 2020-07-31 17:23:17 +00:00
Harneet Virk
7e2c1ca152 Merge pull request #1063 from Azure/release_update/Release-58
update samples from Release-58 as a part of SDK release
2020-07-20 13:46:37 -07:00
amlrelsa-ms
d096535e48 update samples from Release-58 as a part of SDK release 2020-07-20 20:44:42 +00:00
Harneet Virk
f80512a6db Merge pull request #1056 from wchill/wchill-patch-1
Update README.md with KeyError: brand workaround
2020-07-15 10:22:18 -07:00
Eric Ahn
b54111620e Update README.md 2020-07-14 17:47:23 -07:00
Harneet Virk
8dd52ee2df Merge pull request #1036 from Azure/release_update/Release-57
update samples from Release-57 as a part of SDK release
2020-07-06 15:06:14 -07:00
amlrelsa-ms
6c629f1eda update samples from Release-57 as a part of SDK release 2020-07-06 22:05:24 +00:00
Hiroshi Yoshioka
9c32ca9db5 Typo "Mircosoft"→"Microsoft"
https://docs.microsoft.com/en-us/samples/azure/machinelearningnotebooks/azure-machine-learning-service-example-notebooks/
2020-06-29 12:21:23 +09:00
Harneet Virk
053efde8c9 Merge pull request #1022 from Azure/release_update/Release-56
update samples from Release-56 as a part of SDK release
2020-06-22 11:12:31 -07:00
amlrelsa-ms
5189691f06 update samples from Release-56 as a part of SDK release 2020-06-22 18:11:40 +00:00
Gin
745b4f0624 Include additional details on user authentication
Additional details should be included for user authentication, especially for enterprise users who may have more than one AAD tenant linked to a user.
2020-06-13 21:24:56 -04:00
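The commit above notes that users linked to more than one AAD tenant need extra authentication detail. A minimal sketch of what that looks like in practice, assuming the azureml-core v1 InteractiveLoginAuthentication API; the tenant, subscription, and workspace values are placeholders:

```python
from azureml.core import Workspace
from azureml.core.authentication import InteractiveLoginAuthentication

# Users linked to more than one AAD tenant should pass tenant_id explicitly;
# otherwise interactive login may pick the default tenant and fail to find
# the workspace.
interactive_auth = InteractiveLoginAuthentication(tenant_id="<your-tenant-id>")

ws = Workspace(
    subscription_id="<subscription-id>",
    resource_group="<resource-group>",
    workspace_name="<workspace-name>",
    auth=interactive_auth,
)
```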
Harneet Virk
fb900916e3 Update README.md 2020-06-11 13:26:04 -07:00
Harneet Virk
738347f3da Merge pull request #996 from Azure/release_update/Release-55
update samples from Release-55 as a part of SDK release
2020-06-08 15:31:35 -07:00
amlrelsa-ms
34a67c1f8b update samples from Release-55 as a part of SDK release 2020-06-08 22:28:25 +00:00
Harneet Virk
34898828be Merge pull request #992 from Azure/release_update/Release-54
update samples from Release-54 as a part of SDK release
2020-06-02 14:42:02 -07:00
vizhur
a7c3a0fdb8 update samples from Release-54 as a part of SDK release 2020-06-02 21:34:10 +00:00
Harneet Virk
6d11cdfa0a Merge pull request #984 from Azure/release_update/Release-53
update samples from Release-53 as a part of SDK release
2020-05-26 19:59:58 -07:00
vizhur
11e8ed2bab update samples from Release-53 as a part of SDK release 2020-05-27 02:45:07 +00:00
Harneet Virk
12c06a4168 Merge pull request #978 from ahcan76/patch-1
Fix image paths in tutorial-1st-experiment-sdk-train.ipynb
2020-05-18 12:58:21 -07:00
ahcan76
1f75dc9725 Update tutorial-1st-experiment-sdk-train.ipynb
Fix the image path
2020-05-18 22:40:54 +03:00
Harneet Virk
1a1a42d525 Merge pull request #977 from Azure/release_update/Release-52
update samples from Release-52 as a part of SDK release
2020-05-18 12:22:48 -07:00
vizhur
879a272a8d update samples from Release-52 as a part of SDK release 2020-05-18 19:21:05 +00:00
Harneet Virk
bc65bde097 Merge pull request #971 from Azure/release_update/Release-51
update samples from Release-51 as a part of SDK release
2020-05-13 22:17:45 -07:00
vizhur
690bdfbdbe update samples from Release-51 as a part of SDK release 2020-05-14 05:03:47 +00:00
Harneet Virk
3c02bd8782 Merge pull request #967 from Azure/release_update/Release-50
update samples from Release-50 as a part of SDK release
2020-05-12 19:57:40 -07:00
vizhur
5c14610a1c update samples from Release-50 as a part of SDK release 2020-05-13 02:45:40 +00:00
Harneet Virk
4e3afae6fb Merge pull request #965 from Azure/release_update/Release-49
update samples from Release-49 as a part of SDK release
2020-05-11 19:25:28 -07:00
vizhur
a2144aa083 update samples from Release-49 as a part of SDK release 2020-05-12 02:24:34 +00:00
Harneet Virk
0e6334178f Merge pull request #963 from Azure/release_update/Release-46
update samples from Release-46 as a part of SDK release
2020-05-11 14:49:34 -07:00
vizhur
4ec9178d22 update samples from Release-46 as a part of SDK release 2020-05-11 21:48:31 +00:00
Harneet Virk
2aa7c53b0c Merge pull request #962 from Azure/release_update_stablev2/Release-11
update samples from Release-11 as a part of 1.5.0 SDK stable release
2020-05-11 12:42:32 -07:00
vizhur
553fa43e17 update samples from Release-11 as a part of 1.5.0 SDK stable release 2020-05-11 18:59:22 +00:00
Harneet Virk
e98131729e Merge pull request #949 from Azure/release_update_stablev2/Release-8
update samples from Release-8 as a part of 1.4.0 SDK stable release
2020-04-27 11:00:37 -07:00
vizhur
fd2b09e2c2 update samples from Release-8 as a part of 1.4.0 SDK stable release 2020-04-27 17:44:41 +00:00
Harneet Virk
7970209069 Merge pull request #930 from Azure/release_update/Release-44
update samples from Release-44 as a part of SDK release
2020-04-17 12:46:29 -07:00
vizhur
24f8651bb5 update samples from Release-44 as a part of SDK release 2020-04-17 19:45:37 +00:00
Harneet Virk
b881f78e46 Merge pull request #918 from Azure/release_update_stablev2/Release-6
update samples from Release-6 as a part of 1.3.0 SDK stable release
2020-04-13 09:23:38 -07:00
vizhur
057e22b253 update samples from Release-6 as a part of 1.3.0 SDK stable release 2020-04-13 16:22:23 +00:00
Fokko Driesprong
119fd0a8f6 Don't print the access token
That's never a good idea, no exceptions :)
2020-03-31 08:14:05 +02:00
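A short sketch of the practice the commit above enforces, assuming the azureml-core v1 authentication API; the request URL is a hypothetical placeholder. The idea is to pass the token onward in a header rather than ever printing it:

```python
import requests

from azureml.core.authentication import InteractiveLoginAuthentication

auth = InteractiveLoginAuthentication()

# get_authentication_header() returns {"Authorization": "Bearer <token>"}.
# Use it directly in the request; never print() or log the raw token.
headers = auth.get_authentication_header()
response = requests.get("<scoring-or-management-uri>", headers=headers)

print(response.status_code)  # safe to print; the token itself never reaches output
```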
Harneet Virk
c520bd1d41 Merge pull request #884 from Azure/release_update/Release-43
update samples from Release-43 as a part of SDK release
2020-03-23 16:49:27 -07:00
vizhur
d3f1212440 update samples from Release-43 as a part of SDK release 2020-03-23 23:39:45 +00:00
Harneet Virk
b95a65eef4 Merge pull request #883 from Azure/release_update_stablev2/Release-3
update samples from Release-3 as a part of 1.2.0 SDK stable release
2020-03-23 16:21:53 -07:00
vizhur
2218af619f update samples from Release-3 as a part of 1.2.0 SDK stable release 2020-03-23 23:11:53 +00:00
Harneet Virk
0401128638 Merge pull request #878 from Azure/release_update/Release-42
update samples from Release-42 as a part of SDK release
2020-03-20 11:14:02 -07:00
vizhur
59fcb54998 update samples from Release-42 as a part of SDK release 2020-03-20 18:10:08 +00:00
Harneet Virk
e0ea99a6bb Merge pull request #862 from Azure/release_update/Release-41
update samples from Release-41 as a part of SDK release
2020-03-13 14:57:58 -07:00
vizhur
b06f5ce269 update samples from Release-41 as a part of SDK release 2020-03-13 21:57:04 +00:00
Harneet Virk
ed0ce9e895 Merge pull request #856 from Azure/release_update/Release-40
update samples from Release-40 as a part of SDK release
2020-03-12 12:28:18 -07:00
vizhur
71053d705b update samples from Release-40 as a part of SDK release 2020-03-12 19:25:26 +00:00
Harneet Virk
77f98bf75f Merge pull request #852 from Azure/release_update_stable/Release-6
update samples from Release-6 as a part of 1.1.5 SDK stable release
2020-03-11 15:37:59 -06:00
vizhur
e443fd1342 update samples from Release-6 as a part of 1.1.5rc0 SDK stable release 2020-03-11 19:51:02 +00:00
Harneet Virk
2165cf308e update samples from Release-25 as a part of 1.1.2rc0 SDK experimental release (#829)
Co-authored-by: vizhur <vizhur@live.com>
2020-03-02 15:42:04 -05:00
Olivier Martin
d4a486827d typo 2020-02-17 17:16:47 -05:00
Harneet Virk
3d6caa10a3 Merge pull request #801 from Azure/release_update/Release-39
update samples from Release-39 as a part of SDK release
2020-02-13 19:03:36 -07:00
vizhur
4df079db1c update samples from Release-39 as a part of SDK release 2020-02-14 02:01:41 +00:00
Sander Vanhove
67d0b02ef9 Fix broken link in README (#797) 2020-02-13 08:20:28 -05:00
Harneet Virk
4e7b3784d5 Merge pull request #788 from Azure/release_update/Release-38
update samples from Release-38 as a part of SDK release
2020-02-11 13:16:15 -07:00
vizhur
ed91e39d7e update samples from Release-38 as a part of SDK release 2020-02-11 20:00:16 +00:00
Harneet Virk
a09a1a16a7 Merge pull request #780 from Azure/release_update/Release-37
update samples from Release-37 as a part of SDK release
2020-02-07 21:52:34 -07:00
vizhur
9662505517 update samples from Release-37 as a part of SDK release 2020-02-08 04:49:27 +00:00
Harneet Virk
8e103c02ff Merge pull request #779 from Azure/release_update/Release-36
update samples from Release-36 as a part of SDK release
2020-02-07 21:40:57 -07:00
vizhur
ecb5157add update samples from Release-36 as a part of SDK release 2020-02-08 04:35:14 +00:00
Shané Winner
d7d23d5e7c Update index.md 2020-02-05 22:41:22 -08:00
Harneet Virk
83a21ba53a update samples from Release-35 as a part of SDK release (#765)
Co-authored-by: vizhur <vizhur@live.com>
2020-02-05 20:03:41 -05:00
Harneet Virk
3c9cb89c1a update samples from Release-18 as a part of 1.1.0rc0 SDK experimental release (#760)
Co-authored-by: vizhur <vizhur@live.com>
2020-02-04 22:19:52 -05:00
Sheri Gilley
cca7c2e26f add cell metadata 2020-02-04 11:31:07 -06:00
Harneet Virk
e895d7c2bf update samples - test (#758)
Co-authored-by: vizhur <vizhur@live.com>
2020-01-31 15:19:58 -05:00
Shané Winner
3588eb9665 Update index.md 2020-01-23 15:46:43 -08:00
Harneet Virk
a09e726f31 update samples - test (#748)
Co-authored-by: vizhur <vizhur@live.com>
2020-01-23 16:50:29 -05:00
Shané Winner
4fb1d9ee5b Update index.md 2020-01-22 11:38:24 -08:00
Harneet Virk
b05ff80e9d update samples from Release-169 as a part of 1.0.85 SDK release (#742)
Co-authored-by: vizhur <vizhur@live.com>
2020-01-21 18:00:15 -05:00
Shané Winner
512630472b Update index.md 2020-01-08 14:52:23 -08:00
vizhur
ae1337fe70 Merge pull request #724 from Azure/release_update/Release-167
update samples from Release-167 as a part of 1.0.83 SDK release
2020-01-06 15:38:25 -05:00
vizhur
c95f970dc8 update samples from Release-167 as a part of 1.0.83 SDK release 2020-01-06 20:16:21 +00:00
Shané Winner
9b9d112719 Update index.md 2019-12-24 07:40:48 -08:00
vizhur
fe8fcd4b48 Merge pull request #712 from Azure/release_update/Release-31
update samples - test
2019-12-23 20:28:02 -05:00
vizhur
296ae01587 update samples - test 2019-12-24 00:42:48 +00:00
Shané Winner
8f4efe15eb Update index.md 2019-12-10 09:05:23 -08:00
vizhur
d179080467 Merge pull request #690 from Azure/release_update/Release-163
update samples from Release-163 as a part of 1.0.79 SDK release
2019-12-09 15:41:03 -05:00
vizhur
0040644e7a update samples from Release-163 as a part of 1.0.79 SDK release 2019-12-09 20:09:30 +00:00
Shané Winner
8aa04307fb Update index.md 2019-12-03 10:24:18 -08:00
Shané Winner
a525da4488 Update index.md 2019-11-27 13:08:21 -08:00
Shané Winner
e149565a8a Merge pull request #679 from Azure/release_update/Release-30
update samples - test
2019-11-27 13:05:00 -08:00
vizhur
75610ec31c update samples - test 2019-11-27 21:02:21 +00:00
Shané Winner
0c2c450b6b Update index.md 2019-11-25 14:34:48 -08:00
Shané Winner
0d548eabff Merge pull request #677 from Azure/release_update/Release-29
update samples - test
2019-11-25 14:31:50 -08:00
vizhur
e4029801e6 update samples - test 2019-11-25 22:24:09 +00:00
Shané Winner
156974ee7b Update index.md 2019-11-25 11:42:53 -08:00
Shané Winner
1f05157d24 Merge pull request #676 from Azure/release_update/Release-160
update samples from Release-160 as a part of 1.0.76 SDK release
2019-11-25 11:39:27 -08:00
vizhur
2214ea8616 update samples from Release-160 as a part of 1.0.76 SDK release 2019-11-25 19:28:19 +00:00
Sheri Gilley
b54b2566de Merge pull request #667 from Azure/sdk-codetest
remove deprecated auto_prepare_environment
2019-11-21 09:25:15 -06:00
Shané Winner
d658c85208 Update index.md 2019-11-12 14:59:15 -08:00
vizhur
a5f627a9b6 Merge pull request #655 from Azure/release_update/Release-28
update samples - test
2019-11-12 17:11:45 -05:00
vizhur
a8b08bdff0 update samples - test 2019-11-12 21:53:12 +00:00
Shané Winner
0dc3f34b86 Update index.md 2019-11-11 14:49:44 -08:00
Shané Winner
9ba7d5e5bb Update index.md 2019-11-11 14:48:05 -08:00
Shané Winner
c6ad2f8ec0 Merge pull request #654 from Azure/release_update/Release-158
update samples from Release-158 as a part of 1.0.74 SDK release
2019-11-11 10:25:18 -08:00
vizhur
33d6def8c3 update samples from Release-158 as a part of 1.0.74 SDK release 2019-11-11 16:57:02 +00:00
Shané Winner
69d4344dff Update index.md 2019-11-04 10:09:41 -08:00
Shané Winner
34aeec1439 Update index.md 2019-11-04 10:08:10 -08:00
Shané Winner
a9b9ebbf7d Merge pull request #641 from Azure/release_update/Release-27
update samples - test
2019-11-04 10:02:25 -08:00
vizhur
41fa508d53 update samples - test 2019-11-04 17:57:28 +00:00
Shané Winner
e1bfa98844 Update index.md 2019-11-04 08:41:15 -08:00
Shané Winner
2bcee9aa20 Update index.md 2019-11-04 08:40:29 -08:00
Shané Winner
37541b1071 Merge pull request #638 from Azure/release_update/Release-26
update samples - test
2019-11-04 08:31:59 -08:00
Shané Winner
4aff1310a7 Merge branch 'master' into release_update/Release-26 2019-11-04 08:31:37 -08:00
Shané Winner
51ecb7c54f Update index.md 2019-11-01 10:38:46 -07:00
Shané Winner
4e7fc7c82c Update index.md 2019-11-01 10:36:02 -07:00
vizhur
4ed3f0767a update samples - test 2019-11-01 14:48:01 +00:00
vizhur
46ec74f8df Merge pull request #627 from jingyanwangms/jingywa/lightgbm-notebook
add Lightgbm Estimator notebook
2019-10-22 20:54:33 -04:00
Jingyan Wang
8d2e362a10 add Lightgbm notebook 2019-10-22 17:40:32 -07:00
vizhur
86c1b3d760 adding missing files for rapids 2019-10-21 12:20:15 -04:00
Shané Winner
41dc05952f Update index.md 2019-10-15 16:37:53 -07:00
vizhur
df2e08e4a3 Merge pull request #622 from Azure/release_update/Release-25
update samples - test
2019-10-15 18:34:28 -04:00
vizhur
828a976907 update samples - test 2019-10-15 22:01:55 +00:00
vizhur
1a373f11a0 Merge pull request #621 from Azure/ak/revert-db-overwrite
Revert automatic overwrite of databricks content
2019-10-15 16:07:37 -04:00
Akshaya Annavajhala (AK)
60de701207 revert overwrites 2019-10-15 12:33:31 -07:00
Akshaya Annavajhala (AK)
5841fa4a42 revert overwrites 2019-10-15 12:27:56 -07:00
Shané Winner
659fb7abc3 Merge pull request #619 from Azure/release_update/Release-153
update samples from Release-153 as a part of 1.0.69 SDK release
2019-10-14 15:39:40 -07:00
vizhur
2e404cfc3a update samples from Release-153 as a part of 1.0.69 SDK release 2019-10-14 22:30:58 +00:00
Shané Winner
5fcf4887bc Update index.md 2019-10-06 11:44:35 -07:00
Shané Winner
1e7f3117ae Update index.md 2019-10-06 11:44:01 -07:00
Shané Winner
bbb3f85da9 Update README.md 2019-10-06 11:33:56 -07:00
Shané Winner
c816dfb479 Update index.md 2019-10-06 11:29:58 -07:00
Shané Winner
8c128640b1 Update index.md 2019-10-06 11:28:34 -07:00
vizhur
4d2b937846 Merge pull request #600 from Azure/release_update/Release-24
Fix for Tensorflow 2.0 related Notebook Failures
2019-10-02 16:27:31 -04:00
vizhur
5492f52faf update samples - test 2019-10-02 20:23:54 +00:00
Shané Winner
735db9ebe7 Update index.md 2019-10-01 09:59:10 -07:00
Shané Winner
573030b990 Update README.md 2019-10-01 09:52:10 -07:00
Shané Winner
392a059000 Update index.md 2019-10-01 09:44:37 -07:00
Shané Winner
3580e54fbb Update index.md 2019-10-01 09:42:20 -07:00
Shané Winner
2017bcd716 Update index.md 2019-10-01 09:41:33 -07:00
Roope Astala
4a3f8e7025 Merge pull request #594 from Azure/release_update/Release-149
update samples from Release-149 as a part of 1.0.65 SDK release
2019-09-30 13:29:57 -04:00
vizhur
45880114db update samples from Release-149 as a part of 1.0.65 SDK release 2019-09-30 17:08:52 +00:00
Roope Astala
314bad72a4 Merge pull request #588 from skaarthik/rapids
updating to use AML base image and system managed dependencies
2019-09-25 07:44:31 -04:00
Kaarthik Sivashanmugam
f252308005 updating to use AML base image and system managed dependencies 2019-09-24 20:47:15 -07:00
Kaarthik Sivashanmugam
6622a6c5f2 Merge pull request #1 from Azure/master
merge latest changes from Azure/MLNB repo
2019-09-24 20:40:43 -07:00
Roope Astala
6b19e2f263 Merge pull request #587 from Azure/akshaya-a-patch-3
Update README.md to remove confusing reference
2019-09-24 16:13:16 -04:00
Akshaya Annavajhala
42fd4598cb Update README.md 2019-09-24 15:28:30 -04:00
Roope Astala
476d945439 Merge pull request #580 from akshaya-a/master
Add documentation on the preview ADB linking experience
2019-09-24 09:31:45 -04:00
Shané Winner
e96bb9bef2 Delete manage-runs.yml 2019-09-22 20:37:17 -07:00
Shané Winner
2be4a5e54d Delete manage-runs.ipynb 2019-09-22 20:37:07 -07:00
Shané Winner
247a25f280 Delete hello_with_delay.py 2019-09-22 20:36:50 -07:00
Shané Winner
5d9d8eade6 Delete hello_with_children.py 2019-09-22 20:36:39 -07:00
Shané Winner
dba978e42a Delete hello.py 2019-09-22 20:36:29 -07:00
Shané Winner
7f4101c33e Delete run_details.PNG 2019-09-22 20:36:12 -07:00
Shané Winner
62b0d5df69 Delete run_history.png 2019-09-22 20:36:01 -07:00
Shané Winner
f10b55a1bc Delete logging-api.ipynb 2019-09-22 20:35:47 -07:00
Shané Winner
da9e86635e Delete logging-api.yml 2019-09-22 20:35:36 -07:00
Shané Winner
9ca6388996 Delete datasets-diff.ipynb 2019-09-19 14:14:59 -07:00
Akshaya Annavajhala
3ce779063b address PR feedback 2019-09-18 15:48:42 -04:00
Akshaya Annavajhala
ce635ce4fe add the word mlflow 2019-09-18 13:25:41 -04:00
Akshaya Annavajhala
f08e68c8e9 add linking docs 2019-09-18 11:08:46 -04:00
Shané Winner
93a1d232db Update index.md 2019-09-17 10:00:57 -07:00
Shané Winner
923483528c Update index.md 2019-09-17 09:59:23 -07:00
Shané Winner
cbeacb2ab2 Delete sklearn_regression_model.pkl 2019-09-17 09:37:44 -07:00
Shané Winner
c928c50707 Delete score.py 2019-09-17 09:37:34 -07:00
Shané Winner
efb42bacf9 Delete register-model-deploy-local.ipynb 2019-09-17 09:37:26 -07:00
Shané Winner
d8f349a1ae Delete register-model-deploy-local-advanced.ipynb 2019-09-17 09:37:17 -07:00
Shané Winner
96a61fdc78 Delete myenv.yml 2019-09-17 09:37:08 -07:00
Shané Winner
ff8128f023 Delete helloworld.txt 2019-09-17 09:36:59 -07:00
Shané Winner
8260302a68 Delete dockerSharedDrive.JPG 2019-09-17 09:36:50 -07:00
Shané Winner
fbd7f4a55b Delete README.md 2019-09-17 09:36:41 -07:00
Shané Winner
d4e4206179 Delete helloworld.txt 2019-09-17 09:35:38 -07:00
Shané Winner
a98b918feb Delete model-register-and-deploy.ipynb 2019-09-17 09:35:29 -07:00
Shané Winner
890490ec70 Delete model-register-and-deploy.yml 2019-09-17 09:35:17 -07:00
Shané Winner
c068c9b979 Delete myenv.yml 2019-09-17 09:34:54 -07:00
Shané Winner
f334a3516f Delete score.py 2019-09-17 09:34:44 -07:00
Shané Winner
96248d8dff Delete sklearn_regression_model.pkl 2019-09-17 09:34:27 -07:00
Shané Winner
c42e865700 Delete README.md 2019-09-17 09:29:20 -07:00
vizhur
9233ce089a Merge pull request #577 from Azure/release_update/Release-146
update samples from Release-146 as a part of 1.0.62 SDK release
2019-09-16 19:44:43 -04:00
vizhur
6bb1e2a3e3 update samples from Release-146 as a part of 1.0.62 SDK release 2019-09-16 23:21:57 +00:00
Shané Winner
e1724c8a89 Merge pull request #573 from lostmygithubaccount/master
adding timeseries dataset example notebook
2019-09-16 11:00:30 -07:00
Shané Winner
446e0768cc Delete datasets-diff.ipynb 2019-09-16 10:53:16 -07:00
Cody Peterson
8a2f114a16 adding timeseries dataset example notebook 2019-09-13 08:30:26 -07:00
Shané Winner
80c0d4d30f Merge pull request #570 from trevorbye/master
new pipeline tutorial
2019-09-11 09:28:40 -07:00
Trevor Bye
e8f4708a5a adding index metadata 2019-09-11 09:24:41 -07:00
Trevor Bye
fbaeb84204 adding tutorial 2019-09-11 09:02:06 -07:00
Trevor Bye
da1fab0a77 removing dprep file from old deleted tutorial 2019-09-10 12:31:57 -07:00
Shané Winner
94d2890bb5 Update index.md 2019-09-06 06:37:35 -07:00
Shané Winner
4d1ec4f7d4 Update index.md 2019-09-06 06:30:54 -07:00
Shané Winner
ace3153831 Update index.md 2019-09-06 06:28:50 -07:00
Shané Winner
58bbfe57b2 Update index.md 2019-09-06 06:15:36 -07:00
vizhur
11ea00b1d9 Update index.md 2019-09-06 09:14:30 -04:00
Shané Winner
b81efca3e5 Update index.md 2019-09-06 06:13:03 -07:00
vizhur
d7ceb9bca2 Update index.md 2019-09-06 09:08:02 -04:00
Shané Winner
17730dc69a Merge pull request #564 from MayMSFT/patch-1
Update file-dataset-img-classification.ipynb
2019-09-04 13:31:08 -07:00
May Hu
3a029d48a2 Update file-dataset-img-classification.ipynb
made edit on the sdk version
2019-09-04 13:25:10 -07:00
vizhur
06d43956f3 Merge pull request #558 from Azure/release_update/Release-144
update samples from Release-144 as a part of 1.0.60 SDK release
2019-09-03 22:09:33 -04:00
vizhur
a1cb9b33a5 update samples from Release-144 as a part of 1.0.60 SDK release 2019-09-03 22:39:55 +00:00
Shané Winner
fdc3fe2a53 Delete README.md 2019-08-29 10:22:24 -07:00
Shané Winner
628b35912c Delete train-remote.yml 2019-08-29 10:22:15 -07:00
Shané Winner
3f4cc22e94 Delete train-remote.ipynb 2019-08-29 10:22:07 -07:00
Shané Winner
18d7afb707 Delete train_diabetes.py 2019-08-29 10:21:59 -07:00
Shané Winner
cd35ca30d4 Delete train-local.ipynb 2019-08-29 10:21:48 -07:00
Shané Winner
30eae0b46c Delete train-local.yml 2019-08-29 10:21:40 -07:00
Shané Winner
f16951387f Delete train.py 2019-08-29 10:21:27 -07:00
Shané Winner
0d8de29147 Delete train-and-deploy-pytorch.ipynb 2019-08-29 10:21:16 -07:00
Shané Winner
836354640c Delete train-and-deploy-pytorch.yml 2019-08-29 10:21:08 -07:00
Shané Winner
6162e80972 Delete deploy-model.yml 2019-08-29 10:20:55 -07:00
Shané Winner
fe9fe3392d Delete deploy-model.ipynb 2019-08-29 10:20:46 -07:00
Shané Winner
5ec6d8861b Delete auto-ml-dataprep-remote-execution.yml 2019-08-27 11:19:38 -07:00
Shané Winner
ae188f324e Delete auto-ml-dataprep-remote-execution.ipynb 2019-08-27 11:19:27 -07:00
Shané Winner
4c30c2bdb9 Delete auto-ml-dataprep.yml 2019-08-27 11:19:00 -07:00
Shané Winner
b891440e2d Delete auto-ml-dataprep.ipynb 2019-08-27 11:18:50 -07:00
Shané Winner
784827cdd2 Update README.md 2019-08-27 09:23:40 -07:00
vizhur
0957af04ca Merge pull request #545 from Azure/imatiach-msft-patch-1
add dataprep dependency to notebook
2019-08-23 13:14:30 -04:00
Ilya Matiach
a3bdd193d1 add dataprep dependency to notebook
add dataprep dependency to train-explain-model-on-amlcompute-and-deploy.ipynb notebook for azureml-explain-model package
2019-08-23 13:11:36 -04:00
Shané Winner
dff09970ac Update README.md 2019-08-23 08:38:01 -07:00
Shané Winner
abc7d21711 Update README.md 2019-08-23 05:28:45 +00:00
Shané Winner
ec12ef635f Delete azure-ml-datadrift.ipynb 2019-08-21 10:32:40 -07:00
Shané Winner
81b3e6f09f Delete azure-ml-datadrift.yml 2019-08-21 10:32:32 -07:00
Shané Winner
cc167dceda Delete score.py 2019-08-21 10:32:23 -07:00
Shané Winner
bc52a6d8ee Delete datasets-diff.ipynb 2019-08-21 10:31:50 -07:00
Shané Winner
5bbbdbe73c Delete Titanic.csv 2019-08-21 10:31:38 -07:00
Shané Winner
fd4de05ddd Delete train.py 2019-08-21 10:31:26 -07:00
Shané Winner
9eaab2189d Delete datasets-tutorial.ipynb 2019-08-21 10:31:15 -07:00
Shané Winner
12147754b2 Delete datasets-diff.ipynb 2019-08-21 10:31:05 -07:00
Shané Winner
90ef263823 Delete README.md 2019-08-21 10:30:54 -07:00
Shané Winner
143590cfb4 Delete new-york-taxi_scale-out.ipynb 2019-08-21 10:30:39 -07:00
Shané Winner
40379014ad Delete new-york-taxi.ipynb 2019-08-21 10:30:29 -07:00
Shané Winner
f7b0e99fa1 Delete part-00000-34f8a7a7-c3cd-4926-92b2-ba2dcd3f95b7.gz.parquet 2019-08-21 10:30:18 -07:00
Shané Winner
7a7ac48411 Delete part-00000-34f8a7a7-c3cd-4926-92b2-ba2dcd3f95b7.gz.parquet 2019-08-21 10:30:04 -07:00
Shané Winner
50107c5b1e Delete part-00007-0b08e77b-f17a-4c20-972c-aa382e830fca-c000.csv 2019-08-21 10:29:51 -07:00
Shané Winner
e41d7e6819 Delete part-00006-0b08e77b-f17a-4c20-972c-aa382e830fca-c000.csv 2019-08-21 10:29:36 -07:00
Shané Winner
691e038e84 Delete part-00005-0b08e77b-f17a-4c20-972c-aa382e830fca-c000.csv 2019-08-21 10:29:18 -07:00
Shané Winner
426e79d635 Delete part-00004-0b08e77b-f17a-4c20-972c-aa382e830fca-c000.csv 2019-08-21 10:29:02 -07:00
Shané Winner
326677e87f Delete part-00003-0b08e77b-f17a-4c20-972c-aa382e830fca-c000.csv 2019-08-21 10:28:45 -07:00
Shané Winner
44988e30ae Delete part-00002-0b08e77b-f17a-4c20-972c-aa382e830fca-c000.csv 2019-08-21 10:28:31 -07:00
Shané Winner
646ae37384 Delete part-00001-0b08e77b-f17a-4c20-972c-aa382e830fca-c000.csv 2019-08-21 10:28:18 -07:00
Shané Winner
457e29a663 Delete part-00000-0b08e77b-f17a-4c20-972c-aa382e830fca-c000.csv 2019-08-21 10:28:03 -07:00
Shané Winner
2771edfb2c Delete _SUCCESS 2019-08-21 10:27:45 -07:00
Shané Winner
f0001ec322 Delete adls-dpreptestfiles.crt 2019-08-21 10:27:31 -07:00
Shané Winner
d3e02a017d Delete chicago-aldermen-2015.csv 2019-08-21 10:27:05 -07:00
Shané Winner
a0ebed6876 Delete crime-dirty.csv 2019-08-21 10:26:55 -07:00
Shané Winner
dc0ab6db47 Delete crime-spring.csv 2019-08-21 10:26:45 -07:00
Shané Winner
ea7900f82c Delete crime-winter.csv 2019-08-21 10:26:35 -07:00
Shané Winner
0cb3fd180d Delete crime.parquet 2019-08-21 10:26:26 -07:00
Shané Winner
b05c3e46bb Delete crime.txt 2019-08-21 10:26:17 -07:00
Shané Winner
a1b7d298d3 Delete crime.xlsx 2019-08-21 10:25:41 -07:00
Shané Winner
cc5516c3b3 Delete crime_duplicate_headers.csv 2019-08-21 10:25:32 -07:00
Shané Winner
4fb6070b89 Delete crime.zip 2019-08-21 10:25:23 -07:00
Shané Winner
1b926cdf53 Delete crime-full.csv 2019-08-21 10:25:13 -07:00
Shané Winner
72fc00fb65 Delete crime.dprep 2019-08-21 10:24:56 -07:00
Shané Winner
ddc6b57253 Delete ADLSgen2-datapreptest.crt 2019-08-21 10:24:47 -07:00
Shané Winner
e8b3b98338 Delete crime_fixed_width_file.txt 2019-08-21 10:24:38 -07:00
Shané Winner
66325a1405 Delete crime_multiple_separators.csv 2019-08-21 10:24:29 -07:00
Shané Winner
0efbeaf4b8 Delete json.json 2019-08-21 10:24:12 -07:00
Shané Winner
11d487fb28 Merge pull request #542 from Azure/sgilley/update-deploy
change deployment to model-centric approach
2019-08-21 10:22:13 -07:00
Shané Winner
073e319ef9 Delete large_dflow.json 2019-08-21 10:21:41 -07:00
Shané Winner
3ed75f28d1 Delete map_func.py 2019-08-21 10:21:23 -07:00
Shané Winner
bfc0367f54 Delete median_income.csv 2019-08-21 10:21:14 -07:00
Shané Winner
075eeb583f Delete median_income_transformed.csv 2019-08-21 10:21:05 -07:00
Shané Winner
b7531d3b9e Delete parquet.parquet 2019-08-21 10:20:55 -07:00
Shané Winner
41dc3bd1cf Delete secrets.dprep 2019-08-21 10:20:45 -07:00
Shané Winner
b790b385a4 Delete stream-path.csv 2019-08-21 10:20:36 -07:00
Shané Winner
8700328fe9 Delete summarize.ipynb 2019-08-21 10:17:21 -07:00
Shané Winner
adbd2c8200 Delete subsetting-sampling.ipynb 2019-08-21 10:17:12 -07:00
Shané Winner
7d552effb0 Delete split-column-by-example.ipynb 2019-08-21 10:17:01 -07:00
Shané Winner
bc81d2a5a7 Delete semantic-types.ipynb 2019-08-21 10:16:52 -07:00
Shané Winner
7620de2d91 Delete secrets.ipynb 2019-08-21 10:16:42 -07:00
Shané Winner
07a43a0444 Delete replace-fill-error.ipynb 2019-08-21 10:16:33 -07:00
Shané Winner
f4d5874e09 Delete replace-datasource-replace-reference.ipynb 2019-08-21 10:16:23 -07:00
Shané Winner
8a0b4d24bd Delete random-split.ipynb 2019-08-21 10:16:14 -07:00
Shané Winner
636f19be1f Delete quantile-transformation.ipynb 2019-08-21 10:16:04 -07:00
Shané Winner
0fd7f7d9b2 Delete open-save-dataflows.ipynb 2019-08-21 10:15:54 -07:00
Shané Winner
ab6c66534f Delete one-hot-encoder.ipynb 2019-08-21 10:15:45 -07:00
Shané Winner
faccf13759 Delete min-max-scaler.ipynb 2019-08-21 10:15:36 -07:00
Shané Winner
4c6a28e4ed Delete label-encoder.ipynb 2019-08-21 10:15:25 -07:00
Shané Winner
64ad88e2cb Delete join.ipynb 2019-08-21 10:15:17 -07:00
Shané Winner
969ac90d39 Delete impute-missing-values.ipynb 2019-08-21 10:12:12 -07:00
Shané Winner
fb977c1e95 Delete fuzzy-group.ipynb 2019-08-21 10:12:03 -07:00
Shané Winner
d5ba3916f7 Delete filtering.ipynb 2019-08-21 10:11:53 -07:00
Shané Winner
f7f1087337 Delete external-references.ipynb 2019-08-21 10:11:43 -07:00
Shané Winner
47ea2dbc03 Delete derive-column-by-example.ipynb 2019-08-21 10:11:33 -07:00
Shané Winner
bd2cf534e5 Delete datastore.ipynb 2019-08-21 10:11:24 -07:00
Shané Winner
65f1668d69 Delete data-profile.ipynb 2019-08-21 10:11:16 -07:00
Shané Winner
e0fb7df0aa Delete data-ingestion.ipynb 2019-08-21 10:11:06 -07:00
Shané Winner
7047f76299 Delete custom-python-transforms.ipynb 2019-08-21 10:10:56 -07:00
Shané Winner
c39f2d5eb6 Delete column-type-transforms.ipynb 2019-08-21 10:10:45 -07:00
Shané Winner
5fda69a388 Delete column-manipulations.ipynb 2019-08-21 10:10:36 -07:00
Shané Winner
87ce954eef Delete cache.ipynb 2019-08-21 10:10:26 -07:00
Shané Winner
ebbeac413a Delete auto-read-file.ipynb 2019-08-21 10:10:15 -07:00
Shané Winner
a68bbaaab4 Delete assertions.ipynb 2019-08-21 10:10:05 -07:00
Shané Winner
8784dc979f Delete append-columns-and-rows.ipynb 2019-08-21 10:09:55 -07:00
Shané Winner
f8047544fc Delete add-column-using-expression.ipynb 2019-08-21 10:09:44 -07:00
Shané Winner
eeb2a05e4f Delete working-with-file-streams.ipynb 2019-08-21 10:09:33 -07:00
Shané Winner
6db9d7bd8b Delete writing-data.ipynb 2019-08-21 10:09:19 -07:00
Shané Winner
80e2fde734 Delete getting-started.ipynb 2019-08-21 10:09:04 -07:00
Shané Winner
ae4f5d40ee Delete README.md 2019-08-21 10:08:53 -07:00
Shané Winner
5516edadfd Delete README.md 2019-08-21 10:08:13 -07:00
Sheri Gilley
475afbf44b change deployment to model-centric approach 2019-08-21 09:50:49 -05:00
Shané Winner
197eaf1aab Merge pull request #541 from Azure/sdgilley/update-tutorial
Update img-classification-part1-training.ipynb
2019-08-20 15:59:24 -07:00
Sheri Gilley
184680f1d2 Update img-classification-part1-training.ipynb
updated explanation of datastore
2019-08-20 17:52:45 -05:00
Shané Winner
474f58bd0b Merge pull request #540 from trevorbye/master
removing tutorials for single combined tutorial
2019-08-20 15:22:47 -07:00
Trevor Bye
22c8433897 removing tutorials for single combined tutorial 2019-08-20 12:09:21 -07:00
Josée Martens
822cdd0f01 Update issue templates 2019-08-20 08:35:00 -05:00
Josée Martens
6e65d42986 Update issue templates 2019-08-20 08:26:45 -05:00
Harneet Virk
4c0cbac834 Merge pull request #537 from Azure/release_update/Release-141
update samples from Release-141 as a part of 1.0.57 SDK release
2019-08-19 18:32:44 -07:00
vizhur
44a7481ed1 update samples from Release-141 as a part of 1.0.57 SDK release 2019-08-19 23:33:44 +00:00
Ilya Matiach
8f418b216d Merge pull request #526 from imatiach-msft/ilmat/remove-old-explain-dirs
removing old explain model directories
2019-08-13 12:37:00 -04:00
Ilya Matiach
2d549ecad3 removing old directories 2019-08-13 12:31:51 -04:00
Josée Martens
4dbb024529 Update issue templates 2019-08-11 18:02:17 -05:00
Josée Martens
142a1a510e Update issue templates 2019-08-11 18:00:12 -05:00
vizhur
2522486c26 Merge pull request #519 from wamartin-aml/master
Add dataprep dependency
2019-08-08 09:34:36 -04:00
Walter Martin
6d5226e47c Add dataprep dependency 2019-08-08 09:31:18 -04:00
Shané Winner
e7676d7cdc Delete README.md 2019-08-07 13:14:39 -07:00
Shané Winner
a84f6636f1 Delete README.md 2019-08-07 13:14:24 -07:00
Roope Astala
41be10d1c1 Delete authentication-in-azure-ml.ipynb 2019-08-07 10:12:48 -04:00
vizhur
429eb43914 Merge pull request #513 from Azure/release_update/Release-139
update samples from Release-139 as a part of 1.0.55 SDK release
2019-08-05 16:22:25 -04:00
vizhur
c0dae0c645 update samples from Release-139 as a part of 1.0.55 SDK release 2019-08-05 18:39:19 +00:00
Shané Winner
e4d9a2b4c5 Delete score.py 2019-07-29 09:33:11 -07:00
Shané Winner
7648e8f516 Delete readme.md 2019-07-29 09:32:55 -07:00
Shané Winner
b5ed94b4eb Delete azure-ml-datadrift.ipynb 2019-07-29 09:32:47 -07:00
Shané Winner
85e487f74f Delete new-york-taxi_scale-out.ipynb 2019-07-28 00:38:05 -07:00
Shané Winner
c0a5b2de79 Delete new-york-taxi.ipynb 2019-07-28 00:37:56 -07:00
Shané Winner
0a9e076e5f Delete stream-path.csv 2019-07-28 00:37:44 -07:00
Shané Winner
e3b974811d Delete secrets.dprep 2019-07-28 00:37:33 -07:00
Shané Winner
381d1a6f35 Delete parquet.parquet 2019-07-28 00:37:20 -07:00
Shané Winner
adaa55675e Delete median_income_transformed.csv 2019-07-28 00:37:12 -07:00
Shané Winner
5e3c592d4b Delete median_income.csv 2019-07-28 00:37:02 -07:00
Shané Winner
9c6f1e2571 Delete map_func.py 2019-07-28 00:36:52 -07:00
Shané Winner
bd1bedd563 Delete large_dflow.json 2019-07-28 00:36:43 -07:00
Shané Winner
9716f3614e Delete json.json 2019-07-28 00:36:30 -07:00
Shané Winner
d2c72ca149 Delete crime_multiple_separators.csv 2019-07-28 00:36:19 -07:00
Shané Winner
4f62f64207 Delete crime_fixed_width_file.txt 2019-07-28 00:36:10 -07:00
Shané Winner
16473eb33e Delete crime_duplicate_headers.csv 2019-07-28 00:36:01 -07:00
Shané Winner
d10474c249 Delete crime.zip 2019-07-28 00:35:51 -07:00
Shané Winner
6389cc16f9 Delete crime.xlsx 2019-07-28 00:35:41 -07:00
Shané Winner
bc0a8e0152 Delete crime.txt 2019-07-28 00:35:30 -07:00
Shané Winner
39384aea52 Delete crime.parquet 2019-07-28 00:35:20 -07:00
Shané Winner
5bf4b0bafe Delete crime.dprep 2019-07-28 00:35:11 -07:00
Shané Winner
f22adb7949 Delete crime-winter.csv 2019-07-28 00:35:00 -07:00
Shané Winner
8409ab7133 Delete crime-spring.csv 2019-07-28 00:34:50 -07:00
Shané Winner
32acd55774 Delete crime-full.csv 2019-07-28 00:34:39 -07:00
Shané Winner
7f65c1a255 Delete crime-dirty.csv 2019-07-28 00:34:27 -07:00
Shané Winner
bc7ccc7ef3 Delete chicago-aldermen-2015.csv 2019-07-28 00:34:17 -07:00
Shané Winner
1cc79a71e9 Delete adls-dpreptestfiles.crt 2019-07-28 00:34:05 -07:00
Shané Winner
c0bec5f110 Delete part-00000-34f8a7a7-c3cd-4926-92b2-ba2dcd3f95b7.gz.parquet 2019-07-28 00:33:51 -07:00
Shané Winner
77e5664482 Delete part-00000-34f8a7a7-c3cd-4926-92b2-ba2dcd3f95b7.gz.parquet 2019-07-28 00:33:38 -07:00
Shané Winner
e2eb64372a Delete part-00007-0b08e77b-f17a-4c20-972c-aa382e830fca-c000.csv 2019-07-28 00:33:23 -07:00
Shané Winner
03cbb6a3a2 Delete part-00006-0b08e77b-f17a-4c20-972c-aa382e830fca-c000.csv 2019-07-28 00:33:12 -07:00
Shané Winner
44d3d998a8 Delete part-00005-0b08e77b-f17a-4c20-972c-aa382e830fca-c000.csv 2019-07-28 00:33:00 -07:00
Shané Winner
c626f37057 Delete part-00004-0b08e77b-f17a-4c20-972c-aa382e830fca-c000.csv 2019-07-28 00:32:48 -07:00
Shané Winner
0175574864 Delete part-00003-0b08e77b-f17a-4c20-972c-aa382e830fca-c000.csv 2019-07-28 00:32:37 -07:00
Shané Winner
f6e8d57da3 Delete part-00002-0b08e77b-f17a-4c20-972c-aa382e830fca-c000.csv 2019-07-28 00:32:25 -07:00
Shané Winner
01cd31ce44 Delete part-00001-0b08e77b-f17a-4c20-972c-aa382e830fca-c000.csv 2019-07-28 00:32:13 -07:00
Shané Winner
eb2024b3e0 Delete part-00000-0b08e77b-f17a-4c20-972c-aa382e830fca-c000.csv 2019-07-28 00:32:01 -07:00
Shané Winner
6bce41b3d7 Delete _SUCCESS 2019-07-28 00:31:49 -07:00
Shané Winner
bbdabbb552 Delete writing-data.ipynb 2019-07-28 00:31:32 -07:00
Shané Winner
65343fc263 Delete working-with-file-streams.ipynb 2019-07-28 00:31:22 -07:00
Shané Winner
b6b27fded6 Delete summarize.ipynb 2019-07-28 00:26:56 -07:00
Shané Winner
7e492cbeb6 Delete subsetting-sampling.ipynb 2019-07-28 00:26:41 -07:00
Shané Winner
4cc8f4c6af Delete split-column-by-example.ipynb 2019-07-28 00:26:25 -07:00
Shané Winner
9fba46821b Delete semantic-types.ipynb 2019-07-28 00:26:11 -07:00
Shané Winner
a45954a58f Delete secrets.ipynb 2019-07-28 00:25:58 -07:00
Shané Winner
f16dfb0e5b Delete replace-fill-error.ipynb 2019-07-28 00:25:45 -07:00
Shané Winner
edabbf9031 Delete replace-datasource-replace-reference.ipynb 2019-07-28 00:25:32 -07:00
Shané Winner
63d1d57dfb Delete random-split.ipynb 2019-07-28 00:25:21 -07:00
Shané Winner
10f7004161 Delete quantile-transformation.ipynb 2019-07-28 00:25:10 -07:00
Shané Winner
86ba4e7406 Delete open-save-dataflows.ipynb 2019-07-28 00:24:54 -07:00
Shané Winner
33bda032b8 Delete one-hot-encoder.ipynb 2019-07-28 00:24:43 -07:00
Shané Winner
0fd4bfbc56 Delete min-max-scaler.ipynb 2019-07-28 00:24:32 -07:00
Shané Winner
3fe08c944e Delete label-encoder.ipynb 2019-07-28 00:24:21 -07:00
Shané Winner
d587ea5676 Delete join.ipynb 2019-07-28 00:24:08 -07:00
Shané Winner
edd8562102 Delete impute-missing-values.ipynb 2019-07-28 00:23:55 -07:00
Shané Winner
5ac2c63336 Delete fuzzy-group.ipynb 2019-07-28 00:23:41 -07:00
Shané Winner
1f4e4cdda2 Delete filtering.ipynb 2019-07-28 00:23:28 -07:00
Shané Winner
2e245c1691 Delete external-references.ipynb 2019-07-28 00:23:11 -07:00
Shané Winner
e1b09f71fa Delete derive-column-by-example.ipynb 2019-07-28 00:22:54 -07:00
Shané Winner
8e2220d397 Delete datastore.ipynb 2019-07-28 00:22:43 -07:00
Shané Winner
f74ccf5048 Delete data-profile.ipynb 2019-07-28 00:22:32 -07:00
Shané Winner
97a6d9ca43 Delete data-ingestion.ipynb 2019-07-28 00:22:21 -07:00
Shané Winner
a0ff1c6b64 Delete custom-python-transforms.ipynb 2019-07-28 00:22:11 -07:00
Shané Winner
08f15ef4cf Delete column-type-transforms.ipynb 2019-07-28 00:21:58 -07:00
Shané Winner
7160416c0b Delete column-manipulations.ipynb 2019-07-28 00:21:47 -07:00
Shané Winner
218fed3d65 Delete cache.ipynb 2019-07-28 00:21:35 -07:00
Shané Winner
b8499dfb98 Delete auto-read-file.ipynb 2019-07-28 00:21:22 -07:00
Shané Winner
6bfd472cc2 Delete assertions.ipynb 2019-07-28 00:20:55 -07:00
Shané Winner
ecefb229e9 Delete append-columns-and-rows.ipynb 2019-07-28 00:20:40 -07:00
Shané Winner
883ad806ba Delete add-column-using-expression.ipynb 2019-07-28 00:20:22 -07:00
Shané Winner
848b5bc302 Delete getting-started.ipynb 2019-07-28 00:19:59 -07:00
Shané Winner
58087b53a0 Delete README.md 2019-07-28 00:19:45 -07:00
Shané Winner
ff4d5450a7 Delete README.md 2019-07-28 00:19:29 -07:00
Shané Winner
e2b2b89842 Delete datasets-tutorial.ipynb 2019-07-28 00:19:13 -07:00
Shané Winner
390be2ba24 Delete train.py 2019-07-28 00:19:00 -07:00
Shané Winner
cd1258f81d Delete Titanic.csv 2019-07-28 00:18:41 -07:00
Shané Winner
8a0b48ea48 Delete README.md 2019-07-28 00:18:14 -07:00
Roope Astala
b0dc904189 Merge pull request #502 from msdavx/patch-1
Add demo notebook for datasets diff attribute.
2019-07-26 19:16:13 -04:00
msdavx
82bede239a Add demo notebook for datasets diff attribute. 2019-07-26 11:10:37 -07:00
vizhur
774517e173 Merge pull request #500 from Azure/release_update/Release-137
update samples from Release-137 as a part of 1.0.53 SDK release
2019-07-25 16:36:25 -04:00
Shané Winner
c3ce2bc7fe Delete README.md 2019-07-25 13:28:15 -07:00
Shané Winner
5dd09a1f7c Delete README.md 2019-07-25 13:28:01 -07:00
vizhur
ee1da0ee19 update samples from Release-137 as a part of 1.0.53 SDK release 2019-07-24 22:37:36 +00:00
Paula Ledgerwood
ddfce6b24c Merge pull request #498 from Azure/revert-461-master
Revert "Finetune SSD VGG"
2019-07-24 14:25:43 -07:00
Paula Ledgerwood
31dfc3dc55 Revert "Finetune SSD VGG" 2019-07-24 14:08:00 -07:00
Paula Ledgerwood
168c45b188 Merge pull request #461 from borisneal/master
Finetune SSD VGG
2019-07-24 14:07:15 -07:00
fierval
159948db67 moving notice.txt 2019-07-24 08:50:41 -07:00
fierval
d842731a3b remove tf prereq item 2019-07-23 14:58:51 -07:00
fierval
7822fd4c13 notice + attribution for anchors 2019-07-23 14:49:20 -07:00
fierval
d9fbe4cd87 new folder structure 2019-07-22 10:31:22 -07:00
Shané Winner
a64f4d331a Merge pull request #488 from trevorbye/master
adding new notebook
2019-07-18 10:40:36 -07:00
Trevor Bye
c41f449208 adding new notebook 2019-07-18 10:27:21 -07:00
vizhur
4fe8c1702d Merge pull request #486 from Azure/release_update/Release-22
Fix for automl remote env
2019-07-12 19:18:13 -04:00
vizhur
18cd152591 update samples - test 2019-07-12 22:51:17 +00:00
vizhur
4170a394ed Merge pull request #474 from Azure/release_update/Release-132
update samples from Release-132 as a part of 1.0.48 SDK release
2019-07-09 19:14:29 -04:00
vizhur
475ea36106 update samples from Release-132 as a part of 1.0.48 SDK release 2019-07-09 22:02:57 +00:00
Roope Astala
9e0fc4f0e7 Merge pull request #459 from datashinobi/yassine/datadrift2
fix link to config nb & settingwithcopywarning
2019-07-03 12:41:31 -04:00
fierval
b025816c92 remove config.json 2019-07-02 17:32:56 -07:00
fierval
c75e820107 ssd vgg 2019-07-02 17:23:56 -07:00
Yassine Khelifi
e97e4742ba fix link to config nb & settingwithcopywarning 2019-07-02 16:56:21 +00:00
Roope Astala
14ecfb0bf3 Merge pull request #448 from jeff-shepherd/master
Update new notebooks to use dataprep and add sql files
2019-06-27 09:07:47 -04:00
Jeff Shepherd
61b396be4f Added sql files 2019-06-26 14:26:01 -07:00
Jeff Shepherd
3d2552174d Updated notebooks to use dataprep 2019-06-26 14:23:20 -07:00
Roope Astala
cd3c980a6e Merge pull request #447 from Azure/release-1.0.45
Merged notebook changes from release 1.0.45
2019-06-26 16:32:09 -04:00
Heather Shapiro
249bcac3c7 Merged notebook changes from release 1.0.45 2019-06-26 14:39:09 -04:00
Roope Astala
4a6bcebccc Update configuration.ipynb 2019-06-21 09:35:13 -04:00
Roope Astala
56e0ebc5ac Merge pull request #438 from rastala/master
add pipeline scripts
2019-06-19 18:56:42 -04:00
rastala
2aa39f2f4a add pipeline scripts 2019-06-19 18:55:32 -04:00
Roope Astala
4d247c1877 Merge pull request #437 from rastala/master
pytorch with mlflow
2019-06-19 17:23:06 -04:00
rastala
f6682f6f6d pytorch with mlflow 2019-06-19 17:21:52 -04:00
Roope Astala
26ecf25233 Merge pull request #436 from rastala/master
Update readme
2019-06-19 11:52:23 -04:00
Roope Astala
44c3a486c0 update readme 2019-06-19 11:49:49 -04:00
Roope Astala
c574f429b8 update readme 2019-06-19 11:48:52 -04:00
Roope Astala
77d557a5dc Merge pull request #435 from ganzhi/jamgan/drift
Add demo notebook for AML Data Drift
2019-06-17 16:39:46 -04:00
James Gan
13dedec4a4 Make it in same folder as internal repo 2019-06-17 13:38:27 -07:00
James Gan
6f5c52676f Add notebook to demo data drift 2019-06-17 13:33:30 -07:00
James Gan
90c105537c Add demo notebook for AML Data Drift 2019-06-17 13:31:08 -07:00
Roope Astala
ef264b1073 Merge pull request #434 from rastala/master
update pytorch
2019-06-17 11:57:29 -04:00
Roope Astala
824ac5e021 update pytorch 2019-06-17 11:56:42 -04:00
Roope Astala
e9a7b95716 Merge pull request #421 from csteegz/csteegz-add-warning
Add warning for using prediction client on azure notebooks
2019-06-13 20:27:34 -04:00
Roope Astala
789ee26357 Merge pull request #431 from jeff-shepherd/master
Fixed path for auto-ml-remote-amlcompute notebook
2019-06-13 16:56:25 -04:00
Jeff Shepherd
fc541706e7 Fixed path for auto-ml-remote-amlcompute 2019-06-13 13:12:32 -07:00
Roope Astala
64b8aa2a55 Merge pull request #429 from jeff-shepherd/master
Removed deprecated notebooks from readme
2019-06-13 14:40:57 -04:00
Jeff Shepherd
d3dc35dbb6 Removed deprecated notebooks from readme 2019-06-13 11:03:25 -07:00
Roope Astala
b55ac368e7 Merge pull request #428 from rastala/master
update cluster creation
2019-06-13 12:16:30 -04:00
Roope Astala
de162316d7 update cluster creation 2019-06-13 12:14:58 -04:00
Roope Astala
4ecc58dfe2 Merge pull request #427 from rastala/master
dockerfile
2019-06-12 10:24:34 -04:00
Roope Astala
daf27a76e4 dockerfile 2019-06-12 10:23:34 -04:00
Roope Astala
a05444845b Merge pull request #426 from rastala/master
version 1.0.43
2019-06-12 10:09:08 -04:00
Roope Astala
79c9f50c15 version 1.0.43 2019-06-12 10:08:35 -04:00
Roope Astala
67e10e0f6b Merge pull request #417 from lan-tang/patch-1
Create readme.md in data-drift
2019-06-11 13:47:55 -04:00
Roope Astala
1ef0331a0f Merge pull request #423 from rastala/master
add sklearn estimator
2019-06-11 11:30:37 -04:00
Roope Astala
5e91c836b9 add sklearn estimator 2019-06-11 11:29:56 -04:00
Colin Versteeg
661762854a add warning to training 2019-06-10 16:51:33 -07:00
Colin Versteeg
fbc90ba74f add to quickstart 2019-06-10 16:50:59 -07:00
Colin Versteeg
0d9c83d0a8 Update accelerated-models-object-detection.ipynb 2019-06-10 16:48:17 -07:00
Colin Versteeg
ca4cab1de9 Merge pull request #1 from Azure/master
pull from master
2019-06-10 16:45:12 -07:00
Roope Astala
ddbb3c45f6 Merge pull request #420 from rastala/master
mlflow integration preview
2019-06-10 15:12:36 -04:00
rastala
8eed4e39d0 mlflow integration preview 2019-06-10 15:10:57 -04:00
Lan Tang
b37c0297db Create readme.md 2019-06-07 12:32:32 -07:00
Roope Astala
968cc798d0 Update README.md 2019-06-05 12:15:33 -04:00
Roope Astala
5c9ca452fb Create README.md 2019-06-05 12:15:19 -04:00
Shané Winner
5e82680272 Update README.md 2019-05-31 10:58:39 -07:00
Roope Astala
41841fc8c0 Update README.md 2019-05-31 13:00:41 -04:00
Roope Astala
896bf63736 Merge pull request #397 from rastala/master
dockerfile
2019-05-29 11:05:18 -04:00
Roope Astala
d4751bf6ec dockerfile 2019-05-29 11:04:19 -04:00
Roope Astala
3531fe8a21 Merge pull request #396 from rastala/master
version 1.0.41
2019-05-29 11:01:15 -04:00
Roope Astala
db6ae67940 version 1.0.41 2019-05-29 10:59:59 -04:00
Shané Winner
2a479bb01e Merge pull request #395 from imatiach-msft/ilmat/fix-typo
fix typo
2019-05-28 14:02:33 -07:00
Ilya Matiach
d05eec92af fix typo 2019-05-28 16:59:59 -04:00
Josée Martens
70fdab0a28 Update auto-ml-classification-with-deployment.ipynb 2019-05-24 13:45:04 -05:00
Josée Martens
7ce5a43b58 Update auto-ml-classification-with-deployment.ipynb 2019-05-24 13:44:35 -05:00
Josée Martens
d2a9dbb582 Update auto-ml-classification-with-deployment.ipynb 2019-05-24 13:43:38 -05:00
Roope Astala
a5d774683d Merge pull request #390 from rastala/master
fix default cluster creation in config notebook
2019-05-23 12:30:09 -04:00
Roope Astala
0e850f0917 fix default cluster creation in config notebook 2019-05-23 12:27:53 -04:00
Shané Winner
59f34b7179 Delete configtest.ipynb 2019-05-22 10:47:50 -07:00
Shané Winner
2a3cb69004 Create configtest.ipynb 2019-05-22 10:41:16 -07:00
Shané Winner
42894ff81a Delete LICENSE.txt 2019-05-22 10:22:05 -07:00
Shané Winner
2163cab50b Delete LICENSE.txt 2019-05-22 10:21:42 -07:00
Shané Winner
255edb04c0 Rename LICENSE.txt to LICENSE 2019-05-22 10:13:08 -07:00
Shané Winner
cfce079278 Rename LICENSES to LICENSE.txt 2019-05-22 10:06:31 -07:00
Shané Winner
ae6f067c81 Deleted index.html
cleaning up root directory
2019-05-22 10:04:23 -07:00
Shané Winner
1b7ff724f3 Deleted pr.md
Contents of this file moved to the README in the root directory.
2019-05-22 10:03:40 -07:00
Shané Winner
8bba850db1 moved the content in the pr.md file
moved the content in the pr.md file to under 'Projects using Azure Machine Learning'
2019-05-21 07:51:28 -07:00
Shané Winner
b9e35ea0cb Create LICENSE 2019-05-21 07:44:10 -07:00
Shané Winner
ffa28aa89c Delete sdk 2019-05-21 07:43:06 -07:00
Shané Winner
6ab85a20e3 Create LICENSES 2019-05-21 07:42:07 -07:00
Shané Winner
486c44d157 Create sdk 2019-05-21 07:39:43 -07:00
Shané Winner
cd80040dd8 Delete Licenses 2019-05-21 07:39:03 -07:00
Shané Winner
465a5b13b1 Create Licenses 2019-05-21 07:38:52 -07:00
Shané Winner
dcd2d58880 Added notice on the data/telemetry 2019-05-20 14:44:43 -07:00
Roope Astala
93bf4393f2 Merge pull request #381 from jeff-shepherd/master
Revert change to default amlcompute cluster
2019-05-16 15:35:43 -04:00
Jeff Shepherd
d6ebb484a6 Revert change to default amlcomputecluster to support existing resource
groups
2019-05-16 12:27:23 -07:00
Roope Astala
35afd43193 Merge pull request #372 from rogerhe/master
adding macOS specific yml. Install nomkl to workaround openmp issue
2019-05-14 19:07:42 -04:00
Roope Astala
2d68535de2 Merge pull request #376 from rastala/master
version 1.0.39
2019-05-14 16:04:09 -04:00
Roope Astala
0d448892a3 version check 2019-05-14 16:03:39 -04:00
Roope Astala
2d41c00488 version 1.0.39 2019-05-14 16:01:14 -04:00
Roger He
22597ac684 adding macOS specific yml. Install nomkl to workaround openmp issue 2019-05-09 16:51:51 -07:00
Josée Martens
8b1bffc200 Update README.md 2019-05-08 12:36:49 -05:00
Josée Martens
a240ac319f Update README.md 2019-05-08 12:27:57 -05:00
Josée Martens
83cfe3b9b3 Update README.md 2019-05-08 12:25:41 -05:00
Paula Ledgerwood
dcce6f227f Merge pull request #360 from Azure/paledger/update-readme
Update readme/cluster location from PM's instructions
2019-05-06 10:08:22 -07:00
Paula Ledgerwood
5328186d68 Update python kernel version 2019-05-06 09:45:20 -07:00
Paula Ledgerwood
7ccaa2cf57 Update readme from PM's instructions 2019-05-06 09:41:54 -07:00
Shané Winner
56b0664b6b Update img-classification-part1-training.ipynb 2019-05-05 17:47:31 -07:00
Shané Winner
4c1167edc4 Update img-classification-part1-training.ipynb 2019-05-05 17:45:48 -07:00
Shané Winner
eb643fe213 Update README.md 2019-05-05 17:26:29 -07:00
Shané Winner
5faa9d293c Update README.md 2019-05-05 15:34:27 -07:00
Shané Winner
32e2b5f647 Update train-hyperparameter-tune-deploy-with-tensorflow.ipynb 2019-05-05 15:32:19 -07:00
Shané Winner
ae25654882 Update train-hyperparameter-tune-deploy-with-pytorch.ipynb 2019-05-05 15:29:42 -07:00
Shané Winner
0ca05093bd Update train-hyperparameter-tune-deploy-with-keras.ipynb 2019-05-05 15:28:16 -07:00
Shané Winner
5e39582de3 Update train-hyperparameter-tune-deploy-with-chainer.ipynb 2019-05-05 15:24:14 -07:00
Shané Winner
6b6a6da9dc Update tensorboard.ipynb 2019-05-05 15:22:28 -07:00
Shané Winner
cba2c6b9e2 Update how-to-use-estimator.ipynb 2019-05-05 15:20:50 -07:00
Shané Winner
58557abd20 Update export-run-history-to-tensorboard.ipynb 2019-05-05 15:18:48 -07:00
Shané Winner
59452a3141 Update distributed-tensorflow-with-parameter-server.ipynb 2019-05-05 15:17:15 -07:00
Shané Winner
463718e26b Update distributed-tensorflow-with-horovod.ipynb 2019-05-05 15:15:13 -07:00
Shané Winner
9ea0ba5131 Update distributed-pytorch-with-horovod.ipynb 2019-05-05 15:13:28 -07:00
Shané Winner
2804a8d859 Update distributed-cntk-with-custom-docker.ipynb 2019-05-05 15:11:51 -07:00
Shané Winner
4761b668ff Update distributed-chainer.ipynb 2019-05-05 15:09:28 -07:00
Shané Winner
c4163017c2 Update using-environments.ipynb 2019-05-05 00:11:40 -07:00
Shané Winner
71e8e9bd23 Update train-within-notebook.ipynb 2019-05-05 00:09:26 -07:00
Shané Winner
6ff06dd137 Update train-on-remote-vm.ipynb 2019-05-05 00:06:23 -07:00
Shané Winner
73db8ae04d Update train-on-local.ipynb 2019-05-04 23:52:01 -07:00
Shané Winner
3637dce58a Update train-on-amlcompute.ipynb 2019-05-04 23:48:16 -07:00
Shané Winner
23771fc599 added tracking pixel and edited config text 2019-05-04 21:08:10 -07:00
Shané Winner
5f04a467b7 added tracking pixel 2019-05-04 21:03:08 -07:00
Shané Winner
532f65c998 added tracking pixel and edited config text 2019-05-04 20:59:50 -07:00
Shané Winner
f36dda0c2d added tracking pixel and edited the config text 2019-05-04 20:54:32 -07:00
Shané Winner
c7b56929bc added tracking pixel and edited config text 2019-05-04 20:50:57 -07:00
Shané Winner
5f19d75a42 added tracking pixel and edited the config text 2019-05-04 20:48:04 -07:00
Shané Winner
a1968aafa2 updated config text and added tracking pixel 2019-05-04 20:43:54 -07:00
Shané Winner
6b82991017 edited config text and added tracking pixel 2019-05-04 20:40:23 -07:00
Shané Winner
725013511e added tracking pixel 2019-05-04 20:34:58 -07:00
Shané Winner
6a20160173 added tracking pixel 2019-05-04 20:02:01 -07:00
Shané Winner
137db8aec0 added tracking pixel 2019-05-04 19:49:50 -07:00
Shané Winner
b7b10c394b added tracking pixel 2019-05-04 19:47:28 -07:00
Shané Winner
46206716a4 added tracking pixel 2019-05-04 19:44:23 -07:00
Shané Winner
92bb98ac62 added tracking pixel 2019-05-04 19:41:33 -07:00
Shané Winner
b398c24262 added tracking pixel 2019-05-04 19:38:28 -07:00
Shané Winner
e0618302e3 added tracking pixel 2019-05-04 19:35:57 -07:00
Shané Winner
b6cddafa3e edited config text and added the pixel tracker 2019-05-04 19:31:59 -07:00
Shané Winner
4188bd2474 updated the config text and added the tracking pixel 2019-05-04 19:25:26 -07:00
Shané Winner
69126edfcb update config text and added tracking pixel 2019-05-04 19:20:46 -07:00
Shané Winner
4e14c35b9b added pixel tracker 2019-05-04 16:31:07 -07:00
Shané Winner
1608c19aa6 updated tracking pixel and and config text 2019-05-04 15:12:53 -07:00
Shané Winner
46b8611b74 tracking pixel and edited config text 2019-05-04 15:08:57 -07:00
Shané Winner
fbb01bde70 update the config text and added pixel tracker server 2019-05-04 15:01:35 -07:00
Shané Winner
cefe2f0811 updated the config text and added the tracking pixel 2019-05-04 14:58:45 -07:00
Shané Winner
42e0a31f88 updated the config text and the tracking pixel 2019-05-04 14:54:37 -07:00
Shané Winner
8b0998ac9f updated the config text and the tracking pixel 2019-05-04 14:49:29 -07:00
Shané Winner
046c6051fb updated config text and added tracking pixel 2019-05-04 14:38:39 -07:00
Shané Winner
bdb7db15ef updated tracking pixel and the config text 2019-05-04 14:35:28 -07:00
Shané Winner
b13139f103 update the config text and the tracking pixel 2019-05-04 14:31:25 -07:00
Shané Winner
8adb206ae3 updated config text and pixel tracker 2019-05-04 13:56:09 -07:00
Shané Winner
484b6bbb7a updated the config text and pixel server 2019-05-04 13:51:12 -07:00
Shané Winner
55ef0bda6a updated config text 2019-05-04 13:46:43 -07:00
Shané Winner
1401cdef33 updated config text 2019-05-04 13:41:34 -07:00
Shané Winner
5d02206cbd updated with tracking pixel 2019-05-04 13:34:11 -07:00
Shané Winner
c24b65d4ae updated with tracking pixel 2019-05-04 13:32:14 -07:00
Shané Winner
57c5ef318f updated with pixel tracker 2019-05-04 13:25:11 -07:00
Shané Winner
ba033d72f8 Update train-in-spark.ipynb 2019-05-04 09:33:07 -07:00
Shané Winner
aa657ac528 Update manage-runs.ipynb 2019-05-04 09:29:00 -07:00
Shané Winner
7d8289679d added the tracking pixel and the edited the config text 2019-05-04 08:40:18 -07:00
Shané Winner
a7c3db0560 Update model-register-and-deploy.ipynb 2019-05-03 23:21:58 -07:00
Shané Winner
e548847881 pixel text and config text update 2019-05-03 23:20:57 -07:00
Shané Winner
08c6b1f4ed tracking pixel test 2019-05-03 23:15:28 -07:00
Shané Winner
78abb65f5e updated configuration text 2019-05-03 23:08:55 -07:00
Shané Winner
3c6c090732 Update README.md 2019-05-03 22:54:31 -07:00
Shané Winner
513e36d9b2 updated the config verbiage and tracking pixel 2019-05-03 22:54:02 -07:00
Ilya Matiach
9db91a7fb8 Merge pull request #351 from imatiach-msft/ilmat/update-raw-features-notebook
Update raw features explanation notebook
2019-05-03 12:47:28 -04:00
Roope Astala
d9b26b655b Merge pull request #356 from rastala/master
how to use environments
2019-05-03 10:27:33 -04:00
Roope Astala
cb8dc41766 how to use environments 2019-05-03 10:25:39 -04:00
Ilya Matiach
9c9b4bb122 Update raw features explanation notebook 2019-05-02 14:29:53 -04:00
Roope Astala
f5c896c70f Merge pull request #345 from csteegz/add-gpu-deploy
Create production-deploy-to-aks-gpu.ipynb
2019-05-02 14:13:50 -04:00
Colleen Forbes
3b572eddb2 Merge pull request #350 from MayMSFT/master
add dataset tutorial
2019-05-02 09:33:25 -07:00
May Hu
51523db294 add dataset tutorial 2019-05-02 09:07:11 -07:00
Ilya Matiach
3b4998941c Merge pull request #348 from imatiach-msft/ilmat/update-explain-model-nb
updating model explanation notebooks
2019-04-30 17:27:44 -04:00
Ilya Matiach
6cdbfb8722 updating model explanation notebooks 2019-04-30 17:12:54 -04:00
Colin Versteeg
c086bd69c7 Create production-deploy-to-aks-gpu.ipynb
Add deploy to aks GPU notebook
2019-04-29 16:26:42 -07:00
Shané Winner
279c9b8dc4 Pixel Tracker 2019-04-29 11:27:03 -07:00
Shané Winner
98589fe335 Testing Pixel Tracker 2019-04-29 11:16:08 -07:00
Shané Winner
77f21058a2 Testing Pixel Tracker 2019-04-29 11:04:05 -07:00
Roope Astala
baa65d0886 Merge pull request #343 from Azure/paledger/add-accel-models
Initial commit to add AccelModels notebooks from AzureMlCli repo
2019-04-29 13:56:06 -04:00
Paula Ledgerwood
0fffa11b2a Update links and code formatting 2019-04-29 10:20:55 -07:00
Paula Ledgerwood
20ec225343 Initial commit to add notebooks from AzureMlCli repo 2019-04-26 11:16:33 -07:00
Roope Astala
845e9d653e Merge pull request #342 from rastala/master
dockerfile 1.0.33
2019-04-26 14:01:55 -04:00
Roope Astala
639ef81636 dockerfile 1.0.33 2019-04-26 13:57:46 -04:00
Roope Astala
60158bf41a Merge pull request #341 from rastala/master
version 1.0.33
2019-04-26 13:45:47 -04:00
Roope Astala
8dbbb01b8a version 1.0.33 2019-04-26 13:44:15 -04:00
Roope Astala
6e6b2b0c48 Merge pull request #340 from rastala/master
add readme
2019-04-26 09:41:49 -04:00
Roope Astala
85f5721bf8 add readme 2019-04-26 09:40:24 -04:00
Shané Winner
6a7dd741e7 Pixel server added 2019-04-23 13:48:23 -07:00
Shané Winner
314218fc89 Added pixel server 2019-04-23 13:47:06 -07:00
Shané Winner
b50d2725c7 Added pixel server 2019-04-23 13:46:06 -07:00
Shané Winner
9a2f448792 Added pixel server 2019-04-23 13:45:05 -07:00
Shané Winner
dd620f19fd Pixel server added 2019-04-23 13:43:41 -07:00
Shané Winner
8116d31da4 Pixel Server added 2019-04-23 13:40:26 -07:00
Shané Winner
ef29dc1fa5 Added Pixel Server 2019-04-23 13:39:18 -07:00
Shané Winner
97b345cb33 Implemented Pixel Server 2019-04-23 13:37:41 -07:00
Shané Winner
282250e670 Implementing Pixel Server 2019-04-23 13:36:24 -07:00
Shané Winner
acef60c5b3 Testing pixel web app 2019-04-23 13:15:04 -07:00
Shané Winner
bfb444eb15 Testing Pixel Tracker 2019-04-23 13:07:48 -07:00
Shané Winner
6277659bf2 Testing Pixel Server 2019-04-23 11:48:55 -07:00
Shané Winner
1645e12712 Testing Tracking Pixel 2019-04-23 11:15:53 -07:00
Roope Astala
cc4a32e70b Merge pull request #337 from jeff-shepherd/master
Updated automl_setup scripts
2019-04-23 13:50:09 -04:00
Jeff Shepherd
997a35aed5 Updated automl_setup scripts 2019-04-23 10:40:33 -07:00
Roope Astala
dd6317a4a0 Merge pull request #336 from rastala/master
adding work-with-data
2019-04-23 10:05:08 -04:00
Roope Astala
82d8353d54 adding work-with-data 2019-04-23 10:04:32 -04:00
Shané Winner
59a01c17a0 Testing the pixel tracker 2019-04-22 14:45:09 -07:00
Shané Winner
e31e1d9af3 Implemented a test pixel tracker 2019-04-22 14:41:32 -07:00
Roope Astala
d38b9db255 Merge pull request #334 from rastala/master
docker update
2019-04-22 15:43:28 -04:00
Roope Astala
761ad88c93 docker update 2019-04-22 15:43:02 -04:00
Roope Astala
644729e5db Merge pull request #333 from rastala/master
version 1.0.30
2019-04-22 15:40:11 -04:00
Roope Astala
e2b1b3fcaa version 1.0.30 2019-04-22 15:39:18 -04:00
Roope Astala
dc692589a9 Merge pull request #326 from rastala/master
update aks notebook
2019-04-18 16:19:51 -04:00
Roope Astala
624b4595b5 update aks notebook 2019-04-18 16:18:33 -04:00
Roope Astala
0ed85c33c2 Delete release.json 2019-04-18 10:01:50 -04:00
Roope Astala
5b01de605f Merge pull request #318 from savitamittal1/hdinotebook
Sample HDI notebook
2019-04-18 10:01:26 -04:00
Savitam
c351ac988a Sample HDI notebook
sample HDI notebook
2019-04-15 12:35:34 -07:00
Josée Martens
759ec3934c Delete yt_cover.png 2019-04-15 12:06:25 -05:00
Josée Martens
b499b88a85 Delete python36.png 2019-04-15 12:06:16 -05:00
Josée Martens
5f4edac3c1 Update NBSETUP.md 2019-04-15 12:00:31 -05:00
Josée Martens
edfce0d936 Update README.md 2019-04-12 17:28:16 -05:00
Josée Martens
1516c7fc24 Update README.md
testing for search
2019-04-12 17:19:55 -05:00
Roope Astala
389fb668ce Add files via upload 2019-04-10 11:12:55 -04:00
Josée Martens
647d5e72a5 Merge pull request #307 from Azure/vizhur-patch-2
Create googled8147fb6c0788258.html
2019-04-09 15:21:51 -05:00
vizhur
43ac4c84bb Create googled8147fb6c0788258.html 2019-04-09 16:19:47 -04:00
Roope Astala
8a1a82b50a Merge pull request #303 from rastala/master
dockerfile and missing config update
2019-04-08 15:38:13 -04:00
Roope Astala
72f386298c dockerfile and missing config update 2019-04-08 15:37:48 -04:00
Roope Astala
41d697e298 Merge pull request #302 from rastala/master
version 1.0.23
2019-04-08 15:35:50 -04:00
Roope Astala
c3ce932029 version 1.0.23 2019-04-08 15:34:51 -04:00
Roope Astala
a956162114 Merge pull request #290 from rastala/master
update aks deployment notebook
2019-04-03 10:53:51 -04:00
Roope Astala
cb5a178e40 Merge branch 'master' of github.com:rastala/MachineLearningNotebooks 2019-04-03 10:52:40 -04:00
Roope Astala
d81c336c59 update production deploy to aks 2019-04-03 10:52:15 -04:00
Roope Astala
4244a24d81 Merge pull request #287 from jeff-shepherd/master
Fixed line termination on automl_setup_linux.sh
2019-04-03 09:21:35 -04:00
Jeff Shepherd
3b488555e5 Added back automl_setup_linux.sh with correct line termination 2019-04-02 16:24:05 -07:00
Jeff Shepherd
6abc478f33 Removed automl_setup_linux.sh 2019-04-02 16:23:11 -07:00
Roope Astala
666c2579eb Merge pull request #285 from jeff-shepherd/master
Corrected line termination for automl_setup_mac.sh
2019-04-02 09:19:53 -04:00
Jeff Shepherd
5af3aa4231 Fixed line termination 2019-04-01 16:19:00 -07:00
Jeff Shepherd
e48d828ab0 Removed automl_setup_mac.sh 2019-04-01 16:17:56 -07:00
Jeff Shepherd
44aa636c21 Merge branch 'master' of https://github.com/Azure/MachineLearningNotebooks 2019-04-01 16:07:11 -07:00
Jeff Shepherd
4678f9adc3 Merge branch 'master' of https://github.com/jeff-shepherd/MachineLearningNotebooks 2019-04-01 16:04:46 -07:00
Jeff Shepherd
5bf85edade Added automl_setup_mac.sh with correct line termination 2019-04-01 16:03:39 -07:00
Jeff Shepherd
94f381e884 Removed automl_setup_mac.sh 2019-04-01 16:02:53 -07:00
Roope Astala
ea1b7599c3 Merge pull request #267 from rastala/master
add automl files
2019-03-25 19:26:07 -04:00
Roope Astala
6b8a6befde add automl files 2019-03-25 19:25:38 -04:00
Roope Astala
c1511b7b74 Merge pull request #266 from rastala/master
1.0.21 dockerfile
2019-03-25 15:10:05 -04:00
Roope Astala
8f007a3333 1.0.21 dockerfile 2019-03-25 15:09:39 -04:00
Roope Astala
5ad3ca00e8 Merge pull request #265 from rastala/master
version 1.0.21
2019-03-25 15:07:09 -04:00
Roope Astala
556a41e223 version 1.0.21 2019-03-25 15:06:08 -04:00
Roope Astala
407b8929d0 Merge pull request #259 from jeff-shepherd/master
Added example of printing model hyperparameters
2019-03-19 09:40:25 -04:00
Jeff Shepherd
18a11bbd8d Added model printing example 2019-03-18 16:31:48 -07:00
Roope Astala
8b439a9f7c Merge pull request #256 from rastala/master
update RAPIDS 2
2019-03-18 12:09:33 -04:00
rastala
75c393a221 update RAPIDS 2 2019-03-18 12:08:10 -04:00
Roope Astala
be7176fe06 Merge pull request #255 from rastala/master
update RAPIDS sample
2019-03-18 11:42:51 -04:00
rastala
7b41675355 update RAPIDS sample 2019-03-18 11:40:43 -04:00
Jeff Shepherd
fa7685f6fa Added example of printing model hyperparameters 2019-03-15 13:18:17 -07:00
Roope Astala
6b444b1467 Merge pull request #248 from rastala/master
dockerfile 1.0.18
2019-03-11 15:33:07 -04:00
Roope Astala
c9767473ae dockerfile 1.0.18 2019-03-11 15:32:30 -04:00
Roope Astala
648b48fc0c Merge pull request #247 from rastala/master
version 1.0.18
2019-03-11 15:23:44 -04:00
Roope Astala
04db5d93e2 version 1.0.18 2019-03-11 15:22:38 -04:00
Roope Astala
4e10935701 version 1.0.18 2019-03-11 15:21:35 -04:00
Roope Astala
f737db499d Delete googleade5d7141b3f2910.html 2019-03-05 17:01:36 -05:00
Roope Astala
6b66da1558 Merge pull request #238 from rastala/master
fix link in configuration notebook
2019-03-05 17:00:31 -05:00
Roope Astala
8647aea9d9 fix link in configuration notebook 2019-03-05 16:59:38 -05:00
Roope Astala
3ee2dc3258 Merge pull request #233 from jeff-shepherd/master
Setup updated to fix remote run
2019-02-26 15:34:15 -05:00
Jeff Shepherd
9f7c4ce668 Setup updated to fix remote run 2019-02-26 11:59:20 -08:00
hning86
036ca6ac75 dockerfile 1.0.17 2019-02-26 10:57:07 -05:00
Roope Astala
0b8817ee1c Merge pull request #229 from rastala/master
version 1.0.17
2019-02-25 16:12:51 -05:00
Roope Astala
b7b5576b15 version 1.0.17 2019-02-25 16:12:02 -05:00
Hai Ning
c082b72b71 Update pr.md 2019-02-23 21:55:59 -05:00
Hai Ning
673e76d431 Merge pull request #186 from gison93/master
Fix typos
2019-02-20 23:18:15 -05:00
Hai Ning
c518a04a19 Merge pull request #203 from davidefiocco/patch-1
Typo fix
2019-02-20 23:17:14 -05:00
Hai Ning
2f34888716 Update README.md 2019-02-20 07:52:14 -05:00
Roope Astala
6ca0088991 Merge pull request #218 from jeff-shepherd/master
Fixed broken links to configuration notebook
2019-02-15 14:47:49 -05:00
Jeff Shepherd
40e3856786 Removed subsampling reference, which is not published yet 2019-02-15 11:35:45 -08:00
Jeff Shepherd
ddd025e83e Fixed links to configuration notebook. 2019-02-15 11:31:10 -08:00
Hai Ning
ece4242c8f Update README.md 2019-02-15 12:57:08 -05:00
Hai Ning
4bca2bd7db Merge pull request #217 from nishankgu/patch-1
Update README.md
2019-02-15 12:52:59 -05:00
Nishank
a927dbfa31 Update README.md 2019-02-14 14:22:05 -08:00
hning86
280c718f53 keras sample 2019-02-14 16:59:08 -05:00
Hai Ning
bf1ac2b26a Update NBSETUP.md 2019-02-14 11:02:01 -05:00
Roope Astala
954c2afbce Merge pull request #214 from rongduan-zhu/master
Updated Azure Databricks Automated ML notebook from master
2019-02-13 14:06:48 -05:00
Rongduan Zhu
fbf1ea5f1a updated notebook from latest master 2019-02-13 11:02:27 -08:00
Roope Astala
84b72d904b Merge pull request #210 from rastala/master
tutorial update
2019-02-11 16:07:47 -05:00
Roope Astala
82bb9fcac3 tutorial update 2019-02-11 16:07:10 -05:00
Roope Astala
5c6bbacd47 Merge pull request #209 from rastala/master
adb readme update
2019-02-11 15:52:34 -05:00
Roope Astala
90aaeea113 adb readme update 2019-02-11 15:51:50 -05:00
Roope Astala
eeab7284c9 Merge pull request #208 from rastala/master
few missing files
2019-02-11 15:48:22 -05:00
Roope Astala
02fd9b685c few missing files 2019-02-11 15:47:37 -05:00
hning86
d5c923b446 dockerfile updated 2019-02-11 15:21:56 -05:00
Roope Astala
f16bf27e26 Merge pull request #207 from rastala/master
release 1.0.15
2019-02-11 15:18:00 -05:00
Roope Astala
c7bec58593 update version 2019-02-11 15:17:40 -05:00
Roope Astala
cca3996eb4 release 1.0.15 2019-02-11 15:12:30 -05:00
Davide Fiocco
210efe022a Typo fix 2019-02-08 20:23:12 +01:00
Roope Astala
5fd14bac30 Merge pull request #199 from rastala/master
update automl databricks
2019-02-06 11:53:35 -05:00
Roope Astala
3fa409543b update automl databricks 2019-02-06 11:53:00 -05:00
Josée Martens
42f2822b61 Adding file to enable search performance tracking.
@rastala
2019-02-04 14:36:40 -06:00
Roope Astala
48afbe1cab Delete release.json 2019-01-31 16:07:08 -05:00
Roope Astala
1298c55dd4 Merge pull request #193 from rastala/master
fix broken link
2019-01-31 15:45:01 -05:00
Roope Astala
0aa1b248f4 fix broken link 2019-01-31 15:44:22 -05:00
Roope Astala
3012b8f5a8 Merge pull request #192 from rastala/master
add authentication notebook
2019-01-31 15:41:40 -05:00
Roope Astala
501c55bcaf add authentication notebook 2019-01-31 15:40:51 -05:00
hning86
1a38f50221 docker instructions 2019-01-31 15:16:36 -05:00
hning86
cc64be8d6f text update 2019-01-31 14:29:31 -05:00
hning86
a0127a2a64 dockerfile instruction 2019-01-31 11:46:06 -05:00
Hai Ning
7eb966bf79 Merge pull request #191 from Azure/dockerfiles
Dockerfiles
2019-01-31 10:54:55 -05:00
Roope Astala
9118f2c7ce Merge pull request #190 from rastala/master
fix NBSETUP
2019-01-31 09:33:17 -05:00
Roope Astala
0e3198f311 fix NBSETUP 2019-01-31 09:32:30 -05:00
hning86
0fdab91b97 dockefile reorg 2019-01-31 09:21:06 -05:00
hning86
b54be912d8 dockerfiles added 2019-01-30 17:04:18 -05:00
Roope Astala
3d0c7990ff Merge pull request #189 from rastala/master
update tutorial readme
2019-01-30 14:28:24 -05:00
Roope Astala
6e1ce29a94 Merge remote-tracking branch 'upstream/master' 2019-01-30 14:26:25 -05:00
Roope Astala
0d26c9986a update tutorials README 2019-01-30 14:25:17 -05:00
gison93
100ab10797 add pipeline validation 2019-01-29 14:50:00 +01:00
gison93
1307efe7bc fix typo
remove trailing \u00c2\u00a0 from variable and notebook_path
2019-01-29 14:34:07 +01:00
gison93
08d0b8cf08 fix typo
Bloband -> Blob and
2019-01-29 12:42:48 +01:00
Roope Astala
0514eee64b Merge pull request #182 from rastala/master
version 1.0.10
2019-01-28 18:10:20 -05:00
Roope Astala
4b6e34fdc0 Update train-within-notebook.ipynb 2019-01-28 18:09:36 -05:00
Roope Astala
e01216d85b Update configuration.ipynb 2019-01-28 18:08:41 -05:00
Roope Astala
b00f75edd8 version 1.0.10 2019-01-28 15:30:17 -05:00
Hai Ning
06aba388c6 Update azure-ml-with-nvidia-rapids.ipynb 2019-01-24 10:09:31 -05:00
Roope Astala
3018461dfc Merge pull request #176 from rastala/master
update tutorials
2019-01-22 14:25:28 -05:00
Roope Astala
0d91f2d697 update tutorials 2019-01-22 14:24:31 -05:00
Roope Astala
a14cb635f0 Merge pull request #175 from rastala/master
RAPIDS sample
2019-01-22 13:44:55 -05:00
Roope Astala
88f6a966cc RAPIDS sample 2019-01-22 13:32:59 -05:00
Hai Ning
4f76a844c6 Update README.md 2019-01-18 01:18:44 -05:00
Hai Ning
c1573ff949 Update NBSETUP.md 2019-01-18 01:15:53 -05:00
Hai Ning
d1b18b3771 Update NBSETUP.md 2019-01-18 01:09:13 -05:00
Roope Astala
e1a948f4cd Merge pull request #168 from rastala/master
version 1.0.8
2019-01-14 12:14:02 -08:00
Roope Astala
3ca40c0817 version 1.0.8 2019-01-14 15:13:30 -05:00
Roope Astala
f724cb4d9b Merge pull request #166 from jeff-shepherd/master
Fixed broken links in tutorials
2019-01-08 12:01:50 -08:00
Jeff Shepherd
094b4b3b13 Fixed broken links in tutorials 2019-01-08 11:58:03 -08:00
Roope Astala
d09942f521 Merge pull request #165 from rastala/master
databricks update
2019-01-08 09:24:11 -08:00
Roope Astala
0c9e527174 databricks update 2019-01-08 12:23:15 -05:00
Roope Astala
e2640e54da Merge pull request #160 from rastala/master
Create aml-pipelines-concept.png
2019-01-02 12:03:13 -08:00
Roope Astala
d348baf8a1 Create aml-pipelines-concept.png 2019-01-02 15:02:25 -05:00
Roope Astala
b41e11e30d Merge pull request #159 from jeff-shepherd/master
Removed databricks notebook link
2019-01-02 11:56:15 -08:00
Jeff Shepherd
c1aa951867 Removed databricks notebook link 2019-01-02 11:45:52 -08:00
Roope Astala
5fe5f06e07 Merge pull request #158 from rastala/master
Create Databricks_AMLSDK_1-4_6.dbc
2019-01-02 10:52:24 -08:00
Roope Astala
e8a09c49b1 Create Databricks_AMLSDK_1-4_6.dbc 2019-01-02 13:51:29 -05:00
Roope Astala
fb6a73a790 Merge pull request #145 from rastala/master
fix databricks
2018-12-20 13:11:17 -08:00
Roope Astala
c2968b6526 fix databricks 2018-12-20 16:10:27 -05:00
Roope Astala
2ac62ae1ed Merge pull request #144 from rastala/master
Version 1.0.6
2018-12-20 12:43:17 -08:00
Roope Astala
cad5d5c97c more files 2018-12-20 15:42:03 -05:00
Roope Astala
d8cf73503e update to version 1.0.6 2018-12-20 15:40:20 -05:00
Hai Ning
4a2d6d637a Update README.md 2018-12-13 11:28:02 -05:00
Hai Ning
0348e54e21 Update README.md 2018-12-13 11:27:41 -05:00
Hai Ning
a97d147d01 Create README.md 2018-12-13 11:26:58 -05:00
Hai Ning
c4ceac032b Update README.md 2018-12-08 10:11:11 -05:00
Hai Ning
1d3dff5634 Update README.md 2018-12-08 10:10:37 -05:00
Roope Astala
096dd424db Merge pull request #125 from jeff-shepherd/master
Added to troubleshooting section and fixed paths
2018-12-07 11:03:59 -08:00
Jeff Shepherd
fdefea5e82 Added to troubleshooting section 2018-12-07 10:55:45 -08:00
Roope Astala
15fb283b78 Merge pull request #124 from rastala/master
add NBSETUP
2018-12-07 10:29:46 -08:00
Roope Astala
142514a255 add NBSETUP 2018-12-07 13:29:08 -05:00
Roope Astala
26678677b3 Merge pull request #122 from rastala/master
Update distributed pytorch notebook
2018-12-07 07:00:04 -08:00
Roope Astala
2e2a943f5b Update distributed pytorch notebook 2018-12-07 09:40:29 -05:00
Roope Astala
94ba3f3665 Merge pull request #117 from rastala/master
update automl notebooks
2018-12-04 16:12:49 -08:00
rastala
da02f93fc2 update automl notebooks 2018-12-04 19:11:42 -05:00
Roope Astala
d39af379f8 Merge pull request #116 from rastala/master
add adb readme
2018-12-04 15:34:49 -08:00
rastala
6304fb1eb1 add adb readme 2018-12-04 18:34:10 -05:00
Hai Ning
b35d14ab72 Update img-classification-part1-training.ipynb 2018-12-04 13:27:19 -05:00
Roope Astala
e791456f34 Merge pull request #115 from rastala/master
Fix broken link
2018-12-04 08:52:17 -08:00
rastala
9b93e13426 another patch 3 2018-12-04 11:51:05 -05:00
rastala
3d640393aa another patch 2 2018-12-04 11:47:06 -05:00
Roope Astala
34d50b0427 Merge pull request #114 from rastala/master
another patch
2018-12-04 08:43:29 -08:00
rastala
060a53d256 another patch 2018-12-04 11:42:51 -05:00
Roope Astala
78fba3ceea Merge pull request #113 from rastala/master
notebook patches
2018-12-04 08:26:00 -08:00
rastala
01dc3d0a5b notebook patches 2018-12-04 11:24:50 -05:00
Heather Spetalnick (Shapiro)
ce7ca94a9a fix aks link 2018-12-04 10:33:07 -05:00
Hai Ning
b8877f1f92 Merge pull request #111 from Azure/jamiemaclennan-patch-1
Fix link
2018-12-04 10:30:22 -05:00
Hai Ning
b19ec15601 Merge pull request #112 from Azure/jamiemaclennan-patch-2
fix links
2018-12-04 10:29:50 -05:00
Jamie MacLennan
e99de11c25 fix links 2018-12-04 10:28:19 -05:00
Jamie MacLennan
212c2e8bf0 Fix link 2018-12-04 10:24:10 -05:00
Hai Ning
2aed0a32bf Update README.md 2018-12-04 08:55:46 -05:00
Hai Ning
25456b84f0 Update train-within-notebook.ipynb 2018-12-04 08:52:23 -05:00
Hai Ning
897ae13de1 Update train-on-remote-vm.ipynb 2018-12-04 08:52:06 -05:00
Hai Ning
e329729e0f Update train-on-local.ipynb 2018-12-04 08:51:45 -05:00
Hai Ning
aacdede890 Update logging-api.ipynb 2018-12-04 08:51:00 -05:00
Hai Ning
0471f2f8db Update logging-api.ipynb 2018-12-04 08:49:46 -05:00
Hai Ning
a8a525e704 Update logging-api.ipynb 2018-12-04 08:48:56 -05:00
Hai Ning
e37f3fa206 Update README.md 2018-12-04 08:18:27 -05:00
Roope Astala
2c4d6b5188 Merge pull request #110 from rastala/master
fix azure-databricks
2018-12-03 19:21:31 -08:00
rastala
008befcce2 fix azure-databricks 2018-12-03 22:20:06 -05:00
Roope Astala
b4895bf1f8 Merge pull request #109 from jeff-shepherd/master
Added setup and configuration files
2018-12-03 16:46:42 -08:00
Jeff Shepherd
a27cdbd478 Added setup and configuration files 2018-12-03 16:36:14 -08:00
Roope Astala
a408f3f2cb Merge pull request #108 from rastala/master
big update
2018-12-03 17:47:29 -05:00
rastala
ab2de17978 one more file 2018-12-03 17:38:46 -05:00
rastala
a63f5084e0 big update 2 2018-12-03 17:38:20 -05:00
rastala
d26a4b0323 big update 2018-12-02 21:50:53 -05:00
rastala
3c49d861df license updates 2018-12-02 21:47:13 -05:00
Roope Astala
613db3158d Merge pull request #93 from yanrez/master
Make pipeline notebooks links in readme
2018-11-28 12:33:12 -05:00
yanrez
c3a8c36297 Make pipeline notebooks links in readme 2018-11-22 17:13:31 -08:00
Roope Astala
e7ce245674 Merge pull request #92 from dipankar-ray/master
updated pipeline notebooks with expanded tutorial
2018-11-22 10:15:55 -05:00
Dipankar Ray
ef5844fffd updated pipeline notebooks with expanded tutorial 2018-11-21 20:00:07 -08:00
Roope Astala
e039b98ee6 Merge pull request #91 from rastala/master
automl notebook update
2018-11-21 20:46:21 -05:00
rastala
05713689e0 automl notebook update 2018-11-21 20:45:17 -05:00
Roope Astala
7bb906b53c Merge pull request #87 from rastala/master
Update to version 0.1.80
2018-11-20 11:02:28 -05:00
rastala
5726fe3ddb Version 0.1.80 2018-11-20 11:00:48 -05:00
rastala
d10b1fa796 Revert "Updated notebook folders"
This reverts commit 06728004b6.
2018-11-20 10:39:48 -05:00
rastala
d7127de03c Revert "Update tutorials/README.md"
This reverts commit 50787f4ccc.
2018-11-20 10:39:34 -05:00
Roope Astala
50787f4ccc Update tutorials/README.md 2018-11-19 13:35:11 -05:00
Roope Astala
06728004b6 Updated notebook folders 2018-11-19 13:28:49 -05:00
Roope Astala
f5bcc55fe3 Merge pull request #74 from yueguoguo/master
Typo in README
2018-11-09 09:51:01 -05:00
Roope Astala
f23fb58200 Merge pull request #77 from rastala/master
Fix autoscale
2018-11-09 09:47:46 -05:00
Roope Astala
dbce7b8db2 Fix autoscase 2018-11-09 09:47:01 -05:00
Roope Astala
303090adf6 Merge pull request #76 from rastala/master
Update 00.configuration.ipynb
2018-11-09 09:33:07 -05:00
Roope Astala
b091d1f5f1 Update 00.configuration.ipynb
Create computes in 00.configuration, and link to tutorial
2018-11-09 09:31:25 -05:00
Hai Ning
803d69c539 Update 03.train-hyperparameter-tune-deploy-with-tensorflow.ipynb 2018-11-07 13:54:11 -05:00
Zhang Le
37848e9686 Merge pull request #1 from yueguoguo/yueguoguo-patch-1
Typo in README
2018-11-07 13:18:31 +08:00
Zhang Le
7d9227441e Typo in README
Typo of `psutil`.
2018-11-07 13:17:53 +08:00
Roope Astala
21c454b0f2 Merge pull request #72 from rastala/master
Add logging API notebook
2018-11-06 12:46:39 -05:00
Roope Astala
c7b0960ae4 Add logging API notebook 2018-11-06 12:46:05 -05:00
Roope Astala
14e11fefd6 Delete .gitignore 2018-11-06 12:31:53 -05:00
Roope Astala
4deaeb04cf Delete 05.train-in-spark-checkpoint.ipynb 2018-11-06 12:31:32 -05:00
Roope Astala
ee78323df2 Delete 03.train-on-aci-checkpoint.ipynb 2018-11-06 12:31:18 -05:00
Roope Astala
89c2622938 Delete 02.train-on-local-checkpoint.ipynb 2018-11-06 12:31:03 -05:00
Roope Astala
96b352e3be Delete 04.train-on-remote-vm-checkpoint.ipynb 2018-11-06 12:30:43 -05:00
Roope Astala
5280201f93 Merge pull request #70 from wchill/fix_macos_sigsegv
Fix segfault under certain conditions when running AutoML pipelines on MacOS
2018-11-05 19:04:14 -05:00
Eric Ahn
3825fd2c10 Fix segfault under certain conditions on MacOS 2018-11-05 15:06:38 -08:00
Roope Astala
b936dd3505 Merge pull request #69 from rastala/master
New SDK version 0.1.74
2018-11-05 15:28:40 -05:00
Roope Astala
7339c95ea0 New SDK version 2018-11-05 15:27:36 -05:00
Hai Ning
32102e2aac Update pipeline-batch-scoring.ipynb 2018-11-02 14:18:38 -04:00
Hai Ning
a043769197 Update pr.md 2018-10-29 22:23:49 -04:00
Hai Ning
a0f3727cf4 Update pr.md 2018-10-29 22:23:39 -04:00
Roope Astala
0e8b42f8c7 Delete snowleopardgaze.jpg 2018-10-26 16:53:47 -04:00
hning86
2daafdbca1 logging api sample 2018-10-26 14:02:05 -04:00
Roope Astala
fec2e97310 Merge pull request #62 from rastala/master
Fix link in 01 getting started
2018-10-26 10:27:42 -04:00
Roope Astala
1a79e53935 Fix link in 01 getting started 2018-10-26 10:26:38 -04:00
Hai Ning
900cc7a76b remove json.loads 2018-10-25 13:03:10 -04:00
Roope Astala
3148e52258 Merge pull request #60 from rastala/master
fix json output
2018-10-25 12:48:28 -04:00
Roope Astala
dda402db83 fix json output 2018-10-25 12:47:38 -04:00
Roope Astala
603f4a6434 Merge pull request #58 from rastala/master
Tutorial fixes
2018-10-24 13:47:05 -04:00
Roope Astala
114449dd9b Tutorial fixes 2018-10-24 13:45:15 -04:00
Roope Astala
de20b6c40e Merge pull request #55 from Azure/sdgilley-patch-1
Update 03.auto-train-models.ipynb
2018-10-22 12:43:20 -04:00
Hai Ning
886ece1089 Update pr.md 2018-10-22 11:23:49 -04:00
Sheri Gilley
0dfe00d05a Update 03.auto-train-models.ipynb
fix link
2018-10-22 10:04:46 -05:00
hning86
7a6fb8067f auto updated from HaiGPU 2018-10-22 01:50:11 -04:00
hning86
bb439ab2fd removed empty folder 2018-10-22 01:41:05 -04:00
hning86
ea3abdde4f auto updated from HaiGPU 2018-10-22 01:39:38 -04:00
Hai Ning
2e4eb8785c Update pr.md 2018-10-18 15:29:26 -04:00
Hai Ning
bfccb07dae Update pr.md 2018-10-18 15:27:36 -04:00
Hai Ning
94cd37e9fb Update README.md 2018-10-18 14:49:28 -04:00
Hai Ning
cdeb4dddab Update README.md 2018-10-18 14:47:44 -04:00
Hai Ning
e12637098a Update README.md 2018-10-18 14:47:19 -04:00
Hai Ning
d5f8811f4f YT cover 2018-10-18 14:46:08 -04:00
Hai Ning
92d36a2db4 Delete ytimg_png.PNG 2018-10-18 14:45:53 -04:00
Hai Ning
c5c76e8187 Update pr.md 2018-10-18 14:45:12 -04:00
Hai Ning
833d1d0f4e Update pr.md 2018-10-18 14:44:59 -04:00
Hai Ning
dd0c0264a2 Update README.md 2018-10-18 14:43:15 -04:00
Hai Ning
52368bad81 Update README.md 2018-10-18 14:42:48 -04:00
Hai Ning
604f6c18be Update README.md 2018-10-18 14:42:23 -04:00
Hai Ning
829bc297f2 Update README.md 2018-10-18 14:41:45 -04:00
Hai Ning
9e5101ea8c Update README.md 2018-10-18 14:41:34 -04:00
Hai Ning
37e96f2ad6 youtube cover 2018-10-18 14:40:17 -04:00
Roope Astala
d0c9bb330a Merge pull request #39 from cforbe/master
Adding dataprep notebook
2018-10-18 12:39:01 -04:00
Colleen Forbes
b4c7932640 Update README.md 2018-10-17 15:44:30 -07:00
Roope Astala
8fed628390 Merge pull request #53 from rastala/master
Update automl setup
2018-10-17 17:38:28 -04:00
rastala
d940aca06d Update automl setup 2018-10-17 17:37:01 -04:00
Hai Ning
beb97b1d9f Update README.md 2018-10-17 12:00:37 -04:00
Colleen
e7e9923cfb updating README.md 2018-10-03 16:46:51 -07:00
Colleen
b5482fcd4b Adding dataprep notebook 2018-10-03 09:58:55 -07:00
712 changed files with 114910 additions and 17018 deletions


@@ -1,7 +0,0 @@
.ipynb_checkpoints
azureml-logs
.azureml
.git
outputs
azureml-setup
docs


@@ -1,3 +0,0 @@
{
"python.pythonPath": "C:\\Users\\sgilley\\.azureml\\envs\\jan3\\python.exe"
}


@@ -1,236 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 00. Installation and configuration\n",
"This notebook configures your library of notebooks to connect to an Azure Machine Learning Workspace. In this case, a library contains all of the notebooks in the current folder and any nested folders. You can configure this notebook to use an existing workspace or create a new workspace.\n",
"\n",
"## What is an Azure ML Workspace and why do I need one?\n",
"\n",
"An AML Workspace is an Azure resource that organizes and coordinates the actions of many other Azure resources to assist in executing and sharing machine learning workflows. In particular, an AML Workspace coordinates storage, databases, and compute resources providing added functionality for machine learning experimentation, operationalization, and the monitoring of operationalized models."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 1. Access Azure Subscription\n",
"\n",
"In order to create an AML Workspace, first you need access to an Azure Subscription. You can [create your own](https://azure.microsoft.com/en-us/free/) or get your existing subscription information from the [Azure portal](https://portal.azure.com).\n",
"\n",
"### 2. If you're running on your own local environment, install Azure ML SDK and other libraries\n",
"\n",
"If you are running in your own environment, follow [SDK installation instructions](https://docs.microsoft.com/azure/machine-learning/service/how-to-configure-environment). If you are running in Azure Notebooks or another Microsoft managed environment, the SDK is already installed.\n",
"\n",
"Also install following libraries to your environment. Many of the example notebooks depend on them\n",
"\n",
"```\n",
"(myenv) $ conda install -y matplotlib tqdm scikit-learn\n",
"```\n",
"\n",
"Once installation is complete, check the Azure ML SDK version:"
]
},
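{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you are working in a local pip-based environment, the SDK itself is typically installed from PyPI as sketched below (skip this on Azure Notebooks or other managed environments where it is preinstalled, and restart the kernel after installing):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# install the Azure ML SDK from PyPI into the current environment;\n",
"# restart the notebook kernel afterwards so the new package is picked up\n",
"!pip install azureml-sdk"
]
},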
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"install"
]
},
"outputs": [],
"source": [
"import azureml.core\n",
"\n",
"print(\"SDK Version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3. Make sure your subscription is registered to use ACI\n",
"Azure Machine Learning makes use of Azure Container Instance (ACI). You need to ensure your subscription has been registered to use ACI in order be able to deploy a dev/test web service. If you have run through the quickstart experience you have already performed this step. Otherwise you will need to use the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) and execute the following commands.\n",
"\n",
"```shell\n",
"# check to see if ACI is already registered\n",
"(myenv) $ az provider show -n Microsoft.ContainerInstance -o table\n",
"\n",
"# if ACI is not registered, run this command.\n",
"# note you need to be the subscription owner in order to execute this command successfully.\n",
"(myenv) $ az provider register -n Microsoft.ContainerInstance\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set up your Azure Machine Learning workspace\n",
"\n",
"### Option 1: You have workspace already\n",
"If you ran the Azure Machine Learning [quickstart](https://docs.microsoft.com/en-us/azure/machine-learning/service/quickstart-get-started) in Azure Notebooks, you already have a configured workspace! You can go to your Azure Machine Learning Getting Started library, view *config.json* file, and copy-paste the values for subscription ID, resource group and workspace name below.\n",
"\n",
"If you have a workspace created another way, [these instructions](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-environment#create-workspace-configuration-file) describe how to get your subscription and workspace information.\n",
"\n",
"If this cell succeeds, you're done configuring this library! Otherwise continue to follow the instructions in the rest of the notebook."
]
},
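{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, the *config.json* written by `write_config()` is a small JSON file of the following shape (a sketch with placeholder values):\n",
"\n",
"```json\n",
"{\n",
"    \"subscription_id\": \"<subscription-id>\",\n",
"    \"resource_group\": \"<resource-group>\",\n",
"    \"workspace_name\": \"<workspace-name>\"\n",
"}\n",
"```"
]
},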
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"subscription_id ='<subscription-id>'\n",
"resource_group ='<resource-group>'\n",
"workspace_name = '<workspace-name>'\n",
"\n",
"try:\n",
" ws = Workspace(subscription_id = subscription_id, resource_group = resource_group, workspace_name = workspace_name)\n",
" ws.write_config()\n",
" print('Workspace configuration succeeded. You are all set!')\n",
"except:\n",
" print('Workspace not found. Run the cells below.')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Option 2: You don't have workspace yet\n",
"\n",
"\n",
"#### Requirements\n",
"\n",
"Inside your Azure subscription, you will need access to a _resource group_, which organizes Azure resources and provides a default region for the resources in a group. You can see what resource groups to which you have access, or create a new one in the [Azure portal](https://portal.azure.com). If you don't have a resource group, the create workspace command will create one for you using the name you provide.\n",
"\n",
"To create or access an Azure ML Workspace, you will need to import the AML library and the following information:\n",
"* A name for your workspace\n",
"* Your subscription id\n",
"* The resource group name\n",
"\n",
"**Note**: As with other Azure services, there are limits on certain resources (for eg. BatchAI cluster size) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Supported Azure Regions\n",
"Specify a region where your workspace will be located from the list of [Azure Machine Learning regions](https://linktoregions)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"workspace_region = \"eastus2\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"subscription_id = os.environ.get(\"SUBSCRIPTION_ID\", subscription_id)\n",
"resource_group = os.environ.get(\"RESOURCE_GROUP\", resource_group)\n",
"workspace_name = os.environ.get(\"WORKSPACE_NAME\", workspace_name)\n",
"workspace_region = os.environ.get(\"WORKSPACE_REGION\", workspace_region)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Create the workspace\n",
"This cell will create an AML workspace for you in a subscription provided you have the correct permissions.\n",
"\n",
"This will fail when:\n",
"1. You do not have permission to create a workspace in the resource group\n",
"2. You do not have permission to create a resource group if it's non-existing.\n",
"2. You are not a subscription owner or contributor and no Azure ML workspaces have ever been created in this subscription\n",
"\n",
"If workspace creation fails, please work with your IT admin to provide you with the appropriate permissions or to provision the required resources."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"create workspace"
]
},
"outputs": [],
"source": [
"# import the Workspace class and check the azureml SDK version\n",
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.create(name = workspace_name,\n",
" subscription_id = subscription_id,\n",
" resource_group = resource_group, \n",
" location = workspace_region,\n",
" create_resource_group = True,\n",
" exist_ok = True)\n",
"ws.get_details()\n",
"ws.write_config()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Success!\n",
"Great, you are ready to move on to the rest of the sample notebooks."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -1,816 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 01. Train in the Notebook & Deploy Model to ACI\n",
"\n",
"* Load workspace\n",
"* Train a simple regression model directly in the Notebook python kernel\n",
"* Record run history\n",
"* Find the best model in run history and download it.\n",
"* Deploy the model as an Azure Container Instance (ACI)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites\n",
"1. Make sure you go through the [00. Installation and Configuration](00.configuration.ipynb) Notebook first if you haven't. \n",
"\n",
"2. Install following pre-requisite libraries to your conda environment and restart notebook.\n",
"```shell\n",
"(myenv) $ conda install -y matplotlib tqdm scikit-learn\n",
"```\n",
"\n",
"3. Check that ACI is registered for your Azure Subscription. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!az provider show -n Microsoft.ContainerInstance -o table"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If ACI is not registered, run following command to register it. Note that you have to be a subscription owner, or this command will fail."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!az provider register -n Microsoft.ContainerInstance"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Validate Azure ML SDK installation and get version number for debugging purposes"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"install"
]
},
"outputs": [],
"source": [
"from azureml.core import Experiment, Run, Workspace\n",
"import azureml.core\n",
"\n",
"# Check core SDK version number\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize Workspace\n",
"\n",
"Initialize a workspace object from persisted configuration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"create workspace"
]
},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"print('Workspace name: ' + ws.name, \n",
" 'Azure region: ' + ws.location, \n",
" 'Subscription id: ' + ws.subscription_id, \n",
" 'Resource group: ' + ws.resource_group, sep='\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set experiment name\n",
"Choose a name for experiment."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"experiment_name = 'train-in-notebook'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Start a training run in local Notebook"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# load diabetes dataset, a well-known small dataset that comes with scikit-learn\n",
"from sklearn.datasets import load_diabetes\n",
"from sklearn.linear_model import Ridge\n",
"from sklearn.metrics import mean_squared_error\n",
"from sklearn.model_selection import train_test_split\n",
"from sklearn.externals import joblib\n",
"\n",
"X, y = load_diabetes(return_X_y = True)\n",
"columns = ['age', 'gender', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6']\n",
"X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)\n",
"data = {\n",
" \"train\":{\"X\": X_train, \"y\": y_train}, \n",
" \"test\":{\"X\": X_test, \"y\": y_test}\n",
"}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Train a simple Ridge model\n",
"Train a very simple Ridge regression model in scikit-learn, and save it as a pickle file."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"reg = Ridge(alpha = 0.03)\n",
"reg.fit(X=data['train']['X'], y=data['train']['y'])\n",
"preds = reg.predict(data['test']['X'])\n",
"print('Mean Squared Error is', mean_squared_error(data['test']['y'], preds))\n",
"joblib.dump(value=reg, filename='model.pkl');"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Add experiment tracking\n",
"Now, let's add Azure ML experiment logging, and upload persisted model into run record as well."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"local run",
"outputs upload"
]
},
"outputs": [],
"source": [
"experiment = Experiment(workspace=ws, name=experiment_name)\n",
"run = experiment.start_logging()\n",
"\n",
"run.tag(\"Description\",\"My first run!\")\n",
"run.log('alpha', 0.03)\n",
"reg = Ridge(alpha=0.03)\n",
"reg.fit(data['train']['X'], data['train']['y'])\n",
"preds = reg.predict(data['test']['X'])\n",
"run.log('mse', mean_squared_error(data['test']['y'], preds))\n",
"joblib.dump(value=reg, filename='model.pkl')\n",
"run.upload_file(name='outputs/model.pkl', path_or_stream='./model.pkl')\n",
"\n",
"run.complete()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can browse to the recorded run. Please make sure you use Chrome to navigate the run history page."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Simple parameter sweep\n",
"Sweep over alpha values of a sklearn ridge model, and capture metrics and trained model in the Azure ML experiment."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"import os\n",
"from tqdm import tqdm\n",
"\n",
"model_name = \"model.pkl\"\n",
"\n",
"# list of numbers from 0 to 1.0 with a 0.05 interval\n",
"alphas = np.arange(0.0, 1.0, 0.05)\n",
"\n",
"# try a bunch of alpha values in a Linear Regression (Ridge) model\n",
"for alpha in tqdm(alphas):\n",
" # create a bunch of runs, each train a model with a different alpha value\n",
" with experiment.start_logging() as run:\n",
" # Use Ridge algorithm to build a regression model\n",
" reg = Ridge(alpha=alpha)\n",
" reg.fit(X=data[\"train\"][\"X\"], y=data[\"train\"][\"y\"])\n",
" preds = reg.predict(X=data[\"test\"][\"X\"])\n",
" mse = mean_squared_error(y_true=data[\"test\"][\"y\"], y_pred=preds)\n",
"\n",
" # log alpha, mean_squared_error and feature names in run history\n",
" run.log(name=\"alpha\", value=alpha)\n",
" run.log(name=\"mse\", value=mse)\n",
" run.log_list(name=\"columns\", value=columns)\n",
"\n",
" with open(model_name, \"wb\") as file:\n",
" joblib.dump(value=reg, filename=file)\n",
" \n",
" # upload the serialized model into run history record\n",
" run.upload_file(name=\"outputs/\" + model_name, path_or_stream=model_name)\n",
"\n",
" # now delete the serialized model from local folder since it is already uploaded to run history \n",
" os.remove(path=model_name)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# now let's take a look at the experiment in Azure portal.\n",
"experiment"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Select best model from the experiment\n",
"Load all experiment run metrics recursively from the experiment into a dictionary object."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"runs = {}\n",
"run_metrics = {}\n",
"\n",
"for r in tqdm(experiment.get_runs()):\n",
" metrics = r.get_metrics()\n",
" if 'mse' in metrics.keys():\n",
" runs[r.id] = r\n",
" run_metrics[r.id] = metrics"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now find the run with the lowest Mean Squared Error value"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run_id = min(run_metrics, key = lambda k: run_metrics[k]['mse'])\n",
"best_run = runs[best_run_id]\n",
"print('Best run is:', best_run_id)\n",
"print('Metrics:', run_metrics[best_run_id])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can add tags to your runs to make them easier to catalog"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"query history"
]
},
"outputs": [],
"source": [
"best_run.tag(key=\"Description\", value=\"The best one\")\n",
"best_run.get_tags()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Plot MSE over alpha\n",
"\n",
"Let's observe the best model visually by plotting the MSE values over alpha values:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%matplotlib inline\n",
"import matplotlib\n",
"import matplotlib.pyplot as plt\n",
"\n",
"best_alpha = run_metrics[best_run_id]['alpha']\n",
"min_mse = run_metrics[best_run_id]['mse']\n",
"\n",
"alpha_mse = np.array([(run_metrics[k]['alpha'], run_metrics[k]['mse']) for k in run_metrics.keys()])\n",
"sorted_alpha_mse = alpha_mse[alpha_mse[:,0].argsort()]\n",
"\n",
"plt.plot(sorted_alpha_mse[:,0], sorted_alpha_mse[:,1], 'r--')\n",
"plt.plot(sorted_alpha_mse[:,0], sorted_alpha_mse[:,1], 'bo')\n",
"\n",
"plt.xlabel('alpha', fontsize = 14)\n",
"plt.ylabel('mean squared error', fontsize = 14)\n",
"plt.title('MSE over alpha', fontsize = 16)\n",
"\n",
"# plot arrow\n",
"plt.arrow(x = best_alpha, y = min_mse + 39, dx = 0, dy = -26, ls = '-', lw = 0.4,\n",
" width = 0, head_width = .03, head_length = 8)\n",
"\n",
"# plot \"best run\" text\n",
"plt.text(x = best_alpha - 0.08, y = min_mse + 50, s = 'Best Run', fontsize = 14)\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Register the best model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Find the model file saved in the run record of best run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"query history"
]
},
"outputs": [],
"source": [
"for f in best_run.get_file_names():\n",
" print(f)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we can register this model in the model registry of the workspace"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"register model from history"
]
},
"outputs": [],
"source": [
"model = best_run.register_model(model_name='best_model', model_path='outputs/model.pkl')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Verify that the model has been registered properly. If you have done this several times you'd see the version number auto-increases each time."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"register model from history"
]
},
"outputs": [],
"source": [
"from azureml.core.model import Model\n",
"models = Model.list(workspace=ws, name='best_model')\n",
"for m in models:\n",
" print(m.name, m.version)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also download the registered model. Afterwards, you should see a `model.pkl` file in the current directory. You can then use it for local testing if you'd like."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"download file"
]
},
"outputs": [],
"source": [
"# remove the model file if it is already on disk\n",
"if os.path.isfile('model.pkl'): \n",
" os.remove('model.pkl')\n",
"# download the model\n",
"model.download(target_dir=\"./\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Scoring script\n",
"\n",
"Now we are ready to build a Docker image and deploy the model in it as a web service. The first step is creating the scoring script. For convenience, we have created the scoring script for you. It is printed below as text, but you can also run `%pfile ./score.py` in a cell to show the file.\n",
"\n",
"Tbe scoring script consists of two functions: `init` that is used to load the model to memory when starting the container, and `run` that makes the prediction when web service is called. Please pay special attention to how the model is loaded in the `init()` function. When Docker image is built for this model, the actual model file is downloaded and placed on disk, and `get_model_path` function returns the local path where the model is placed."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"with open('./score.py', 'r') as scoring_script:\n",
" print(scoring_script.read())"
]
},
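{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, a minimal sketch of what such a scoring script can look like (an illustration, not the shipped `score.py`; the model name `best_model` matches the registration step above, and the input/output format matches the `{\"data\": [...]}` payloads used later in this notebook):\n",
"\n",
"```python\n",
"import json\n",
"import numpy as np\n",
"from sklearn.externals import joblib\n",
"from azureml.core.model import Model\n",
"\n",
"def init():\n",
"    # called once when the container starts: load the model into memory\n",
"    global model\n",
"    model_path = Model.get_model_path('best_model')\n",
"    model = joblib.load(model_path)\n",
"\n",
"def run(raw_data):\n",
"    # called on every request: parse the JSON payload and return predictions\n",
"    try:\n",
"        data = np.array(json.loads(raw_data)['data'])\n",
"        result = model.predict(data)\n",
"        return json.dumps({'result': result.tolist()})\n",
"    except Exception as e:\n",
"        return json.dumps({'error': str(e)})\n",
"```"
]
},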
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create environment dependency file\n",
"\n",
"We need a environment dependency file `myenv.yml` to specify which libraries are needed by the scoring script when building the Docker image for web service deployment. We can manually create this file, or we can use the `CondaDependencies` API to automatically create this file."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.conda_dependencies import CondaDependencies \n",
"\n",
"myenv = CondaDependencies()\n",
"myenv.add_conda_package(\"scikit-learn\")\n",
"print(myenv.serialize_to_string())\n",
"\n",
"with open(\"myenv.yml\",\"w\") as f:\n",
" f.write(myenv.serialize_to_string())"
]
},
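{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you prefer to write the file by hand, the serialized output is plain conda YAML. It resembles the sketch below, although the default pip packages and pinned versions emitted by `CondaDependencies` vary by SDK version:\n",
"\n",
"```yaml\n",
"name: project_environment\n",
"dependencies:\n",
"  - python=3.6.2\n",
"  - scikit-learn\n",
"```"
]
},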
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deploy web service into an Azure Container Instance\n",
"The deployment process takes the registered model and your scoring scrip, and builds a Docker image. It then deploys the Docker image into Azure Container Instance as a running container with an HTTP endpoint readying for scoring calls. Read more about [Azure Container Instance](https://azure.microsoft.com/en-us/services/container-instances/).\n",
"\n",
"Note ACI is great for quick and cost-effective dev/test deployment scenarios. For production workloads, please use [Azure Kubernentes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/) instead. Please follow in struction in [this notebook](11.production-deploy-to-aks.ipynb) to see how that can be done from Azure ML.\n",
" \n",
"** Note: ** The web service creation can take 6-7 minutes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"deploy service",
"aci"
]
},
"outputs": [],
"source": [
"from azureml.core.webservice import AciWebservice, Webservice\n",
"\n",
"aciconfig = AciWebservice.deploy_configuration(cpu_cores=1, \n",
" memory_gb=1, \n",
" tags={'sample name': 'AML 101'}, \n",
" description='This is a great example.')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note the below `WebService.deploy_from_model()` function takes a model object registered under the workspace. It then bakes the model file in the Docker image so it can be looked-up using the `Model.get_model_path()` function in `score.py`. \n",
"\n",
"If you have a local model file instead of a registered model object, you can also use the `WebService.deploy()` function which would register the model and then deploy."
]
},
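{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of that one-step path is shown below, commented out. The parameter names, in particular `model_paths`, follow the pattern of this SDK generation but are an assumption here, so verify them against your installed version before use:\n",
"```python\n",
"# service = Webservice.deploy(name='my-aci-svc-2',\n",
"#                             deployment_config=aciconfig,\n",
"#                             model_paths=['model.pkl'],\n",
"#                             image_config=image_config,\n",
"#                             workspace=ws)\n",
"```"
]
},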
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"deploy service",
"aci"
]
},
"outputs": [],
"source": [
"from azureml.core.image import ContainerImage\n",
"image_config = ContainerImage.image_configuration(execution_script=\"score.py\", \n",
" runtime=\"python\", \n",
" conda_file=\"myenv.yml\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"deploy service",
"aci"
]
},
"outputs": [],
"source": [
"%%time\n",
"# this will take 5-10 minutes to finish\n",
"# you can also use \"az container list\" command to find the ACI being deployed\n",
"service = Webservice.deploy_from_model(name='my-aci-svc',\n",
" deployment_config=aciconfig,\n",
" models=[model],\n",
" image_config=image_config,\n",
" workspace=ws)\n",
"\n",
"service.wait_for_deployment(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"## Test web service"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"deploy service",
"aci"
]
},
"outputs": [],
"source": [
"print('web service is hosted in ACI:', service.scoring_uri)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Use the `run` API to call the web service with one row of data to get a prediction."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"deploy service",
"aci"
]
},
"outputs": [],
"source": [
"import json\n",
"# score the first row from the test set.\n",
"test_samples = json.dumps({\"data\": X_test[0:1, :].tolist()})\n",
"service.run(input_data = test_samples)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Feed the entire test set and calculate the errors (residual values)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"deploy service",
"aci"
]
},
"outputs": [],
"source": [
"# score the entire test set.\n",
"test_samples = json.dumps({'data': X_test.tolist()})\n",
"\n",
"result = json.loads(service.run(input_data = test_samples))['result']\n",
"residual = result - y_test"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also send raw HTTP request to test the web service."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"deploy service",
"aci"
]
},
"outputs": [],
"source": [
"import requests\n",
"import json\n",
"\n",
"# 2 rows of input data, each with 10 made-up numerical features\n",
"input_data = \"{\\\"data\\\": [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10], [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]]}\"\n",
"\n",
"headers = {'Content-Type':'application/json'}\n",
"\n",
"# for AKS deployment you'd need to the service key in the header as well\n",
"# api_key = service.get_key()\n",
"# headers = {'Content-Type':'application/json', 'Authorization':('Bearer '+ api_key)} \n",
"\n",
"resp = requests.post(service.scoring_uri, input_data, headers = headers)\n",
"print(resp.text)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Residual graph\n",
"Plot a residual value graph to chart the errors on the entire test set. Observe the nice bell curve."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"f, (a0, a1) = plt.subplots(1, 2, gridspec_kw={'width_ratios':[3, 1], 'wspace':0, 'hspace': 0})\n",
"f.suptitle('Residual Values', fontsize = 18)\n",
"\n",
"f.set_figheight(6)\n",
"f.set_figwidth(14)\n",
"\n",
"a0.plot(residual, 'bo', alpha=0.4);\n",
"a0.plot([0,90], [0,0], 'r', lw=2)\n",
"a0.set_ylabel('residue values', fontsize=14)\n",
"a0.set_xlabel('test data set', fontsize=14)\n",
"\n",
"a1.hist(residual, orientation='horizontal', color='blue', bins=10, histtype='step');\n",
"a1.hist(residual, orientation='horizontal', color='blue', alpha=0.2, bins=10);\n",
"a1.set_yticklabels([])\n",
"\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Delete ACI to clean up"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Deleting ACI is super fast!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"deploy service",
"aci"
]
},
"outputs": [],
"source": [
"%%time\n",
"service.delete()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"authors": [
{
"name": "roastala"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.4"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -1,27 +0,0 @@
import pickle
import json
import numpy as np
from sklearn.externals import joblib
from sklearn.linear_model import Ridge
from azureml.core.model import Model


def init():
    global model
    # note here "best_model" is the name of the model registered under the workspace
    # this call should return the path to the model.pkl file on the local disk.
    model_path = Model.get_model_path(model_name='best_model')
    # deserialize the model file back into a sklearn model
    model = joblib.load(model_path)


# note you can pass in multiple rows for scoring
def run(raw_data):
    try:
        data = json.loads(raw_data)['data']
        data = np.array(data)
        result = model.predict(data)
        return json.dumps({"result": result.tolist()})
    except Exception as e:
        result = str(e)
        return json.dumps({"error": result})


@@ -1,470 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 02. Train locally\n",
"* Create or load workspace.\n",
"* Create scripts locally.\n",
"* Create `train.py` in a folder, along with a `my.lib` file.\n",
"* Configure & execute a local run in a user-managed Python environment.\n",
"* Configure & execute a local run in a system-managed Python environment.\n",
"* Configure & execute a local run in a Docker environment.\n",
"* Query run metrics to find the best model\n",
"* Register model for operationalization."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites\n",
"Make sure you go through the [00. Installation and Configuration](00.configuration.ipynb) Notebook first if you haven't."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Check core SDK version number\n",
"import azureml.core\n",
"\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize Workspace\n",
"\n",
"Initialize a workspace object from persisted configuration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.workspace import Workspace\n",
"\n",
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create An Experiment\n",
"**Experiment** is a logical container in an Azure ML Workspace. It hosts run records which can include run metrics and output artifacts from your experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Experiment\n",
"experiment_name = 'train-on-local'\n",
"exp = Experiment(workspace=ws, name=experiment_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## View `train.py`\n",
"\n",
"`train.py` is already created for you."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"with open('./train.py', 'r') as f:\n",
" print(f.read())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note `train.py` also references a `mylib.py` file."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"with open('./mylib.py', 'r') as f:\n",
" print(f.read())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure & Run\n",
"### User-managed environment\n",
"Below, we use a user-managed run, which means you are responsible to ensure all the necessary packages are available in the Python environment you choose to run the script."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.runconfig import RunConfiguration\n",
"\n",
"# Editing a run configuration property on-fly.\n",
"run_config_user_managed = RunConfiguration()\n",
"\n",
"run_config_user_managed.environment.python.user_managed_dependencies = True\n",
"\n",
"# You can choose a specific Python environment by pointing to a Python path \n",
"#run_config.environment.python.interpreter_path = '/home/johndoe/miniconda3/envs/sdk2/bin/python'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Submit script to run in the user-managed environment\n",
"Note whole script folder is submitted for execution, including the `mylib.py` file."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import ScriptRunConfig\n",
"\n",
"src = ScriptRunConfig(source_directory='./', script='train.py', run_config=run_config_user_managed)\n",
"run = exp.submit(src)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Get run history details"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Block to wait till run finishes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### System-managed environment\n",
"You can also ask the system to build a new conda environment and execute your scripts in it. The environment is built once and will be reused in subsequent executions as long as the conda dependencies remain unchanged. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"\n",
"run_config_system_managed = RunConfiguration()\n",
"\n",
"run_config_system_managed.environment.python.user_managed_dependencies = False\n",
"run_config_system_managed.auto_prepare_environment = True\n",
"\n",
"# Specify conda dependencies with scikit-learn\n",
"cd = CondaDependencies.create(conda_packages=['scikit-learn'])\n",
"run_config_system_managed.environment.python.conda_dependencies = cd"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Submit script to run in the system-managed environment\n",
"A new conda environment is built based on the conda dependencies object. If you are running this for the first time, this might take up to 5 mninutes. But this conda environment is reused so long as you don't change the conda dependencies."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"src = ScriptRunConfig(source_directory=\"./\", script='train.py', run_config=run_config_system_managed)\n",
"run = exp.submit(src)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Get run history details"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Block and wait till run finishes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run.wait_for_completion(show_output = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Docker-based execution\n",
"**IMPORTANT**: You must have Docker engine installed locally in order to use this execution mode. If your kernel is already running in a Docker container, such as **Azure Notebooks**, this mode will **NOT** work.\n",
"\n",
"You can also ask the system to pull down a Docker image and execute your scripts in it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"\n",
"run_config_docker = RunConfiguration()\n",
"\n",
"run_config_docker.environment.python.user_managed_dependencies = False\n",
"run_config_docker.auto_prepare_environment = True\n",
"run_config_docker.environment.docker.enabled = True\n",
"run_config_docker.environment.docker.base_image = azureml.core.runconfig.DEFAULT_CPU_IMAGE\n",
"\n",
"# Specify conda dependencies with scikit-learn\n",
"cd = CondaDependencies.create(conda_packages=['scikit-learn'])\n",
"run_config_docker.environment.python.conda_dependencies = cd"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Submit script to run in the system-managed environment\n",
"A new conda environment is built based on the conda dependencies object. If you are running this for the first time, this might take up to 5 mninutes. But this conda environment is reused so long as you don't change the conda dependencies.\n",
"\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"src = ScriptRunConfig(source_directory=\"./\", script='train.py', run_config=run_config_docker)\n",
"run = exp.submit(src)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#Get run history details\n",
"run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Query run metrics"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"query history",
"get metrics"
]
},
"outputs": [],
"source": [
"# get all metris logged in the run\n",
"run.get_metrics()\n",
"metrics = run.get_metrics()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's find the model that has the lowest MSE value logged."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"best_alpha = metrics['alpha'][np.argmin(metrics['mse'])]\n",
"\n",
"print('When alpha is {1:0.2f}, we have min MSE {0:0.2f}.'.format(\n",
" min(metrics['mse']), \n",
" best_alpha\n",
"))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also list all the files that are associated with this run record"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run.get_file_names()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We know the model `ridge_0.40.pkl` is the best performing model from the eariler queries. So let's register it with the workspace."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# supply a model name, and the full path to the serialized model file.\n",
"model = run.register_model(model_name='best_ridge_model', model_path='./outputs/ridge_0.40.pkl')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(model.name, model.version, model.url)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now you can deploy this model following the example in the 01 notebook."
]
}
],
"metadata": {
"authors": [
{
"name": "roastala"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -1,45 +0,0 @@
# Copyright (c) Microsoft. All rights reserved.
# Licensed under the MIT license.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from azureml.core.run import Run
from sklearn.externals import joblib
import os
import numpy as np

import mylib

os.makedirs('./outputs', exist_ok=True)

X, y = load_diabetes(return_X_y=True)

run = Run.get_context()

X_train, X_test, y_train, y_test = train_test_split(X, y,
                                                    test_size=0.2,
                                                    random_state=0)
data = {"train": {"X": X_train, "y": y_train},
        "test": {"X": X_test, "y": y_test}}

# list of numbers from 0.0 to 1.0 with a 0.05 interval
alphas = mylib.get_alphas()

for alpha in alphas:
    # Use Ridge algorithm to create a regression model
    reg = Ridge(alpha=alpha)
    reg.fit(data["train"]["X"], data["train"]["y"])

    preds = reg.predict(data["test"]["X"])
    mse = mean_squared_error(preds, data["test"]["y"])
    run.log('alpha', alpha)
    run.log('mse', mse)

    model_file_name = 'ridge_{0:.2f}.pkl'.format(alpha)
    # save model in the outputs folder so it automatically gets uploaded
    joblib.dump(value=reg, filename=os.path.join('./outputs/',
                                                 model_file_name))

    print('alpha is {0:.2f}, and mse is {1:0.2f}'.format(alpha, mse))
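# For reference, a minimal sketch of the mylib.py helper imported above,
# reconstructed from the comment on get_alphas(); the actual file may differ:
#
#     import numpy as np
#
#     def get_alphas():
#         # list of numbers from 0.0 to 1.0 with a 0.05 interval
#         return np.arange(0.0, 1.0, 0.05)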


@@ -1,289 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 03. Train on Azure Container Instance\n",
"\n",
"* Create Workspace\n",
"* Create `train.py` in the project folder.\n",
"* Configure an ACI (Azure Container Instance) run\n",
"* Execute in ACI"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites\n",
"Make sure you go through the [00. Installation and Configuration](00.configuration.ipynb) Notebook first if you haven't."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Check core SDK version number\n",
"import azureml.core\n",
"\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize Workspace\n",
"\n",
"Initialize a workspace object from persisted configuration"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"create workspace"
]
},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create An Experiment\n",
"\n",
"**Experiment** is a logical container in an Azure ML Workspace. It hosts run records which can include run metrics and output artifacts from your experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Experiment\n",
"experiment_name = 'train-on-aci'\n",
"experiment = Experiment(workspace = ws, name = experiment_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Remote execution on ACI\n",
"\n",
"The training script `train.py` is already created for you. Let's have a look."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"with open('./train.py', 'r') as f:\n",
" print(f.read())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure for using ACI\n",
"Linux-based ACI is available in `West US`, `East US`, `West Europe`, `North Europe`, `West US 2`, `Southeast Asia`, `Australia East`, `East US 2`, and `Central US` regions. See details [here](https://docs.microsoft.com/en-us/azure/container-instances/container-instances-quotas#region-availability)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"configure run"
]
},
"outputs": [],
"source": [
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"\n",
"# create a new runconfig object\n",
"run_config = RunConfiguration()\n",
"\n",
"# signal that you want to use ACI to execute script.\n",
"run_config.target = \"containerinstance\"\n",
"\n",
"# ACI container group is only supported in certain regions, which can be different than the region the Workspace is in.\n",
"run_config.container_instance.region = 'eastus2'\n",
"\n",
"# set the ACI CPU and Memory \n",
"run_config.container_instance.cpu_cores = 1\n",
"run_config.container_instance.memory_gb = 2\n",
"\n",
"# enable Docker \n",
"run_config.environment.docker.enabled = True\n",
"\n",
"# set Docker base image to the default CPU-based image\n",
"run_config.environment.docker.base_image = azureml.core.runconfig.DEFAULT_CPU_IMAGE\n",
"\n",
"# use conda_dependencies.yml to create a conda environment in the Docker image for execution\n",
"run_config.environment.python.user_managed_dependencies = False\n",
"\n",
"# auto-prepare the Docker image when used for execution (if it is not already prepared)\n",
"run_config.auto_prepare_environment = True\n",
"\n",
"# specify CondaDependencies obj\n",
"run_config.environment.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn'])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Submit the Experiment\n",
"Finally, run the training job on the ACI"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"remote run",
"aci"
]
},
"outputs": [],
"source": [
"%%time \n",
"from azureml.core.script_run_config import ScriptRunConfig\n",
"\n",
"script_run_config = ScriptRunConfig(source_directory='./',\n",
" script='train.py',\n",
" run_config=run_config)\n",
"\n",
"run = experiment.submit(script_run_config)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"query history"
]
},
"outputs": [],
"source": [
"# Show run details\n",
"run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"remote run",
"aci"
]
},
"outputs": [],
"source": [
"%%time\n",
"# Shows output of the run on stdout.\n",
"run.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"get metrics"
]
},
"outputs": [],
"source": [
"# get all metris logged in the run\n",
"run.get_metrics()\n",
"metrics = run.get_metrics()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"print('When alpha is {1:0.2f}, we have min MSE {0:0.2f}.'.format(\n",
" min(metrics['mse']), \n",
" metrics['alpha'][np.argmin(metrics['mse'])]\n",
"))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# show all the files stored within the run record\n",
"run.get_file_names()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now you can take a model produced here, register it and then deploy as a web service."
]
}
],
"metadata": {
"authors": [
{
"name": "roastala"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -1,425 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 10. Register Model, Create Image and Deploy Service\n",
"\n",
"This example shows how to deploy a web service in step-by-step fashion:\n",
"\n",
" 1. Register model\n",
" 2. Query versions of models and select one to deploy\n",
" 3. Create Docker image\n",
" 4. Query versions of images\n",
" 5. Deploy the image as web service\n",
" \n",
"**IMPORTANT**:\n",
" * This notebook requires you to first complete \"01.SDK-101-Train-and-Deploy-to-ACI.ipynb\" Notebook\n",
" \n",
"The 101 Notebook taught you how to deploy a web service directly from model in one step. This Notebook shows a more advanced approach that gives you more control over model versions and Docker image versions. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites\n",
"Make sure you go through the [00. Installation and Configuration](00.configuration.ipynb) Notebook first if you haven't."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Check core SDK version number\n",
"import azureml.core\n",
"\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize Workspace\n",
"\n",
"Initialize a workspace object from persisted configuration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"create workspace"
]
},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Register Model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can add tags and descriptions to your models. Note you need to have a `sklearn_linreg_model.pkl` file in the current directory. This file is generated by the 01 notebook. The below call registers that file as a model with the same name `sklearn_linreg_model.pkl` in the workspace.\n",
"\n",
"Using tags, you can track useful information such as the name and version of the machine learning library used to train the model. Note that tags must be alphanumeric."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"register model from file"
]
},
"outputs": [],
"source": [
"from azureml.core.model import Model\n",
"import sklearn\n",
"\n",
"library_version = \"sklearn\"+sklearn.__version__.replace(\".\",\"x\")\n",
"\n",
"model = Model.register(model_path = \"sklearn_regression_model.pkl\",\n",
" model_name = \"sklearn_regression_model.pkl\",\n",
" tags = {'area': \"diabetes\", 'type': \"regression\", 'version': library_version},\n",
" description = \"Ridge regression model to predict diabetes\",\n",
" workspace = ws)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can explore the registered models within your workspace and query by tag. Models are versioned. If you call the register_model command many times with same model name, you will get multiple versions of the model with increasing version numbers."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"register model from file"
]
},
"outputs": [],
"source": [
"regression_models = Model.list(workspace=ws, tags=['area'])\n",
"for m in regression_models:\n",
" print(\"Name:\", m.name,\"\\tVersion:\", m.version, \"\\tDescription:\", m.description, m.tags)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can pick a specific model to deploy"
]
},
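{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, a specific version can be fetched by name and version number (a sketch; `version=1` is an illustrative value):\n",
"```python\n",
"from azureml.core.model import Model\n",
"\n",
"model = Model(workspace=ws, name='sklearn_regression_model.pkl', version=1)\n",
"```"
]
},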
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(model.name, model.description, model.version, sep = '\\t')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create Docker Image"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Show `score.py`. Note that the `sklearn_regression_model.pkl` in the `get_model_path` call is referring to a model named `sklearn_linreg_model.pkl` registered under the workspace. It is NOT referenceing the local file."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile score.py\n",
"import pickle\n",
"import json\n",
"import numpy\n",
"from sklearn.externals import joblib\n",
"from sklearn.linear_model import Ridge\n",
"from azureml.core.model import Model\n",
"\n",
"def init():\n",
" global model\n",
" # note here \"sklearn_regression_model.pkl\" is the name of the model registered under\n",
" # this is a different behavior than before when the code is run locally, even though the code is the same.\n",
" model_path = Model.get_model_path('sklearn_regression_model.pkl')\n",
" # deserialize the model file back into a sklearn model\n",
" model = joblib.load(model_path)\n",
"\n",
"# note you can pass in multiple rows for scoring\n",
"def run(raw_data):\n",
" try:\n",
" data = json.loads(raw_data)['data']\n",
" data = numpy.array(data)\n",
" result = model.predict(data)\n",
" except Exception as e:\n",
" result = str(e)\n",
" return json.dumps({\"result\": result.tolist()})"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.conda_dependencies import CondaDependencies \n",
"\n",
"myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn'])\n",
"\n",
"with open(\"myenv.yml\",\"w\") as f:\n",
" f.write(myenv.serialize_to_string())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that following command can take few minutes. \n",
"\n",
"You can add tags and descriptions to images. Also, an image can contain multiple models."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"create image"
]
},
"outputs": [],
"source": [
"from azureml.core.image import Image, ContainerImage\n",
"\n",
"image_config = ContainerImage.image_configuration(runtime= \"python\",\n",
" execution_script=\"score.py\",\n",
" conda_file=\"myenv.yml\",\n",
" tags = {'area': \"diabetes\", 'type': \"regression\"},\n",
" description = \"Image with ridge regression model\")\n",
"\n",
"image = Image.create(name = \"myimage1\",\n",
" # this is the model object \n",
" models = [model],\n",
" image_config = image_config, \n",
" workspace = ws)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"create image"
]
},
"outputs": [],
"source": [
"image.wait_for_creation(show_output = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"List images by tag and find out the detailed build log for debugging."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"create image"
]
},
"outputs": [],
"source": [
"for i in Image.list(workspace = ws,tags = [\"area\"]):\n",
" print('{}(v.{} [{}]) stored at {} with build log {}'.format(i.name, i.version, i.creation_state, i.image_location, i.image_build_log_uri))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Deploy image as web service on Azure Container Instance\n",
"\n",
"Note that the service creation can take few minutes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"deploy service",
"aci"
]
},
"outputs": [],
"source": [
"from azureml.core.webservice import AciWebservice\n",
"\n",
"aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1, \n",
" memory_gb = 1, \n",
" tags = {'area': \"diabetes\", 'type': \"regression\"}, \n",
" description = 'Predict diabetes using regression model')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"deploy service",
"aci"
]
},
"outputs": [],
"source": [
"from azureml.core.webservice import Webservice\n",
"\n",
"aci_service_name = 'my-aci-service-2'\n",
"print(aci_service_name)\n",
"aci_service = Webservice.deploy_from_image(deployment_config = aciconfig,\n",
" image = image,\n",
" name = aci_service_name,\n",
" workspace = ws)\n",
"aci_service.wait_for_deployment(True)\n",
"print(aci_service.state)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Test web service"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Call the web service with some dummy input data to get a prediction."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"deploy service",
"aci"
]
},
"outputs": [],
"source": [
"import json\n",
"\n",
"test_sample = json.dumps({'data': [\n",
" [1,2,3,4,5,6,7,8,9,10], \n",
" [10,9,8,7,6,5,4,3,2,1]\n",
"]})\n",
"test_sample = bytes(test_sample,encoding = 'utf8')\n",
"\n",
"prediction = aci_service.run(input_data = test_sample)\n",
"print(prediction)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Delete ACI to clean up"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"deploy service",
"aci"
]
},
"outputs": [],
"source": [
"aci_service.delete()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"authors": [
{
"name": "raymondl"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -1,452 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Enabling Data Collection for Models in Production\n",
"With this notebook, you can learn how to collect input model data from your Azure Machine Learning service in an Azure Blob storage. Once enabled, this data collected gives you the opportunity:\n",
"\n",
"* Monitor data drifts as production data enters your model\n",
"* Make better decisions on when to retrain or optimize your model\n",
"* Retrain your model with the data collected\n",
"\n",
"## What data is collected?\n",
"* Model input data (voice, images, and video are not supported) from services deployed in Azure Kubernetes Cluster (AKS)\n",
"* Model predictions using production input data.\n",
"\n",
"**Note:** pre-aggregation or pre-calculations on this data are done by user and not included in this version of the product.\n",
"\n",
"## What is different compared to standard production deployment process?\n",
"1. Update scoring file.\n",
"2. Update yml file with new dependency.\n",
"3. Update aks configuration.\n",
"4. Build new image and deploy it. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1. Import your dependencies"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace, Run\n",
"from azureml.core.compute import AksCompute, ComputeTarget\n",
"from azureml.core.webservice import Webservice, AksWebservice\n",
"from azureml.core.image import Image\n",
"from azureml.core.model import Model\n",
"\n",
"import azureml.core\n",
"print(azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2. Set up your configuration and create a workspace\n",
"Follow Notebook 00 instructions to do this.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 3. Register Model\n",
"Register an existing trained model, add descirption and tags."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#Register the model\n",
"from azureml.core.model import Model\n",
"model = Model.register(model_path = \"sklearn_regression_model.pkl\", # this points to a local file\n",
" model_name = \"sklearn_regression_model.pkl\", # this is the name the model is registered as\n",
" tags = {'area': \"diabetes\", 'type': \"regression\"},\n",
" description = \"Ridge regression model to predict diabetes\",\n",
" workspace = ws)\n",
"\n",
"print(model.name, model.description, model.version)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 4. *Update your scoring file with Data Collection*\n",
"The file below, compared to the file used in notebook 11, has the following changes:\n",
"### a. Import the module\n",
"```python \n",
"from azureml.monitoring import ModelDataCollector```\n",
"### b. In your init function add:\n",
"```python \n",
"global inputs_dc, prediction_d\n",
"inputs_dc = ModelDataCollector(\"best_model\", identifier=\"inputs\", feature_names=[\"feat1\", \"feat2\", \"feat3\", \"feat4\", \"feat5\", \"Feat6\"])\n",
"prediction_dc = ModelDataCollector(\"best_model\", identifier=\"predictions\", feature_names=[\"prediction1\", \"prediction2\"])```\n",
" \n",
"* Identifier: Identifier is later used for building the folder structure in your Blob, it can be used to divide \"raw\" data versus \"processed\".\n",
"* CorrelationId: is an optional parameter, you do not need to set it up if your model doesn't require it. Having a correlationId in place does help you for easier mapping with other data. (Examples include: LoanNumber, CustomerId, etc.)\n",
"* Feature Names: These need to be set up in the order of your features in order for them to have column names when the .csv is created.\n",
"\n",
"### c. In your run function add:\n",
"```python\n",
"inputs_dc.collect(data)\n",
"prediction_dc.collect(result)```"
]
},
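{
"cell_type": "markdown",
"metadata": {},
"source": [
"If your scenario does call for a correlation id, a sketch of passing one at collection time is shown below; the `user_correlation_id` argument name is an assumption, so verify it against the `azureml-monitoring` reference for your version:\n",
"```python\n",
"correlation_id = 'loan-12345'  # illustrative value, e.g. a LoanNumber or CustomerId\n",
"inputs_dc.collect(data, user_correlation_id=correlation_id)\n",
"prediction_dc.collect(result, user_correlation_id=correlation_id)\n",
"```"
]
},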
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile score.py\n",
"import pickle\n",
"import json\n",
"import numpy \n",
"from sklearn.externals import joblib\n",
"from sklearn.linear_model import Ridge\n",
"from azureml.core.model import Model\n",
"from azureml.monitoring import ModelDataCollector\n",
"import time\n",
"\n",
"def init():\n",
" global model\n",
" print (\"model initialized\" + time.strftime(\"%H:%M:%S\"))\n",
" # note here \"sklearn_regression_model.pkl\" is the name of the model registered under the workspace\n",
" # this call should return the path to the model.pkl file on the local disk.\n",
" model_path = Model.get_model_path(model_name = 'sklearn_regression_model.pkl')\n",
" # deserialize the model file back into a sklearn model\n",
" model = joblib.load(model_path)\n",
" global inputs_dc, prediction_dc\n",
" # this setup will help us save our inputs under the \"inputs\" path in our Azure Blob\n",
" inputs_dc = ModelDataCollector(model_name=\"sklearn_regression_model\", identifier=\"inputs\", feature_names=[\"feat1\", \"feat2\"]) \n",
" # this setup will help us save our ipredictions under the \"predictions\" path in our Azure Blob\n",
" prediction_dc = ModelDataCollector(\"sklearn_regression_model\", identifier=\"predictions\", feature_names=[\"prediction1\", \"prediction2\"]) \n",
" \n",
"# note you can pass in multiple rows for scoring\n",
"def run(raw_data):\n",
" global inputs_dc, prediction_dc\n",
" try:\n",
" data = json.loads(raw_data)['data']\n",
" data = numpy.array(data)\n",
" result = model.predict(data)\n",
" print (\"saving input data\" + time.strftime(\"%H:%M:%S\"))\n",
" inputs_dc.collect(data) #this call is saving our input data into our blob\n",
" prediction_dc.collect(result)#this call is saving our prediction data into our blob\n",
" print (\"saving prediction data\" + time.strftime(\"%H:%M:%S\"))\n",
" return json.dumps({\"result\": result.tolist()})\n",
" except Exception as e:\n",
" result = str(e)\n",
" print (result + time.strftime(\"%H:%M:%S\"))\n",
" return json.dumps({\"error\": result})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 5. *Update your myenv.yml file with the required module*"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.conda_dependencies import CondaDependencies \n",
"\n",
"myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn'])\n",
"myenv.add_pip_package(\"azureml-monitoring\")\n",
"\n",
"with open(\"myenv.yml\",\"w\") as f:\n",
" f.write(myenv.serialize_to_string())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 6. Create your new Image"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.image import ContainerImage\n",
"\n",
"image_config = ContainerImage.image_configuration(execution_script = \"score.py\",\n",
" runtime = \"python\",\n",
" conda_file = \"myenv.yml\",\n",
" description = \"Image with ridge regression model\",\n",
" tags = {'area': \"diabetes\", 'type': \"regression\"}\n",
" )\n",
"\n",
"image = ContainerImage.create(name = \"myimage1\",\n",
" # this is the model object\n",
" models = [model],\n",
" image_config = image_config,\n",
" workspace = ws)\n",
"\n",
"image.wait_for_creation(show_output = True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(model.name, model.description, model.version)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 7. Deploy to AKS service"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create AKS compute if you haven't done so (Notebook 11)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Use the default configuration (can also provide parameters to customize)\n",
"prov_config = AksCompute.provisioning_configuration()\n",
"\n",
"aks_name = 'my-aks-test1' \n",
"# Create the cluster\n",
"aks_target = ComputeTarget.create(workspace = ws, \n",
" name = aks_name, \n",
" provisioning_configuration = prov_config)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"aks_target.wait_for_completion(show_output = True)\n",
"print(aks_target.provisioning_state)\n",
"print(aks_target.provisioning_errors)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you already have a cluster you can attach the service to it:"
]
},
{
"cell_type": "markdown",
"metadata": {
"scrolled": true
},
"source": [
"```python \n",
" %%time\n",
" resource_id = '/subscriptions/<subscriptionid>/resourcegroups/<resourcegroupname>/providers/Microsoft.ContainerService/managedClusters/<aksservername>'\n",
" create_name= 'myaks4'\n",
" aks_target = AksCompute.attach(workspace = ws, \n",
" name = create_name, \n",
" resource_id=resource_id)\n",
" ## Wait for the operation to complete\n",
" aks_target.wait_for_provisioning(True)```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### a. *Activate Data Collection and App Insights through updating AKS Webservice configuration*\n",
"In order to enable Data Collection and App Insights in your service you will need to update your AKS configuration file:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#Set the web service configuration\n",
"aks_config = AksWebservice.deploy_configuration(collect_model_data=True, enable_app_insights=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### b. Deploy your service"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"aks_service_name ='aks-w-dc2'\n",
"\n",
"aks_service = Webservice.deploy_from_image(workspace = ws, \n",
" name = aks_service_name,\n",
" image = image,\n",
" deployment_config = aks_config,\n",
" deployment_target = aks_target\n",
" )\n",
"aks_service.wait_for_deployment(show_output = True)\n",
"print(aks_service.state)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 8. Test your service and send some data\n",
"**Note**: It will take around 15 mins for your data to appear in your blob.\n",
"The data will appear in your Azure Blob following this format:\n",
"\n",
"/modeldata/subscriptionid/resourcegroupname/workspacename/webservicename/modelname/modelversion/identifier/year/month/day/data.csv "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"import json\n",
"\n",
"test_sample = json.dumps({'data': [\n",
" [1,2,3,4,54,6,7,8,88,10], \n",
" [10,9,8,37,36,45,4,33,2,1]\n",
"]})\n",
"test_sample = bytes(test_sample,encoding = 'utf8')\n",
"\n",
"prediction = aks_service.run(input_data = test_sample)\n",
"print(prediction)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 9. Validate you data and analyze it\n",
"You can look into your data following this path format in your Azure Blob (it takes up to 15 minutes for the data to appear):\n",
"\n",
"/modeldata/**subscriptionid>**/**resourcegroupname>**/**workspacename>**/**webservicename>**/**modelname>**/**modelversion>>**/**identifier>**/*year/month/day*/data.csv \n",
"\n",
"For doing further analysis you have multiple options:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### a. Create DataBricks cluter and connect it to your blob\n",
"https://docs.microsoft.com/en-us/azure/azure-databricks/quickstart-create-databricks-workspace-portal or in your databricks workspace you can look for the template \"Azure Blob Storage Import Example Notebook\".\n",
"\n",
"\n",
"Here is an example for setting up the file location to extract the relevant data:\n",
"\n",
"<code> file_location = \"wasbs://mycontainer@storageaccountname.blob.core.windows.net/unknown/unknown/unknown-bigdataset-unknown/my_iterate_parking_inputs/2018/&deg;/&deg;/data.csv\" \n",
"file_type = \"csv\"</code>\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### b. Connect Blob to Power Bi (Small Data only)\n",
"1. Download and Open PowerBi Desktop\n",
"2. Select “Get Data” and click on “Azure Blob Storage” >> Connect\n",
"3. Add your storage account and enter your storage key.\n",
"4. Select the container where your Data Collection is stored and click on Edit. \n",
"5. In the query editor, click under “Name” column and add your Storage account Model path into the filter. Note: if you want to only look into files from a specific year or month, just expand the filter path. For example, just look into March data: /modeldata/subscriptionid>/resourcegroupname>/workspacename>/webservicename>/modelname>/modelversion>/identifier>/year>/3\n",
"6. Click on the double arrow aside the “Content” column to combine the files. \n",
"7. Click OK and the data will preload.\n",
"8. You can now click Close and Apply and start building your custom reports on your Model Input data."
]
},
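{
"cell_type": "markdown",
"metadata": {},
"source": [
"### c. List the collected files from Python (optional)\n",
"A quick way to sanity-check that data is arriving is to list the blobs under the `modeldata` prefix with the Azure Storage SDK. This is a sketch assuming the `azure-storage` package of this era; the account name, key, and container are placeholders:\n",
"```python\n",
"from azure.storage.blob import BlockBlobService\n",
"\n",
"blob_service = BlockBlobService(account_name='<storageaccountname>', account_key='<storagekey>')\n",
"for blob in blob_service.list_blobs('<containername>', prefix='modeldata/'):\n",
"    print(blob.name)\n",
"```"
]
},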
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Disable Data Collection"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"aks_service.update(collect_model_data=False)"
]
}
],
"metadata": {
"authors": [
{
"name": "marthalc"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.10"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.10" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for a production environment
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"
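# a hypothetical usage sketch, not part of the original file: build the image and
# run it, then browse to http://localhost:8887
#   docker build -t azureml-notebooks .
#   docker run -p 8887:8887 azureml-notebooks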


@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.15"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.15" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for a production environment
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"


@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.17"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.17" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for a production environment
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"


@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.18"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.18" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for a production environment
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"


@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.2"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.2" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for a production environment
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"


@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.21"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.21" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for a production environment
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"


@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.23"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.23" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for a production environment
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"


@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.30"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.30" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for production environments
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"

View File

@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.33"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.33" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for production environments
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"

View File

@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.41"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.41" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for production environments
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"

View File

@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.43"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.43" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for production environments
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"

View File

@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.6"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.6" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for production environments
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"

View File

@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.8"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.8" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for production environments
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"

View File

@@ -0,0 +1,14 @@
This software is made available to you on the condition that you agree to
[your agreement][1] governing your use of Azure.
If you do not have an existing agreement governing your use of Azure, you agree that
your agreement governing use of Azure is the [Microsoft Online Subscription Agreement][2]
(which incorporates the [Online Services Terms][3]).
By using the software you agree to these terms. This software may collect data
that is transmitted to Microsoft. Please see the [Microsoft Privacy Statement][4]
to learn more about how Microsoft processes personal data.
[1]: https://azure.microsoft.com/en-us/support/legal/
[2]: https://azure.microsoft.com/en-us/support/legal/subscription-agreement/
[3]: http://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=46
[4]: http://go.microsoft.com/fwlink/?LinkId=248681

95
NBSETUP.md Normal file
View File

@@ -0,0 +1,95 @@
# Set up your notebook environment for Azure Machine Learning
To run the notebooks in this repository, use one of the following options.
## **Option 1: Use Azure Notebooks**
Azure Notebooks is a hosted Jupyter-based notebook service in the Azure cloud. The Azure Machine Learning Python SDK is pre-installed in the Azure Notebooks `Python 3.6` kernel.
1. [![Azure Notebooks](https://notebooks.azure.com/launch.png)](https://aka.ms/aml-clone-azure-notebooks)
[Import sample notebooks](https://aka.ms/aml-clone-azure-notebooks) into Azure Notebooks
1. Follow the instructions in the [Configuration](configuration.ipynb) notebook to create and connect to a workspace
1. Open one of the sample notebooks
**Make sure the Azure Notebook kernel is set to `Python 3.6`** when you open a notebook by choosing Kernel > Change Kernel > Python 3.6 from the menus.
## **Option 2: Use your own notebook server**
### Quick installation
We recommend you create a Python virtual environment ([Miniconda](https://conda.io/miniconda.html) preferred but [virtualenv](https://virtualenv.pypa.io/en/latest/) works too) and install the SDK in it.
```sh
# install just the base SDK
pip install azureml-sdk
# clone the sample repository
git clone https://github.com/Azure/MachineLearningNotebooks.git
# the steps below are optional
# install the base SDK, Jupyter notebook server and tensorboard
pip install azureml-sdk[notebooks,tensorboard]
# install model explainability component
pip install azureml-sdk[interpret]
# install automated ml components
pip install azureml-sdk[automl]
# install experimental features (not ready for production use)
pip install azureml-sdk[contrib]
```
Note that the _extras_ (the keywords inside the square brackets) can be combined. For example:
```sh
# install base SDK, Jupyter notebook and automated ml components
pip install azureml-sdk[notebooks,automl]
```
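If you do not already have a virtual environment, a minimal sketch of creating one with Miniconda before the `pip install` steps above (the environment name `azureml` and the Python version are arbitrary choices):

```sh
# create and activate a fresh conda environment for the SDK
conda create -n azureml python=3.6 -y
conda activate azureml
```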
### Full instructions
[Install the Azure Machine Learning SDK](https://docs.microsoft.com/en-us/azure/machine-learning/service/quickstart-create-workspace-with-python)
Please make sure you start with the [Configuration](configuration.ipynb) notebook to create and connect to a workspace.
### Video walkthrough:
[!VIDEO https://youtu.be/VIsXeTuW3FU]
## **Option 3: Use Docker**
You need to have the Docker engine installed locally and running. Open a command-line window and type the following commands.
__Note:__ We use version `1.0.10` below as an example, but you can replace it with any available version number.
```sh
# clone the sample repository
git clone https://github.com/Azure/MachineLearningNotebooks.git
# change the current directory to the folder
# where the Dockerfile of the specific SDK version is located.
cd MachineLearningNotebooks/Dockerfiles/1.0.10
# build a Docker image with a name (azuremlsdk, for example)
# and a version number tag (1.0.10, for example).
# this can take several minutes depending on your computer speed and network bandwidth.
docker build . -t azuremlsdk:1.0.10
# launch the built Docker container which also automatically starts
# a Jupyter server instance listening on port 8887 of the host machine
docker run -it -p 8887:8887 azuremlsdk:1.0.10
```
Now you can point your browser to http://localhost:8887. We recommend that you start from the `configuration.ipynb` notebook in the root directory.
If you need additional Azure ML SDK components, you can either modify the Dockerfiles to add extra steps before you build the images, or install the components from the command line in the running container. For example:
```sh
# install the core SDK and automated ml components
pip install azureml-sdk[automl]
# install the core SDK and model explainability component
pip install azureml-sdk[interpret]
# install the core SDK and experimental components
pip install azureml-sdk[contrib]
```
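For a container that is already running, a sketch of installing extras in place instead of rebuilding the image (`<container-id>` is whatever `docker ps` reports for your container):

```sh
# open a shell inside the running container and install into its conda environment
docker exec -it <container-id> /bin/bash -c "source activate azureml && pip install azureml-sdk[automl]"
```

Note that changes made this way live only in the container, not the image; rebuild the image to make them permanent.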

102
README.md
View File

@@ -1,47 +1,77 @@
Get the full documentation for Azure Machine Learning service at:
# Azure Machine Learning service example notebooks
https://docs.microsoft.com/azure/machine-learning/service/
> A community-driven repository of examples using MLflow for tracking can be found at https://github.com/Azure/azureml-examples
<br>
This repository contains example notebooks demonstrating the [Azure Machine Learning](https://azure.microsoft.com/services/machine-learning-service/) Python SDK, which allows you to build, train, deploy, and manage machine learning solutions using Azure. The AML SDK gives you the choice of using local or cloud compute resources, while managing and maintaining the complete data science workflow from the cloud.
# Sample notebooks for Azure Machine Learning service
To run the notebooks in this repository use one of these methods:
## Use Azure Notebooks - Jupyter based notebooks in the Azure cloud
1. [![Azure Notebooks](https://notebooks.azure.com/launch.png)](https://aka.ms/aml-clone-azure-notebooks)
[Import sample notebooks ](https://aka.ms/aml-clone-azure-notebooks) into Azure Notebooks.
1. Follow the instructions in the [00.configuration](00.configuration.ipynb) notebook to create and connect to a workspace.
1. Open one of the sample notebooks.
**Make sure the Azure Notebook kernel is set to `Python 3.6`** when you open a notebook.
![set kernel to Python 3.6](images/python36.png)
![Azure ML Workflow](https://raw.githubusercontent.com/MicrosoftDocs/azure-docs/master/articles/machine-learning/media/concept-azure-machine-learning-architecture/workflow.png)
## **Use your own notebook server**
## Quick installation
```sh
pip install azureml-sdk
```
Read more detailed instructions on [how to set up your environment](./NBSETUP.md) using Azure Notebook service, your own Jupyter notebook server, or Docker.
1. Setup a Jupyter Notebook server and [install the Azure Machine Learning SDK](https://docs.microsoft.com/en-us/azure/machine-learning/service/quickstart-create-workspace-with-python).
1. Clone [this repository](https://aka.ms/aml-notebooks).
1. You may need to install other packages for specific notebooks.
1. Start your notebook server.
1. Follow the instructions in the [00.configuration](00.configuration.ipynb) notebook to create and connect to a workspace.
1. Open one of the sample notebooks.
## How to navigate and use the example notebooks?
If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, you should always run the [Configuration](./configuration.ipynb) notebook first when setting up a notebook library on a new machine or in a new environment. It configures your notebook library to connect to an Azure Machine Learning workspace, and sets up your workspace and compute to be used by many of the other examples.
This [index](./index.md) should assist in navigating the Azure Machine Learning notebook samples and encourage efficient retrieval of topics and content.
> Note: **Looking for automated machine learning samples?**
> For your convenience, you can use an installation script instead of the steps below for the automated ML notebooks. Go to the [automl folder README](automl/README.md) and follow the instructions. The script installs all packages needed for notebooks in that folder.
If you want to...
# Contributing
* ...try out and explore Azure ML, start with image classification tutorials: [Part 1 (Training)](./tutorials/image-classification-mnist-data/img-classification-part1-training.ipynb) and [Part 2 (Deployment)](./tutorials/image-classification-mnist-data/img-classification-part2-deploy.ipynb).
* ...learn about experimentation and tracking run history: [track and monitor experiments](./how-to-use-azureml/track-and-monitor-experiments).
* ...train deep learning models at scale, first learn about [Machine Learning Compute](./how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.ipynb), and then try [distributed hyperparameter tuning](./how-to-use-azureml/ml-frameworks/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb) and [distributed training](./how-to-use-azureml/ml-frameworks/pytorch/distributed-pytorch-with-horovod/distributed-pytorch-with-horovod.ipynb).
* ...deploy models as a realtime scoring service, first learn the basics by [deploying to Azure Container Instance](./how-to-use-azureml/deployment/deploy-to-cloud/model-register-and-deploy.ipynb), then learn how to [production deploy models on Azure Kubernetes Cluster](./how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks.ipynb).
* ...deploy models as a batch scoring service: [create Machine Learning Compute for scoring compute](./how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.ipynb) and [use Machine Learning Pipelines to deploy your model](https://aka.ms/pl-batch-scoring).
* ...monitor your deployed models, learn about using [App Insights](./how-to-use-azureml/deployment/enable-app-insights-in-production-service/enable-app-insights-in-production-service.ipynb).
This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit https://cla.microsoft.com.
## Tutorials
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide
a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions
provided by the bot. You will only need to do this once across all repos using our CLA.
The [Tutorials](./tutorials) folder contains notebooks for the tutorials described in the [Azure Machine Learning documentation](https://aka.ms/aml-docs).
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
## How to use Azure ML
The [How to use Azure ML](./how-to-use-azureml) folder contains specific examples demonstrating the features of the Azure Machine Learning SDK
- [Training](./how-to-use-azureml/training) - Examples of how to build models using Azure ML's logging and execution capabilities on local and remote compute targets
- [Training with ML and DL frameworks](./how-to-use-azureml/ml-frameworks) - Examples demonstrating how to build and train machine learning models at scale on Azure ML and perform hyperparameter tuning.
- [Manage Azure ML Service](./how-to-use-azureml/manage-azureml-service) - Examples showing how to perform tasks such as authenticating against the Azure ML service in different ways.
- [Automated Machine Learning](./how-to-use-azureml/automated-machine-learning) - Examples using Automated Machine Learning to automatically generate optimal machine learning pipelines and models
- [Machine Learning Pipelines](./how-to-use-azureml/machine-learning-pipelines) - Examples showing how to create and use reusable pipelines for training and batch scoring
- [Deployment](./how-to-use-azureml/deployment) - Examples showing how to deploy and manage machine learning models and solutions
- [Azure Databricks](./how-to-use-azureml/azure-databricks) - Examples showing how to use Azure ML with Azure Databricks
- [Reinforcement Learning](./how-to-use-azureml/reinforcement-learning) - Examples showing how to train reinforcement learning agents
---
## Documentation
* Quickstarts, end-to-end tutorials, and how-tos on the [official documentation site for Azure Machine Learning service](https://docs.microsoft.com/en-us/azure/machine-learning/service/).
* [Python SDK reference](https://docs.microsoft.com/en-us/python/api/overview/azure/ml/intro?view=azure-ml-py)
* Azure ML Data Prep SDK [overview](https://aka.ms/data-prep-sdk), [Python SDK reference](https://aka.ms/aml-data-prep-apiref), and [tutorials and how-tos](https://aka.ms/aml-data-prep-notebooks).
---
## Community Repository
Visit this [community repository](https://github.com/microsoft/MLOps/tree/master/examples) to find useful end-to-end sample notebooks. Also, please follow these [contribution guidelines](https://github.com/microsoft/MLOps/blob/master/contributing.md) when contributing to this repository.
## Projects using Azure Machine Learning
Visit the following repos to see projects contributed by Azure ML users:
- [Learn about Natural Language Processing best practices using Azure Machine Learning service](https://github.com/microsoft/nlp)
- [Pre-Train BERT models using Azure Machine Learning service](https://github.com/Microsoft/AzureML-BERT)
- [Fashion MNIST with Azure ML SDK](https://github.com/amynic/azureml-sdk-fashion)
- [UMass Amherst Student Samples](https://github.com/katiehouse3/microsoft-azure-ml-notebooks) - A number of end-to-end machine learning notebooks, including machine translation, image classification, and customer churn, created by students in the 696DS course at UMass Amherst.
## Data/Telemetry
This repository collects usage data and sends it to Microsoft to help improve our products and services. Read Microsoft's [privacy statement](https://privacy.microsoft.com/en-US/privacystatement) to learn more.
To opt out of tracking, please go to the raw markdown or .ipynb files and remove the following line of code:
```sh
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/README.png)"
```
This URL will be slightly different depending on the file.
![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/README.png)

View File

@@ -1,15 +0,0 @@
# Conda environment specification. The dependencies defined in this file will
# be automatically provisioned for runs with userManagedDependencies=False.
# Details about the Conda environment file format:
# https://conda.io/docs/user-guide/tasks/manage-environments.html#create-env-file-manually
name: project_environment
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.

View File

@@ -1,115 +0,0 @@
# The script to run.
script: train.py
# The arguments to the script file.
arguments: []
# The name of the compute target to use for this run.
target: local
# Framework to execute inside. Allowed values are "Python" , "PySpark", "CNTK", "TensorFlow", and "PyTorch".
framework: PySpark
# Communicator for the given framework. Allowed values are "None" , "ParameterServer", "OpenMpi", and "IntelMpi".
communicator: None
# Automatically prepare the run environment as part of the run itself.
autoPrepareEnvironment: true
# Maximum allowed duration for the run.
maxRunDurationSeconds:
# Number of nodes to use for running job.
nodeCount: 1
# Environment details.
environment:
# Environment variables set for the run.
environmentVariables:
EXAMPLE_ENV_VAR: EXAMPLE_VALUE
# Python details
python:
# user_managed_dependencies=True indicates that the environment will be user managed. False indicates that AzureML will manage the user environment.
userManagedDependencies: false
# The python interpreter path
interpreterPath: python
# Path to the conda dependencies file to use for this run. If a project
# contains multiple programs with different sets of dependencies, it may be
# convenient to manage those environments with separate files.
condaDependenciesFile: aml_config/conda_dependencies.yml
# Docker details
docker:
# Set True to perform this run inside a Docker container.
enabled: true
# Base image used for Docker-based runs.
baseImage: mcr.microsoft.com/azureml/base:0.2.0
# Set False if necessary to work around shared volume bugs.
sharedVolumes: true
# Run with NVidia Docker extension to support GPUs.
gpuSupport: false
# Extra arguments to the Docker run command.
arguments: []
# Image registry that contains the base image.
baseImageRegistry:
# DNS name or IP address of the Azure container registry (ACR)
address:
# The username for ACR
username:
# The password for ACR
password:
# Spark details
spark:
# List of spark repositories.
repositories:
- https://mmlspark.azureedge.net/maven
packages:
- group: com.microsoft.ml.spark
artifact: mmlspark_2.11
version: '0.12'
precachePackages: true
# Databricks details
databricks:
# List of maven libraries.
mavenLibraries: []
# List of PyPi libraries
pypiLibraries: []
# List of RCran libraries
rcranLibraries: []
# List of JAR libraries
jarLibraries: []
# List of Egg libraries
eggLibraries: []
# History details.
history:
# Enable history tracking -- this allows status, logs, metrics, and outputs
# to be collected for a run.
outputCollection: true
# whether to take snapshots for history.
snapshotProject: true
# Spark configuration details.
spark:
configuration:
spark.app.name: Azure ML Experiment
spark.yarn.maxAppAttempts: 1
# HDI details.
hdi:
# Yarn deploy mode. Options are cluster and client.
yarnDeployMode: cluster
# Tensorflow details.
tensorflow:
# The number of worker tasks.
workerCount: 1
# The number of parameter server tasks.
parameterServerCount: 1
# Mpi details.
mpi:
# When using MPI, number of processes per node.
processCountPerNode: 1
# data reference configuration details
dataReferences: {}
# Project share datastore reference.
sourceDirectoryDataStore:
# AmlCompute details.
amlcompute:
# VM size of the cluster to be created. Allowed values are Azure VM sizes; the list of VM sizes is available at https://docs.microsoft.com/en-us/azure/cloud-services/cloud-services-sizes-specs
vmSize:
# VM priority of the cluster to be created. Allowed values are "dedicated" and "lowpriority".
vmPriority:
# A bool that indicates if the cluster has to be retained after job completion.
retainCluster: false
# Name of the cluster to be created. If not specified, runId will be used as the cluster name.
name:
# Maximum number of nodes in the AmlCompute cluster to be created. The minimum number of nodes will always be set to 0.
clusterMaxNodeCount: 1
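For context, a sketch of how a run configuration file in this format was consumed by the SDK of this era; the file and experiment names (`myrun`, `sample-experiment`) are hypothetical, and the file is assumed to be stored as `aml_config/myrun.runconfig` under the project directory:

```python
from azureml.core import Experiment, Workspace, ScriptRunConfig
from azureml.core.runconfig import RunConfiguration

ws = Workspace.from_config()
# load the hypothetical aml_config/myrun.runconfig relative to the project path
run_config = RunConfiguration.load(path=".", name="myrun")
# pair the loaded configuration with the training script it names
src = ScriptRunConfig(source_directory=".", script="train.py", run_config=run_config)
run = Experiment(ws, "sample-experiment").submit(src)
```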

View File

@@ -1,115 +0,0 @@
# The script to run.
script: train.py
# The arguments to the script file.
arguments: []
# The name of the compute target to use for this run.
target: local
# Framework to execute inside. Allowed values are "Python" , "PySpark", "CNTK", "TensorFlow", and "PyTorch".
framework: Python
# Communicator for the given framework. Allowed values are "None" , "ParameterServer", "OpenMpi", and "IntelMpi".
communicator: None
# Automatically prepare the run environment as part of the run itself.
autoPrepareEnvironment: true
# Maximum allowed duration for the run.
maxRunDurationSeconds:
# Number of nodes to use for running job.
nodeCount: 1
# Environment details.
environment:
# Environment variables set for the run.
environmentVariables:
EXAMPLE_ENV_VAR: EXAMPLE_VALUE
# Python details
python:
# user_managed_dependencies=True indicates that the environment will be user managed. False indicates that AzureML will manage the user environment.
userManagedDependencies: false
# The python interpreter path
interpreterPath: python
# Path to the conda dependencies file to use for this run. If a project
# contains multiple programs with different sets of dependencies, it may be
# convenient to manage those environments with separate files.
condaDependenciesFile: aml_config/conda_dependencies.yml
# Docker details
docker:
# Set True to perform this run inside a Docker container.
enabled: false
# Base image used for Docker-based runs.
baseImage: mcr.microsoft.com/azureml/base:0.2.0
# Set False if necessary to work around shared volume bugs.
sharedVolumes: true
# Run with NVidia Docker extension to support GPUs.
gpuSupport: false
# Extra arguments to the Docker run command.
arguments: []
# Image registry that contains the base image.
baseImageRegistry:
# DNS name or IP address of the Azure container registry (ACR)
address:
# The username for ACR
username:
# The password for ACR
password:
# Spark details
spark:
# List of spark repositories.
repositories:
- https://mmlspark.azureedge.net/maven
packages:
- group: com.microsoft.ml.spark
artifact: mmlspark_2.11
version: '0.12'
precachePackages: true
# Databricks details
databricks:
# List of maven libraries.
mavenLibraries: []
# List of PyPi libraries
pypiLibraries: []
# List of RCran libraries
rcranLibraries: []
# List of JAR libraries
jarLibraries: []
# List of Egg libraries
eggLibraries: []
# History details.
history:
# Enable history tracking -- this allows status, logs, metrics, and outputs
# to be collected for a run.
outputCollection: true
# whether to take snapshots for history.
snapshotProject: true
# Spark configuration details.
spark:
configuration:
spark.app.name: Azure ML Experiment
spark.yarn.maxAppAttempts: 1
# HDI details.
hdi:
# Yarn deploy mode. Options are cluster and client.
yarnDeployMode: cluster
# Tensorflow details.
tensorflow:
# The number of worker tasks.
workerCount: 1
# The number of parameter server tasks.
parameterServerCount: 1
# Mpi details.
mpi:
# When using MPI, number of processes per node.
processCountPerNode: 1
# data reference configuration details
dataReferences: {}
# Project share datastore reference.
sourceDirectoryDataStore:
# AmlCompute details.
amlcompute:
# VM size of the cluster to be created. Allowed values are Azure VM sizes; the list of VM sizes is available at https://docs.microsoft.com/en-us/azure/cloud-services/cloud-services-sizes-specs
vmSize:
# VM priority of the cluster to be created. Allowed values are "dedicated" and "lowpriority".
vmPriority:
# A bool that indicates if the cluster has to be retained after job completion.
retainCluster: false
# Name of the cluster to be created. If not specified, runId will be used as the cluster name.
name:
# Maximum number of nodes in the AmlCompute cluster to be created. The minimum number of nodes will always be set to 0.
clusterMaxNodeCount: 1

View File

@@ -1 +0,0 @@
{"Id": "local-compute", "Scope": "/subscriptions/65a1016d-0f67-45d2-b838-b8f373d6d52e/resourceGroups/sheri/providers/Microsoft.MachineLearningServices/workspaces/sheritestqs3/projects/local-compute"}

View File

@@ -1,224 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AutoML 00. Configuration\n",
"\n",
"In this example you will create an Azure Machine Learning `Workspace` object and initialize your notebook directory to easily reload this object from a configuration file. Typically you will only need to run this once per notebook directory, and all other notebooks in this directory or any sub-directories will automatically use the settings you indicate here.\n",
"\n",
"\n",
"## Prerequisites:\n",
"\n",
"Before running this notebook, run the `automl_setup` script described in README.md.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Register Machine Learning Services Resource Provider\n",
"\n",
"Microsoft.MachineLearningServices only needs to be registed once in the subscription.\n",
"To register it:\n",
"1. Start the Azure portal.\n",
"2. Select your `All services` and then `Subscription`.\n",
"3. Select the subscription that you want to use.\n",
"4. Click on `Resource providers`\n",
"3. Click the `Register` link next to Microsoft.MachineLearningServices"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Check the Azure ML Core SDK Version to Validate Your Installation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"\n",
"print(\"SDK Version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize an Azure ML Workspace\n",
"### What is an Azure ML Workspace and Why Do I Need One?\n",
"\n",
"An Azure ML workspace is an Azure resource that organizes and coordinates the actions of many other Azure resources to assist in executing and sharing machine learning workflows. In particular, an Azure ML workspace coordinates storage, databases, and compute resources providing added functionality for machine learning experimentation, operationalization, and the monitoring of operationalized models.\n",
"\n",
"\n",
"### What do I Need?\n",
"\n",
"To create or access an Azure ML workspace, you will need to import the Azure ML library and specify following information:\n",
"* A name for your workspace. You can choose one.\n",
"* Your subscription id. Use the `id` value from the `az account show` command output above.\n",
"* The resource group name. The resource group organizes Azure resources and provides a default region for the resources in the group. The resource group will be created if it doesn't exist. Resource groups can be created and viewed in the [Azure portal](https://portal.azure.com)\n",
"* Supported regions include `eastus2`, `eastus`,`westcentralus`, `southeastasia`, `westeurope`, `australiaeast`, `westus2`, `southcentralus`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"subscription_id = \"<subscription_id>\"\n",
"resource_group = \"myrg\"\n",
"workspace_name = \"myws\"\n",
"workspace_region = \"eastus2\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Creating a Workspace\n",
"If you already have access to an Azure ML workspace you want to use, you can skip this cell. Otherwise, this cell will create an Azure ML workspace for you in the specified subscription, provided you have the correct permissions for the given `subscription_id`.\n",
"\n",
"This will fail when:\n",
"1. The workspace already exists.\n",
"2. You do not have permission to create a workspace in the resource group.\n",
"3. You are not a subscription owner or contributor and no Azure ML workspaces have ever been created in this subscription.\n",
"\n",
"If workspace creation fails for any reason other than already existing, please work with your IT administrator to provide you with the appropriate permissions or to provision the required resources.\n",
"\n",
"**Note:** Creation of a new workspace can take several minutes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Import the Workspace class and check the Azure ML SDK version.\n",
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.create(name = workspace_name,\n",
" subscription_id = subscription_id,\n",
" resource_group = resource_group, \n",
" location = workspace_region)\n",
"ws.get_details()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configuring Your Local Environment\n",
"You can validate that you have access to the specified workspace and write a configuration file to the default configuration location, `./aml_config/config.json`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace(workspace_name = workspace_name,\n",
" subscription_id = subscription_id,\n",
" resource_group = resource_group)\n",
"\n",
"# Persist the subscription id, resource group name, and workspace name in aml_config/config.json.\n",
"ws.write_config()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can then load the workspace from this config file from any notebook in the current directory."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Load workspace configuration from ./aml_config/config.json file.\n",
"my_workspace = Workspace.from_config()\n",
"my_workspace.get_details()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a Folder to Host All Sample Projects\n",
"Finally, create a folder where all the sample projects will be hosted."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"sample_projects_folder = './sample_projects'\n",
"\n",
"if not os.path.isdir(sample_projects_folder):\n",
" os.mkdir(sample_projects_folder)\n",
" \n",
"print('Sample projects will be created in {}.'.format(sample_projects_folder))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Success!\n",
"Great, you are ready to move on to the rest of the sample notebooks."
]
}
],
"metadata": {
"authors": [
{
"name": "savitam"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,414 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AutoML 01: Classification with Local Compute\n",
"\n",
"In this example we use the scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for a simple classification problem.\n",
"\n",
"Make sure you have executed the [00.configuration](00.configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you will learn how to:\n",
"1. Create an `Experiment` in an existing `Workspace`.\n",
"2. Configure AutoML using `AutoMLConfig`.\n",
"3. Train the model using local compute.\n",
"4. Explore the results.\n",
"5. Test the best fitted model.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create an Experiment\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import random\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# Choose a name for the experiment and specify the project folder.\n",
"experiment_name = 'automl-local-classification'\n",
"project_folder = './sample_projects/automl-local-classification'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace Name'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data = output, index = ['']).T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"set_diagnostics_collection(send_diagnostics = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load Training Data\n",
"\n",
"This uses scikit-learn's [load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) method."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn import datasets\n",
"\n",
"digits = datasets.load_digits()\n",
"\n",
"# Exclude the first 100 rows from training so that they can be used for test.\n",
"X_train = digits.data[100:,:]\n",
"y_train = digits.target[100:]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure AutoML\n",
"\n",
"Instantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**task**|classification or regression|\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>balanced_accuracy</i><br><i>average_precision_score_weighted</i><br><i>precision_score_weighted</i>|\n",
"|**max_time_sec**|Time limit in seconds for each iteration.|\n",
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
"|**n_cross_validations**|Number of cross validation splits.|\n",
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
"|**y**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers.|\n",
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" primary_metric = 'AUC_weighted',\n",
" max_time_sec = 3600,\n",
" iterations = 50,\n",
" n_cross_validations = 3,\n",
" verbosity = logging.INFO,\n",
" X = X_train, \n",
" y = y_train,\n",
" path = project_folder)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train the Models\n",
"\n",
"Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n",
"In this example, we specify `show_output = True` to print currently running iterations to the console."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = experiment.submit(automl_config, show_output = True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, you can continue an interrupted local run by calling `continue_experiment` without the `iterations` parameter, or run more iterations for a completed run by specifying the `iterations` parameter:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = local_run.continue_experiment(X = X_train, \n",
" y = y_train, \n",
" show_output = True,\n",
" iterations = 5)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Explore the Results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Widget for Monitoring Runs\n",
"\n",
"The widget will first report a \"loading\" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.\n",
"\n",
"**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.train.widgets import RunDetails\n",
"RunDetails(local_run).show() "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"#### Retrieve All Child Runs\n",
"You can also use SDK methods to fetch all the child runs and see individual metrics that we log."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"children = list(local_run.get_children())\n",
"metricslist = {}\n",
"for run in children:\n",
" properties = run.get_properties()\n",
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}\n",
" metricslist[int(properties['iteration'])] = metrics\n",
"\n",
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
"rundata"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model\n",
"\n",
"Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = local_run.get_output()\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Best Model Based on Any Other Metric\n",
"Show the run and the model that has the smallest `log_loss` value:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"lookup_metric = \"log_loss\"\n",
"best_run, fitted_model = local_run.get_output(metric = lookup_metric)\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Model from a Specific Iteration\n",
"Show the run and the model from the third iteration:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"iteration = 3\n",
"third_run, third_model = local_run.get_output(iteration = iteration)\n",
"print(third_run)\n",
"print(third_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Test the Best Fitted Model\n",
"\n",
"#### Load Test Data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"digits = datasets.load_digits()\n",
"X_test = digits.data[:10, :]\n",
"y_test = digits.target[:10]\n",
"images = digits.images[:10]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Testing Our Best Fitted Model\n",
"We will try to predict 2 digits and see how our model works."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Randomly select digits and test.\n",
"for index in np.random.choice(len(y_test), 2, replace = False):\n",
" print(index)\n",
" predicted = fitted_model.predict(X_test[index:index + 1])[0]\n",
" label = y_test[index]\n",
" title = \"Label value = %d Predicted value = %d \" % (label, predicted)\n",
" fig = plt.figure(1, figsize = (3,3))\n",
" ax1 = fig.add_axes((0,0,.8,.8))\n",
" ax1.set_title(title)\n",
" plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest')\n",
" plt.show()"
]
}
],
"metadata": {
"authors": [
{
"name": "savitam"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,485 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AutoML 03: Remote Execution using DSVM (Ubuntu)\n",
"\n",
"In this example we use the scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for a simple classification problem.\n",
"\n",
"Make sure you have executed the [00.configuration](00.configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you wiil learn how to:\n",
"1. Create an `Experiment` in an existing `Workspace`.\n",
"2. Attach an existing DSVM to a workspace.\n",
"3. Configure AutoML using `AutoMLConfig`.\n",
"4. Train the model using the DSVM.\n",
"5. Explore the results.\n",
"6. Test the best fitted model.\n",
"\n",
"In addition, this notebook showcases the following features:\n",
"- **Parallel** executions for iterations\n",
"- **Asynchronous** tracking of progress\n",
"- **Cancellation** of individual iterations or the entire run\n",
"- Retrieving models for any iteration or logged metric\n",
"- Specifying AutoML settings as `**kwargs`\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create an Experiment\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import random\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# Choose a name for the run history container in the workspace.\n",
"experiment_name = 'automl-remote-dsvm4'\n",
"project_folder = './sample_projects/automl-remote-dsvm4'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace Name'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data = output, index = ['']).T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"set_diagnostics_collection(send_diagnostics = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a Remote Linux DSVM\n",
"**Note:** If creation fails with a message about Marketplace purchase eligibilty, start creation of a DSVM through the [Azure portal](https://portal.azure.com), and select \"Want to create programmatically\" to enable programmatic creation. Once you've enabled this setting, you can exit the portal without actually creating the DSVM, and creation of the DSVM through the notebook should work.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import DsvmCompute\n",
"\n",
"dsvm_name = 'mydsvm'\n",
"try:\n",
" dsvm_compute = DsvmCompute(ws, dsvm_name)\n",
" print('Found an existing DSVM.')\n",
"except:\n",
" print('Creating a new DSVM.')\n",
" dsvm_config = DsvmCompute.provisioning_configuration(vm_size = \"Standard_D2_v2\")\n",
" dsvm_compute = DsvmCompute.create(ws, name = dsvm_name, provisioning_configuration = dsvm_config)\n",
" dsvm_compute.wait_for_completion(show_output = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Get Data File\n",
"For remote executions you should author a `get_data.py` file containing a `get_data()` function. This file should be in the root directory of the project. You can encapsulate code to read data either from a blob storage or local disk in this file.\n",
"In this example, the `get_data()` function returns data using scikit-learn's [load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) method."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if not os.path.exists(project_folder):\n",
" os.makedirs(project_folder)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile $project_folder/get_data.py\n",
"\n",
"from sklearn import datasets\n",
"from scipy import sparse\n",
"import numpy as np\n",
"\n",
"def get_data():\n",
" \n",
" digits = datasets.load_digits()\n",
" X_train = digits.data[100:,:]\n",
" y_train = digits.target[100:]\n",
"\n",
" return { \"X\" : X_train, \"y\" : y_train }"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure AutoML <a class=\"anchor\" id=\"Instantiate-AutoML-Remote-DSVM\"></a>\n",
"\n",
"You can specify `automl_settings` as `**kwargs` as well. Also note that you can use a `get_data()` function for local excutions too.\n",
"\n",
"**Note:** When using Remote DSVM, you can't pass Numpy arrays directly to the fit method.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>balanced_accuracy</i><br><i>average_precision_score_weighted</i><br><i>precision_score_weighted</i>|\n",
"|**max_time_sec**|Time limit in seconds for each iteration.|\n",
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
"|**n_cross_validations**|Number of cross validation splits.|\n",
"|**concurrent_iterations**|Maximum number of iterations to execute in parallel. This should be less than the number of cores on the DSVM.|"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_settings = {\n",
" \"max_time_sec\": 600,\n",
" \"iterations\": 20,\n",
" \"n_cross_validations\": 5,\n",
" \"primary_metric\": 'AUC_weighted',\n",
" \"preprocess\": False,\n",
" \"concurrent_iterations\": 2,\n",
" \"verbosity\": logging.INFO\n",
"}\n",
"\n",
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" path = project_folder, \n",
" compute_target = dsvm_compute,\n",
" data_script = project_folder + \"/get_data.py\",\n",
" **automl_settings\n",
" )\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Note:** The first run on a new DSVM may take several minutes to prepare the environment."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train the Models\n",
"\n",
"Call the `submit` method on the experiment object and pass the run configuration. For remote runs the execution is asynchronous, so you will see the iterations get populated as they complete. You can interact with the widgets and models even when the experiment is running to retrieve the best model up to that point. Once you are satisfied with the model, you can cancel a particular iteration or the whole run.\n",
"\n",
"In this example, we specify `show_output = False` to suppress console output while the run is in progress."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"remote_run = experiment.submit(automl_config, show_output = False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Explore the Results\n",
"\n",
"#### Loading Executed Runs\n",
"In case you need to load a previously executed run, enable the cell below and replace the `run_id` value."
]
},
{
"cell_type": "raw",
"metadata": {},
"source": [
"remote_run = AutoMLRun(experiment=experiment, run_id = 'AutoML_480d3ed6-fc94-44aa-8f4e-0b945db9d3ef')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Widget for Monitoring Runs\n",
"\n",
"The widget will first report a \"loading\" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.\n",
"\n",
"You can click on a pipeline to see run properties and output logs. Logs are also available on the DSVM under `/tmp/azureml_run/{iterationid}/azureml-logs`\n",
"\n",
"**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.train.widgets import RunDetails\n",
"RunDetails(remote_run).show() "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Wait until the run finishes.\n",
"remote_run.wait_for_completion(show_output = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"#### Retrieve All Child Runs\n",
"You can also use SDK methods to fetch all the child runs and see individual metrics that we log."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"children = list(remote_run.get_children())\n",
"metricslist = {}\n",
"for run in children:\n",
" properties = run.get_properties()\n",
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)} \n",
" metricslist[int(properties['iteration'])] = metrics\n",
"\n",
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
"rundata"
]
},
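{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an illustrative follow-up (a sketch, assuming the primary metric `AUC_weighted` was logged for every completed iteration), you can chart how the metric evolved across child runs using the `rundata` frame built above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: plot the primary metric per iteration.\n",
"# rundata has one column per iteration and one row per logged metric.\n",
"rundata.loc['AUC_weighted'].plot(marker = 'o')\n",
"plt.xlabel('Iteration')\n",
"plt.ylabel('AUC_weighted')\n",
"plt.show()"
]
},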
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Cancelling Runs\n",
"\n",
"You can cancel ongoing remote runs using the `cancel` and `cancel_iteration` functions."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Cancel the ongoing experiment and stop scheduling new iterations.\n",
"# remote_run.cancel()\n",
"\n",
"# Cancel iteration 1 and move onto iteration 2.\n",
"# remote_run.cancel_iteration(1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model\n",
"\n",
"Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = remote_run.get_output()\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Best Model Based on Any Other Metric\n",
"Show the run and the model which has the smallest `log_loss` value:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"lookup_metric = \"log_loss\"\n",
"best_run, fitted_model = remote_run.get_output(metric = lookup_metric)\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Model from a Specific Iteration\n",
"Show the run and the model from the third iteration:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"iteration = 3\n",
"third_run, third_model = remote_run.get_output(iteration = iteration)\n",
"print(third_run)\n",
"print(third_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Test the Best Fitted Model <a class=\"anchor\" id=\"Testing-the-Fitted-Model-Remote-DSVM\"></a>\n",
"\n",
"#### Load Test Data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"digits = datasets.load_digits()\n",
"X_test = digits.data[:10, :]\n",
"y_test = digits.target[:10]\n",
"images = digits.images[:10]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Test Our Best Fitted Model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Randomly select digits and test.\n",
"for index in np.random.choice(len(y_test), 2, replace = False):\n",
" print(index)\n",
" predicted = fitted_model.predict(X_test[index:index + 1])[0]\n",
" label = y_test[index]\n",
" title = \"Label value = %d Predicted value = %d \" % (label, predicted)\n",
" fig = plt.figure(1, figsize=(3,3))\n",
" ax1 = fig.add_axes((0,0,.8,.8))\n",
" ax1.set_title(title)\n",
" plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest')\n",
" plt.show()"
]
}
],
"metadata": {
"authors": [
{
"name": "savitam"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

@@ -1,511 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AutoML 03: Remote Execution using Batch AI\n",
"\n",
"In this example we use the scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for a simple classification problem.\n",
"\n",
"Make sure you have executed the [00.configuration](00.configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you would see\n",
"1. Create an `Experiment` in an existing `Workspace`.\n",
"2. Attach an existing Batch AI compute to a workspace.\n",
"3. Configure AutoML using `AutoMLConfig`.\n",
"4. Train the model using Batch AI.\n",
"5. Explore the results.\n",
"6. Test the best fitted model.\n",
"\n",
"In addition this notebook showcases the following features\n",
"- **Parallel** executions for iterations\n",
"- **Asynchronous** tracking of progress\n",
"- **Cancellation** of individual iterations or the entire run\n",
"- Retrieving models for any iteration or logged metric\n",
"- Specifying AutoML settings as `**kwargs`\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create an Experiment\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import random\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# Choose a name for the run history container in the workspace.\n",
"experiment_name = 'automl-remote-batchai'\n",
"project_folder = './sample_projects/automl-remote-batchai'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace Name'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data = output, index = ['']).T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"set_diagnostics_collection(send_diagnostics = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Batch AI Cluster\n",
"The cluster is created as Machine Learning Compute and will appear under your workspace.\n",
"\n",
"**Note:** The creation of the Batch AI cluster can take over 10 minutes, please be patient.\n",
"\n",
"As with other Azure services, there are limits on certain resources (e.g. Batch AI cluster size) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import BatchAiCompute\n",
"from azureml.core.compute import ComputeTarget\n",
"\n",
"# Choose a name for your cluster.\n",
"batchai_cluster_name = \"mybatchai\"\n",
"\n",
"found = False\n",
"# Check if this compute target already exists in the workspace.\n",
"for ct_name, ct in ws.compute_targets().items():\n",
" print(ct.name, ct.type)\n",
" if (ct.name == batchai_cluster_name and ct.type == 'BatchAI'):\n",
" found = True\n",
" print('Found existing compute target.')\n",
" compute_target = ct\n",
" break\n",
" \n",
"if not found:\n",
" print('Creating a new compute target...')\n",
" provisioning_config = BatchAiCompute.provisioning_configuration(vm_size = \"STANDARD_D2_V2\", # for GPU, use \"STANDARD_NC6\"\n",
" #vm_priority = 'lowpriority', # optional\n",
" autoscale_enabled = True,\n",
" cluster_min_nodes = 1, \n",
" cluster_max_nodes = 4)\n",
"\n",
" # Create the cluster.\n",
" compute_target = ComputeTarget.create(ws, batchai_cluster_name, provisioning_config)\n",
" \n",
" # Can poll for a minimum number of nodes and for a specific timeout.\n",
" # If no min_node_count is provided, it will use the scale settings for the cluster.\n",
" compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20)\n",
" \n",
" # For a more detailed view of current Batch AI cluster status, use the 'status' property."
]
},
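{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, you can inspect the cluster before submitting work. This is a minimal sketch using the `status` property mentioned in the comment above; the exact fields it exposes depend on the SDK version."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: inspect the cluster state before submitting the experiment.\n",
"# The 'status' property is the one referenced in the comment above.\n",
"print(compute_target.provisioning_state)\n",
"print(compute_target.status)"
]
},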
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Get Data File\n",
"For remote executions you should author a `get_data.py` file containing a `get_data()` function. This file should be in the root directory of the project. You can encapsulate code to read data either from a blob storage or local disk in this file.\n",
"In this example, the `get_data()` function returns data using scikit-learn's [load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) method."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if not os.path.exists(project_folder):\n",
" os.makedirs(project_folder)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile $project_folder/get_data.py\n",
"\n",
"from sklearn import datasets\n",
"from scipy import sparse\n",
"import numpy as np\n",
"\n",
"def get_data():\n",
" \n",
" digits = datasets.load_digits()\n",
" X_train = digits.data\n",
" y_train = digits.target\n",
"\n",
" return { \"X\" : X_train, \"y\" : y_train }"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiate AutoML <a class=\"anchor\" id=\"Instatiate-AutoML-Remote-DSVM\"></a>\n",
"\n",
"You can specify `automl_settings` as `**kwargs` as well. Also note that you can use a `get_data()` function for local excutions too.\n",
"\n",
"**Note:** When using Batch AI, you can't pass Numpy arrays directly to the fit method.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>balanced_accuracy</i><br><i>average_precision_score_weighted</i><br><i>precision_score_weighted</i>|\n",
"|**max_time_sec**|Time limit in seconds for each iteration.|\n",
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
"|**n_cross_validations**|Number of cross validation splits.|\n",
"|**concurrent_iterations**|Maximum number of iterations that would be executed in parallel. This should be less than the number of cores on the DSVM.|"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_settings = {\n",
" \"max_time_sec\": 120,\n",
" \"iterations\": 20,\n",
" \"n_cross_validations\": 5,\n",
" \"primary_metric\": 'AUC_weighted',\n",
" \"preprocess\": False,\n",
" \"concurrent_iterations\": 5,\n",
" \"verbosity\": logging.INFO\n",
"}\n",
"\n",
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" path = project_folder,\n",
" compute_target = compute_target,\n",
" data_script = project_folder + \"/get_data.py\",\n",
" **automl_settings\n",
" )\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train the Models\n",
"\n",
"Call the `submit` method on the experiment object and pass the run configuration. For remote runs the execution is asynchronous, so you will see the iterations get populated as they complete. You can interact with the widgets and models even when the experiment is running to retrieve the best model up to that point. Once you are satisfied with the model, you can cancel a particular iteration or the whole run.\n",
"In this example, we specify `show_output = False` to suppress console output while the run is in progress."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"remote_run = experiment.submit(automl_config, show_output = False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Explore the Results\n",
"\n",
"#### Loading executed runs\n",
"In case you need to load a previously executed run, enable the cell below and replace the `run_id` value."
]
},
{
"cell_type": "raw",
"metadata": {},
"source": [
"remote_run = AutoMLRun(experiment = experiment, run_id = 'AutoML_5db13491-c92a-4f1d-b622-8ab8d973a058')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Widget for Monitoring Runs\n",
"\n",
"The widget will first report a \"loading\" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.\n",
"\n",
"You can click on a pipeline to see run properties and output logs. Logs are also available on the DSVM under `/tmp/azureml_run/{iterationid}/azureml-logs`\n",
"\n",
"**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"remote_run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.train.widgets import RunDetails\n",
"RunDetails(remote_run).show() "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Wait until the run finishes.\n",
"remote_run.wait_for_completion(show_output = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"#### Retrieve All Child Runs\n",
"You can also use SDK methods to fetch all the child runs and see individual metrics that we log."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"children = list(remote_run.get_children())\n",
"metricslist = {}\n",
"for run in children:\n",
" properties = run.get_properties()\n",
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}\n",
" metricslist[int(properties['iteration'])] = metrics\n",
"\n",
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
"rundata"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Cancelling Runs\n",
"\n",
"You can cancel ongoing remote runs using the `cancel` and `cancel_iteration` functions."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Cancel the ongoing experiment and stop scheduling new iterations.\n",
"# remote_run.cancel()\n",
"\n",
"# Cancel iteration 1 and move onto iteration 2.\n",
"# remote_run.cancel_iteration(1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model\n",
"\n",
"Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = remote_run.get_output()\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Best Model Based on Any Other Metric\n",
"Show the run and the model which has the smallest `log_loss` value:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"lookup_metric = \"log_loss\"\n",
"best_run, fitted_model = remote_run.get_output(metric = lookup_metric)\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Model from a Specific Iteration\n",
"Show the run and the model from the third iteration:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"iteration = 3\n",
"third_run, third_model = remote_run.get_output(iteration=iteration)\n",
"print(third_run)\n",
"print(third_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Testing the Fitted Model <a class=\"anchor\" id=\"Testing-the-Fitted-Model-Remote-DSVM\"></a>\n",
"\n",
"#### Load Test Data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"digits = datasets.load_digits()\n",
"X_test = digits.data[:10, :]\n",
"y_test = digits.target[:10]\n",
"images = digits.images[:10]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Testing Our Best Fitted Model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Randomly select digits and test.\n",
"for index in np.random.choice(len(y_test), 2, replace = False):\n",
" print(index)\n",
" predicted = fitted_model.predict(X_test[index:index + 1])[0]\n",
" label = y_test[index]\n",
" title = \"Label value = %d Predicted value = %d \" % (label, predicted)\n",
" fig = plt.figure(1, figsize=(3,3))\n",
" ax1 = fig.add_axes((0,0,.8,.8))\n",
" ax1.set_title(title)\n",
" plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest')\n",
" plt.show()"
]
}
],
"metadata": {
"authors": [
{
"name": "savitam"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

@@ -1,491 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Auto ML 04: Remote Execution with Text Data from Azure Blob Storage\n",
"\n",
"In this example we use the [Burning Man 2016 dataset](https://innovate.burningman.org/datasets-page/) to showcase how you can use AutoML to handle text data from Azure Blob Storage.\n",
"\n",
"Make sure you have executed the [00.configuration](00.configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you will learn how to:\n",
"1. Create an `Experiment` in an existing `Workspace`.\n",
"2. Attach an existing DSVM to a workspace.\n",
"3. Configure AutoML using `AutoMLConfig`.\n",
"4. Train the model using the DSVM.\n",
"5. Explore the results.\n",
"6. Test the best fitted model.\n",
"\n",
"In addition this notebook showcases the following features\n",
"- **Parallel** executions for iterations\n",
"- **Asynchronous** tracking of progress\n",
"- **Cancellation** of individual iterations or the entire run\n",
"- Retrieving models for any iteration or logged metric\n",
"- Specifying AutoML settings as `**kwargs`\n",
"- Handling **text** data using the `preprocess` flag\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create an Experiment\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import random\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# Choose a name for the run history container in the workspace.\n",
"experiment_name = 'automl-remote-dsvm-blobstore'\n",
"project_folder = './sample_projects/automl-remote-dsvm-blobstore'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data=output, index=['']).T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"set_diagnostics_collection(send_diagnostics = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Attach a Remote Linux DSVM\n",
"To use a remote Docker compute target:\n",
"1. Create a Linux DSVM in Azure, following these [quick instructions](https://docs.microsoft.com/en-us/azure/machine-learning/desktop-workbench/how-to-create-dsvm-hdi). Make sure you use the Ubuntu flavor (not CentOS). Make sure that disk space is available under `/tmp` because AutoML creates files under `/tmp/azureml_run`s. The DSVM should have more cores than the number of parallel runs that you plan to enable. It should also have at least 4GB per core.\n",
"2. Enter the IP address, user name and password below.\n",
"\n",
"**Note:** By default, SSH runs on port 22 and you don't need to change the port number below. If you've configured SSH to use a different port, change `dsvm_ssh_port` accordinglyaddress. [Read more](https://render.githubusercontent.com/documentation/sdk/ssh-issue.md) on changing SSH ports for security reasons."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import RemoteCompute\n",
"import time\n",
"\n",
"# Add your VM information below\n",
"# If a compute with the specified compute_name already exists, it will be used and the dsvm_ip_addr, dsvm_ssh_port, \n",
"# dsvm_username and dsvm_password will be ignored.\n",
"compute_name = 'mydsvm'\n",
"dsvm_ip_addr = '<<ip_addr>>'\n",
"dsvm_ssh_port = 22\n",
"dsvm_username = '<<username>>'\n",
"dsvm_password = '<<password>>'\n",
"\n",
"if compute_name in ws.compute_targets():\n",
" print('Using existing compute.')\n",
" dsvm_compute = ws.compute_targets()[compute_name]\n",
"else:\n",
" RemoteCompute.attach(workspace=ws, name=compute_name, address=dsvm_ip_addr, username=dsvm_username, password=dsvm_password, ssh_port=dsvm_ssh_port)\n",
"\n",
" while ws.compute_targets()[compute_name].provisioning_state == 'Creating':\n",
" time.sleep(1)\n",
"\n",
" dsvm_compute = ws.compute_targets()[compute_name]\n",
" \n",
" if dsvm_compute.provisioning_state == 'Failed':\n",
" print('Attached failed.')\n",
" print(dsvm_compute.provisioning_errors)\n",
" dsvm_compute.delete()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Get Data File\n",
"For remote executions you should author a `get_data.py` file containing a `get_data()` function. This file should be in the root directory of the project. You can encapsulate code to read data either from a blob storage or local disk in this file.\n",
"In this example, the `get_data()` function returns a [dictionary](README.md#getdata)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if not os.path.exists(project_folder):\n",
" os.makedirs(project_folder)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile $project_folder/get_data.py\n",
"\n",
"import pandas as pd\n",
"from sklearn.model_selection import train_test_split\n",
"from sklearn.preprocessing import LabelEncoder\n",
"\n",
"def get_data():\n",
" # Load Burning Man 2016 data.\n",
" df = pd.read_csv(\"https://automldemods.blob.core.windows.net/datasets/PlayaEvents2016,_1.6MB,_3.4k-rows.cleaned.2.tsv\",\n",
" delimiter=\"\\t\", quotechar='\"')\n",
" # Get integer labels.\n",
" le = LabelEncoder()\n",
" le.fit(df[\"Label\"].values)\n",
" y = le.transform(df[\"Label\"].values)\n",
" X = df.drop([\"Label\"], axis=1)\n",
"\n",
" X_train, _, y_train, _ = train_test_split(X, y, test_size = 0.1, random_state = 42)\n",
"\n",
" return { \"X\" : X_train, \"y\" : y_train }"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### View data\n",
"\n",
"You can execute the `get_data()` function locally to view the training data."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%run $project_folder/get_data.py\n",
"data_dict = get_data()\n",
"df = data_dict[\"X\"]\n",
"y = data_dict[\"y\"]\n",
"pd.set_option('display.max_colwidth', 15)\n",
"df['Label'] = pd.Series(y, index=df.index)\n",
"df.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure AutoML <a class=\"anchor\" id=\"Instatiate-AutoML-Remote-DSVM\"></a>\n",
"\n",
"You can specify `automl_settings` as `**kwargs` as well. Also note that you can use a `get_data()` function for local excutions too.\n",
"\n",
"**Note:** When using Remote DSVM, you can't pass Numpy arrays directly to the fit method.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>balanced_accuracy</i><br><i>average_precision_score_weighted</i><br><i>precision_score_weighted</i>|\n",
"|**max_time_sec**|Time limit in seconds for each iteration.|\n",
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
"|**n_cross_validations**|Number of cross validation splits.|\n",
"|**concurrent_iterations**|Maximum number of iterations that would be executed in parallel. This should be less than the number of cores on the DSVM.|\n",
"|**preprocess**|Setting this to *True* enables AutoML to perform preprocessing on the input to handle *missing data*, and to perform some common *feature extraction*.|\n",
"|**max_cores_per_iteration**|Indicates how many cores on the compute target would be used to train a single pipeline.<br>Default is *1*; you can set it to *-1* to use all cores.|"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_settings = {\n",
" \"max_time_sec\": 3600,\n",
" \"iterations\": 10,\n",
" \"n_cross_validations\": 5,\n",
" \"primary_metric\": 'AUC_weighted',\n",
" \"preprocess\": True,\n",
" \"max_cores_per_iteration\": 2\n",
"}\n",
"\n",
"automl_config = AutoMLConfig(task = 'classification',\n",
" path = project_folder,\n",
" compute_target = dsvm_compute,\n",
" data_script = project_folder + \"/get_data.py\",\n",
" **automl_settings\n",
" )\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train the Models <a class=\"anchor\" id=\"Training-the-model-Remote-DSVM\"></a>\n",
"\n",
"Call the `submit` method on the experiment object and pass the run configuration. For remote runs the execution is asynchronous, so you will see the iterations get populated as they complete. You can interact with the widgets and models even when the experiment is running to retrieve the best model up to that point. Once you are satisfied with the model, you can cancel a particular iteration or the whole run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"remote_run = experiment.submit(automl_config)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exploring the Results <a class=\"anchor\" id=\"Exploring-the-Results-Remote-DSVM\"></a>\n",
"#### Widget for Monitoring Runs\n",
"\n",
"The widget will first report a \"loading\" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.\n",
"\n",
"You can click on a pipeline to see run properties and output logs. Logs are also available on the DSVM under `/tmp/azureml_run/{iterationid}/azureml-logs`\n",
"\n",
"**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.train.widgets import RunDetails\n",
"RunDetails(remote_run).show() "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"#### Retrieve All Child Runs\n",
"You can also use SDK methods to fetch all the child runs and see individual metrics that we log. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"children = list(remote_run.get_children())\n",
"metricslist = {}\n",
"for run in children:\n",
" properties = run.get_properties()\n",
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}\n",
" metricslist[int(properties['iteration'])] = metrics\n",
"\n",
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
"rundata"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Cancelling Runs\n",
"You can cancel ongoing remote runs using the `cancel` and `cancel_iteration` functions."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Cancel the ongoing experiment and stop scheduling new iterations.\n",
"remote_run.cancel()\n",
"\n",
"# Cancel iteration 1 and move onto iteration 2.\n",
"# remote_run.cancel_iteration(1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model\n",
"\n",
"Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = remote_run.get_output()\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Best Model Based on Any Other Metric\n",
"Show the run and the model which has the smallest `accuracy` value:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# lookup_metric = \"accuracy\"\n",
"# best_run, fitted_model = remote_run.get_output(metric = lookup_metric)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Model from a Specific Iteration"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"iteration = 0\n",
"zero_run, zero_model = remote_run.get_output(iteration = iteration)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Testing the Fitted Model <a class=\"anchor\" id=\"Testing-the-Fitted-Model-Remote-DSVM\"></a>\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import sklearn\n",
"from sklearn.model_selection import train_test_split\n",
"from sklearn.preprocessing import LabelEncoder\n",
"from pandas_ml import ConfusionMatrix\n",
"\n",
"df = pd.read_csv(\"https://automldemods.blob.core.windows.net/datasets/PlayaEvents2016,_1.6MB,_3.4k-rows.cleaned.2.tsv\",\n",
" delimiter=\"\\t\", quotechar='\"')\n",
"\n",
"# get integer labels\n",
"le = LabelEncoder()\n",
"le.fit(df[\"Label\"].values)\n",
"y = le.transform(df[\"Label\"].values)\n",
"X = df.drop([\"Label\"], axis=1)\n",
"\n",
"_, X_test, _, y_test = train_test_split(X, y, test_size=0.1, random_state=42)\n",
"\n",
"\n",
"ypred = fitted_model.predict(X_test.values)\n",
"\n",
"\n",
"ypred_strings = le.inverse_transform(ypred)\n",
"ytest_strings = le.inverse_transform(y_test)\n",
"\n",
"cm = ConfusionMatrix(ytest_strings, ypred_strings)\n",
"\n",
"print(cm)\n",
"\n",
"cm.plot()"
]
}
],
"metadata": {
"authors": [
{
"name": "savitam"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

@@ -1,381 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AutoML 05: Blacklisting Models, Early Termination, and Handling Missing Data\n",
"\n",
"In this example we use the scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for handling missing values in data. We also provide a stopping metric indicating a target for the primary metrics so that AutoML can terminate the run without necessarly going through all the iterations. Finally, if you want to avoid a certain pipeline, we allow you to specify a blacklist of algorithms that AutoML will ignore for this run.\n",
"\n",
"Make sure you have executed the [00.configuration](00.configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you will learn how to:\n",
"1. Create an `Experiment` in an existing `Workspace`.\n",
"2. Configure AutoML using `AutoMLConfig`.\n",
"4. Train the model.\n",
"5. Explore the results.\n",
"6. Test the best fitted model.\n",
"\n",
"In addition this notebook showcases the following features\n",
"- **Blacklisting** certain pipelines\n",
"- Specifying **target metrics** to indicate stopping criteria\n",
"- Handling **missing data** in the input\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create an Experiment\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import random\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# Choose a name for the experiment.\n",
"experiment_name = 'automl-local-missing-data'\n",
"project_folder = './sample_projects/automl-local-missing-data'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data=output, index=['']).T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"set_diagnostics_collection(send_diagnostics = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Creating missing data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from scipy import sparse\n",
"\n",
"digits = datasets.load_digits()\n",
"X_train = digits.data[10:,:]\n",
"y_train = digits.target[10:]\n",
"\n",
"# Add missing values in 75% of the lines.\n",
"missing_rate = 0.75\n",
"n_missing_samples = int(np.floor(X_train.shape[0] * missing_rate))\n",
"missing_samples = np.hstack((np.zeros(X_train.shape[0] - n_missing_samples, dtype=np.bool), np.ones(n_missing_samples, dtype=np.bool)))\n",
"rng = np.random.RandomState(0)\n",
"rng.shuffle(missing_samples)\n",
"missing_features = rng.randint(0, X_train.shape[1], n_missing_samples)\n",
"X_train[np.where(missing_samples)[0], missing_features] = np.nan"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df = pd.DataFrame(data = X_train)\n",
"df['Label'] = pd.Series(y_train, index=df.index)\n",
"df.head()"
]
},
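{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick illustrative check (not part of the original flow), you can confirm that roughly 75% of the rows now contain a missing value:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative check: count rows with at least one NaN after the corruption step.\n",
"rows_with_nan = np.isnan(X_train).any(axis = 1).sum()\n",
"print('%d of %d rows contain a missing value' % (rows_with_nan, X_train.shape[0]))"
]
},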
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure AutoML\n",
"\n",
"Instantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment. This includes setting `exit_score`, which should cause the run to complete before the `iterations` count is reached.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**task**|classification or regression|\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>balanced_accuracy</i><br><i>average_precision_score_weighted</i><br><i>precision_score_weighted</i>|\n",
"|**max_time_sec**|Time limit in seconds for each iteration.|\n",
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
"|**n_cross_validations**|Number of cross validation splits.|\n",
"|**preprocess**|Setting this to *True* enables AutoML to perform preprocessing on the input to handle *missing data*, and to perform some common *feature extraction*.|\n",
"|**exit_score**|*double* value indicating the target for *primary_metric*. <br>Once the target is surpassed the run terminates.|\n",
"|**blacklist_algos**|*List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run.<br><br> Allowed values for **Classification**<br><i>LogisticRegression</i><br><i>SGDClassifierWrapper</i><br><i>NBWrapper</i><br><i>BernoulliNB</i><br><i>SVCWrapper</i><br><i>LinearSVMWrapper</i><br><i>KNeighborsClassifier</i><br><i>DecisionTreeClassifier</i><br><i>RandomForestClassifier</i><br><i>ExtraTreesClassifier</i><br><i>LightGBMClassifier</i><br><br>Allowed values for **Regression**<br><i>ElasticNet<i><br><i>GradientBoostingRegressor<i><br><i>DecisionTreeRegressor<i><br><i>KNeighborsRegressor<i><br><i>LassoLars<i><br><i>SGDRegressor<i><br><i>RandomForestRegressor<i><br><i>ExtraTreesRegressor<i>|\n",
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
"|**y**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers.|\n",
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" primary_metric = 'AUC_weighted',\n",
" max_time_sec = 3600,\n",
" iterations = 20,\n",
" n_cross_validations = 5,\n",
" preprocess = True,\n",
" exit_score = 0.9984,\n",
" blacklist_algos = ['KNeighborsClassifier','LinearSVMWrapper'],\n",
" verbosity = logging.INFO,\n",
" X = X_train, \n",
" y = y_train,\n",
" path = project_folder)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train the Models\n",
"\n",
"Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n",
"In this example, we specify `show_output = True` to print currently running iterations to the console."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = experiment.submit(automl_config, show_output = True)"
]
},
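{
"cell_type": "markdown",
"metadata": {},
"source": [
"Because `exit_score` can end the run early, it is worth confirming how the run finished. Local runs are synchronous, so by this point the run has completed; a minimal sketch:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: the run may stop before all 20 iterations if exit_score is surpassed.\n",
"print(local_run.get_status())\n",
"print('Completed iterations:', len(list(local_run.get_children())))"
]
},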
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Explore the Results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Widget for Monitoring Runs\n",
"\n",
"The widget will first report a \"loading\" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.\n",
"\n",
"**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.train.widgets import RunDetails\n",
"RunDetails(local_run).show() "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"#### Retrieve All Child Runs\n",
"You can also use SDK methods to fetch all the child runs and see individual metrics that we log."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"children = list(local_run.get_children())\n",
"metricslist = {}\n",
"for run in children:\n",
" properties = run.get_properties()\n",
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}\n",
" metricslist[int(properties['iteration'])] = metrics\n",
"\n",
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
"rundata"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model\n",
"\n",
"Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = local_run.get_output()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Best Model Based on Any Other Metric\n",
"Show the run and the model which has the smallest `accuracy` value:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# lookup_metric = \"accuracy\"\n",
"# best_run, fitted_model = local_run.get_output(metric = lookup_metric)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Model from a Specific Iteration\n",
"Show the run and the model from the third iteration:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# iteration = 3\n",
"# best_run, fitted_model = local_run.get_output(iteration = iteration)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Testing the best Fitted Model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"digits = datasets.load_digits()\n",
"X_test = digits.data[:10, :]\n",
"y_test = digits.target[:10]\n",
"images = digits.images[:10]\n",
"\n",
"# Randomly select digits and test.\n",
"for index in np.random.choice(len(y_test), 2, replace = False):\n",
" print(index)\n",
" predicted = fitted_model.predict(X_test[index:index + 1])[0]\n",
" label = y_test[index]\n",
" title = \"Label value = %d Predicted value = %d \" % (label, predicted)\n",
" fig = plt.figure(1, figsize=(3,3))\n",
" ax1 = fig.add_axes((0,0,.8,.8))\n",
" ax1.set_title(title)\n",
" plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest')\n",
" plt.show()\n"
]
}
],
"metadata": {
"authors": [
{
"name": "savitam"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

@@ -1,384 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AutoML 06: Custom CV Splits and Handling Sparse Data\n",
"\n",
"In this example we use the scikit-learn's [20newsgroup](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.fetch_20newsgroups.html) to showcase how you can use AutoML for handling sparse data and how to specify custom cross validations splits.\n",
"\n",
"Make sure you have executed the [00.configuration](00.configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you will learn how to:\n",
"1. Create an `Experiment` in an existing `Workspace`.\n",
"2. Configure AutoML using `AutoMLConfig`.\n",
"4. Train the model.\n",
"5. Explore the results.\n",
"6. Test the best fitted model.\n",
"\n",
"In addition this notebook showcases the following features\n",
"- **Custom CV** splits \n",
"- Handling **sparse data** in the input"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create an Experiment\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import random\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# choose a name for the experiment\n",
"experiment_name = 'automl-local-missing-data'\n",
"# project folder\n",
"project_folder = './sample_projects/automl-local-missing-data'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data=output, index=['']).T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"set_diagnostics_collection(send_diagnostics = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Creating Sparse Data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.datasets import fetch_20newsgroups\n",
"from sklearn.feature_extraction.text import HashingVectorizer\n",
"from sklearn.model_selection import train_test_split\n",
"\n",
"remove = ('headers', 'footers', 'quotes')\n",
"categories = [\n",
" 'alt.atheism',\n",
" 'talk.religion.misc',\n",
" 'comp.graphics',\n",
" 'sci.space',\n",
"]\n",
"data_train = fetch_20newsgroups(subset = 'train', categories = categories,\n",
" shuffle = True, random_state = 42,\n",
" remove = remove)\n",
"\n",
"X_train, X_valid, y_train, y_valid = train_test_split(data_train.data, data_train.target, test_size = 0.33, random_state = 42)\n",
"\n",
"\n",
"vectorizer = HashingVectorizer(stop_words = 'english', alternate_sign = False,\n",
" n_features = 2**16)\n",
"X_train = vectorizer.transform(X_train)\n",
"X_valid = vectorizer.transform(X_valid)\n",
"\n",
"summary_df = pd.DataFrame(index = ['No of Samples', 'No of Features'])\n",
"summary_df['Train Set'] = [X_train.shape[0], X_train.shape[1]]\n",
"summary_df['Validation Set'] = [X_valid.shape[0], X_valid.shape[1]]\n",
"summary_df"
]
},
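{
"cell_type": "markdown",
"metadata": {},
"source": [
"A brief illustrative check that the vectorizer output is a SciPy sparse matrix, which is why `preprocess` must remain *False* for this run (see the note in the table below):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from scipy import sparse\n",
"\n",
"# Illustrative check: HashingVectorizer returns a sparse matrix, and AutoML's\n",
"# preprocess flag cannot be True for sparse input.\n",
"print(type(X_train))\n",
"print('Sparse input:', sparse.issparse(X_train))"
]
},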
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure AutoML\n",
"\n",
"Instantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**task**|classification or regression|\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>balanced_accuracy</i><br><i>average_precision_score_weighted</i><br><i>precision_score_weighted</i>|\n",
"|**max_time_sec**|Time limit in seconds for each iteration.|\n",
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
"|**preprocess**|Setting this to *True* enables AutoML to perform preprocessing on the input to handle *missing data*, and to perform some common *feature extraction*.<br>**Note:** If input data is sparse, you cannot use *True*.|\n",
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
"|**y**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers.|\n",
"|**X_valid**|(sparse) array-like, shape = [n_samples, n_features] for the custom validation set.|\n",
"|**y_valid**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification for the custom validation set.|\n",
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" primary_metric = 'AUC_weighted',\n",
" max_time_sec = 3600,\n",
" iterations = 5,\n",
" preprocess = False,\n",
" verbosity = logging.INFO,\n",
" X = X_train, \n",
" y = y_train,\n",
" X_valid = X_valid, \n",
" y_valid = y_valid, \n",
" path = project_folder)"
]
},
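{
"cell_type": "markdown",
"metadata": {},
"source": [
"This notebook's title also mentions custom CV splits. As a hedged sketch (the `cv_splits_indices` parameter of `AutoMLConfig` is assumed here; it takes a list of `[train_indices, validation_indices]` pairs), you could pass explicit splits instead of a single validation set:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch (parameter name assumed): custom CV splits via cv_splits_indices.\n",
"# Each entry is a [train_indices, validation_indices] pair of integer arrays.\n",
"# from sklearn.model_selection import KFold\n",
"# kf = KFold(n_splits = 5)\n",
"# cv_splits = [[train_idx, valid_idx] for train_idx, valid_idx in kf.split(X_train)]\n",
"# automl_config = AutoMLConfig(task = 'classification', X = X_train, y = y_train,\n",
"#                              cv_splits_indices = cv_splits, path = project_folder)"
]
},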
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train the Models\n",
"\n",
"Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n",
"In this example, we specify `show_output = True` to print currently running iterations to the console."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = experiment.submit(automl_config, show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Explore the Results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Widget for Monitoring Runs\n",
"\n",
"The widget will first report a \"loading\" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.\n",
"\n",
"**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.train.widgets import RunDetails\n",
"RunDetails(local_run).show() "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"#### Retrieve All Child Runs\n",
"You can also use SDK methods to fetch all the child runs and see individual metrics that we log."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"children = list(local_run.get_children())\n",
"metricslist = {}\n",
"for run in children:\n",
" properties = run.get_properties()\n",
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}\n",
" metricslist[int(properties['iteration'])] = metrics\n",
" \n",
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
"rundata"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model\n",
"\n",
"Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = local_run.get_output()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Best Model Based on Any Other Metric\n",
"Show the run and the model which has the smallest `accuracy` value:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# lookup_metric = \"accuracy\"\n",
"# best_run, fitted_model = local_run.get_output(metric = lookup_metric)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Model from a Specific Iteration\n",
"Show the run and the model from the third iteration:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# iteration = 3\n",
"# best_run, fitted_model = local_run.get_output(iteration = iteration)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Testing the Best Fitted Model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Load test data.\n",
"from pandas_ml import ConfusionMatrix\n",
"\n",
"data_test = fetch_20newsgroups(subset = 'test', categories = categories,\n",
" shuffle = True, random_state = 42,\n",
" remove = remove)\n",
"\n",
"X_test = vectorizer.transform(data_test.data)\n",
"y_test = data_test.target\n",
"\n",
"# Test our best pipeline.\n",
"\n",
"y_pred = fitted_model.predict(X_test)\n",
"y_pred_strings = [data_test.target_names[i] for i in y_pred]\n",
"y_test_strings = [data_test.target_names[i] for i in y_test]\n",
"\n",
"cm = ConfusionMatrix(y_test_strings, y_pred_strings)\n",
"print(cm)\n",
"cm.plot()"
]
}
],
"metadata": {
"authors": [
{
"name": "savitam"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,333 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AutoML 07: Exploring Previous Runs\n",
"\n",
"In this example we present some examples on navigating previously executed runs. We also show how you can download a fitted model for any previous run.\n",
"\n",
"Make sure you have executed the [00.configuration](00.configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you will learn how to:\n",
"1. List all experiments in a workspace.\n",
"2. List all AutoML runs in an experiment.\n",
"3. Get details for an AutoML run, including settings, run widget, and all metrics.\n",
"4. Download a fitted pipeline for any iteration.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# List all AutoML Experiments in a Workspace"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import random\n",
"import re\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.run import Run\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"experiment_list = Experiment.list(workspace=ws)\n",
"\n",
"summary_df = pd.DataFrame(index = ['No of Runs'])\n",
"pattern = re.compile('^AutoML_[^_]*$')\n",
"for experiment in experiment_list:\n",
" all_runs = list(experiment.get_runs())\n",
" automl_runs = []\n",
" for run in all_runs:\n",
" if(pattern.match(run.id)):\n",
" automl_runs.append(run) \n",
" summary_df[experiment.name] = [len(automl_runs)]\n",
" \n",
"pd.set_option('display.max_colwidth', -1)\n",
"summary_df.T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"set_diagnostics_collection(send_diagnostics = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# List AutoML runs for an experiment\n",
"Set `experiment_name` to any experiment name from the result of the Experiment.list cell to load the AutoML runs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"experiment_name = 'automl-local-classification' # Replace this with any project name from previous cell.\n",
"\n",
"proj = ws.experiments()[experiment_name]\n",
"summary_df = pd.DataFrame(index = ['Type', 'Status', 'Primary Metric', 'Iterations', 'Compute', 'Name'])\n",
"pattern = re.compile('^AutoML_[^_]*$')\n",
"all_runs = list(proj.get_runs(properties={'azureml.runsource': 'automl'}))\n",
"for run in all_runs:\n",
" if(pattern.match(run.id)):\n",
" properties = run.get_properties()\n",
" tags = run.get_tags()\n",
" amlsettings = eval(properties['RawAMLSettingsString'])\n",
" if 'iterations' in tags:\n",
" iterations = tags['iterations']\n",
" else:\n",
" iterations = properties['num_iterations']\n",
" summary_df[run.id] = [amlsettings['task_type'], run.get_details()['status'], properties['primary_metric'], iterations, properties['target'], amlsettings['name']]\n",
" \n",
"from IPython.display import HTML\n",
"projname_html = HTML(\"<h3>{}</h3>\".format(proj.name))\n",
"\n",
"from IPython.display import display\n",
"display(projname_html)\n",
"display(summary_df.T)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Get details for an AutoML run\n",
"\n",
"Copy the project name and run id from the previous cell output to find more details on a particular run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run_id = '' # Filling your own run_id from above run ids\n",
"assert (run_id in summary_df.keys()),\"Run id not found! Please set run id to a value from above run ids\"\n",
"\n",
"from azureml.train.widgets import RunDetails\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"ml_run = AutoMLRun(experiment = experiment, run_id = run_id)\n",
"\n",
"summary_df = pd.DataFrame(index = ['Type', 'Status', 'Primary Metric', 'Iterations', 'Compute', 'Name', 'Start Time', 'End Time'])\n",
"properties = ml_run.get_properties()\n",
"tags = ml_run.get_tags()\n",
"status = ml_run.get_details()\n",
"amlsettings = eval(properties['RawAMLSettingsString'])\n",
"if 'iterations' in tags:\n",
" iterations = tags['iterations']\n",
"else:\n",
" iterations = properties['num_iterations']\n",
"start_time = None\n",
"if 'startTimeUtc' in status:\n",
" start_time = status['startTimeUtc']\n",
"end_time = None\n",
"if 'endTimeUtc' in status:\n",
" end_time = status['endTimeUtc']\n",
"summary_df[ml_run.id] = [amlsettings['task_type'], status['status'], properties['primary_metric'], iterations, properties['target'], amlsettings['name'], start_time, end_time]\n",
"display(HTML('<h3>Runtime Details</h3>'))\n",
"display(summary_df)\n",
"\n",
"#settings_df = pd.DataFrame(data = amlsettings, index = [''])\n",
"display(HTML('<h3>AutoML Settings</h3>'))\n",
"display(amlsettings)\n",
"\n",
"display(HTML('<h3>Iterations</h3>'))\n",
"RunDetails(ml_run).show() \n",
"\n",
"children = list(ml_run.get_children())\n",
"metricslist = {}\n",
"for run in children:\n",
" properties = run.get_properties()\n",
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}\n",
" metricslist[int(properties['iteration'])] = metrics\n",
"\n",
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
"display(HTML('<h3>Metrics</h3>'))\n",
"display(rundata)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Download fitted models"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Download the Best Model for Any Given Metric"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"metric = 'AUC_weighted' # Replace with a metric name.\n",
"best_run, fitted_model = ml_run.get_output(metric = metric)\n",
"fitted_model"
]
},
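{
"cell_type": "markdown",
"metadata": {},
"source": [
"To keep a local copy of the downloaded model, you can serialize it with `joblib`, the same serializer used by the deployment scoring scripts in these samples. This is a sketch; the file name is hypothetical."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.externals import joblib\n",
"\n",
"# Persist the fitted model locally; 'best_model.pkl' is a hypothetical file name.\n",
"joblib.dump(fitted_model, 'best_model.pkl')"
]
},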
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Download the Model for Any Given Iteration"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"iteration = 4 # Replace with an iteration number.\n",
"best_run, fitted_model = ml_run.get_output(iteration = iteration)\n",
"fitted_model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Register fitted model for deployment\n",
"If neither `metric` nor `iteration` are specified in the `register_model` call, the iteration with the best primary metric is registered."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"description = 'AutoML Model'\n",
"tags = None\n",
"ml_run.register_model(description = description, tags = tags)\n",
"ml_run.model_id # Use this id to deploy the model as a web service in Azure."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Register the Best Model for Any Given Metric"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"metric = 'AUC_weighted' # Replace with a metric name.\n",
"description = 'AutoML Model'\n",
"tags = None\n",
"ml_run.register_model(description = description, tags = tags, metric = metric)\n",
"print(ml_run.model_id) # Use this id to deploy the model as a web service in Azure."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Register the Model for Any Given Iteration"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"iteration = 4 # Replace with an iteration number.\n",
"description = 'AutoML Model'\n",
"tags = None\n",
"ml_run.register_model(description = description, tags = tags, iteration = iteration)\n",
"print(ml_run.model_id) # Use this id to deploy the model as a web service in Azure."
]
}
],
"metadata": {
"authors": [
{
"name": "savitam"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,550 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AutoML 08: Remote Execution with DataStore\n",
"\n",
"This sample accesses a data file on a remote DSVM through DataStore. Advantages of using data store are:\n",
"1. DataStore secures the access details.\n",
"2. DataStore supports read, write to blob and file store\n",
"3. AutoML natively supports copying data from DataStore to DSVM\n",
"\n",
"Make sure you have executed the [00.configuration](00.configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you would see\n",
"1. Storing data in DataStore.\n",
"2. get_data returning data from DataStore.\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Experiment\n",
"\n",
"As part of the setup you have already created a <b>Workspace</b>. For AutoML you would need to create an <b>Experiment</b>. An <b>Experiment</b> is a named object in a <b>Workspace</b>, which is used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import random\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# choose a name for experiment\n",
"experiment_name = 'automl-remote-datastore-file'\n",
"# project folder\n",
"project_folder = './sample_projects/automl-remote-dsvm-file'\n",
"\n",
"experiment=Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data=output, index=['']).T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"set_diagnostics_collection(send_diagnostics=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a Remote Linux DSVM\n",
"Note: If creation fails with a message about Marketplace purchase eligibilty, go to portal.azure.com, start creating DSVM there, and select \"Want to create programmatically\" to enable programmatic creation. Once you've enabled it, you can exit without actually creating VM.\n",
"\n",
"**Note**: By default SSH runs on port 22 and you don't need to specify it. But if for security reasons you can switch to a different port (such as 5022), you can append the port number to the address. [Read more](https://render.githubusercontent.com/documentation/sdk/ssh-issue.md) on this."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import DsvmCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"compute_target_name = 'mydsvm'\n",
"\n",
"try:\n",
" dsvm_compute = DsvmCompute(workspace=ws, name=compute_target_name)\n",
" print('found existing:', dsvm_compute.name)\n",
"except ComputeTargetException:\n",
" dsvm_config = DsvmCompute.provisioning_configuration(vm_size=\"Standard_D2_v2\")\n",
" dsvm_compute = DsvmCompute.create(ws, name=compute_target_name, provisioning_configuration=dsvm_config)\n",
" dsvm_compute.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Copy data file to local\n",
"\n",
"Download the data file.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"mkdir data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df = pd.read_csv(\"https://automldemods.blob.core.windows.net/datasets/PlayaEvents2016,_1.6MB,_3.4k-rows.cleaned.2.tsv\",\n",
" delimiter=\"\\t\", quotechar='\"')\n",
"df.to_csv(\"data/data.tsv\", sep=\"\\t\", quotechar='\"', index=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Upload data to the cloud"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now make the data accessible remotely by uploading that data from your local machine into Azure so it can be accessed for remote training. The datastore is a convenient construct associated with your workspace for you to upload/download data, and interact with it from your remote compute targets. It is backed by Azure blob storage account.\n",
"\n",
"The data.tsv files are uploaded into a directory named data at the root of the datastore."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace, Datastore\n",
"#blob_datastore = Datastore(ws, blob_datastore_name)\n",
"ds = ws.get_default_datastore()\n",
"print(ds.datastore_type, ds.account_name, ds.container_name)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# ds.upload_files(\"data.tsv\")\n",
"ds.upload(src_dir='./data', target_path='data', overwrite=True, show_progress=True)"
]
},
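{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an optional sanity check, you can download the uploaded folder back from the datastore. The sketch below is left commented out to avoid the extra transfer; the target path is hypothetical."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional sanity check: download the uploaded folder back from the datastore.\n",
"# ds.download(target_path='./downloaded', prefix='data', show_progress=True)"
]
},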
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure & Run\n",
"\n",
"First let's create a DataReferenceConfigruation object to inform the system what data folder to download to the copmute target."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.runconfig import DataReferenceConfiguration\n",
"dr = DataReferenceConfiguration(datastore_name=ds.name, \n",
" path_on_datastore='data', \n",
" mode='download', # download files from datastore to compute target\n",
" overwrite=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.runconfig import RunConfiguration\n",
"\n",
"# create a new RunConfig object\n",
"conda_run_config = RunConfiguration(framework=\"python\")\n",
"\n",
"# Set compute target to the Linux DSVM\n",
"conda_run_config.target = dsvm_compute.name\n",
"# set the data reference of the run coonfiguration\n",
"conda_run_config.data_references = {ds.name: dr}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Get Data File\n",
"For remote executions you should author a get_data.py file containing a get_data() function. This file should be in the root directory of the project. You can encapsulate code to read data either from a blob storage or local disk in this file.\n",
"\n",
"The *get_data()* function returns a [dictionary](README.md#getdata)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if not os.path.exists(project_folder):\n",
" os.makedirs(project_folder)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile $project_folder/get_data.py\n",
"\n",
"import pandas as pd\n",
"from sklearn.model_selection import train_test_split\n",
"from sklearn.preprocessing import LabelEncoder\n",
"import os\n",
"from os.path import expanduser, join, dirname\n",
"\n",
"def get_data():\n",
" # Burning man 2016 data\n",
" df = pd.read_csv(join(dirname(os.path.realpath(__file__)),\n",
" os.environ[\"AZUREML_DATAREFERENCE_workspacefilestore\"],\n",
" \"data.tsv\"), delimiter=\"\\t\", quotechar='\"')\n",
" # get integer labels\n",
" le = LabelEncoder()\n",
" le.fit(df[\"Label\"].values)\n",
" y = le.transform(df[\"Label\"].values)\n",
" X = df.drop([\"Label\"], axis=1)\n",
"\n",
" X_train, _, y_train, _ = train_test_split(X, y, test_size=0.1, random_state=42)\n",
"\n",
" return { \"X\" : X_train.values, \"y\" : y_train }"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiate AutoML <a class=\"anchor\" id=\"Instatiate-AutoML-Remote-DSVM\"></a>\n",
"\n",
"You can specify automl_settings as **kwargs** as well. Also note that you can use the get_data() symantic for local excutions too. \n",
"\n",
"<i>Note: For Remote DSVM and Batch AI you cannot pass Numpy arrays directly to AutoMLConfig.</i>\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**primary_metric**|This is the metric that you want to optimize.<br> Classification supports the following primary metrics <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>balanced_accuracy</i><br><i>average_precision_score_weighted</i><br><i>precision_score_weighted</i>|\n",
"|**max_time_sec**|Time limit in seconds for each iteration|\n",
"|**iterations**|Number of iterations. In each iteration Auto ML trains a specific pipeline with the data|\n",
"|**n_cross_validations**|Number of cross validation splits|\n",
"|**concurrent_iterations**|Max number of iterations that would be executed in parallel. This should be less than the number of cores on the DSVM\n",
"|**preprocess**| *True/False* <br>Setting this to *True* enables Auto ML to perform preprocessing <br>on the input to handle *missing data*, and perform some common *feature extraction*|\n",
"|**max_cores_per_iteration**| Indicates how many cores on the compute target would be used to train a single pipeline.<br> Default is *1*, you can set it to *-1* to use all cores|"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_settings = {\n",
" \"max_time_sec\": 3600,\n",
" \"iterations\": 10,\n",
" \"n_cross_validations\": 5,\n",
" \"primary_metric\": 'AUC_weighted',\n",
" \"preprocess\": True,\n",
" \"max_cores_per_iteration\": 2,\n",
" \"verbosity\": logging.INFO\n",
"}\n",
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" path=project_folder,\n",
" run_configuration=conda_run_config,\n",
" #compute_target = dsvm_compute,\n",
" data_script = project_folder + \"/get_data.py\",\n",
" **automl_settings\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Training the Models <a class=\"anchor\" id=\"Training-the-model-Remote-DSVM\"></a>\n",
"\n",
"For remote runs the execution is asynchronous, so you will see the iterations get populated as they complete. You can interact with the widgets/models even when the experiment is running to retreive the best model up to that point. Once you are satisfied with the model you can cancel a particular iteration or the whole run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"remote_run = experiment.submit(automl_config, show_output=False)"
]
},
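{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you prefer to block until the remote run finishes instead of monitoring it with the widget below, `wait_for_completion` is available on the run object. It is left commented out here to preserve the asynchronous flow."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optionally block until the remote run finishes.\n",
"# remote_run.wait_for_completion(show_output=True)"
]
},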
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exploring the Results <a class=\"anchor\" id=\"Exploring-the-Results-Remote-DSVM\"></a>\n",
"#### Widget for monitoring runs\n",
"\n",
"The widget will sit on \"loading\" until the first iteration completed, then you will see an auto-updating graph and table show up. It refreshed once per minute, so you should see the graph update as child runs complete.\n",
"\n",
"You can click on a pipeline to see run properties and output logs. Logs are also available on the DSVM under /tmp/azureml_run/{iterationid}/azureml-logs\n",
"\n",
"NOTE: The widget displays a link at the bottom. This links to a web-ui to explore the individual run details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.train.widgets import RunDetails\n",
"RunDetails(remote_run).show() "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"#### Retrieve All Child Runs\n",
"You can also use sdk methods to fetch all the child runs and see individual metrics that we log. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"children = list(remote_run.get_children())\n",
"metricslist = {}\n",
"for run in children:\n",
" properties = run.get_properties()\n",
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)} \n",
" metricslist[int(properties['iteration'])] = metrics\n",
"\n",
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
"rundata"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Canceling Runs\n",
"You can cancel ongoing remote runs using the *cancel()* and *cancel_iteration()* functions"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Cancel the ongoing experiment and stop scheduling new iterations\n",
"remote_run.cancel()\n",
"\n",
"# Cancel iteration 1 and move onto iteration 2\n",
"# remote_run.cancel_iteration(1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model\n",
"\n",
"Below we select the best pipeline from our iterations. The *get_output* method returns the best run and the fitted model. There are overloads on *get_output* that allow you to retrieve the best run and fitted model for *any* logged metric or a particular *iteration*."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = remote_run.get_output()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Best Model based on any other metric"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# lookup_metric = \"accuracy\"\n",
"# best_run, fitted_model = remote_run.get_output(metric=lookup_metric)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Model from a specific iteration"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# iteration = 1\n",
"# best_run, fitted_model = remote_run.get_output(iteration=iteration)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Testing the Best Fitted Model <a class=\"anchor\" id=\"Testing-the-Fitted-Model-Remote-DSVM\"></a>\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import sklearn\n",
"from sklearn.model_selection import train_test_split\n",
"from sklearn.preprocessing import LabelEncoder\n",
"from pandas_ml import ConfusionMatrix\n",
"\n",
"df = pd.read_csv(\"https://automldemods.blob.core.windows.net/datasets/PlayaEvents2016,_1.6MB,_3.4k-rows.cleaned.2.tsv\",\n",
" delimiter=\"\\t\", quotechar='\"')\n",
"\n",
"# get integer labels\n",
"le = LabelEncoder()\n",
"le.fit(df[\"Label\"].values)\n",
"y = le.transform(df[\"Label\"].values)\n",
"X = df.drop([\"Label\"], axis=1)\n",
"\n",
"_, X_test, _, y_test = train_test_split(X, y, test_size=0.1, random_state=42)\n",
"\n",
"ypred = fitted_model.predict(X_test.values)\n",
"\n",
"ypred_strings = le.inverse_transform(ypred)\n",
"ytest_strings = le.inverse_transform(y_test)\n",
"\n",
"cm = ConfusionMatrix(ytest_strings, ypred_strings)\n",
"\n",
"print(cm)\n",
"\n",
"cm.plot()"
]
}
],
"metadata": {
"authors": [
{
"name": "savitam"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,500 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AutoML 09: Classification with Deployment\n",
"\n",
"In this example we use the scikit learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) to showcase how you can use AutoML for a simple classification problem and deploy it to an Azure Container Instance (ACI).\n",
"\n",
"Make sure you have executed the [00.configuration](00.configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you will learn how to:\n",
"1. Create an experiment using an existing workspace.\n",
"2. Configure AutoML using `AutoMLConfig`.\n",
"3. Train the model using local compute.\n",
"4. Explore the results.\n",
"5. Register the model.\n",
"6. Create a container image.\n",
"7. Create an Azure Container Instance (ACI) service.\n",
"8. Test the ACI service.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create an Experiment\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"import logging\n",
"import os\n",
"import random\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# choose a name for experiment\n",
"experiment_name = 'automl-local-classification'\n",
"# project folder\n",
"project_folder = './sample_projects/automl-local-classification'\n",
"\n",
"experiment=Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data=output, index=['']).T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"set_diagnostics_collection(send_diagnostics = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure AutoML\n",
"\n",
"Instantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**task**|classification or regression|\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>balanced_accuracy</i><br><i>average_precision_score_weighted</i><br><i>precision_score_weighted</i>|\n",
"|**max_time_sec**|Time limit in seconds for each iteration.|\n",
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
"|**n_cross_validations**|Number of cross validation splits.|\n",
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
"|**y**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers.|\n",
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"digits = datasets.load_digits()\n",
"X_train = digits.data[10:,:]\n",
"y_train = digits.target[10:]\n",
"\n",
"automl_config = AutoMLConfig(task = 'classification',\n",
" name = experiment_name,\n",
" debug_log = 'automl_errors.log',\n",
" primary_metric = 'AUC_weighted',\n",
" max_time_sec = 1200,\n",
" iterations = 10,\n",
" n_cross_validations = 2,\n",
" verbosity = logging.INFO,\n",
" X = X_train, \n",
" y = y_train,\n",
" path = project_folder)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train the Models\n",
"\n",
"Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n",
"In this example, we specify `show_output = True` to print currently running iterations to the console."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = experiment.submit(automl_config, show_output = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model\n",
"\n",
"Below we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = local_run.get_output()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Register the Fitted Model for Deployment\n",
"If neither `metric` nor `iteration` are specified in the `register_model` call, the iteration with the best primary metric is registered."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"description = 'AutoML Model'\n",
"tags = None\n",
"model = local_run.register_model(description = description, tags = tags)\n",
"local_run.model_id # This will be written to the script file later in the notebook."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create Scoring Script"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile score.py\n",
"import pickle\n",
"import json\n",
"import numpy\n",
"from sklearn.externals import joblib\n",
"from azureml.core.model import Model\n",
"\n",
"\n",
"def init():\n",
" global model\n",
" model_path = Model.get_model_path(model_name = '<<modelid>>') # this name is model.id of model that we want to deploy\n",
" # deserialize the model file back into a sklearn model\n",
" model = joblib.load(model_path)\n",
"\n",
"def run(rawdata):\n",
" try:\n",
" data = json.loads(rawdata)['data']\n",
" data = numpy.array(data)\n",
" result = model.predict(data)\n",
" except Exception as e:\n",
" result = str(e)\n",
" return json.dumps({\"error\": result})\n",
" return json.dumps({\"result\":result.tolist()})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create a YAML File for the Environment"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To ensure the fit results are consistent with the training results, the SDK dependency versions need to be the same as the environment that trains the model. Details about retrieving the versions can be found in notebook [12.auto-ml-retrieve-the-training-sdk-versions](12.auto-ml-retrieve-the-training-sdk-versions.ipynb)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"experiment_name = 'automl-local-classification'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"ml_run = AutoMLRun(experiment = experiment, run_id = local_run.id)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"dependencies = ml_run.get_run_sdk_dependencies(iteration = 7)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"for p in ['azureml-train-automl', 'azureml-sdk', 'azureml-core']:\n",
" print('{}\\t{}'.format(p, dependencies[p]))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile myenv.yml\n",
"name: myenv\n",
"channels:\n",
" - defaults\n",
"dependencies:\n",
" - pip:\n",
" - numpy==1.14.2\n",
" - scikit-learn==0.19.2\n",
" - azureml-sdk[notebooks,automl]==<<azureml-version>>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Substitute the actual version number in the environment file.\n",
"\n",
"conda_env_file_name = 'myenv.yml'\n",
"\n",
"with open(conda_env_file_name, 'r') as cefr:\n",
" content = cefr.read()\n",
"\n",
"with open(conda_env_file_name, 'w') as cefw:\n",
" cefw.write(content.replace('<<azureml-version>>', dependencies['azureml-sdk']))\n",
"\n",
"# Substitute the actual model id in the script file.\n",
"\n",
"script_file_name = 'score.py'\n",
"\n",
"with open(script_file_name, 'r') as cefr:\n",
" content = cefr.read()\n",
"\n",
"with open(script_file_name, 'w') as cefw:\n",
" cefw.write(content.replace('<<modelid>>', local_run.model_id))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create a Container Image"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.image import Image, ContainerImage\n",
"\n",
"image_config = ContainerImage.image_configuration(runtime= \"python\",\n",
" execution_script = script_file_name,\n",
" conda_file = conda_env_file_name,\n",
" tags = {'area': \"digits\", 'type': \"automl_classification\"},\n",
" description = \"Image for automl classification sample\")\n",
"\n",
"image = Image.create(name = \"automlsampleimage\",\n",
" # this is the model object \n",
" models = [model],\n",
" image_config = image_config, \n",
" workspace = ws)\n",
"\n",
"image.wait_for_creation(show_output = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Deploy the Image as a Web Service on Azure Container Instance"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.webservice import AciWebservice\n",
"\n",
"aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1, \n",
" memory_gb = 1, \n",
" tags = {'area': \"digits\", 'type': \"automl_classification\"}, \n",
" description = 'sample service for Automl Classification')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.webservice import Webservice\n",
"\n",
"aci_service_name = 'automl-sample-01'\n",
"print(aci_service_name)\n",
"aci_service = Webservice.deploy_from_image(deployment_config = aciconfig,\n",
" image = image,\n",
" name = aci_service_name,\n",
" workspace = ws)\n",
"aci_service.wait_for_deployment(True)\n",
"print(aci_service.state)"
]
},
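{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once the deployment succeeds, the service exposes an HTTP scoring endpoint; a minimal sketch for retrieving it is below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Print the scoring endpoint of the deployed service.\n",
"print(aci_service.scoring_uri)"
]
},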
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Delete a Web Service"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#aci_service.delete()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Get Logs from a Deployed Web Service"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#aci_service.get_logs()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Test a Web Service"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#Randomly select digits and test\n",
"digits = datasets.load_digits()\n",
"X_test = digits.data[:10, :]\n",
"y_test = digits.target[:10]\n",
"images = digits.images[:10]\n",
"\n",
"for index in np.random.choice(len(y_test), 3, replace = False):\n",
" print(index)\n",
" test_sample = json.dumps({'data':X_test[index:index + 1].tolist()})\n",
" predicted = aci_service.run(input_data = test_sample)\n",
" label = y_test[index]\n",
" predictedDict = json.loads(predicted)\n",
" title = \"Label value = %d Predicted value = %s \" % ( label,predictedDict['result'][0])\n",
" fig = plt.figure(1, figsize = (3,3))\n",
" ax1 = fig.add_axes((0,0,.8,.8))\n",
" ax1.set_title(title)\n",
" plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest')\n",
" plt.show()"
]
}
],
"metadata": {
"authors": [
{
"name": "savitam"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,294 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AutoML 10: Multi-output\n",
"\n",
"This notebook shows how to use AutoML to train multi-output problems by leveraging the correlation between the outputs using indicator vectors.\n",
"\n",
"Make sure you have executed the [00.configuration](00.configuration.ipynb) before running this notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import random\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"set_diagnostics_collection(send_diagnostics = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Transformer Functions\n",
"The transformations of inputs `X` and `y` are happening as follows, e.g. `y = {y_1, y_2}`, then `X` becomes\n",
" \n",
"`X 1 0`\n",
" \n",
"`X 0 1`\n",
"\n",
"and `y` becomes,\n",
"\n",
"`y_1`\n",
"\n",
"`y_2`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from scipy import sparse\n",
"from scipy import linalg\n",
"\n",
"#Transformer functions\n",
"def multi_output_transform_x_y(X, y):\n",
" X_new = multi_output_transformer_x(X, y.shape[1])\n",
" y_new = multi_output_transform_y(y)\n",
" return X_new, y_new\n",
"\n",
"def multi_output_transformer_x(X, number_of_columns_y):\n",
" indicator_vecs = linalg.block_diag(*([np.ones((X.shape[0], 1))] * number_of_columns_y))\n",
" if sparse.issparse(X):\n",
" X_new = sparse.vstack(np.tile(X, number_of_columns_y))\n",
" indicator_vecs = sparse.coo_matrix(indicator_vecs)\n",
" X_new = sparse.hstack((X_new, indicator_vecs))\n",
" else:\n",
" X_new = np.tile(X, (number_of_columns_y, 1))\n",
" X_new = np.hstack((X_new, indicator_vecs))\n",
" return X_new\n",
"\n",
"def multi_output_transform_y(y):\n",
" return y.reshape(-1, order=\"F\")\n",
"\n",
"def multi_output_inverse_transform_y(y, number_of_columns_y):\n",
" return y.reshape((-1, number_of_columns_y), order = \"F\")"
]
},
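{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick sanity check on a tiny, purely illustrative example makes the layout concrete: two samples with one feature each and two outputs per sample."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Tiny illustrative example: 2 samples, 1 feature, 2 outputs per sample.\n",
"X_demo = np.array([[1.0], [2.0]])\n",
"y_demo = np.array([[10.0, 100.0], [20.0, 200.0]])\n",
"\n",
"X_demo_t, y_demo_t = multi_output_transform_x_y(X_demo, y_demo)\n",
"print(X_demo_t)  # 4 x 3: stacked X plus two indicator columns\n",
"print(y_demo_t)  # [ 10.  20. 100. 200.]: column-major flattening of y"
]
},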
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## AutoML Experiment Setup"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# Choose a name for the experiment and specify the project folder.\n",
"experiment_name = 'automl-local-multi-output'\n",
"project_folder = './sample_projects/automl-local-multi-output'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data = output, index = ['']).T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a Random Dataset for Test Purposes"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"rng = np.random.RandomState(1)\n",
"X_train = np.sort(200 * rng.rand(600, 1) - 100, axis = 0)\n",
"y_train = np.array([np.pi * np.sin(X_train).ravel(), np.pi * np.cos(X_train).ravel()]).T\n",
"y_train += (0.5 - rng.rand(*y_train.shape))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Perform X and y transformation using the transformer function."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"X_train_transformed, y_train_transformed = multi_output_transform_x_y(X_train, y_train)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Configure AutoML using the transformed results."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_config = AutoMLConfig(task = 'regression',\n",
" debug_log = 'automl_errors_multi.log',\n",
" primary_metric = 'r2_score',\n",
" iterations = 10,\n",
" n_cross_validations = 2,\n",
" verbosity = logging.INFO,\n",
" X = X_train_transformed,\n",
" y = y_train_transformed,\n",
" path = project_folder)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Fit the Transformed Data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = experiment.submit(automl_config, show_output = True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Get the best fit model.\n",
"best_run, fitted_model = local_run.get_output()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Generate random data set for predicting.\n",
"X_test = np.sort(200 * rng.rand(200, 1) - 100, axis = 0)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Transform predict data.\n",
"X_test_transformed = multi_output_transformer_x(X_test, y_train.shape[1])\n",
"\n",
"# Predict and inverse transform the prediction.\n",
"y_predict = fitted_model.predict(X_test_transformed)\n",
"y_predict = multi_output_inverse_transform_y(y_predict, y_train.shape[1])"
]
},
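{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick visual check of the two predicted outputs against the test inputs; this sketch assumes `matplotlib`, which is imported at the top of the notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Plot both predicted outputs over the test inputs (illustrative).\n",
"plt.scatter(X_test, y_predict[:, 0], s=10, label='output 1 (sin-like)')\n",
"plt.scatter(X_test, y_predict[:, 1], s=10, label='output 2 (cos-like)')\n",
"plt.legend()\n",
"plt.show()"
]
},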
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(y_predict)"
]
}
],
"metadata": {
"authors": [
{
"name": "savitam"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,251 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AutoML 11: Sample Weight\n",
"\n",
"In this example we use the scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use sample weight with AutoML. Sample weight is used where some sample values are more important than others.\n",
"\n",
"Make sure you have executed the [00.configuration](00.configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you will learn how to configure AutoML to use `sample_weight` and you will see the difference sample weight makes to the test results.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create an Experiment\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import random\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# Choose names for the regular and the sample weight experiments.\n",
"experiment_name = 'non_sample_weight_experiment'\n",
"sample_weight_experiment_name = 'sample_weight_experiment'\n",
"\n",
"project_folder = './sample_projects/automl-local-classification'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"sample_weight_experiment=Experiment(ws, sample_weight_experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace Name'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data = output, index = ['']).T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"set_diagnostics_collection(send_diagnostics = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure AutoML\n",
"\n",
"Instantiate two `AutoMLConfig` objects. One will be used with `sample_weight` and one without."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"digits = datasets.load_digits()\n",
"X_train = digits.data[100:,:]\n",
"y_train = digits.target[100:]\n",
"\n",
"# The example makes the sample weight 0.9 for the digit 4 and 0.1 for all other digits.\n",
"# This makes the model more likely to classify as 4 if the image it not clear.\n",
"sample_weight = np.array([(0.9 if x == 4 else 0.01) for x in y_train])\n",
"\n",
"automl_classifier = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" primary_metric = 'AUC_weighted',\n",
" max_time_sec = 3600,\n",
" iterations = 10,\n",
" n_cross_validations = 2,\n",
" verbosity = logging.INFO,\n",
" X = X_train, \n",
" y = y_train,\n",
" path = project_folder)\n",
"\n",
"automl_sample_weight = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" primary_metric = 'AUC_weighted',\n",
" max_time_sec = 3600,\n",
" iterations = 10,\n",
" n_cross_validations = 2,\n",
" verbosity = logging.INFO,\n",
" X = X_train, \n",
" y = y_train,\n",
" sample_weight = sample_weight,\n",
" path = project_folder)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train the Models\n",
"\n",
"Call the `submit` method on the experiment objects and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n",
"In this example, we specify `show_output = True` to print currently running iterations to the console."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = experiment.submit(automl_classifier, show_output = True)\n",
"sample_weight_run = sample_weight_experiment.submit(automl_sample_weight, show_output = True)\n",
"\n",
"best_run, fitted_model = local_run.get_output()\n",
"best_run_sample_weight, fitted_model_sample_weight = sample_weight_run.get_output()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Test the Best Fitted Model\n",
"\n",
"#### Load Test Data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"digits = datasets.load_digits()\n",
"X_test = digits.data[:100, :]\n",
"y_test = digits.target[:100]\n",
"images = digits.images[:100]"
]
},
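{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before inspecting individual digits, a simple overall comparison: compute each model's accuracy on the held-out digits. This is an illustrative sketch using scikit-learn's `accuracy_score`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.metrics import accuracy_score\n",
"\n",
"# Overall accuracy of each model on the held-out digits (illustrative).\n",
"print('accuracy without sample weight:', accuracy_score(y_test, fitted_model.predict(X_test)))\n",
"print('accuracy with sample weight:   ', accuracy_score(y_test, fitted_model_sample_weight.predict(X_test)))"
]
},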
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Compare the Models\n",
"The prediction from the sample weight model is more likely to correctly predict 4's. However, it is also more likely to predict 4 for some images that are not labelled as 4."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Randomly select digits and test.\n",
"for index in range(0,len(y_test)):\n",
" predicted = fitted_model.predict(X_test[index:index + 1])[0]\n",
" predicted_sample_weight = fitted_model_sample_weight.predict(X_test[index:index + 1])[0]\n",
" label = y_test[index]\n",
" if predicted == 4 or predicted_sample_weight == 4 or label == 4:\n",
" title = \"Label value = %d Predicted value = %d Prediced with sample weight = %d\" % (label, predicted, predicted_sample_weight)\n",
" fig = plt.figure(1, figsize=(3,3))\n",
" ax1 = fig.add_axes((0,0,.8,.8))\n",
" ax1.set_title(title)\n",
" plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest')\n",
" plt.show()"
]
}
],
"metadata": {
"authors": [
{
"name": "savitam"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,251 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AutoML 12: Retrieving Training SDK Versions\n",
"\n",
"This example shows how to find the SDK versions used for an experiment.\n",
"\n",
"Make sure you have executed the [00.configuration](00.configuration.ipynb) before running this notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import random\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun\n",
"from azureml.train.automl.utilities import get_sdk_dependencies"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"set_diagnostics_collection(send_diagnostics = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Retrieve the SDK versions in the current environment"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To retrieve the SDK versions in the current environment, run `get_sdk_dependencies`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"get_sdk_dependencies()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Train models using AutoML"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# Choose a name for the experiment and specify the project folder.\n",
"experiment_name = 'automl-local-classification'\n",
"project_folder = './sample_projects/automl-local-classification'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data=output, index=['']).T"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"digits = datasets.load_digits()\n",
"X_train = digits.data[10:,:]\n",
"y_train = digits.target[10:]\n",
"\n",
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" primary_metric = 'AUC_weighted',\n",
" iterations = 3,\n",
" n_cross_validations = 2,\n",
" verbosity = logging.INFO,\n",
" X = X_train, \n",
" y = y_train,\n",
" path = project_folder)\n",
"\n",
"local_run = experiment.submit(automl_config, show_output = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Retrieve the SDK versions from RunHistory"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To get the SDK versions from RunHistory, first the run id needs to be recorded. This can either be done by copying it from the output message or by retrieving it after each run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Use a run id copied from an output message.\n",
"#run_id = 'AutoML_c0585b1f-a0e6-490b-84c7-3a099468b28e'\n",
"\n",
"# Retrieve the run id from a run.\n",
"run_id = local_run.id\n",
"print(run_id)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Initialize a new `AutoMLRun` object."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"experiment_name = 'automl-local-classification'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"ml_run = AutoMLRun(experiment = experiment, run_id = run_id)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Get parent training SDK versions."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ml_run.get_run_sdk_dependencies()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Get the traning SDK versions of a specific run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ml_run.get_run_sdk_dependencies(iteration = 2)"
]
}
],
"metadata": {
"authors": [
{
"name": "savitam"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,558 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AutoML 13: Prepare Data using `azureml.dataprep`\n",
"In this example we showcase how you can use the `azureml.dataprep` SDK to load and prepare data for AutoML. `azureml.dataprep` can also be used standalone; full documentation can be found [here](https://github.com/Microsoft/PendletonDocs).\n",
"\n",
"Make sure you have executed the [setup](00.configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you will learn how to:\n",
"1. Define data loading and preparation steps in a `Dataflow` using `azureml.dataprep`.\n",
"2. Pass the `Dataflow` to AutoML for a local run.\n",
"3. Pass the `Dataflow` to AutoML for a remote run."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Install `azureml.dataprep` SDK"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install azureml-dataprep"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"set_diagnostics_collection(send_diagnostics = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create an Experiment\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"\n",
"import pandas as pd\n",
"\n",
"import azureml.core\n",
"from azureml.core.compute import DsvmCompute\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.runconfig import CondaDependencies\n",
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core.workspace import Workspace\n",
"import azureml.dataprep as dprep\n",
"from azureml.train.automl import AutoMLConfig"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
" \n",
"# choose a name for experiment\n",
"experiment_name = 'automl-dataprep-classification'\n",
"# project folder\n",
"project_folder = './sample_projects/automl-dataprep-classification'\n",
" \n",
"experiment = Experiment(ws, experiment_name)\n",
" \n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace Name'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data = output, index = ['']).T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Loading Data using DataPrep"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# You can use `smart_read_file` which intelligently figures out delimiters and datatypes of a file.\n",
"# The data referenced here was pulled from `sklearn.datasets.load_digits()`.\n",
"simple_example_data_root = 'https://dprepdata.blob.core.windows.net/automl-notebook-data/'\n",
"X = dprep.smart_read_file(simple_example_data_root + 'X.csv').skip(1) # Remove the header row.\n",
"\n",
"# You can also use `read_csv` and `to_*` transformations to read (with overridable delimiter)\n",
"# and convert column types manually.\n",
"# Here we read a comma delimited file and convert all columns to integers.\n",
"y = dprep.read_csv(simple_example_data_root + 'y.csv').to_long(dprep.ColumnSelector(term='.*', use_regex = True))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Review the Data Preparation Result\n",
"\n",
"You can peek the result of a Dataflow at any range using `skip(i)` and `head(j)`. Doing so evaluates only `j` records for all the steps in the Dataflow, which makes it fast even against large datasets."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"X.skip(1).head(5)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure AutoML\n",
"\n",
"This creates a general AutoML settings object applicable for both local and remote runs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_settings = {\n",
" \"max_time_sec\" : 600,\n",
" \"iterations\" : 2,\n",
" \"primary_metric\" : 'AUC_weighted',\n",
" \"preprocess\" : False,\n",
" \"verbosity\" : logging.INFO,\n",
" \"n_cross_validations\": 3\n",
"}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Local Run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Pass Data with `Dataflow` Objects\n",
"\n",
"The `Dataflow` objects captured above can be passed to the `submit` method for a local run. AutoML will retrieve the results from the `Dataflow` for model training."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" X = X,\n",
" y = y,\n",
" **automl_settings)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = experiment.submit(automl_config, show_output = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Remote Run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create or Attach a Remote Linux DSVM"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"dsvm_name = 'mydsvm'\n",
"try:\n",
" dsvm_compute = DsvmCompute(ws, dsvm_name)\n",
" print('Found existing DVSM.')\n",
"except:\n",
" print('Creating a new DSVM.')\n",
" dsvm_config = DsvmCompute.provisioning_configuration(vm_size = \"Standard_D2_v2\")\n",
" dsvm_compute = DsvmCompute.create(ws, name = dsvm_name, provisioning_configuration = dsvm_config)\n",
" dsvm_compute.wait_for_completion(show_output = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Update Conda Dependency file to have AutoML and DataPrep SDK\n",
"\n",
"Currently the AutoML and DataPrep SDKs are not installed with the Azure ML SDK by default. To circumvent this limitation, we update the conda dependency file to add these dependencies."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"cd = CondaDependencies()\n",
"cd.add_pip_package(pip_package='azureml-dataprep')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create a `RunConfiguration` with DSVM name"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run_config = RunConfiguration(conda_dependencies=cd)\n",
"run_config.target = dsvm_compute\n",
"run_config.auto_prepare_environment = True"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Pass Data with `Dataflow` Objects\n",
"\n",
"The `Dataflow` objects captured above can also be passed to the `submit` method for a remote run. AutoML will serialize the `Dataflow` object and send it to the remote compute target. The `Dataflow` will not be evaluated locally."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" path = project_folder,\n",
" run_configuration = run_config,\n",
" X = X,\n",
" y = y,\n",
" **automl_settings)\n",
"remote_run = experiment.submit(automl_config, show_output = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Explore the Results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Widget for Monitoring Runs\n",
"\n",
"The widget will first report a \"loading\" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.\n",
"\n",
"**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.train.widgets import RunDetails\n",
"RunDetails(local_run).show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Retrieve All Child Runs\n",
"You can also use SDK methods to fetch all the child runs and see individual metrics that we log."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"children = list(local_run.get_children())\n",
"metricslist = {}\n",
"for run in children:\n",
" properties = run.get_properties()\n",
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}\n",
" metricslist[int(properties['iteration'])] = metrics\n",
" \n",
"import pandas as pd\n",
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
"rundata"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model\n",
"\n",
"Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = local_run.get_output()\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Best Model Based on Any Other Metric\n",
"Show the run and the model that has the smallest `log_loss` value:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"lookup_metric = \"log_loss\"\n",
"best_run, fitted_model = local_run.get_output(metric = lookup_metric)\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Model from a Specific Iteration\n",
"Show the run and the model from the first iteration:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"iteration = 0\n",
"best_run, fitted_model = local_run.get_output(iteration = iteration)\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Test the Best Fitted Model\n",
"\n",
"#### Load Test Data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn import datasets\n",
"\n",
"digits = datasets.load_digits()\n",
"X_test = digits.data[:10, :]\n",
"y_test = digits.target[:10]\n",
"images = digits.images[:10]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Testing Our Best Fitted Model\n",
"We will try to predict 2 digits and see how our model works."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#Randomly select digits and test\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import random\n",
"import numpy as np\n",
"\n",
"for index in np.random.choice(len(y_test), 2, replace = False):\n",
" print(index)\n",
" predicted = fitted_model.predict(X_test[index:index + 1])[0]\n",
" label = y_test[index]\n",
" title = \"Label value = %d Predicted value = %d \" % (label, predicted)\n",
" fig = plt.figure(1, figsize=(3,3))\n",
" ax1 = fig.add_axes((0,0,.8,.8))\n",
" ax1.set_title(title)\n",
" plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest')\n",
" plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Appendix"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Capture the `Dataflow` Objects for Later Use in AutoML\n",
"\n",
"`Dataflow` objects are immutable and are composed of a list of data preparation steps. A `Dataflow` object can be branched at any point for further usage."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# sklearn.digits.data + target\n",
"digits_complete = dprep.smart_read_file('https://dprepdata.blob.core.windows.net/automl-notebook-data/digits-complete.csv')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"`digits_complete` (sourced from `sklearn.datasets.load_digits()`) is forked into `dflow_X` to capture all the feature columns and `dflow_y` to capture the label column."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"digits_complete.to_pandas_dataframe().shape\n",
"labels_column = 'Column64'\n",
"dflow_X = digits_complete.drop_columns(columns = [labels_column])\n",
"dflow_y = digits_complete.keep_columns(columns = [labels_column])"
]
}
],
"metadata": {
"authors": [
{
"name": "savitam"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,232 +0,0 @@
# Table of Contents
1. [Auto ML Introduction](#introduction)
2. [Running samples in a Local Conda environment](#localconda)
3. [Auto ML SDK Sample Notebooks](#samples)
4. [Documentation](#documentation)
5. [Running using python command](#pythoncommand)
6. [Troubleshooting](#troubleshooting)
# Auto ML Introduction <a name="introduction"></a>
AutoML builds high-quality machine learning models for you by automating model and hyperparameter selection. Bring a labelled dataset that you want to build a model for, and AutoML will give you a high-quality machine learning model that you can use for predictions.
If you are new to Data Science, AutoML will help you get jumpstarted by simplifying machine learning model building. It takes care of model selection and hyperparameter selection, and in one step creates a high-quality trained model for you to use.
If you are an experienced data scientist, AutoML will help increase your productivity by intelligently performing model and hyperparameter selection for your training, and it generates high-quality models much more quickly than manually specifying several combinations of the parameters and running training jobs. AutoML provides visibility and access to all the training jobs and the performance characteristics of the models to help you further tune the pipeline if you desire.
# Running samples in a Local Conda environment <a name="localconda"></a>
You can run these notebooks in Azure Notebooks without any extra installation. To run these notebooks on your own notebook server, use these installation instructions.
It is best to create a new local conda environment to try this SDK, so it doesn't interfere with your existing Python environment.
### 1. Install mini-conda from [here](https://conda.io/miniconda.html), choose Python 3.7 or higher.
- **Note**: if you already have conda installed, you can keep using it, but it should be version 4.4.10 or later (as shown by `conda -V`). If you have a previous version installed, you can update it using the command `conda update conda`.
There's no need to install mini-conda specifically.
### 2. Downloading the sample notebooks
- Download the sample notebooks from [GitHub](https://github.com/Azure/MachineLearningNotebooks) as zip and extract the contents to a local directory. The AutoML sample notebooks are in the "automl" folder.
### 3. Setup a new conda environment
The **automl/automl_setup** script creates a new conda environment, installs the necessary packages, configures the widget and starts a jupyter notebook.
It takes the conda environment name as an optional parameter. The default conda environment name is azure_automl. The exact command depends on the operating system. It can take about 30 minutes to execute.
## Windows
Start a conda command window, cd to the **automl** folder where the sample notebooks were extracted and then run:
```
automl_setup
```
## Mac
Install "Command line developer tools" if it is not already installed (you can use the command: `xcode-select --install`).
Start a Terminal window, cd to the **automl** folder where the sample notebooks were extracted and then run:
```
bash automl_setup_mac.sh
```
## Linux
cd to the **automl** folder where the sample notebooks were extracted and then run:
```
automl_setup_linux.sh
```
### 4. Running configuration.ipynb
- Before running any samples, you first need to run the configuration notebook. Open the 00.configuration.ipynb notebook.
- Please make sure you use the Python [conda env:azure_automl] kernel when running this notebook.
- Execute the cells in the notebook to register the Machine Learning Services Resource Provider and create a workspace (*instructions in notebook*).
### 5. Running Samples
- Please make sure you use the Python [conda env:azure_automl] kernel when trying the sample Notebooks.
- Follow the instructions in the individual notebooks to explore various features in AutoML
# Auto ML SDK Sample Notebooks <a name="samples"></a>
- [00.configuration.ipynb](00.configuration.ipynb)
- Register Machine Learning Services Resource Provider
- Create new Azure ML Workspace
- Save Workspace configuration file
- [01.auto-ml-classification.ipynb](01.auto-ml-classification.ipynb)
- Dataset: scikit learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
- Simple example of using Auto ML for classification
- Uses local compute for training
- [02.auto-ml-regression.ipynb](02.auto-ml-regression.ipynb)
- Dataset: scikit learn's [diabetes dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_diabetes.html)
- Simple example of using Auto ML for regression
- Uses local compute for training
- [03.auto-ml-remote-execution.ipynb](03.auto-ml-remote-execution.ipynb)
- Dataset: scikit learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
- Example of using Auto ML for classification using a remote linux DSVM for training
- Parallel execution of iterations
- Async tracking of progress
- Cancelling individual iterations or entire run
- Retrieving models for any iteration or logged metric
- Specify automl settings as kwargs
- [03b.auto-ml-remote-batchai.ipynb](03b.auto-ml-remote-batchai.ipynb)
- Dataset: scikit learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
- Example of using Auto ML for classification using a remote Batch AI compute for training
- Parallel execution of iterations
- Async tracking of progress
- Cancelling individual iterations or entire run
- Retrieving models for any iteration or logged metric
- Specify automl settings as kwargs
- [04.auto-ml-remote-execution-text-data-blob-store.ipynb](04.auto-ml-remote-execution-text-data-blob-store.ipynb)
- Dataset: [Burning Man 2016 dataset](https://innovate.burningman.org/datasets-page/)
- handling text data with preprocess flag
- Reading data from a blob store for remote executions
- using pandas dataframes for reading data
- [05.auto-ml-missing-data-blacklist-early-termination.ipynb](05.auto-ml-missing-data-blacklist-early-termination.ipynb)
- Dataset: scikit learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
- Blacklist certain pipelines
- Specify a target metric to indicate stopping criteria
- Handling Missing Data in the input
- [06.auto-ml-sparse-data-custom-cv-split.ipynb](06.auto-ml-sparse-data-custom-cv-split.ipynb)
- Dataset: Scikit learn's [20newsgroup](http://scikit-learn.org/stable/datasets/twenty_newsgroups.html)
- Handle sparse datasets
- Specify custom train and validation set
- [07.auto-ml-exploring-previous-runs.ipynb](07.auto-ml-exploring-previous-runs.ipynb)
- List all projects for the workspace
- List all AutoML Runs for a given project
- Get details for an AutoML Run (AutoML settings, run widget & all metrics)
- Download the fitted pipeline for any iteration
- [08.auto-ml-remote-execution-with-text-file-on-DSVM](08.auto-ml-remote-execution-with-text-file-on-DSVM.ipynb)
- Dataset: scikit learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
- Download the data and store it in the DSVM to improve performance.
- [09.auto-ml-classification-with-deployment.ipynb](09.auto-ml-classification-with-deployment.ipynb)
- Dataset: scikit learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
- Simple example of using Auto ML for classification
- Registering the model
- Creating Image and creating aci service
- Testing the aci service
- [10.auto-ml-multi-output-example.ipynb](10.auto-ml-multi-output-example.ipynb)
- Dataset: scikit learn's [multi-output regression example](http://scikit-learn.org/stable/auto_examples/ensemble/plot_random_forest_regression_multioutput.html#sphx-glr-auto-examples-ensemble-plot-random-forest-regression-multioutput-py)
- Simple example of using Auto ML for multi-output regression
- Handles both dense and sparse matrices
- [11.auto-ml-sample-weight.ipynb](11.auto-ml-sample-weight.ipynb)
- How to specify sample_weight
- The difference that it makes to test results
- [12.auto-ml-retrieve-the-training-sdk-versions.ipynb](12.auto-ml-retrieve-the-training-sdk-versions.ipynb)
- How to get current and training env SDK versions
- [13.auto-ml-dataprep.ipynb](13.auto-ml-dataprep.ipynb)
- Using DataPrep for reading data
- [14a.auto-ml-classification-ensemble.ipynb](14a.auto-ml-classification-ensemble.ipynb)
- Classification with ensembling
- [14b.auto-ml-regression-ensemble.ipynb](14b.auto-ml-regression-ensemble.ipynb)
- Regression with ensembling
# Documentation <a name="documentation"></a>
## Table of Contents
1. [Auto ML Settings ](#automlsettings)
2. [Cross validation split options](#cvsplits)
3. [Get Data Syntax](#getdata)
4. [Data pre-processing and featurization](#preprocessing)
## Auto ML Settings <a name="automlsettings"></a>
|Property|Description|Default|
|-|-|-|
|**primary_metric**|This is the metric that you want to optimize.<br><br> Classification supports the following primary metrics <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>balanced_accuracy</i><br><i>average_precision_score_weighted</i><br><i>precision_score_weighted</i><br><br> Regression supports the following primary metrics <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i><br><i>normalized_root_mean_squared_log_error</i>| Classification: accuracy <br><br> Regression: spearman_correlation
|**max_time_sec**|Time limit in seconds for each iteration|None|
|**iterations**|Number of iterations. Each iteration trains a specific pipeline on the data. To get the best result, use at least 100. |100|
|**n_cross_validations**|Number of cross validation splits|None|
|**validation_size**|Size of validation set as percentage of all training samples|None|
|**concurrent_iterations**|Max number of iterations that would be executed in parallel|1|
|**preprocess**|*True/False* <br>Setting this to *True* enables preprocessing <br>on the input to handle missing data, and perform some common feature extraction<br>*Note: If input data is Sparse you cannot use preprocess=True*|False|
|**max_cores_per_iteration**| Indicates how many cores on the compute target would be used to train a single pipeline.<br> You can set it to *-1* to use all cores|1|
|**exit_score**|*double* value indicating the target for *primary_metric*. <br> Once the target is surpassed the run terminates|None|
|**blacklist_algos**|*Array* of *strings* indicating pipelines to ignore for Auto ML.<br><br> Allowed values for **Classification**<br><i>LogisticRegression</i><br><i>SGDClassifierWrapper</i><br><i>NBWrapper</i><br><i>BernoulliNB</i><br><i>SVCWrapper</i><br><i>LinearSVMWrapper</i><br><i>KNeighborsClassifier</i><br><i>DecisionTreeClassifier</i><br><i>RandomForestClassifier</i><br><i>ExtraTreesClassifier</i><br><i>gradient boosting</i><br><i>LightGBMClassifier</i><br><br>Allowed values for **Regression**<br><i>ElasticNet</i><br><i>GradientBoostingRegressor</i><br><i>DecisionTreeRegressor</i><br><i>KNeighborsRegressor</i><br><i>LassoLars</i><br><i>SGDRegressor</i><br><i>RandomForestRegressor</i><br><i>ExtraTreesRegressor</i>|None|
## Cross validation split options <a name="cvsplits"></a>
### K-Folds Cross Validation
Use the *n_cross_validations* setting to specify the number of cross validations. The training data set will be randomly split into *n_cross_validations* folds of equal size. During each cross validation round, one of the folds will be used for validation of the model trained on the remaining folds. This process repeats for *n_cross_validations* rounds until each fold is used once as the validation set. Finally, the average scores across all *n_cross_validations* rounds will be reported, and the corresponding model will be retrained on the whole training data set.
### Monte Carlo Cross Validation (a.k.a. Repeated Random Sub-Sampling)
Use *validation_size* to specify the percentage of the training data set that should be used for validation, and use *n_cross_validations* to specify the number of cross validations. During each cross validation round, a subset of size *validation_size* will be randomly selected for validation of the model trained on the remaining data. Finally, the average scores across all *n_cross_validations* rounds will be reported, and the corresponding model will be retrained on the whole training data set.
### Custom train and validation set
You can specify separate train and validation sets, either through get_data() or directly in the fit method; a combined sketch of the three options follows below.
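For illustration, here is a minimal sketch of the three options, using the sklearn digits data from the sample notebooks. The split sizes and fold counts are illustrative, other required settings are omitted, and passing X_valid/y_valid directly to AutoMLConfig is an assumption that mirrors the get_data() keys documented below.
```
# A sketch of the three validation options; values are illustrative.
from sklearn import datasets
from azureml.train.automl import AutoMLConfig

digits = datasets.load_digits()
X, y = digits.data[100:], digits.target[100:]              # training data
X_valid, y_valid = digits.data[:100], digits.target[:100]  # held-out set

# K-Folds cross validation: 5 random folds of equal size.
config_kfold = AutoMLConfig(task = 'classification', X = X, y = y,
                            n_cross_validations = 5)

# Monte Carlo cross validation: 5 rounds, each holding out a random 10%.
config_monte_carlo = AutoMLConfig(task = 'classification', X = X, y = y,
                                  n_cross_validations = 5,
                                  validation_size = 0.1)

# Custom train and validation set (assumed to be accepted directly here;
# the same keys can be returned from get_data(), described below).
config_custom = AutoMLConfig(task = 'classification', X = X, y = y,
                             X_valid = X_valid, y_valid = y_valid)
```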
## get_data() syntax <a name="getdata"></a>
The *get_data()* function can be used to return a dictionary with these values:
|Key|Type|Dependency|Mutually Exclusive with|Description|
|:-|:-|:-|:-|:-|
|X|Pandas Dataframe or Numpy Array|y|data_train, label, columns|All features to train with|
|y|Pandas Dataframe or Numpy Array|X|label|Label data to train with. For classification, this should be an array of integers. |
|X_valid|Pandas Dataframe or Numpy Array|X, y, y_valid|data_train, label|*Optional* All features to validate with. If this is not specified, X is split between train and validate|
|y_valid|Pandas Dataframe or Numpy Array|X, y, X_valid|data_train, label|*Optional* The label data to validate with. If this is not specified, y is split between train and validate|
|sample_weight|Pandas Dataframe or Numpy Array|y|data_train, label, columns|*Optional* A weight value for each label. Higher values indicate that the sample is more important.|
|sample_weight_valid|Pandas Dataframe or Numpy Array|y_valid|data_train, label, columns|*Optional* A weight value for each validation label. Higher values indicate that the sample is more important. If this is not specified, sample_weight is split between train and validate|
|data_train|Pandas Dataframe|label|X, y, X_valid, y_valid|All data (features+label) to train with|
|label|string|data_train|X, y, X_valid, y_valid|Which column in data_train represents the label|
|columns|Array of strings|data_train||*Optional* Whitelist of columns to use for features|
|cv_splits_indices|Array of integers|data_train||*Optional* List of indexes to split the data for cross validation|
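As a concrete illustration, here is a minimal get_data() sketch using the sklearn digits dataset; the file name and split sizes are hypothetical:
```
# get_data.py -- a hypothetical data script returning the dictionary above.
from sklearn import datasets

def get_data():
    digits = datasets.load_digits()
    return {
        "X": digits.data[100:],         # features to train with
        "y": digits.target[100:],       # integer labels for classification
        "X_valid": digits.data[:100],   # optional explicit validation features
        "y_valid": digits.target[:100]  # optional explicit validation labels
    }
```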
## Data pre-processing and featurization <a name="preprocessing"></a>
If you use "preprocess=True", the following data preprocessing steps are performed automatically for you:
### 1. Dropping high cardinality or no variance features
- Features with no useful information are dropped from training and validation sets. These include features with all values missing, same value across all rows or with extremely high cardinality (e.g., hashes, IDs or GUIDs).
### 2. Missing value imputation
- For numerical features, missing values are imputed with the average of the values in the column.
- For categorical features, missing values are imputed with the most frequent value.
### 3. Generating additional features
- For DateTime features: Year, Month, Day, Day of week, Day of year, Quarter, Week of the year, Hour, Minute, Second.
- For Text features: Term frequency based on bi-grams and tri-grams, Count vectorizer.
### 4. Transformations and encodings
- Numeric features with very few unique values are transformed into categorical features.
- Depending on the cardinality of categorical features, either label encoding or (hashing) one-hot encoding is performed.
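As a sketch (with the X and y arrays and other required settings as in the samples above), these steps are switched on through the preprocess flag:
```
# Enable automatic preprocessing; not supported for sparse input data.
automl_config = AutoMLConfig(task = 'classification',
                             preprocess = True,
                             X = X, y = y,   # dense training data as above
                             iterations = 10,
                             primary_metric = 'AUC_weighted')
```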
# Running using python command <a name="pythoncommand"></a>
Jupyter notebook provides a File / Download as / Python (.py) option for saving the notebook as a Python file.
You can then run this file using the `python` command.
However, on Windows the file needs to be modified before it can be run.
The following condition must be added to the main code in the file:
`if __name__ == "__main__":`
The main code of the file must be indented so that it sits under this condition, as in the sketch below.
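For example, a notebook exported as run_automl.py (a hypothetical name) could be restructured like this minimal sketch, mirroring the classification sample:
```
# run_automl.py -- hypothetical export of the classification notebook.
import logging

from sklearn import datasets

from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.train.automl import AutoMLConfig

def main():
    ws = Workspace.from_config()
    experiment = Experiment(ws, 'automl-local-classification')

    digits = datasets.load_digits()
    automl_config = AutoMLConfig(task = 'classification',
                                 primary_metric = 'AUC_weighted',
                                 iterations = 3,
                                 n_cross_validations = 2,
                                 verbosity = logging.INFO,
                                 X = digits.data[10:],
                                 y = digits.target[10:])
    experiment.submit(automl_config, show_output = True)

# On Windows, the guard prevents spawned worker processes from re-running
# the training code when they import this module.
if __name__ == "__main__":
    main()
```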
# Troubleshooting <a name="troubleshooting"></a>
## Iterations fail and the log contains "MemoryError"
This can be caused by insufficient memory on the DSVM. AutoML loads all training data into memory, so the available memory should be larger than the training data size.
If you are using a remote DSVM, memory is needed for each concurrent iteration. The concurrent_iterations setting specifies the maximum number of concurrent iterations. For example, if the training data size is 8 GB and concurrent_iterations is set to 10, at least 80 GB of memory is required.
To resolve this issue, allocate a DSVM with more memory or reduce the value specified for concurrent_iterations.
## Iterations show as "Not Responding" in the RunDetails widget.
This can be caused by too many concurrent iterations for a remote DSVM. Each concurrent iteration usually takes 100% of a core when it is running. Some iterations can use multiple cores. So, the concurrent_iterations setting should always be less than the number of cores of the DSVM.
To resolve this issue, try reducing the value specified for the concurrent_iterations setting.

View File

@@ -1,19 +0,0 @@
name: azure_automl
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- python=3.6
- nb_conda
- matplotlib
- numpy>=1.11.0,<1.16.0
- scipy>=0.19.0,<0.20.0
- scikit-learn>=0.18.0,<=0.19.1
- pandas>=0.22.0,<0.23.0
- pip:
# Required packages for AzureML execution, history, and data preparation.
- --extra-index-url https://pypi.python.org/simple
- azureml-sdk[automl]
- azureml-train-widgets
- pandas_ml

View File

@@ -1,42 +0,0 @@
@echo off
set conda_env_name=%1
IF "%conda_env_name%"=="" SET conda_env_name="azure_automl"
call conda activate %conda_env_name% 2>nul:
if not errorlevel 1 (
echo Upgrading azureml-sdk[automl] in existing conda environment %conda_env_name%
call pip install --upgrade azureml-sdk[automl]
if errorlevel 1 goto ErrorExit
) else (
call conda env create -f automl_env.yml -n %conda_env_name%
)
call conda activate %conda_env_name% 2>nul:
if errorlevel 1 goto ErrorExit
call pip install psutil
call jupyter nbextension install --py azureml.train.widgets
if errorlevel 1 goto ErrorExit
call jupyter nbextension enable --py azureml.train.widgets
if errorlevel 1 goto ErrorExit
echo.
echo.
echo ***************************************
echo * AutoML setup completed successfully *
echo ***************************************
echo.
echo Starting jupyter notebook - please run notebook 00.configuration
echo.
jupyter notebook --log-level=50
goto End
:ErrorExit
echo Install failed
:End

View File

@@ -1,35 +0,0 @@
#!/bin/bash
CONDA_ENV_NAME=$1
if [ "$CONDA_ENV_NAME" == "" ]
then
CONDA_ENV_NAME="azure_automl"
fi
if source activate $CONDA_ENV_NAME 2> /dev/null
then
echo "Upgrading azureml-sdk[automl] in existing conda environment" $CONDA_ENV_NAME
pip install --upgrade azureml-sdk[automl]
else
conda env create -f automl_env.yml -n $CONDA_ENV_NAME &&
source activate $CONDA_ENV_NAME &&
jupyter nbextension install --py azureml.train.widgets --user &&
jupyter nbextension enable --py azureml.train.widgets --user &&
echo "" &&
echo "" &&
echo "***************************************" &&
echo "* AutoML setup completed successfully *" &&
echo "***************************************" &&
echo "" &&
echo "Starting jupyter notebook - please run notebook 00.configuration" &&
echo "" &&
jupyter notebook --log-level=50
fi
if [ $? -gt 0 ]
then
echo "Installation failed"
fi

View File

@@ -1,36 +0,0 @@
#!/bin/bash
CONDA_ENV_NAME=$1
if [ "$CONDA_ENV_NAME" == "" ]
then
CONDA_ENV_NAME="azure_automl"
fi
if source activate $CONDA_ENV_NAME 2> /dev/null
then
echo "Upgrading azureml-sdk[automl] in existing conda environment" $CONDA_ENV_NAME
pip install --upgrade azureml-sdk[automl]
else
conda env create -f automl_env.yml -n $CONDA_ENV_NAME &&
source activate $CONDA_ENV_NAME &&
conda install lightgbm -c conda-forge -y &&
jupyter nbextension install --py azureml.train.widgets --user &&
jupyter nbextension enable --py azureml.train.widgets --user &&
echo "" &&
echo "" &&
echo "***************************************" &&
echo "* AutoML setup completed successfully *" &&
echo "***************************************" &&
echo "" &&
echo "Starting jupyter notebook - please run notebook 00.configuration" &&
echo "" &&
jupyter notebook --log-level=50
fi
if [ $? -gt 0 ]
then
echo "Installation failed"
fi

387
configuration.ipynb Normal file
View File

@@ -0,0 +1,387 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/configuration.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Configuration\n",
"\n",
"_**Setting up your Azure Machine Learning services workspace and configuring your notebook library**_\n",
"\n",
"---\n",
"---\n",
"\n",
"## Table of Contents\n",
"\n",
"1. [Introduction](#Introduction)\n",
" 1. What is an Azure Machine Learning workspace\n",
"1. [Setup](#Setup)\n",
" 1. Azure subscription\n",
" 1. Azure ML SDK and other library installation\n",
" 1. Azure Container Instance registration\n",
"1. [Configure your Azure ML Workspace](#Configure%20your%20Azure%20ML%20workspace)\n",
" 1. Workspace parameters\n",
" 1. Access your workspace\n",
" 1. Create a new workspace\n",
" 1. Create compute resources\n",
"1. [Next steps](#Next%20steps)\n",
"\n",
"---\n",
"\n",
"## Introduction\n",
"\n",
"This notebook configures your library of notebooks to connect to an Azure Machine Learning (ML) workspace. In this case, a library contains all of the notebooks in the current folder and any nested folders. You can configure this notebook library to use an existing workspace or create a new workspace.\n",
"\n",
"Typically you will need to run this notebook only once per notebook library as all other notebooks will use connection information that is written here. If you want to redirect your notebook library to work with a different workspace, then you should re-run this notebook.\n",
"\n",
"In this notebook you will\n",
"* Learn about getting an Azure subscription\n",
"* Specify your workspace parameters\n",
"* Access or create your workspace\n",
"* Add a default compute cluster for your workspace\n",
"\n",
"### What is an Azure Machine Learning workspace\n",
"\n",
"An Azure ML Workspace is an Azure resource that organizes and coordinates the actions of many other Azure resources to assist in executing and sharing machine learning workflows. In particular, an Azure ML Workspace coordinates storage, databases, and compute resources providing added functionality for machine learning experimentation, deployment, inference, and the monitoring of deployed models."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"This section describes activities required before you can access any Azure ML services functionality."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 1. Azure Subscription\n",
"\n",
"In order to create an Azure ML Workspace, first you need access to an Azure subscription. An Azure subscription allows you to manage storage, compute, and other assets in the Azure cloud. You can [create a new subscription](https://azure.microsoft.com/en-us/free/) or access existing subscription information from the [Azure portal](https://portal.azure.com). Later in this notebook you will need information such as your subscription ID in order to create and access AML workspaces.\n",
"\n",
"### 2. Azure ML SDK and other library installation\n",
"\n",
"If you are running in your own environment, follow [SDK installation instructions](https://docs.microsoft.com/azure/machine-learning/service/how-to-configure-environment). If you are running in Azure Notebooks or another Microsoft managed environment, the SDK is already installed.\n",
"\n",
"Also install following libraries to your environment. Many of the example notebooks depend on them\n",
"\n",
"```\n",
"(myenv) $ conda install -y matplotlib tqdm scikit-learn\n",
"```\n",
"\n",
"Once installation is complete, the following cell checks the Azure ML SDK version:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"install"
]
},
"outputs": [],
"source": [
"import azureml.core\n",
"\n",
"print(\"This notebook was created using version 1.21.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you are using an older version of the SDK then this notebook was created using, you should upgrade your SDK.\n",
"\n",
"### 3. Azure Container Instance registration\n",
"Azure Machine Learning uses of [Azure Container Instance (ACI)](https://azure.microsoft.com/services/container-instances) to deploy dev/test web services. An Azure subscription needs to be registered to use ACI. If you or the subscription owner have not yet registered ACI on your subscription, you will need to use the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) and execute the following commands. Note that if you ran through the AML [quickstart](https://docs.microsoft.com/en-us/azure/machine-learning/service/quickstart-get-started) you have already registered ACI. \n",
"\n",
"```shell\n",
"# check to see if ACI is already registered\n",
"(myenv) $ az provider show -n Microsoft.ContainerInstance -o table\n",
"\n",
"# if ACI is not registered, run this command.\n",
"# note you need to be the subscription owner in order to execute this command successfully.\n",
"(myenv) $ az provider register -n Microsoft.ContainerInstance\n",
"```\n",
"\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure your Azure ML workspace\n",
"\n",
"### Workspace parameters\n",
"\n",
"To use an AML Workspace, you will need to import the Azure ML SDK and supply the following information:\n",
"* Your subscription id\n",
"* A resource group name\n",
"* (optional) The region that will host your workspace\n",
"* A name for your workspace\n",
"\n",
"You can get your subscription ID from the [Azure portal](https://portal.azure.com).\n",
"\n",
"You will also need access to a [_resource group_](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-overview#resource-groups), which organizes Azure resources and provides a default region for the resources in a group. You can see what resource groups to which you have access, or create a new one in the [Azure portal](https://portal.azure.com). If you don't have a resource group, the create workspace command will create one for you using the name you provide.\n",
"\n",
"The region to host your workspace will be used if you are creating a new workspace. You do not need to specify this if you are using an existing workspace. You can find the list of supported regions [here](https://azure.microsoft.com/en-us/global-infrastructure/services/?products=machine-learning-service). You should pick a region that is close to your location or that contains your data.\n",
"\n",
"The name for your workspace is unique within the subscription and should be descriptive enough to discern among other AML Workspaces. The subscription may be used only by you, or it may be used by your department or your entire enterprise, so choose a name that makes sense for your situation.\n",
"\n",
"The following cell allows you to specify your workspace parameters. This cell uses the python method `os.getenv` to read values from environment variables which is useful for automation. If no environment variable exists, the parameters will be set to the specified default values. \n",
"\n",
"If you ran the Azure Machine Learning [quickstart](https://docs.microsoft.com/en-us/azure/machine-learning/service/quickstart-get-started) in Azure Notebooks, you already have a configured workspace! You can go to your Azure Machine Learning Getting Started library, view *config.json* file, and copy-paste the values for subscription ID, resource group and workspace name below.\n",
"\n",
"Replace the default values in the cell below with your workspace parameters"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"subscription_id = os.getenv(\"SUBSCRIPTION_ID\", default=\"<my-subscription-id>\")\n",
"resource_group = os.getenv(\"RESOURCE_GROUP\", default=\"<my-resource-group>\")\n",
"workspace_name = os.getenv(\"WORKSPACE_NAME\", default=\"<my-workspace-name>\")\n",
"workspace_region = os.getenv(\"WORKSPACE_REGION\", default=\"eastus2\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Access your workspace\n",
"\n",
"The following cell uses the Azure ML SDK to attempt to load the workspace specified by your parameters. If this cell succeeds, your notebook library will be configured to access the workspace from all notebooks using the `Workspace.from_config()` method. The cell can fail if the specified workspace doesn't exist or you don't have permissions to access it. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"try:\n",
" ws = Workspace(subscription_id = subscription_id, resource_group = resource_group, workspace_name = workspace_name)\n",
" # write the details of the workspace to a configuration file to the notebook library\n",
" ws.write_config()\n",
" print(\"Workspace configuration succeeded. Skip the workspace creation steps below\")\n",
"except:\n",
" print(\"Workspace not accessible. Change your parameters or create a new workspace below\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create a new workspace\n",
"\n",
"If you don't have an existing workspace and are the owner of the subscription or resource group, you can create a new workspace. If you don't have a resource group, the create workspace command will create one for you using the name you provide.\n",
"\n",
"**Note**: As with other Azure services, there are limits on certain resources (for example AmlCompute quota) associated with the Azure ML service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.\n",
"\n",
"This cell will create an Azure ML workspace for you in a subscription provided you have the correct permissions.\n",
"\n",
"This will fail if:\n",
"* You do not have permission to create a workspace in the resource group\n",
"* You do not have permission to create a resource group if it's non-existing.\n",
"* You are not a subscription owner or contributor and no Azure ML workspaces have ever been created in this subscription\n",
"\n",
"If workspace creation fails, please work with your IT admin to provide you with the appropriate permissions or to provision the required resources.\n",
"\n",
"**Note**: A Basic workspace is created by default. If you would like to create an Enterprise workspace, please specify sku = 'enterprise'.\n",
"Please visit our [pricing page](https://azure.microsoft.com/en-us/pricing/details/machine-learning/) for more details on our Enterprise edition.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"create workspace"
]
},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"# Create the workspace using the specified parameters\n",
"ws = Workspace.create(name = workspace_name,\n",
" subscription_id = subscription_id,\n",
" resource_group = resource_group, \n",
" location = workspace_region,\n",
" create_resource_group = True,\n",
" sku = 'basic',\n",
" exist_ok = True)\n",
"ws.get_details()\n",
"\n",
"# write the details of the workspace to a configuration file to the notebook library\n",
"ws.write_config()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create compute resources for your training experiments\n",
"\n",
"Many of the sample notebooks use Azure ML managed compute (AmlCompute) to train models using a dynamically scalable pool of compute. In this section you will create default compute clusters for use by the other notebooks and any other operations you choose.\n",
"\n",
"To create a cluster, you need to specify a compute configuration that specifies the type of machine to be used and the scalability behaviors. Then you choose a name for the cluster that is unique within the workspace that can be used to address the cluster later.\n",
"\n",
"The cluster parameters are:\n",
"* vm_size - this describes the virtual machine type and size used in the cluster. All machines in the cluster are the same type. You can get the list of vm sizes available in your region by using the CLI command\n",
"\n",
"```shell\n",
"az vm list-skus -o tsv\n",
"```\n",
"* min_nodes - this sets the minimum size of the cluster. If you set the minimum to 0 the cluster will shut down all nodes while not in use. Setting this number to a value higher than 0 will allow for faster start-up times, but you will also be billed when the cluster is not in use.\n",
"* max_nodes - this sets the maximum size of the cluster. Setting this to a larger number allows for more concurrency and a greater distributed processing of scale-out jobs.\n",
"\n",
"\n",
"To create a **CPU** cluster now, run the cell below. The autoscale settings mean that the cluster will scale down to 0 nodes when inactive and up to 4 nodes when busy."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# Choose a name for your CPU cluster\n",
"cpu_cluster_name = \"cpu-cluster\"\n",
"\n",
"# Verify that cluster does not exist already\n",
"try:\n",
" cpu_cluster = ComputeTarget(workspace=ws, name=cpu_cluster_name)\n",
" print(\"Found existing cpu-cluster\")\n",
"except ComputeTargetException:\n",
" print(\"Creating new cpu-cluster\")\n",
" \n",
" # Specify the configuration for the new cluster\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size=\"STANDARD_D2_V2\",\n",
" min_nodes=0,\n",
" max_nodes=4)\n",
"\n",
" # Create the cluster with the specified name and configuration\n",
" cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config)\n",
" \n",
" # Wait for the cluster to complete, show the output log\n",
" cpu_cluster.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To create a **GPU** cluster, run the cell below. Note that your subscription must have sufficient quota for GPU VMs or the command will fail. To increase quota, see [these instructions](https://docs.microsoft.com/en-us/azure/azure-supportability/resource-manager-core-quotas-request). "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# Choose a name for your GPU cluster\n",
"gpu_cluster_name = \"gpu-cluster\"\n",
"\n",
"# Verify that cluster does not exist already\n",
"try:\n",
" gpu_cluster = ComputeTarget(workspace=ws, name=gpu_cluster_name)\n",
" print(\"Found existing gpu cluster\")\n",
"except ComputeTargetException:\n",
" print(\"Creating new gpu-cluster\")\n",
" \n",
" # Specify the configuration for the new cluster\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size=\"STANDARD_NC6\",\n",
" min_nodes=0,\n",
" max_nodes=4)\n",
" # Create the cluster with the specified name and configuration\n",
" gpu_cluster = ComputeTarget.create(ws, gpu_cluster_name, compute_config)\n",
"\n",
" # Wait for the cluster to complete, show the output log\n",
" gpu_cluster.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"## Next steps\n",
"\n",
"In this notebook you configured this notebook library to connect easily to an Azure ML workspace. You can copy this notebook to your own libraries to connect them to you workspace, or use it to bootstrap new workspaces completely.\n",
"\n",
"If you came here from another notebook, you can return there and complete that exercise, or you can try out the [Tutorials](./tutorials) or jump into \"how-to\" notebooks and start creating and deploying models. A good place to start is the [train within notebook](./how-to-use-azureml/training/train-within-notebook) example that walks through a simplified but complete end to end machine learning process."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"authors": [
{
"name": "ninhu"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

4
configuration.yml Normal file
View File

@@ -0,0 +1,4 @@
name: configuration
dependencies:
- pip:
- azureml-sdk

305
contrib/RAPIDS/README.md Normal file
View File

@@ -0,0 +1,305 @@
## How to use the RAPIDS on AzureML materials
### Setting up requirements
The material requires the Azure ML SDK and the Jupyter Notebook server to run the interactive execution. Please refer to the instructions to [set up the environment.](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-environment#local "Local Computer Set Up") Follow the instructions under **Local Computer**, and make sure to run the last step, `pip install <new package>`, with `progressbar2` as the package (`pip install progressbar2`).
After following the directions, the user should end up with a conda environment (`myenv`) that can be activated in an Anaconda prompt.
The user also requires an Azure Subscription with a Machine Learning Services quota in the desired region for 24 nodes or more (to be able to select a vmSize with 4 GPUs, as used in the Notebook) on the desired VM family ([NC\_v3](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu#ncv3-series), [NC\_v2](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu#ncv2-series), [ND](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu#nd-series) or [ND_v2](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu#ndv2-series-preview)). The specific vmSize to be used within the chosen family also needs to be whitelisted for Machine Learning Services usage.
&nbsp;
### Getting and running the material
Clone the AzureML Notebooks repository in GitHub by running the following command on a local_directory:
* C:\local_directory>git clone https://github.com/Azure/MachineLearningNotebooks.git
In a conda prompt, navigate to the local directory, activate the conda environment (`myenv`) where the Azure ML SDK was installed, and launch Jupyter Notebook:
* (myenv) C:\local_directory>jupyter notebook
From the resulting browser at http://localhost:8888/tree, navigate to the master notebook:
* http://localhost:8888/tree/MachineLearningNotebooks/contrib/RAPIDS/azure-ml-with-nvidia-rapids.ipynb
&nbsp;
The following notebook will appear:
![](imgs/NotebookHome.png)
&nbsp;
### Master Jupyter Notebook
The notebook can be executed interactively step by step by pressing the Run button (circled in red in the image above).
The first couple of functional steps import the necessary AzureML libraries. If you experience any errors, please refer back to the [environment setup](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-environment#local "Local Computer Set Up") instructions.
&nbsp;
#### Setting up a Workspace
The following step gathers the information necessary to set up a workspace in which to execute the RAPIDS script. This needs to be done only once, or not at all if you already have a usable workspace set up in the Azure Portal:
![](imgs/WorkSpaceSetUp.png)
It is important to set the correct values for subscription\_id, resource\_group, workspace\_name, and region before executing the step. An example is:
subscription_id = os.environ.get("SUBSCRIPTION_ID", "1358e503-xxxx-4043-xxxx-65b83xxxx32d")
resource_group = os.environ.get("RESOURCE_GROUP", "AML-Rapids-Testing")
workspace_name = os.environ.get("WORKSPACE_NAME", "AML_Rapids_Tester")
workspace_region = os.environ.get("WORKSPACE_REGION", "West US 2")
&nbsp;
The resource\_group and workspace_name can take any value; the region should match the region for which the subscription has the required Machine Learning Services node quota.
The first time the code is executed, it will redirect to the Azure Portal to validate subscription credentials. After the workspace is created, its related information is stored in a local file so that this step can subsequently be skipped. The next step simply loads the saved workspace:
![](imgs/saved_workspace.png)
Once a workspace has been created, the user can skip its creation and jump straight to this step. The configuration file resides in:
* C:\local_directory\\MachineLearningNotebooks\contrib\RAPIDS\aml_config\config.json
&nbsp;
#### Creating an AML Compute Target
The following step creates an AML Compute Target:
![](imgs/target_creation.png)
The vm\_size parameter in the AmlCompute.provisioning\_configuration() call has to belong to one of the VM families ([NC\_v3](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu#ncv3-series), [NC\_v2](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu#ncv2-series), [ND](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu#nd-series) or [ND_v2](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu#ndv2-series-preview)), the families provisioned with the P40 or V100 GPUs that RAPIDS supports. In this particular walkthrough a Standard\_NC24s\_v2 was used.
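The corresponding notebook cell looks essentially like this (shown here with the Standard_NC24s_v2 size used in this walkthrough; the notebook's default size may differ):

```python
from azureml.core.compute import AmlCompute, ComputeTarget

gpu_cluster_name = "gpucluster"

if gpu_cluster_name in ws.compute_targets:
    gpu_cluster = ws.compute_targets[gpu_cluster_name]
    print('Found compute target. Will use {0}'.format(gpu_cluster_name))
else:
    # vm_size must come from one of the RAPIDS-capable families (NC_v2, NC_v3, ND, ND_v2)
    provisioning_config = AmlCompute.provisioning_configuration(vm_size="Standard_NC24s_v2",
                                                                min_nodes=1, max_nodes=1)
    gpu_cluster = ComputeTarget.create(ws, gpu_cluster_name, provisioning_config)
    gpu_cluster.wait_for_completion(show_output=True)
```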
&nbsp;
If the output of running the step contains an error of the form:
![](imgs/targeterror1.png)
it indicates that, even though the subscription has a VM node quota for that family, it does not have a Machine Learning Services node quota for it.
You will need to request a node quota increase for that family in that region for **Machine Learning Services**.
&nbsp;
Another possible error is the following:
![](imgs/targeterror2.png)
This indicates that the specified vmSize has not been whitelisted for usage with Machine Learning Services, and a request to whitelist it should be filed.
A successful creation of the compute target produces output like the following:
![](imgs/targetsuccess.png)
&nbsp;
#### RAPIDS script uploading and viewing
The next step copies the RAPIDS script process_data.py, which is a slightly modified implementation of the [RAPIDS E2E example](https://github.com/rapidsai/notebooks/blob/master/mortgage/E2E.ipynb), into a scripts folder and displays its contents. (The script is discussed in detail in the next section.)
If the user wants to use a different RAPIDS script, the references to the <span style="font-family: Courier New;">process_data.py</span> script have to be changed:
![](imgs/scriptuploading.png)
&nbsp;
#### Data Uploading
The RAPIDS script loads and extracts features from Fannie Mae's Mortgage Dataset to train an XGBoost prediction model; the script uses two years of data.
The next few steps download and decompress the data and make it available to the script as an [Azure Machine Learning Datastore](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-access-data).
&nbsp;
The following functions are used to download and decompress the input data
![](imgs/dcf1.png)
![](imgs/dcf2.png)
![](imgs/dcf3.png)
![](imgs/dcf4.png)
&nbsp;
The next step uses those functions to download the file
http://rapidsai-data.s3-website.us-east-2.amazonaws.com/notebook-mortgage-data/mortgage_2000-2001.tgz
locally and to decompress it into the local folder path = .\mortgage_2000-2001.
The step takes several minutes; the intermediate outputs provide progress indicators.
![](imgs/downamddecom.png)
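In outline, the driver cell ties these helpers together as follows (validate_downloaded_data, download_file, and decompress_file are the functions shown in the images above):

```python
fileroot = 'mortgage_2000-2001'
path = '.\\{0}'.format(fileroot)

if not validate_downloaded_data(path):
    print("Downloading and Decompressing Input Data")
    filename = download_file(fileroot)   # fetches mortgage_2000-2001.tgz unless already present
    decompress_file(filename, path)      # extracts into .\mortgage_2000-2001
```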
&nbsp;
The decompressed data should have the following structure:
* .\mortgage_2000-2001\acq\Acquisition_<year>Q<num>.txt
* .\mortgage_2000-2001\perf\Performance_<year>Q<num>.txt
* .\mortgage_2000-2001\names.csv
The data is divided into partitions that roughly correspond to yearly quarters. RAPIDS includes support for multi-node, multi-GPU deployments, enabling scaling up and out to much larger dataset sizes; the user will be able to verify that the number of partitions the script can process increases with the number of GPUs used. The RAPIDS script here is implemented for single-machine scenarios; an example supporting multiple nodes will be published later.
&nbsp;
The next step uploads the data into the [Azure Machine Learning Datastore](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-access-data) under the reference <span style="font-family: Courier New;">fileroot = mortgage_2000-2001</span>.
The step takes several minutes to load the data; the output provides a progress indicator.
![](imgs/datastore.png)
Once the data has been loaded into the Azure Machine Learning Datastore, in subsequent runs the user can comment out the ds.upload line and simply use the <span style="font-family: Courier New;">mortgage_2000-2001</span> datastore reference.
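The upload cell reduces to the following (comment out the ds.upload line on subsequent runs, as noted above):

```python
from azureml.data.data_reference import DataReference

ds = ws.get_default_datastore()

# upload once; on later runs this line can be commented out
ds.upload(src_dir=path, target_path=fileroot, overwrite=True, show_progress=True)

# named reference used later by the run configuration
data_ref = DataReference(data_reference_name='data', datastore=ds, path_on_datastore=fileroot)
```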
&nbsp;
#### Setting up required libraries and environment to run RAPIDS code
There are two options for setting up the environment to run RAPIDS code. The following step shows how to use a prebuilt conda environment; a recommended alternative is to specify a base Docker image and package dependencies. You can find sample code for that in the notebook.
![](imgs/install2.png)
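The prebuilt-conda option boils down to the following cell (rapids.yml lists the RAPIDS packages; see the notebook for the Docker-based alternative):

```python
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.runconfig import RunConfiguration

cd = CondaDependencies(conda_dependencies_file_path='rapids.yml')
run_config = RunConfiguration(conda_dependencies=cd)
run_config.framework = 'python'
run_config.target = gpu_cluster_name
run_config.environment.docker.enabled = True
run_config.environment.docker.gpu_support = True
run_config.environment.docker.base_image = "mcr.microsoft.com/azureml/base-gpu:intelmpi2018.3-cuda10.0-cudnn7-ubuntu16.04"
run_config.environment.spark.precache_packages = False
run_config.data_references = {'data': data_ref.to_config()}
```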
&nbsp;
#### Wrapper function to submit the RAPIDS script as an Azure Machine Learning experiment
The next step defines a wrapper function used to run the RAPIDS script with different arguments. It takes three arguments: <span style="font-family: Times New Roman;">*cpu\_training*</span>, a flag indicating whether the run should be processed with CPU only; <span style="font-family: Times New Roman;">*gpu\_count*</span>, the number of GPUs to use when GPUs are used; and <span style="font-family: Times New Roman;">*part\_count*</span>, the number of data partitions to use.
![](imgs/wrapper.png)
&nbsp;
The core of the function is the configuration of the run through the instantiation of a ScriptRunConfig object, which defines the source_directory containing the script to be executed, the name of the script, and the arguments to be passed to it.
In addition to the wrapper function arguments, two other arguments are passed: <span style="font-family: Times New Roman;">*data\_dir*</span>, the directory where the data is stored, and <span style="font-family: Times New Roman;">*end_year*</span>, the largest year to use partitions from.
As mentioned earlier, the size of the data that can be processed increases with the number of GPUs. In the function, the dictionary <span style="font-family: Times New Roman;">*max\_gpu\_count\_data\_partition_mapping*</span> maps each GPU count to the maximum number of partitions that we empirically found the system can handle. The function prints a warning when the number of partitions for a given number of GPUs exceeds that maximum, but the script is still executed; the user should expect the run to fail with an out-of-memory error.
If the user wants to use a different RAPIDS script, the reference to the process_data.py script has to be changed. The core of the wrapper is shown below.
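Stripped of the partition-count checks, the wrapper reduces to:

```python
from azureml.core import Experiment, ScriptRunConfig
from azureml.widgets import RunDetails

src = ScriptRunConfig(source_directory=scripts_folder,
                      script='process_data.py',
                      arguments=['--num_gpu', gpu_count, '--data_dir', str(data_ref),
                                 '--part_count', part_count, '--end_year', end_year,
                                 '--cpu_predictor', cpu_training],
                      run_config=run_config)

run = Experiment(ws, 'rapidstest').submit(config=src)
RunDetails(run).show()   # live status widget in the notebook
```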
&nbsp;
#### Submitting Experiments
We are ready to submit experiments: launching the RAPIDS script with different sets of parameters.
&nbsp;
The following couple of steps submit experiments under different conditions.
![](imgs/submission1.png)
&nbsp;
The user can set the variable num\_gpu to any value between one and the number of GPUs supported by the chosen vmSize. The variable part\_count can take any value between 1 and 11, but if it exceeds the maximum for num_gpu, the run will result in an error.
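For example, the two submission cells differ only in the cpu_training flag:

```python
# ETL and training on GPU
run = run_rapids_experiment(cpu_training=False, gpu_count=1, part_count=1)

# ETL on GPU, training on CPU (the CPU baseline)
run = run_rapids_experiment(cpu_training=True, gpu_count=1, part_count=1)
```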
&nbsp;
If the experiment is successfully submitted, it is placed on a queue for processing; its status appears as Queued and output like the following is shown:
![](imgs/queue.png)
&nbsp;
When the experiment starts running, its status appears as Running and the output changes to something like this:
![](imgs/running.png)
&nbsp;
#### Reproducing the performance gains plot results on the Blog Post
When the run has finished successfully, its status appears as Completed and the output changes to something like this:
&nbsp;
![](imgs/completed.png)
This is the output for an experiment run with three partitions and one GPU; notice that the reported processing time is 49.16 seconds, just as depicted in the performance gains plot in the blog post.
&nbsp;
![](imgs/2GPUs.png)
This output corresponds to a run with three partitions and two GPUs; notice that the reported processing time is 37.50 seconds, just as depicted in the performance gains plot in the blog post.
&nbsp;
![](imgs/3GPUs.png)
This output corresponds to an experiment run with three partitions and three GPUs; notice that the reported processing time is 24.40 seconds, just as depicted in the performance gains plot in the blog post.
&nbsp;
![](imgs/4gpus.png)
This output corresponds to an experiment run with three partitions and four GPUs; notice that the reported processing time is 23.33 seconds, just as depicted in the performance gains plot in the blog post.
&nbsp;
![](imgs/CPUBase.png)
This output corresponds to an experiment run with three partitions using only the CPU; notice that the reported processing time is 9 minutes and 1.21 seconds, or 541.21 seconds, just as depicted in the performance gains plot in the blog post.
&nbsp;
![](imgs/OOM.png)
This output corresponds to an experiment run with nine partitions and four GPUs; notice that the notebook prints a warning signaling that the number of partitions exceeds the maximum the system can handle with that many GPUs, and the run ends up failing, hence its status of Failed.
&nbsp;
##### Freeing Resources
In the last step, the notebook deletes the compute target. (This step is optional, especially if min_nodes in the cluster is set to 0, in which case the cluster scales down to 0 nodes when there is no usage.)
![](imgs/clusterdelete.png)
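The cell is a one-liner:

```python
# optional: tear down the cluster when finished
gpu_cluster.delete()
```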
&nbsp;
### RAPIDS Script
The master notebook runs experiments by launching a RAPIDS script with different sets of parameters. In this section, that RAPIDS script, process_data.py in the material, is analyzed.
The script first imports all the necessary libraries and parses the arguments passed by the master notebook.
Then all the internal functions to be used by the script are defined.
&nbsp;
#### Wrapper Auxiliary Functions:
The functions below are wrappers around a configuration module for librmm, the Python interface to the RAPIDS Memory Manager:
![](imgs/wap1.png)![](imgs/wap2.png)
&nbsp;
A couple of other functions are wrappers for the submission of jobs to the Dask client:
![](imgs/wap3.png)
![](imgs/wap4.png)
&nbsp;
#### Data Loading Functions:
The data is loaded through the following three functions:
![](imgs/DLF1.png)![](imgs/DLF2.png)![](imgs/DLF3.png)
All three functions use the library function cudf.read_csv(), the cuDF version of the well-known pandas counterpart.
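As an illustration of the pattern only (the column names and dtypes below are placeholders, not the script's actual schema), a cudf.read_csv call looks like this:

```python
import cudf

# read one pipe-delimited performance partition straight into GPU memory
gdf = cudf.read_csv('perf/Performance_2000Q1.txt', delimiter='|',
                    names=['loan_id', 'reporting_period'],  # placeholder column names
                    dtype=['int64', 'str'])                 # placeholder dtypes
print(gdf.head())
```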
&nbsp;
#### Data Transformation and Feature Extraction Functions:
The raw data is transformed and processed to extract features by joining, slicing, grouping, aggregating, factoring, etc., the original dataframes, just as is done with pandas. The following functions in the script are used for that purpose:
![](imgs/fef1.png)![](imgs/fef2.png)![](imgs/fef3.png)![](imgs/fef4.png)![](imgs/fef5.png)
![](imgs/fef6.png)![](imgs/fef7.png)![](imgs/fef8.png)![](imgs/fef9.png)
&nbsp;
#### Main() Function
The previous functions are used in the main() function to accomplish several steps: setting up the Dask client, performing all ETL operations, and setting up and training an XGBoost model. The function also assigns which data is to be processed by each Dask worker.
&nbsp;
##### Setting Up the Dask Client:
The following lines:
![](imgs/daskini.png)
&nbsp;
These lines initialize and set up a Dask client with a number of workers corresponding to the number of GPUs used in the run. A successful execution of the setup results in the following output:
![](imgs/daskoutput.png)
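A minimal sketch of such a setup, assuming the dask_cuda package provides the cluster (the script's actual bootstrapping may differ):

```python
from dask.distributed import Client
from dask_cuda import LocalCUDACluster

num_gpu = 2  # number of GPUs requested for the run (illustrative)

# one Dask worker per GPU
cluster = LocalCUDACluster(n_workers=num_gpu)
client = Client(cluster)
print(client)
```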
##### All ETL functions are invoked through single calls to process\_quarter_gpu, one per data partition
![](imgs/ETL.png)
&nbsp;
##### Concatenating the data assigned to each Dask worker
The partitions assigned to each worker are concatenated and set up for training.
![](imgs/Dask2.png)
&nbsp;
##### Setting Training Parameters
The parameters used for the training of a gradient boosted decision tree model are set up in the following code block:
![](imgs/PArameters.png)
Notice how the parameters are modified when using the CPU-only mode.
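Schematically, the CPU/GPU switch amounts to something like the following (the parameter values are illustrative, not the script's exact settings):

```python
params = {
    'max_depth': 8,
    'objective': 'reg:squarederror',
    'tree_method': 'gpu_hist',      # GPU-accelerated histogram algorithm
}

if cpu_predictor:
    # CPU-only mode: fall back to the CPU histogram implementation
    params['tree_method'] = 'hist'
    params['predictor'] = 'cpu_predictor'
```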
&nbsp;
##### Launching the training of a gradient boosted decision tree model using XGBoost
![](imgs/training.png)
The outputs of the script can be observed in the master notebook as the script executes.


@@ -0,0 +1,554 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/contrib/RAPIDS/azure-ml-with-nvidia-rapids/azure-ml-with-nvidia-rapids.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# NVIDIA RAPIDS in Azure Machine Learning"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The [RAPIDS](https://www.developer.nvidia.com/rapids) suite of software libraries from NVIDIA enables the execution of end-to-end data science and analytics pipelines entirely on GPUs. In many machine learning projects, a significant portion of the model training time is spent in setting up the data; this stage of the process is known as Extraction, Transformation and Loading, or ETL. By using the DataFrame API for ETL\u00c2\u00a0and GPU-capable ML algorithms in RAPIDS, data preparation and training models can be done in GPU-accelerated end-to-end pipelines without incurring serialization costs between the pipeline stages. This notebook demonstrates how to use NVIDIA RAPIDS to prepare data and train model\u00c3\u201a\u00c2\u00a0in Azure.\n",
" \n",
"In this notebook, we will do the following:\n",
" \n",
"* Create an Azure Machine Learning Workspace\n",
"* Create an AMLCompute target\n",
"* Use a script to process our data and train a model\n",
"* Obtain the data required to run this sample\n",
"* Create an AML run configuration to launch a machine learning job\n",
"* Run the script to prepare data for training and train the model\n",
" \n",
"Prerequisites:\n",
"* An Azure subscription to create a Machine Learning Workspace\n",
"* Familiarity with the Azure ML SDK (refer to [notebook samples](https://github.com/Azure/MachineLearningNotebooks))\n",
"* A Jupyter notebook environment with Azure Machine Learning SDK installed. Refer to instructions to [setup the environment](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-environment#local)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Verify if Azure ML SDK is installed"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from azureml.core import Workspace, Experiment\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"from azureml.core.compute import AmlCompute, ComputeTarget\n",
"from azureml.data.data_reference import DataReference\n",
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core import ScriptRunConfig\n",
"from azureml.widgets import RunDetails"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create Azure ML Workspace"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following step is optional if you already have a workspace. If you want to use an existing workspace, then\n",
"skip this workspace creation step and move on to the next step to load the workspace.\n",
" \n",
"<font color='red'>Important</font>: in the code cell below, be sure to set the correct values for the subscription_id, \n",
"resource_group, workspace_name, region before executing this code cell."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"subscription_id = os.environ.get(\"SUBSCRIPTION_ID\", \"<subscription_id>\")\n",
"resource_group = os.environ.get(\"RESOURCE_GROUP\", \"<resource_group>\")\n",
"workspace_name = os.environ.get(\"WORKSPACE_NAME\", \"<workspace_name>\")\n",
"workspace_region = os.environ.get(\"WORKSPACE_REGION\", \"<region>\")\n",
"\n",
"ws = Workspace.create(workspace_name, subscription_id=subscription_id, resource_group=resource_group, location=workspace_region)\n",
"\n",
"# write config to a local directory for future use\n",
"ws.write_config()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load existing Workspace"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# if a locally-saved configuration file for the workspace is not available, use the following to load workspace\n",
"# ws = Workspace(subscription_id=subscription_id, resource_group=resource_group, workspace_name=workspace_name)\n",
"\n",
"print('Workspace name: ' + ws.name, \n",
" 'Azure region: ' + ws.location, \n",
" 'Subscription id: ' + ws.subscription_id, \n",
" 'Resource group: ' + ws.resource_group, sep = '\\n')\n",
"\n",
"scripts_folder = \"scripts_folder\"\n",
"\n",
"if not os.path.isdir(scripts_folder):\n",
" os.mkdir(scripts_folder)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create AML Compute Target"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Because NVIDIA RAPIDS requires P40 or V100 GPUs, the user needs to specify compute targets from one of [NC_v3](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu#ncv3-series), [NC_v2](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu#ncv2-series), [ND](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu#nd-series) or [ND_v2](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu#ndv2-series-preview) virtual machine types in Azure; these are the families of virtual machines in Azure that are provisioned with these GPUs.\n",
" \n",
"Pick one of the supported VM SKUs based on the number of GPUs you want to use for ETL and training in RAPIDS.\n",
" \n",
"The script in this notebook is implemented for single-machine scenarios. An example supporting multiple nodes will be published later."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"gpu_cluster_name = \"gpucluster\"\n",
"\n",
"if gpu_cluster_name in ws.compute_targets:\n",
" gpu_cluster = ws.compute_targets[gpu_cluster_name]\n",
" if gpu_cluster and type(gpu_cluster) is AmlCompute:\n",
" print('Found compute target. Will use {0} '.format(gpu_cluster_name))\n",
"else:\n",
" print(\"creating new cluster\")\n",
" # vm_size parameter below could be modified to one of the RAPIDS-supported VM types\n",
" provisioning_config = AmlCompute.provisioning_configuration(vm_size = \"Standard_NC6s_v2\", min_nodes=1, max_nodes = 1)\n",
"\n",
" # create the cluster\n",
" gpu_cluster = ComputeTarget.create(ws, gpu_cluster_name, provisioning_config)\n",
" gpu_cluster.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Script to process data and train model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The _process&#95;data.py_ script used in the step below is a slightly modified implementation of [RAPIDS Mortgage E2E example](https://github.com/rapidsai/notebooks-contrib/blob/master/intermediate_notebooks/E2E/mortgage/mortgage_e2e.ipynb)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# copy process_data.py into the script folder\n",
"import shutil\n",
"shutil.copy('./process_data.py', os.path.join(scripts_folder, 'process_data.py'))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Data required to run this sample"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This sample uses [Fannie Mae's Single-Family Loan Performance Data](http://www.fanniemae.com/portal/funding-the-market/data/loan-performance-data.html). Once you obtain access to the data, you will need to make this data available in an [Azure Machine Learning Datastore](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-access-data), for use in this sample. The following code shows how to do that."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Downloading Data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import tarfile\n",
"import hashlib\n",
"from urllib.request import urlretrieve\n",
"\n",
"def validate_downloaded_data(path):\n",
" if(os.path.isdir(path) and os.path.exists(path + '//names.csv')) :\n",
" if(os.path.isdir(path + '//acq' ) and len(os.listdir(path + '//acq')) == 8):\n",
" if(os.path.isdir(path + '//perf' ) and len(os.listdir(path + '//perf')) == 11):\n",
" print(\"Data has been downloaded and decompressed at: {0}\".format(path))\n",
" return True\n",
" print(\"Data has not been downloaded and decompressed\")\n",
" return False\n",
"\n",
"def show_progress(count, block_size, total_size):\n",
" global pbar\n",
" global processed\n",
" \n",
" if count == 0:\n",
" pbar = ProgressBar(maxval=total_size)\n",
" processed = 0\n",
" \n",
" processed += block_size\n",
" processed = min(processed,total_size)\n",
" pbar.update(processed)\n",
"\n",
" \n",
"def download_file(fileroot):\n",
" filename = fileroot + '.tgz'\n",
" if(not os.path.exists(filename) or hashlib.md5(open(filename, 'rb').read()).hexdigest() != '82dd47135053303e9526c2d5c43befd5' ):\n",
" url_format = 'http://rapidsai-data.s3-website.us-east-2.amazonaws.com/notebook-mortgage-data/{0}.tgz'\n",
" url = url_format.format(fileroot)\n",
" print(\"...Downloading file :{0}\".format(filename))\n",
" urlretrieve(url, filename)\n",
" pbar.finish()\n",
" print(\"...File :{0} finished downloading\".format(filename))\n",
" else:\n",
" print(\"...File :{0} has been downloaded already\".format(filename))\n",
" return filename\n",
"\n",
"def decompress_file(filename,path):\n",
" tar = tarfile.open(filename)\n",
" print(\"...Getting information from {0} about files to decompress\".format(filename))\n",
" members = tar.getmembers()\n",
" numFiles = len(members)\n",
" so_far = 0\n",
" for member_info in members:\n",
" tar.extract(member_info,path=path)\n",
" so_far += 1\n",
" print(\"...All {0} files have been decompressed\".format(numFiles))\n",
" tar.close()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"fileroot = 'mortgage_2000-2001'\n",
"path = '.\\\\{0}'.format(fileroot)\n",
"pbar = None\n",
"processed = 0\n",
"\n",
"if(not validate_downloaded_data(path)):\n",
" print(\"Downloading and Decompressing Input Data\")\n",
" filename = download_file(fileroot)\n",
" decompress_file(filename,path)\n",
" print(\"Input Data has been Downloaded and Decompressed\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Uploading Data to Workspace"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ds = ws.get_default_datastore()\n",
"\n",
"# download and uncompress data in a local directory before uploading to data store\n",
"# directory specified in src_dir parameter below should have the acq, perf directories with data and names.csv file\n",
"\n",
"# ---->>>> UNCOMMENT THE BELOW LINE TO UPLOAD YOUR DATA IF NOT DONE SO ALREADY <<<<----\n",
"# ds.upload(src_dir=path, target_path=fileroot, overwrite=True, show_progress=True)\n",
"\n",
"# data already uploaded to the datastore\n",
"data_ref = DataReference(data_reference_name='data', datastore=ds, path_on_datastore=fileroot)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create AML run configuration to launch a machine learning job"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"RunConfiguration is used to submit jobs to Azure Machine Learning service. When creating RunConfiguration for a job, users can either \n",
"1. specify a Docker image with prebuilt conda environment and use it without any modifications to run the job, or \n",
"2. specify a Docker image as the base image and conda or pip packages as dependnecies to let AML build a new Docker image with a conda environment containing specified dependencies to use in the job\n",
"\n",
"The second option is the recommended option in AML. \n",
"The following steps have code for both options. You can pick the one that is more appropriate for your requirements. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Specify prebuilt conda environment"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following code shows how to install RAPIDS using conda. The `rapids.yml` file contains the list of packages necessary to run this tutorial. **NOTE:** Initial build of the image might take up to 20 minutes as the service needs to build and cache the new image; once the image is built the subequent runs use the cached image and the overhead is minimal."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"cd = CondaDependencies(conda_dependencies_file_path='rapids.yml')\n",
"run_config = RunConfiguration(conda_dependencies=cd)\n",
"run_config.framework = 'python'\n",
"run_config.target = gpu_cluster_name\n",
"run_config.environment.docker.enabled = True\n",
"run_config.environment.docker.gpu_support = True\n",
"run_config.environment.docker.base_image = \"mcr.microsoft.com/azureml/base-gpu:intelmpi2018.3-cuda10.0-cudnn7-ubuntu16.04\"\n",
"run_config.environment.spark.precache_packages = False\n",
"run_config.data_references={'data':data_ref.to_config()}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Using Docker"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Alternatively, you can specify RAPIDS Docker image."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# run_config = RunConfiguration()\n",
"# run_config.framework = 'python'\n",
"# run_config.environment.python.user_managed_dependencies = True\n",
"# run_config.environment.python.interpreter_path = '/conda/envs/rapids/bin/python'\n",
"# run_config.target = gpu_cluster_name\n",
"# run_config.environment.docker.enabled = True\n",
"# run_config.environment.docker.gpu_support = True\n",
"# run_config.environment.docker.base_image = \"rapidsai/rapidsai:cuda9.2-runtime-ubuntu18.04\"\n",
"# # run_config.environment.docker.base_image_registry.address = '<registry_url>' # not required if the base_image is in Docker hub\n",
"# # run_config.environment.docker.base_image_registry.username = '<user_name>' # needed only for private images\n",
"# # run_config.environment.docker.base_image_registry.password = '<password>' # needed only for private images\n",
"# run_config.environment.spark.precache_packages = False\n",
"# run_config.data_references={'data':data_ref.to_config()}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Wrapper function to submit Azure Machine Learning experiment"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# parameter cpu_predictor indicates if training should be done on CPU. If set to true, GPUs are used *only* for ETL and *not* for training\n",
"# parameter num_gpu indicates number of GPUs to use among the GPUs available in the VM for ETL and if cpu_predictor is false, for training as well \n",
"def run_rapids_experiment(cpu_training, gpu_count, part_count):\n",
" # any value between 1-4 is allowed here depending the type of VMs available in gpu_cluster\n",
" if gpu_count not in [1, 2, 3, 4]:\n",
" raise Exception('Value specified for the number of GPUs to use {0} is invalid'.format(gpu_count))\n",
"\n",
" # following data partition mapping is empirical (specific to GPUs used and current data partitioning scheme) and may need to be tweaked\n",
" max_gpu_count_data_partition_mapping = {1: 3, 2: 4, 3: 6, 4: 8}\n",
" \n",
" if part_count > max_gpu_count_data_partition_mapping[gpu_count]:\n",
" print(\"Too many partitions for the number of GPUs, exceeding memory threshold\")\n",
" \n",
" if part_count > 11:\n",
" print(\"Warning: Maximum number of partitions available is 11\")\n",
" part_count = 11\n",
" \n",
" end_year = 2000\n",
" \n",
" if part_count > 4:\n",
" end_year = 2001 # use more data with more GPUs\n",
"\n",
" src = ScriptRunConfig(source_directory=scripts_folder, \n",
" script='process_data.py', \n",
" arguments = ['--num_gpu', gpu_count, '--data_dir', str(data_ref),\n",
" '--part_count', part_count, '--end_year', end_year,\n",
" '--cpu_predictor', cpu_training\n",
" ],\n",
" run_config=run_config\n",
" )\n",
"\n",
" exp = Experiment(ws, 'rapidstest')\n",
" run = exp.submit(config=src)\n",
" RunDetails(run).show()\n",
" return run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Submit experiment (ETL & training on GPU)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"cpu_predictor = False\n",
"# the value for num_gpu should be less than or equal to the number of GPUs available in the VM\n",
"num_gpu = 1\n",
"data_part_count = 1\n",
"# train using CPU, use GPU for both ETL and training\n",
"run = run_rapids_experiment(cpu_predictor, num_gpu, data_part_count)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Submit experiment (ETL on GPU, training on CPU)\n",
"\n",
"To observe performance difference between GPU-accelerated RAPIDS based training with CPU-only training, set 'cpu_predictor' predictor to 'True' and rerun the experiment"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"cpu_predictor = True\n",
"# the value for num_gpu should be less than or equal to the number of GPUs available in the VM\n",
"num_gpu = 1\n",
"data_part_count = 1\n",
"# train using CPU, use GPU for ETL\n",
"run = run_rapids_experiment(cpu_predictor, num_gpu, data_part_count)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Delete cluster"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# delete the cluster\n",
"# gpu_cluster.delete()"
]
}
],
"metadata": {
"authors": [
{
"name": "ksivas"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.8"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

(The remainder of the diff adds binary image files under contrib/RAPIDS/imgs/ — the screenshots referenced above, including ETL.png and OOM.png; their contents are not shown.)