Compare commits

529 Commits

Author / SHA1 / Message / Date
jeff-shepherd
f1aff553c4 Merge pull request #1980 from Man-MSFT/mafong/fairness-dep
Remove fairness notebooks
2025-03-14 09:42:02 -07:00
Man Fong
d195a673e2 Remove fairness notebooks 2025-03-13 14:25:59 -07:00
jeff-shepherd
8dce0fa6fe Merge pull request #1977 from Azure/jeffshep/windowsonnx
Pin onnx on Windows
2024-12-16 08:44:42 -08:00
Jeff Shepherd
4e8a240a71 Pin onnx on Windows 2024-12-13 15:51:10 -08:00
jeff-shepherd
5b019e28de Merge pull request #1976 from Azure/release_update_stablev2/Release-247
update samples from Release-247 as a part of 1.59.0 SDK stable release
2024-12-13 08:50:52 -08:00
amlrelsa-ms
bf4cb1e86c update samples from Release-247 as a part of 1.59.0 SDK stable release 2024-12-10 17:34:41 +00:00
jeff-shepherd
eaa7c56590 Merge pull request #1974 from Azure/jeffshep/post158sync
Remove deprecated sample notebooks
2024-11-04 09:20:56 -08:00
Jeff Shepherd
8fc0fa040d Remove deprecated sample notebooks 2024-11-01 11:49:20 -07:00
jeff-shepherd
56e13b0b9a Merge pull request #1972 from Azure/release_update_stablev2/Release-243
update samples from Release-243 as a part of 1.58.0 SDK stable release
2024-10-21 09:03:36 -07:00
amlrelsa-ms
785fe3c962 update samples from Release-243 as a part of 1.58.0 SDK stable release 2024-10-16 17:50:12 +00:00
jeff-shepherd
3c341f6e9a Merge pull request #1968 from Azure/release_update_stablev2/Release-240
update samples from Release-240 as a part of 1.57.0 SDK stable release
2024-08-08 08:36:05 -07:00
amlrelsa-ms
aae88e87ea update samples from Release-240 as a part of 1.57.0 SDK stable release 2024-08-05 21:57:46 +00:00
jeff-shepherd
2352e458c7 Merge pull request #1963 from Azure/release_update_stablev2/Release-209
update samples from Release-209 as a part of 1.56.0 SDK stable release
2024-05-16 09:15:57 -07:00
amlrelsa-ms
8373b93887 update samples from Release-209 as a part of 1.56.0 SDK stable release 2024-04-29 18:42:13 +00:00
jeff-shepherd
f0442166cd Updated curated environments in sample notebooks (#1958)
* Updated curated environments in sample notebooks

* Fixed continuous retraining notebook
2024-02-15 13:01:44 -05:00
jeff-shepherd
33ca8c7933 Merge pull request #1957 from Azure/release_update_stablev2/Release-207
update samples from Release-207 as a part of 1.55.0 SDK stable release
2024-02-07 08:48:02 -08:00
amlrelsa-ms
3fd1ce8993 update samples from Release-207 as a part of 1.55.0 SDK stable release 2024-02-06 19:58:35 +00:00
jeff-shepherd
aa93588190 Merge pull request #1954 from Azure/jeffshep/pinpy38
Temporarily pin back to Python 3.8
2023-12-07 11:03:20 -08:00
Jeff Shepherd
12520400e5 Temporarily pin back to Python 3.8 2023-12-06 13:24:28 -08:00
jeff-shepherd
35614e83fa Merge pull request #1951 from Azure/release_update_stablev2/Release-200
update samples from Release-200 as a part of 1.54.0 SDK stable release
2023-11-22 18:24:05 -08:00
amlrelsa-ms
ff22ac01cc update samples from Release-200 as a part of 1.54.0 SDK stable release 2023-11-21 17:51:12 +00:00
jeff-shepherd
e7dd826f34 Merge pull request #1946 from Azure/jeffshep/pinscikit-learn
Pin scikit-learn to avoid conflict with azureml-responsibleai
2023-10-23 14:57:13 -07:00
Jeff Shepherd
fcc882174b Pin scikit-learn to avoid conflict with azureml-responsibleai 2023-10-23 09:53:39 -07:00
jeff-shepherd
6872d8a3bb Merge pull request #1941 from Azure/jeffshep/updatefor1.53.2
Updated automl_env.yml for Azure ML SDK 1.53.2
2023-10-10 08:49:04 -07:00
Jeff Shepherd
a2cb4c3589 Updated fbprophet to prophet 2023-10-10 08:47:09 -07:00
Jeff Shepherd
15008962b2 Updated automl_env.yml for Azure ML SDK 1.53.2 2023-10-05 19:29:26 -07:00
jeff-shepherd
9414b51fac Merge pull request #1937 from Azure/jeffshep/fixwindows153
Fixed Windows automl_setup for 1.53.0
2023-08-31 21:56:12 -07:00
Jeff Shepherd
80ac414582 Fixed Windows automl_setup for 1.53.0 2023-08-31 16:54:20 -07:00
jeff-shepherd
cbc151660b Merge pull request #1936 from Azure/jeffshep/fixtabulardataset
Fixed tabular-dataset-partition-per-column.ipynb
2023-08-25 15:34:08 -07:00
Jeff Shepherd
0024abc6e3 Fixed tabular-dataset-partition-per-column.ipynb and removed deploy-to-cloud/model-register-and-deploy.ipynb 2023-08-25 13:52:29 -07:00
jeff-shepherd
fa13385860 Merge pull request #1935 from Azure/release_update_stablev2/Release-193
update samples from Release-193 as a part of 1.53.0 SDK stable release
2023-08-23 11:41:24 -07:00
Jeff Shepherd
0c5f6daf52 Fixed readme syntax 2023-08-23 11:37:30 -07:00
Jeff Shepherd
c11e9fc1da Fixed readme syntax 2023-08-23 11:36:17 -07:00
Jeff Shepherd
280150713e Restored V2 message 2023-08-23 10:20:25 -07:00
amlrelsa-ms
bb11c80b1b update samples from Release-193 as a part of 1.53.0 SDK stable release 2023-08-23 03:24:03 +00:00
Diondra Peck
d0961b98bf Add disclaimer to README 2023-06-28 15:47:49 -07:00
Paul Shealy
302589b7f9 Merge pull request #1915 from Azure/release_update_stablev2/Release-171
Release update stablev2/release 171 for SDK 1.51.0
2023-06-07 19:19:33 -07:00
amlrelsa-ms
cc85949d6d update samples from Release-171 as a part of 1.51 SDK stable release 2023-06-06 21:58:24 +05:30
amlrelsa-ms
3a1824e3ad update samples from Release-170 as a part of 1.51 SDK stable release 2023-06-06 10:50:33 +05:30
Paul Shealy
579643326d Merge pull request #1911 from diondrapeck/add-deprecation-disclaimer
Add repository deprecation disclaimer and pointer to v2 repo
2023-05-25 08:04:29 -07:00
Diondra Peck
14f76f227e Add deprecation disclaimer 2023-05-23 12:48:14 -07:00
Paul Shealy
25baf5203a Merge pull request #1899 from Azure/release_update/Release-177
update samples from Release-177 as a part of  SDK release
2023-04-17 13:01:27 -07:00
amlrelsa-ms
1178fcb0ba update samples from Release-177 as a part of SDK release 2023-04-17 10:22:59 +00:00
Sasidhar Kasturi
e4d84c8e45 update samples from Release-169 as a part of 1.50.0 SDK stable release (#1898)
Co-authored-by: amlrelsa-ms <amlrelsa@microsoft.com>
2023-04-14 10:39:38 -04:00
Harneet Virk
7a3ab1e44c Merge pull request #1895 from Azure/release_update/Release-175
update samples from Release-175 as a part of  SDK release
2023-03-28 10:17:27 -07:00
amlrelsa-ms
598a293dfa update samples from Release-175 as a part of SDK release 2023-03-28 01:02:26 +00:00
Harneet Virk
40b3068462 Merge pull request #1884 from Azure/release_update_stablev2/Release-166
update samples from Release-166 as a part of 1.49.0 SDK stable release
2023-02-13 21:22:05 -08:00
amlrelsa-ms
0ecbbbce75 update samples from Release-166 as a part of 1.49.0 SDK stable release 2023-02-14 02:46:24 +00:00
Harneet Virk
9b1e130d18 Merge pull request #1867 from Azure/release_update/Release-173
update samples from Release-173 as a part of  SDK release
2022-12-19 19:37:41 -08:00
amlrelsa-ms
0e17b33d2a update samples from Release-173 as a part of SDK release 2022-12-20 03:35:58 +00:00
Harneet Virk
34d80abd26 Merge pull request #1864 from Azure/release_update/Release-172
update samples from Release-172 as a part of  SDK release
2022-12-16 09:28:16 -08:00
amlrelsa-ms
249278ab77 update samples from Release-172 as a part of SDK release 2022-12-15 17:32:05 +00:00
Harneet Virk
25fdb17f80 Merge pull request #1862 from Azure/release_update/Release-170
update samples from Release-170 as a part of  SDK release
2022-12-06 10:06:06 -08:00
amlrelsa-ms
3a02a27f1e update samples from Release-170 as a part of SDK release 2022-12-06 03:22:18 +00:00
Harneet Virk
4eed9d529f Merge pull request #1861 from Azure/release_update/Release-169
update samples from Release-169 as a part of  SDK release
2022-12-05 12:33:52 -08:00
amlrelsa-ms
f344d410a2 update samples from Release-169 as a part of SDK release 2022-12-05 20:12:47 +00:00
Harneet Virk
9dc1228063 Merge pull request #1860 from Azure/release_update/Release-168
update samples from Release-168 as a part of  SDK release
2022-12-05 09:54:01 -08:00
amlrelsa-ms
4404e62f58 update samples from Release-168 as a part of SDK release 2022-12-05 17:52:07 +00:00
Harneet Virk
38d5743bbb Merge pull request #1852 from Azure/release_update/Release-167
update samples from Release-167 as a part of  SDK release
2022-11-08 11:01:10 -08:00
amlrelsa-ms
0814eee151 update samples from Release-167 as a part of SDK release 2022-11-08 01:17:48 +00:00
Harneet Virk
f45b815221 Merge pull request #1848 from Azure/release_update/Release-166
update samples from Release-166 as a part of  SDK release
2022-10-26 12:04:10 -07:00
amlrelsa-ms
bd629ae454 update samples from Release-166 as a part of SDK release 2022-10-26 18:46:34 +00:00
Harneet Virk
41de75a584 Merge pull request #1846 from Azure/release_update_stablev2/Release-156
update samples from Release-156 as a part of 1.47.0 SDK stable release
2022-10-25 21:01:03 -07:00
amlrelsa-ms
96a426dc36 update samples from Release-156 as a part of 1.47.0 SDK stable release 2022-10-25 21:28:24 +00:00
Harneet Virk
824dd40f7e Merge pull request #1836 from Azure/release_update/Release-165
update samples from Release-165 as a part of  SDK release
2022-10-11 13:07:26 -07:00
amlrelsa-ms
fa2e649fe8 update samples from Release-165 as a part of SDK release 2022-10-11 19:33:50 +00:00
Harneet Virk
e25e8e3a41 Merge pull request #1832 from Azure/release_update/Release-164
update samples from Release-164 as a part of  SDK release
2022-10-05 11:29:47 -07:00
amlrelsa-ms
aa3670a902 update samples from Release-164 as a part of SDK release 2022-10-05 17:31:10 +00:00
Harneet Virk
ef1f9205ac Merge pull request #1831 from Azure/release_update_stablev2/Release-153
update samples from Release-153 as a part of 1.46.0 SDK stable release
2022-10-04 15:04:25 -07:00
amlrelsa-ms
3228bbfc63 update samples from Release-153 as a part of 1.46.0 SDK stable release 2022-09-30 17:30:23 +00:00
Harneet Virk
f18a0dfc4d Merge pull request #1825 from Azure/release_update/Release-163
update samples from Release-163 as a part of  SDK release
2022-09-20 14:12:22 -07:00
amlrelsa-ms
badb620261 update samples from Release-163 as a part of SDK release 2022-09-20 21:11:25 +00:00
Harneet Virk
acf46100ae Merge pull request #1817 from Azure/release_update/Release-161
update samples from Release-161 as a part of  SDK release
2022-09-16 15:54:11 -07:00
amlrelsa-ms
cf2e3804d5 update samples from Release-161 as a part of SDK release 2022-09-16 20:16:37 +00:00
Harneet Virk
b7be42357f Merge pull request #1814 from Azure/release_update/Release-160
update samples from Release-160 as a part of  SDK release
2022-09-12 18:57:44 -07:00
amlrelsa-ms
3ac82c07ae update samples from Release-160 as a part of SDK release 2022-09-13 01:24:40 +00:00
Harneet Virk
9743c0a1fa Merge pull request #1755 from Azure/users/GitHubPolicyService/11f57c70-4141-4c68-9224-aceb8eab1c48
Adding Microsoft SECURITY.MD
2022-09-06 16:52:36 -07:00
Harneet Virk
ba4dac530e Merge pull request #1808 from Azure/release_update/Release-157
update samples from Release-157 as a part of  SDK release
2022-09-06 16:33:03 -07:00
amlrelsa-ms
7f7f0040fd update samples from Release-157 as a part of SDK release 2022-09-06 23:16:24 +00:00
Harneet Virk
9ca567cd9c Merge pull request #1802 from Azure/release_update/Release-156
update samples from Release-156 as a part of  SDK release
2022-08-18 17:23:55 -07:00
amlrelsa-ms
ae7b234ba0 update samples from Release-156 as a part of SDK release 2022-08-18 23:57:09 +00:00
Harneet Virk
9788d1965f Merge pull request #1799 from Azure/release_update/Release-155
update samples from Release-155 as a part of  SDK release
2022-08-12 14:18:11 -07:00
amlrelsa-ms
387e43a423 update samples from Release-155 as a part of SDK release 2022-08-12 20:38:16 +00:00
Harneet Virk
25f407fc81 Merge pull request #1796 from Azure/release_update/Release-154
update samples from Release-154 as a part of  SDK release
2022-08-10 11:36:05 -07:00
amlrelsa-ms
dcb2c4638f update samples from Release-154 as a part of SDK release 2022-08-10 18:10:45 +00:00
Harneet Virk
7fb5dd3ef9 Merge pull request #1795 from Azure/release_update/Release-153
update samples from Release-153 as a part of  SDK release
2022-08-09 15:39:30 -07:00
amlrelsa-ms
6a38f4bec3 update samples from Release-153 as a part of SDK release 2022-08-09 21:50:34 +00:00
Harneet Virk
aed078aeab Merge pull request #1793 from Azure/release_update/Release-152
update samples from Release-152 as a part of  SDK release
2022-08-08 11:51:52 -07:00
amlrelsa-ms
f999f41ed3 update samples from Release-152 as a part of SDK release 2022-08-08 17:27:37 +00:00
Harneet Virk
07e43ee7e4 Merge pull request #1791 from Azure/release_update/Release-151
update samples from Release-151 as a part of  SDK release
2022-08-05 13:12:57 -07:00
amlrelsa-ms
aac706c3f0 update samples from Release-151 as a part of SDK release 2022-08-05 20:01:34 +00:00
Harneet Virk
4ccb278051 Merge pull request #1789 from Azure/release_update/Release-150
update samples from Release-150 as a part of  SDK release
2022-08-04 12:08:14 -07:00
amlrelsa-ms
64a733480b update samples from Release-150 as a part of SDK release 2022-08-03 16:29:31 +00:00
Harneet Virk
dd0976f678 Merge pull request #1779 from Azure/release_update/Release-149
update samples from Release-149 as a part of  SDK release
2022-07-07 08:37:35 -07:00
amlrelsa-ms
15a3ca649d update samples from Release-149 as a part of SDK release 2022-07-07 00:18:42 +00:00
Harneet Virk
3c4770cfe5 Merge pull request #1776 from Azure/release_update/Release-148
update samples from Release-148 as a part of  SDK release
2022-07-01 13:41:03 -07:00
amlrelsa-ms
8d7de05908 update samples from Release-148 as a part of SDK release 2022-07-01 20:40:11 +00:00
Harneet Virk
863faae57f Merge pull request #1772 from Azure/release_update/Release-147
Update samples from Release-147 as a part of SDK release 1.43
2022-06-27 10:32:58 -07:00
amlrelsa-ms
8d3f5adcdb update samples from Release-147 as a part of SDK release 2022-06-27 17:29:38 +00:00
Harneet Virk
cd3394e129 Merge pull request #1771 from Azure/release_update/Release-146
update samples from Release-146 as a part of  SDK release
2022-06-20 14:31:06 -07:00
amlrelsa-ms
ee5d0239a3 update samples from Release-146 as a part of SDK release 2022-06-20 20:45:50 +00:00
Harneet Virk
388111cedc Merge pull request #1763 from Azure/release_update/Release-144
update samples from Release-144 as a part of  SDK release
2022-06-03 11:04:13 -07:00
amlrelsa-ms
b86191ed7f update samples from Release-144 as a part of SDK release 2022-06-03 17:28:37 +00:00
Harneet Virk
22753486de Merge pull request #1762 from Azure/release_update/Release-143
update samples from Release-143 as a part of  SDK release
2022-06-01 11:29:19 -07:00
amlrelsa-ms
cf1d1dbf01 update samples from Release-143 as a part of SDK release 2022-06-01 17:26:59 +00:00
Harneet Virk
2e45d9800d Merge pull request #1758 from Azure/release_update/Release-142
update samples from Release-142 as a part of  SDK release
2022-05-27 15:44:52 -07:00
amlrelsa-ms
a9a8de02ec update samples from Release-142 as a part of SDK release 2022-05-27 18:58:51 +00:00
microsoft-github-policy-service[bot]
e0c9376aab Microsoft mandatory file 2022-05-25 17:12:16 +00:00
Harneet Virk
dd8339e650 Merge pull request #1754 from Azure/release_update/Release-141
update samples from Release-141 as a part of  SDK release
2022-05-25 10:12:10 -07:00
amlrelsa-ms
1594ee64a1 update samples from Release-141 as a part of SDK release 2022-05-25 16:56:26 +00:00
Harneet Virk
83ed8222d2 Merge pull request #1750 from Azure/release_update/Release-140
update samples from Release-140 as a part of  SDK release
2022-05-04 16:16:28 -07:00
amlrelsa-ms
b0aa91acce update samples from Release-140 as a part of SDK release 2022-05-04 23:01:56 +00:00
Harneet Virk
5928ba83bb Merge pull request #1748 from Azure/release_update/Release-138
update samples from Release-138 as a part of  SDK release
2022-04-29 10:40:01 -07:00
amlrelsa-ms
ffa3a43979 update samples from Release-138 as a part of SDK release 2022-04-29 17:09:13 +00:00
Harneet Virk
7ce79a43f1 Merge pull request #1746 from Azure/release_update/Release-137
update samples from Release-137 as a part of  SDK release
2022-04-27 11:50:44 -07:00
amlrelsa-ms
edcc50ab0c update samples from Release-137 as a part of SDK release 2022-04-27 17:59:44 +00:00
Harneet Virk
4a391522d0 Merge pull request #1742 from Azure/release_update/Release-136
update samples from Release-136 as a part of  SDK release
2022-04-25 13:16:03 -07:00
amlrelsa-ms
1903f78285 update samples from Release-136 as a part of SDK release 2022-04-25 17:08:42 +00:00
Harneet Virk
a4dfcc4693 Merge pull request #1730 from Azure/release_update/Release-135
update samples from Release-135 as a part of  SDK release
2022-04-04 14:47:18 -07:00
amlrelsa-ms
faffb3fef7 update samples from Release-135 as a part of SDK release 2022-04-04 20:15:29 +00:00
Harneet Virk
6c6227c403 Merge pull request #1729 from rezasherafat/rl_notebook_update
add docker subfolder to pong notebook directly.
2022-03-30 16:05:10 -07:00
Reza Sherafat
e3be364e7a add docker subfolder to pong notebook directly. 2022-03-30 22:47:50 +00:00
Harneet Virk
90e20a60e9 Merge pull request #1726 from Azure/release_update/Release-131
update samples from Release-131 as a part of  SDK release
2022-03-29 19:32:11 -07:00
amlrelsa-ms
33a4eacf1d update samples from Release-131 as a part of SDK release 2022-03-30 02:26:53 +00:00
Harneet Virk
e30b53fddc Merge pull request #1725 from Azure/release_update/Release-130
update samples from Release-130 as a part of  SDK release
2022-03-29 15:41:28 -07:00
amlrelsa-ms
95b0392ed2 update samples from Release-130 as a part of SDK release 2022-03-29 22:33:38 +00:00
Harneet Virk
796798cb49 Merge pull request #1724 from Azure/release_update/Release-129
update samples from Release-129 as a part of  1.40.0 SDK release
2022-03-29 12:18:30 -07:00
amlrelsa-ms
08b0ba7854 update samples from Release-129 as a part of SDK release 2022-03-29 18:28:35 +00:00
Harneet Virk
ceaf82acc6 Merge pull request #1720 from Azure/release_update/Release-128
update samples from Release-128 as a part of  SDK release
2022-03-21 17:56:06 -07:00
amlrelsa-ms
dadc93cfe5 update samples from Release-128 as a part of SDK release 2022-03-22 00:51:19 +00:00
Harneet Virk
c7076bf95c Merge pull request #1715 from Azure/release_update/Release-127
update samples from Release-127 as a part of  SDK release
2022-03-15 17:02:41 -07:00
amlrelsa-ms
ebdffd5626 update samples from Release-127 as a part of SDK release 2022-03-16 00:00:00 +00:00
Harneet Virk
d123880562 Merge pull request #1711 from Azure/release_update/Release-126
update samples from Release-126 as a part of  SDK release
2022-03-11 16:53:06 -08:00
amlrelsa-ms
4864e8ea60 update samples from Release-126 as a part of SDK release 2022-03-12 00:47:46 +00:00
Harneet Virk
c86db0d7fd Merge pull request #1707 from Azure/release_update/Release-124
update samples from Release-124 as a part of  SDK release
2022-03-08 09:15:45 -08:00
amlrelsa-ms
ccfbbb3b14 update samples from Release-124 as a part of SDK release 2022-03-08 00:37:35 +00:00
Harneet Virk
c42ba64b15 Merge pull request #1700 from Azure/release_update/Release-123
update samples from Release-123 as a part of  SDK release
2022-03-01 16:33:02 -08:00
amlrelsa-ms
6d8bf32243 update samples from Release-123 as a part of SDK release 2022-02-28 17:20:57 +00:00
Harneet Virk
9094da4085 Merge pull request #1684 from Azure/release_update/Release-122
update samples from Release-122 as a part of  SDK release
2022-02-14 11:38:49 -08:00
amlrelsa-ms
ebf9d2855c update samples from Release-122 as a part of SDK release 2022-02-14 19:24:27 +00:00
v-pbavanari
1bbd78eb33 update samples from Release-121 as a part of SDK release (#1678)
Co-authored-by: amlrelsa-ms <amlrelsa@microsoft.com>
2022-02-02 12:28:49 -05:00
v-pbavanari
77f5a69e04 update samples from Release-120 as a part of SDK release (#1676)
Co-authored-by: amlrelsa-ms <amlrelsa@microsoft.com>
2022-01-28 12:51:49 -05:00
raja7592
ce82af2ab0 update samples from Release-118 as a part of SDK release (#1673)
Co-authored-by: amlrelsa-ms <amlrelsa@microsoft.com>
2022-01-24 20:07:35 -05:00
Harneet Virk
2a2d2efa17 Merge pull request #1658 from Azure/release_update/Release-117
Update samples from Release sdk 1.37.0 as a part of  SDK release
2021-12-13 10:36:08 -08:00
amlrelsa-ms
dd494e9cac update samples from Release-117 as a part of SDK release 2021-12-13 16:57:22 +00:00
Harneet Virk
352adb7487 Merge pull request #1629 from Azure/release_update/Release-116
Update samples from Release as a part of SDK release 1.36.0
2021-11-08 09:48:25 -08:00
amlrelsa-ms
aebe34b4e8 update samples from Release-116 as a part of SDK release 2021-11-08 16:09:41 +00:00
Harneet Virk
c7e1241e20 Merge pull request #1612 from Azure/release_update/Release-115
Update samples from Release-115 as a part of  SDK release
2021-10-11 12:01:59 -07:00
amlrelsa-ms
6529298c24 update samples from Release-115 as a part of SDK release 2021-10-11 16:09:57 +00:00
Harneet Virk
e2dddfde85 Merge pull request #1601 from Azure/release_update/Release-114
update samples from Release-114 as a part of  SDK release
2021-09-29 14:21:59 -07:00
amlrelsa-ms
36d96f96ec update samples from Release-114 as a part of SDK release 2021-09-29 20:16:51 +00:00
Harneet Virk
7ebcfea5a3 Merge pull request #1600 from Azure/release_update/Release-113
update samples from Release-113 as a part of  SDK release
2021-09-28 12:53:57 -07:00
amlrelsa-ms
b20bfed33a update samples from Release-113 as a part of SDK release 2021-09-28 19:44:58 +00:00
Harneet Virk
a66a92e338 Merge pull request #1597 from Azure/release_update/Release-112
update samples from Release-112 as a part of  SDK release
2021-09-24 14:44:53 -07:00
amlrelsa-ms
c56c2c3525 update samples from Release-112 as a part of SDK release 2021-09-24 21:40:44 +00:00
Harneet Virk
4cac072fa4 Merge pull request #1588 from Azure/release_update/Release-111
Update samples from Release-111 as a part of SDK 1.34.0 release
2021-09-09 09:02:38 -07:00
amlrelsa-ms
aeab6b3e28 update samples from Release-111 as a part of SDK release 2021-09-07 17:32:15 +00:00
Harneet Virk
015e261f29 Merge pull request #1581 from Azure/release_update/Release-110
update samples from Release-110 as a part of  SDK release
2021-08-20 09:21:08 -07:00
amlrelsa-ms
d2a423dde9 update samples from Release-110 as a part of SDK release 2021-08-20 00:28:42 +00:00
Harneet Virk
3ecbfd6532 Merge pull request #1578 from Azure/release_update/Release-109
update samples from Release-109 as a part of  SDK release
2021-08-18 18:16:31 -07:00
amlrelsa-ms
02ecb2d755 update samples from Release-109 as a part of SDK release 2021-08-18 22:07:12 +00:00
Harneet Virk
122df6e846 Merge pull request #1576 from Azure/release_update/Release-108
update samples from Release-108 as a part of  SDK release
2021-08-18 09:47:34 -07:00
amlrelsa-ms
7d6a0a2051 update samples from Release-108 as a part of SDK release 2021-08-18 00:33:54 +00:00
Harneet Virk
6cc8af80a2 Merge pull request #1565 from Azure/release_update/Release-107
update samples from Release-107 as a part of  SDK release 1.33
2021-08-02 13:14:30 -07:00
amlrelsa-ms
f61898f718 update samples from Release-107 as a part of SDK release 2021-08-02 18:01:38 +00:00
Harneet Virk
5cb465171e Merge pull request #1556 from Azure/update-spark-notebook
updating spark notebook
2021-07-26 17:09:42 -07:00
Shivani Santosh Sambare
0ce37dd18f updating spark notebook 2021-07-26 15:51:54 -07:00
Cody
d835b183a5 update README.md (#1552) 2021-07-15 10:43:22 -07:00
Cody
d3cafebff9 add code of conduct (#1551) 2021-07-15 08:08:44 -07:00
Harneet Virk
354b194a25 Merge pull request #1543 from Azure/release_update/Release-106
update samples from Release-106 as a part of  SDK release
2021-07-06 11:05:55 -07:00
amlrelsa-ms
a52d67bb84 update samples from Release-106 as a part of SDK release 2021-07-06 17:17:27 +00:00
Harneet Virk
421ea3d920 Merge pull request #1530 from Azure/release_update/Release-105
update samples from Release-105 as a part of  SDK release
2021-06-25 09:58:05 -07:00
amlrelsa-ms
24f53f1aa1 update samples from Release-105 as a part of SDK release 2021-06-24 23:00:13 +00:00
Harneet Virk
6fc5d11de2 Merge pull request #1518 from Azure/release_update/Release-104
update samples from Release-104 as a part of  SDK release
2021-06-21 10:29:53 -07:00
amlrelsa-ms
d17547d890 update samples from Release-104 as a part of SDK release 2021-06-21 17:16:09 +00:00
Harneet Virk
928e0d4327 Merge pull request #1510 from Azure/release_update/Release-103
update samples from Release-103 as a part of  SDK release
2021-06-14 10:33:34 -07:00
amlrelsa-ms
05327cfbb9 update samples from Release-103 as a part of SDK release 2021-06-14 17:30:30 +00:00
Harneet Virk
8f7717014b Merge pull request #1506 from Azure/release_update/Release-102
update samples from Release-102 as a part of  SDK release 1.30.0
2021-06-07 11:14:02 -07:00
amlrelsa-ms
a47e50b79a update samples from Release-102 as a part of SDK release 2021-06-07 17:34:51 +00:00
Harneet Virk
8f89d88def Merge pull request #1505 from Azure/release_update/Release-101
update samples from Release-101 as a part of  SDK release
2021-06-04 19:54:53 -07:00
amlrelsa-ms
ec97207bb1 update samples from Release-101 as a part of SDK release 2021-06-05 02:54:13 +00:00
Harneet Virk
a2d20b0f47 Merge pull request #1493 from Azure/release_update/Release-98
update samples from Release-98 as a part of  SDK release
2021-05-28 08:04:58 -07:00
amlrelsa-ms
8180cebd75 update samples from Release-98 as a part of SDK release 2021-05-28 03:44:25 +00:00
Harneet Virk
700ab2d782 Merge pull request #1489 from Azure/release_update/Release-97
update samples from Release-97 as a part of  SDK  1.29.0 release
2021-05-25 07:43:14 -07:00
amlrelsa-ms
ec9a5a061d update samples from Release-97 as a part of SDK release 2021-05-24 17:39:23 +00:00
Harneet Virk
467630f955 Merge pull request #1466 from Azure/release_update/Release-96
update samples from Release-96 as a part of  SDK release 1.28.0
2021-05-10 22:48:19 -07:00
amlrelsa-ms
eac6b69bae update samples from Release-96 as a part of SDK release 2021-05-10 18:38:34 +00:00
Harneet Virk
441a5b0141 Merge pull request #1440 from Azure/release_update/Release-95
update samples from Release-95 as a part of  SDK 1.27 release
2021-04-19 11:51:21 -07:00
amlrelsa-ms
70902df6da update samples from Release-95 as a part of SDK release 2021-04-19 18:42:58 +00:00
nikAI77
6f893ff0b4 update samples from Release-94 as a part of SDK release (#1418)
Co-authored-by: amlrelsa-ms <amlrelsa@microsoft.com>
2021-04-06 12:36:12 -04:00
Harneet Virk
bda592a236 Merge pull request #1406 from Azure/release_update/Release-93
update samples from Release-93 as a part of  SDK release
2021-03-24 11:25:00 -07:00
amlrelsa-ms
8b32e8d5ad update samples from Release-93 as a part of SDK release 2021-03-24 16:45:36 +00:00
Harneet Virk
54a065c698 Merge pull request #1386 from yunjie-hub/master
Add synapse sample notebooks
2021-03-09 18:05:10 -08:00
yunjie-hub
b9718678b3 Add files via upload 2021-03-09 18:02:27 -08:00
Harneet Virk
3fa40d2c6d Merge pull request #1385 from Azure/release_update/Release-92
update samples from Release-92 as a part of  SDK release
2021-03-09 17:51:27 -08:00
amlrelsa-ms
883e4a4c59 update samples from Release-92 as a part of SDK release 2021-03-10 01:48:54 +00:00
Harneet Virk
e90826b331 Merge pull request #1384 from yunjie-hub/master
Add synapse sample notebooks
2021-03-09 12:40:33 -08:00
yunjie-hub
ac04172f6d Add files via upload 2021-03-09 12:38:23 -08:00
Harneet Virk
8c0000beb4 Merge pull request #1382 from Azure/release_update/Release-91
update samples from Release-91 as a part of  SDK release
2021-03-08 21:43:10 -08:00
amlrelsa-ms
35287ab0d8 update samples from Release-91 as a part of SDK release 2021-03-09 05:36:08 +00:00
Harneet Virk
3fe4f8b038 Merge pull request #1375 from Azure/release_update/Release-90
update samples from Release-90 as a part of  SDK release
2021-03-01 09:15:14 -08:00
amlrelsa-ms
1722678469 update samples from Release-90 as a part of SDK release 2021-03-01 17:13:25 +00:00
Harneet Virk
17da7e8706 Merge pull request #1364 from Azure/release_update/Release-89
update samples from Release-89 as a part of  SDK release
2021-02-23 17:27:27 -08:00
amlrelsa-ms
d2e7213ff3 update samples from Release-89 as a part of SDK release 2021-02-24 01:26:17 +00:00
mx-iao
882cb76e8a Merge pull request #1361 from Azure/minxia/distr-pytorch
Update distributed pytorch example
2021-02-23 12:07:20 -08:00
mx-iao
37f37a46c1 Delete pytorch_mnist.py 2021-02-23 11:19:39 -08:00
mx-iao
0cd1412421 Delete distributed-pytorch-with-nccl-gloo.ipynb 2021-02-23 11:19:33 -08:00
mx-iao
c3ae9f00f6 Add files via upload 2021-02-23 11:19:02 -08:00
mx-iao
11b02c650c Rename how-to-use-azureml/ml-frameworks/pytorch/distributed-pytorch-with-distributeddataparallel.ipynb to how-to-use-azureml/ml-frameworks/pytorch/distributed-pytorch-with-distributeddataparallel/distributed-pytorch-with-distributeddataparallel.ipynb 2021-02-23 11:18:43 -08:00
mx-iao
606048c71f Add files via upload 2021-02-23 11:18:10 -08:00
Harneet Virk
cb1c354d44 Merge pull request #1353 from Azure/release_update/Release-88
update samples from Release-88 as a part of  SDK release 1.23.0
2021-02-22 11:49:02 -08:00
amlrelsa-ms
c868fff5a2 update samples from Release-88 as a part of SDK release 2021-02-22 19:23:04 +00:00
Harneet Virk
bc4e6611c4 Merge pull request #1342 from Azure/release_update/Release-87
update samples from Release-87 as a part of  SDK release
2021-02-16 18:43:49 -08:00
amlrelsa-ms
0a58881b70 update samples from Release-87 as a part of SDK release 2021-02-17 02:13:51 +00:00
Harneet Virk
2544e85c5f Merge pull request #1333 from Azure/release_update/Release-85
SDK release 1.22.0
2021-02-10 07:59:22 -08:00
amlrelsa-ms
7fe27501d1 update samples from Release-85 as a part of SDK release 2021-02-10 15:27:28 +00:00
Harneet Virk
624c46e7f9 Merge pull request #1321 from Azure/release_update/Release-84
update samples from Release-84 as a part of  SDK release
2021-02-05 19:10:29 -08:00
amlrelsa-ms
40fbadd85c update samples from Release-84 as a part of SDK release 2021-02-06 03:09:22 +00:00
Harneet Virk
0c1fc25542 Merge pull request #1317 from Azure/release_update/Release-83
update samples from Release-83 as a part of  SDK release
2021-02-03 14:31:31 -08:00
amlrelsa-ms
e8e1357229 update samples from Release-83 as a part of SDK release 2021-02-03 05:22:32 +00:00
Harneet Virk
ad44f8fa2b Merge pull request #1313 from zronaghi/contrib-rapids
Update RAPIDS README
2021-01-29 10:33:47 -08:00
Zahra Ronaghi
ee63e759f0 Update RAPIDS README 2021-01-28 22:19:27 -06:00
Harneet Virk
b81d97ebbf Merge pull request #1303 from Azure/release_update/Release-82
update samples from Release-82 as a part of  SDK release 1.21.0
2021-01-25 11:09:12 -08:00
amlrelsa-ms
249fb6bbb5 update samples from Release-82 as a part of SDK release 2021-01-25 19:03:14 +00:00
Harneet Virk
cda1f3e4cf Merge pull request #1289 from Azure/release_update/Release-81
update samples from Release-81 as a part of  SDK release
2021-01-11 12:52:48 -07:00
amlrelsa-ms
1d05efaac2 update samples from Release-81 as a part of SDK release 2021-01-11 19:35:54 +00:00
Harneet Virk
3adebd1127 Merge pull request #1262 from Azure/release_update/Release-80
update samples from Release-80 as a part of  SDK release
2020-12-11 16:49:33 -08:00
amlrelsa-ms
a6817063df update samples from Release-80 as a part of SDK release 2020-12-12 00:45:42 +00:00
Harneet Virk
a79f8c254a Merge pull request #1255 from Azure/release_update/Release-79
update samples from Release-79 as a part of  SDK release
2020-12-07 11:11:32 -08:00
amlrelsa-ms
fb4f287458 update samples from Release-79 as a part of SDK release 2020-12-07 19:09:59 +00:00
Harneet Virk
41366a4af0 Merge pull request #1238 from Azure/release_update/Release-78
update samples from Release-78 as a part of  SDK release
2020-11-11 13:00:22 -08:00
amlrelsa-ms
74deb14fac update samples from Release-78 as a part of SDK release 2020-11-11 19:32:32 +00:00
Harneet Virk
4ed1d445ae Merge pull request #1236 from Azure/release_update/Release-77
update samples from Release-77 as a part of  SDK release
2020-11-10 10:52:23 -08:00
amlrelsa-ms
b5c15db0b4 update samples from Release-77 as a part of SDK release 2020-11-10 18:46:23 +00:00
Harneet Virk
91d43bade6 Merge pull request #1235 from Azure/release_update_stablev2/Release-44
update samples from Release-44 as a part of 1.18.0 SDK stable release
2020-11-10 08:52:24 -08:00
amlrelsa-ms
bd750f5817 update samples from Release-44 as a part of 1.18.0 SDK stable release 2020-11-10 03:42:03 +00:00
mx-iao
637bcc5973 Merge pull request #1229 from Azure/lostmygithubaccount-patch-3
Update README.md
2020-11-03 15:18:37 -10:00
Cody
ba741fb18d Update README.md 2020-11-03 17:16:28 -08:00
Harneet Virk
ac0ad8d487 Merge pull request #1228 from Azure/release_update/Release-76
update samples from Release-76 as a part of  SDK release
2020-11-03 16:12:15 -08:00
amlrelsa-ms
5019ad6c5a update samples from Release-76 as a part of SDK release 2020-11-03 22:31:02 +00:00
Cody
41a2ebd2b3 Merge pull request #1226 from Azure/lostmygithubaccount-patch-3
Update README.md
2020-11-03 11:25:10 -08:00
Cody
53e3283d1d Update README.md 2020-11-03 11:17:41 -08:00
Harneet Virk
ba9c4c5465 Merge pull request #1225 from Azure/release_update/Release-75
update samples from Release-75 as a part of  SDK release
2020-11-03 11:11:11 -08:00
amlrelsa-ms
a6c65f00ec update samples from Release-75 as a part of SDK release 2020-11-03 19:07:12 +00:00
Cody
95072eabc2 Merge pull request #1221 from Azure/lostmygithubaccount-patch-2
Update README.md
2020-11-02 11:52:05 -08:00
Cody
12905ef254 Update README.md 2020-11-02 06:59:44 -08:00
Harneet Virk
4cf56eee91 Merge pull request #1217 from Azure/release_update/Release-74
update samples from Release-74 as a part of  SDK release
2020-10-30 17:27:02 -07:00
amlrelsa-ms
d345ff6c37 update samples from Release-74 as a part of SDK release 2020-10-30 22:20:10 +00:00
Harneet Virk
560dcac0a0 Merge pull request #1214 from Azure/release_update/Release-73
update samples from Release-73 as a part of  SDK release
2020-10-29 23:38:02 -07:00
amlrelsa-ms
322087a58c update samples from Release-73 as a part of SDK release 2020-10-30 06:37:05 +00:00
Harneet Virk
e255c000ab Merge pull request #1211 from Azure/release_update/Release-72
update samples from Release-72 as a part of  SDK release
2020-10-28 14:30:50 -07:00
amlrelsa-ms
7871e37ec0 update samples from Release-72 as a part of SDK release 2020-10-28 21:24:40 +00:00
Cody
58e584e7eb Update README.md (#1209) 2020-10-27 21:00:38 -04:00
Harneet Virk
1b0d75cb45 Merge pull request #1206 from Azure/release_update/Release-71
update samples from Release-71 as a part of  SDK 1.17.0 release
2020-10-26 22:29:48 -07:00
amlrelsa-ms
5c38272fb4 update samples from Release-71 as a part of SDK release 2020-10-27 04:11:39 +00:00
Harneet Virk
e026c56f19 Merge pull request #1200 from Azure/cody/add-new-repo-link
update readme
2020-10-22 10:50:03 -07:00
Cody
4aad830f1c update readme 2020-10-22 09:13:20 -07:00
Harneet Virk
c1b125025a Merge pull request #1198 from harneetvirk/master
Fixing/Removing broken links
2020-10-20 12:30:46 -07:00
Harneet Virk
9f364f7638 Update README.md 2020-10-20 12:30:03 -07:00
Harneet Virk
4beb749a76 Fixing/Removing the broken links 2020-10-20 12:28:45 -07:00
Harneet Virk
04fe8c4580 Merge pull request #1191 from savitamittal1/patch-4
Update README.md
2020-10-17 08:48:20 -07:00
Harneet Virk
498018451a Merge pull request #1193 from savitamittal1/patch-6
Update automl-databricks-local-with-deployment.ipynb
2020-10-17 08:47:54 -07:00
savitamittal1
04305e33f0 Update automl-databricks-local-with-deployment.ipynb 2020-10-16 23:58:12 -07:00
savitamittal1
d22e76d5e0 Update README.md 2020-10-16 23:53:41 -07:00
Harneet Virk
d71c482f75 Merge pull request #1184 from Azure/release_update/Release-70
update samples from Release-70 as a part of  SDK 1.16.0 release
2020-10-12 22:24:25 -07:00
amlrelsa-ms
5775f8a78f update samples from Release-70 as a part of SDK release 2020-10-13 05:19:49 +00:00
Cody
aae823ecd8 Merge pull request #1181 from samuel100/quickstart-notebook
quickstart nb added
2020-10-09 10:54:32 -07:00
Sam Kemp
f1126e07f9 quickstart nb added 2020-10-09 10:35:19 +01:00
Harneet Virk
0e4b27a233 Merge pull request #1171 from savitamittal1/patch-2
Update automl-databricks-local-01.ipynb
2020-10-02 09:41:14 -07:00
Harneet Virk
0a3d5f68a1 Merge pull request #1172 from savitamittal1/patch-3
Update automl-databricks-local-with-deployment.ipynb
2020-10-02 09:41:02 -07:00
savitamittal1
a6fe2affcb Update automl-databricks-local-with-deployment.ipynb
fixed link to readme
2020-10-01 19:38:11 -07:00
savitamittal1
ce469ddf6a Update automl-databricks-local-01.ipynb
fixed link for readme
2020-10-01 19:36:06 -07:00
mx-iao
9fe459be79 Merge pull request #1166 from Azure/minxia/patch
patch for resume training notebook
2020-09-29 17:30:24 -07:00
mx-iao
89c35c8ed6 Update train-tensorflow-resume-training.ipynb 2020-09-29 17:28:17 -07:00
mx-iao
33168c7f5d Update train-tensorflow-resume-training.ipynb 2020-09-29 17:27:23 -07:00
Cody
1d0766bd46 Merge pull request #1165 from samuel100/quickstart-add
quickstart added
2020-09-29 13:13:36 -07:00
Sam Kemp
9903e56882 quickstart added 2020-09-29 21:09:55 +01:00
Harneet Virk
a039166b90 Merge pull request #1162 from Azure/release_update/Release-69
update samples from Release-69 as a part of  SDK 1.15.0 release
2020-09-28 23:54:05 -07:00
amlrelsa-ms
4e4bf48013 update samples from Release-69 as a part of SDK release 2020-09-29 06:48:31 +00:00
Harneet Virk
0a2408300a Merge pull request #1158 from Azure/release_update/Release-68
update samples from Release-68 as a part of  SDK release
2020-09-25 09:23:59 -07:00
amlrelsa-ms
d99c3f5470 update samples from Release-68 as a part of SDK release 2020-09-25 16:10:59 +00:00
Harneet Virk
3f62fe7d47 Merge pull request #1157 from Azure/release_update/Release-67
update samples from Release-67 as a part of  SDK release
2020-09-23 15:51:20 -07:00
amlrelsa-ms
6059c1dc0c update samples from Release-67 as a part of SDK release 2020-09-23 22:48:56 +00:00
Harneet Virk
8e2032fcde Merge pull request #1153 from Azure/release_update/Release-66
update samples from Release-66 as a part of  SDK release
2020-09-21 16:04:23 -07:00
amlrelsa-ms
824d844cd7 update samples from Release-66 as a part of SDK release 2020-09-21 23:02:01 +00:00
Harneet Virk
bb1c7db690 Merge pull request #1148 from Azure/release_update/Release-65
update samples from Release-65 as a part of  SDK release
2020-09-16 18:23:12 -07:00
amlrelsa-ms
8dad09a42f update samples from Release-65 as a part of SDK release 2020-09-17 01:14:32 +00:00
Harneet Virk
db2bf8ae93 Merge pull request #1137 from Azure/release_update/Release-64
update samples from Release-64 as a part of  SDK release
2020-09-09 15:31:51 -07:00
amlrelsa-ms
820c09734f update samples from Release-64 as a part of SDK release 2020-09-09 22:30:45 +00:00
Cody
a2a33c70a6 Merge pull request #1123 from oliverw1/patch-2
docs: bring docs in line with code
2020-09-02 11:12:31 -07:00
Cody
2ff791968a Merge pull request #1122 from oliverw1/patch-1
docs: Move unintended side columns below the main rows
2020-09-02 11:11:58 -07:00
Harneet Virk
7186127804 Merge pull request #1128 from Azure/release_update/Release-63
update samples from Release-63 as a part of  SDK release
2020-08-31 13:23:08 -07:00
amlrelsa-ms
b01c52bfd6 update samples from Release-63 as a part of SDK release 2020-08-31 20:00:07 +00:00
Oliver W
28be7bcf58 docs: bring docs in line with code
A non-existant name was being referred to, which only serves confusion.
2020-08-28 10:24:24 +02:00
Oliver W
37a9350fde Properly format markdown table
Remove the unintended two columns that appeared on the right side
2020-08-28 09:29:46 +02:00
Harneet Virk
5080053a35 Merge pull request #1120 from Azure/release_update/Release-62
update samples from Release-62 as a part of  SDK release
2020-08-27 17:12:05 -07:00
amlrelsa-ms
3c02102691 update samples from Release-62 as a part of SDK release 2020-08-27 23:28:05 +00:00
Sheri Gilley
07e1676762 Merge pull request #1010 from GinSiuCheng/patch-1
Include additional details on user authentication
2020-08-25 11:45:58 -05:00
Sheri Gilley
919a3c078f fix code blocks 2020-08-25 11:13:24 -05:00
Sheri Gilley
9b53c924ed add code block for better formatting 2020-08-25 11:09:56 -05:00
Sheri Gilley
04ad58056f fix quotes 2020-08-25 11:06:18 -05:00
Sheri Gilley
576bf386b5 fix quotes 2020-08-25 11:05:25 -05:00
Cody
7e62d1cfd6 Merge pull request #891 from Fokko/patch-1
Don't print the access token
2020-08-22 18:28:33 -07:00
Cody
ec67a569af Merge pull request #804 from omartin2010/patch-3
typo
2020-08-17 14:35:55 -07:00
Cody
6d1e80bcef Merge pull request #1031 from hyoshioka0128/patch-1
Typo "Mircosoft"→"Microsoft"
2020-08-17 14:32:44 -07:00
mx-iao
db00d9ad3c Merge pull request #1100 from Azure/lostmygithubaccount-patch-1
fix minor typo in how-to-use-azureml/README.md
2020-08-17 14:30:18 -07:00
Harneet Virk
d33c75abc3 Merge pull request #1104 from Azure/release_update/Release-61
update samples from Release-61 as a part of  SDK release
2020-08-17 10:59:39 -07:00
amlrelsa-ms
d0dc4836ae update samples from Release-61 as a part of SDK release 2020-08-17 17:45:26 +00:00
Cody
982f8fcc1d Update README.md 2020-08-14 15:25:39 -07:00
Akshaya Annavajhala
79739b5e1b Remove broken links (#1095)
* Remove broken links

* Update README.md
2020-08-10 19:35:41 -04:00
Harneet Virk
aac4fa1fb9 Merge pull request #1081 from Azure/release_update/Release-60
update samples from Release-60 as a part of  SDK 1.11.0 release
2020-08-04 00:04:38 -07:00
amlrelsa-ms
5b684070e1 update samples from Release-60 as a part of SDK release 2020-08-04 06:12:06 +00:00
Harneet Virk
0ab8b141ee Merge pull request #1078 from Azure/release_update/Release-59
update samples from Release-59 as a part of  SDK release
2020-07-31 10:52:22 -07:00
amlrelsa-ms
b9ef23ad4b update samples from Release-59 as a part of SDK release 2020-07-31 17:23:17 +00:00
Harneet Virk
7e2c1ca152 Merge pull request #1063 from Azure/release_update/Release-58
update samples from Release-58 as a part of  SDK release
2020-07-20 13:46:37 -07:00
amlrelsa-ms
d096535e48 update samples from Release-58 as a part of SDK release 2020-07-20 20:44:42 +00:00
Harneet Virk
f80512a6db Merge pull request #1056 from wchill/wchill-patch-1
Update README.md with KeyError: brand workaround
2020-07-15 10:22:18 -07:00
Eric Ahn
b54111620e Update README.md 2020-07-14 17:47:23 -07:00
Harneet Virk
8dd52ee2df Merge pull request #1036 from Azure/release_update/Release-57
update samples from Release-57 as a part of  SDK release
2020-07-06 15:06:14 -07:00
amlrelsa-ms
6c629f1eda update samples from Release-57 as a part of SDK release 2020-07-06 22:05:24 +00:00
Hiroshi Yoshioka
9c32ca9db5 Typo "Mircosoft"→"Microsoft"
https://docs.microsoft.com/en-us/samples/azure/machinelearningnotebooks/azure-machine-learning-service-example-notebooks/
2020-06-29 12:21:23 +09:00
Harneet Virk
053efde8c9 Merge pull request #1022 from Azure/release_update/Release-56
update samples from Release-56 as a part of  SDK release
2020-06-22 11:12:31 -07:00
amlrelsa-ms
5189691f06 update samples from Release-56 as a part of SDK release 2020-06-22 18:11:40 +00:00
Gin
745b4f0624 Include additional details on user authentication
Additional details should be included for user authentication esp. for enterprise users who may have more than one single aad tenant linked to a user.
2020-06-13 21:24:56 -04:00
Harneet Virk
fb900916e3 Update README.md 2020-06-11 13:26:04 -07:00
Harneet Virk
738347f3da Merge pull request #996 from Azure/release_update/Release-55
update samples from Release-55 as a part of  SDK release
2020-06-08 15:31:35 -07:00
amlrelsa-ms
34a67c1f8b update samples from Release-55 as a part of SDK release 2020-06-08 22:28:25 +00:00
Harneet Virk
34898828be Merge pull request #992 from Azure/release_update/Release-54
update samples from Release-54 as a part of  SDK release
2020-06-02 14:42:02 -07:00
vizhur
a7c3a0fdb8 update samples from Release-54 as a part of SDK release 2020-06-02 21:34:10 +00:00
Harneet Virk
6d11cdfa0a Merge pull request #984 from Azure/release_update/Release-53
update samples from Release-53 as a part of  SDK release
2020-05-26 19:59:58 -07:00
vizhur
11e8ed2bab update samples from Release-53 as a part of SDK release 2020-05-27 02:45:07 +00:00
Harneet Virk
12c06a4168 Merge pull request #978 from ahcan76/patch-1
Fix image paths in tutorial-1st-experiment-sdk-train.ipynb
2020-05-18 12:58:21 -07:00
ahcan76
1f75dc9725 Update tutorial-1st-experiment-sdk-train.ipynb
Fix the image path
2020-05-18 22:40:54 +03:00
Harneet Virk
1a1a42d525 Merge pull request #977 from Azure/release_update/Release-52
update samples from Release-52 as a part of  SDK release
2020-05-18 12:22:48 -07:00
vizhur
879a272a8d update samples from Release-52 as a part of SDK release 2020-05-18 19:21:05 +00:00
Harneet Virk
bc65bde097 Merge pull request #971 from Azure/release_update/Release-51
update samples from Release-51 as a part of  SDK release
2020-05-13 22:17:45 -07:00
vizhur
690bdfbdbe update samples from Release-51 as a part of SDK release 2020-05-14 05:03:47 +00:00
Harneet Virk
3c02bd8782 Merge pull request #967 from Azure/release_update/Release-50
update samples from Release-50 as a part of  SDK release
2020-05-12 19:57:40 -07:00
vizhur
5c14610a1c update samples from Release-50 as a part of SDK release 2020-05-13 02:45:40 +00:00
Harneet Virk
4e3afae6fb Merge pull request #965 from Azure/release_update/Release-49
update samples from Release-49 as a part of  SDK release
2020-05-11 19:25:28 -07:00
vizhur
a2144aa083 update samples from Release-49 as a part of SDK release 2020-05-12 02:24:34 +00:00
Harneet Virk
0e6334178f Merge pull request #963 from Azure/release_update/Release-46
update samples from Release-46 as a part of  SDK release
2020-05-11 14:49:34 -07:00
vizhur
4ec9178d22 update samples from Release-46 as a part of SDK release 2020-05-11 21:48:31 +00:00
Harneet Virk
2aa7c53b0c Merge pull request #962 from Azure/release_update_stablev2/Release-11
update samples from Release-11 as a part of 1.5.0 SDK stable release
2020-05-11 12:42:32 -07:00
vizhur
553fa43e17 update samples from Release-11 as a part of 1.5.0 SDK stable release 2020-05-11 18:59:22 +00:00
Harneet Virk
e98131729e Merge pull request #949 from Azure/release_update_stablev2/Release-8
update samples from Release-8 as a part of 1.4.0 SDK stable release
2020-04-27 11:00:37 -07:00
vizhur
fd2b09e2c2 update samples from Release-8 as a part of 1.4.0 SDK stable release 2020-04-27 17:44:41 +00:00
Harneet Virk
7970209069 Merge pull request #930 from Azure/release_update/Release-44
update samples from Release-44 as a part of  SDK release
2020-04-17 12:46:29 -07:00
vizhur
24f8651bb5 update samples from Release-44 as a part of SDK release 2020-04-17 19:45:37 +00:00
Harneet Virk
b881f78e46 Merge pull request #918 from Azure/release_update_stablev2/Release-6
update samples from Release-6 as a part of 1.3.0 SDK stable release
2020-04-13 09:23:38 -07:00
vizhur
057e22b253 update samples from Release-6 as a part of 1.3.0 SDK stable release 2020-04-13 16:22:23 +00:00
Fokko Driesprong
119fd0a8f6 Don't print the access token
That's never a good idea, no exceptions :)
2020-03-31 08:14:05 +02:00
Harneet Virk
c520bd1d41 Merge pull request #884 from Azure/release_update/Release-43
update samples from Release-43 as a part of  SDK release
2020-03-23 16:49:27 -07:00
vizhur
d3f1212440 update samples from Release-43 as a part of SDK release 2020-03-23 23:39:45 +00:00
Harneet Virk
b95a65eef4 Merge pull request #883 from Azure/release_update_stablev2/Release-3
update samples from Release-3 as a part of 1.2.0 SDK stable release
2020-03-23 16:21:53 -07:00
vizhur
2218af619f update samples from Release-3 as a part of 1.2.0 SDK stable release 2020-03-23 23:11:53 +00:00
Harneet Virk
0401128638 Merge pull request #878 from Azure/release_update/Release-42
update samples from Release-42 as a part of  SDK release
2020-03-20 11:14:02 -07:00
vizhur
59fcb54998 update samples from Release-42 as a part of SDK release 2020-03-20 18:10:08 +00:00
Harneet Virk
e0ea99a6bb Merge pull request #862 from Azure/release_update/Release-41
update samples from Release-41 as a part of  SDK release
2020-03-13 14:57:58 -07:00
vizhur
b06f5ce269 update samples from Release-41 as a part of SDK release 2020-03-13 21:57:04 +00:00
Harneet Virk
ed0ce9e895 Merge pull request #856 from Azure/release_update/Release-40
update samples from Release-40 as a part of  SDK release
2020-03-12 12:28:18 -07:00
vizhur
71053d705b update samples from Release-40 as a part of SDK release 2020-03-12 19:25:26 +00:00
Harneet Virk
77f98bf75f Merge pull request #852 from Azure/release_update_stable/Release-6
update samples from Release-6 as a part of 1.1.5 SDK stable release
2020-03-11 15:37:59 -06:00
vizhur
e443fd1342 update samples from Release-6 as a part of 1.1.5rc0 SDK stable release 2020-03-11 19:51:02 +00:00
Harneet Virk
2165cf308e update samples from Release-25 as a part of 1.1.2rc0 SDK experimental release (#829)
Co-authored-by: vizhur <vizhur@live.com>
2020-03-02 15:42:04 -05:00
Olivier Martin
d4a486827d typo 2020-02-17 17:16:47 -05:00
Harneet Virk
3d6caa10a3 Merge pull request #801 from Azure/release_update/Release-39
update samples from Release-39 as a part of  SDK release
2020-02-13 19:03:36 -07:00
vizhur
4df079db1c update samples from Release-39 as a part of SDK release 2020-02-14 02:01:41 +00:00
Sander Vanhove
67d0b02ef9 Fix broken link in README (#797) 2020-02-13 08:20:28 -05:00
Harneet Virk
4e7b3784d5 Merge pull request #788 from Azure/release_update/Release-38
update samples from Release-38 as a part of  SDK release
2020-02-11 13:16:15 -07:00
vizhur
ed91e39d7e update samples from Release-38 as a part of SDK release 2020-02-11 20:00:16 +00:00
Harneet Virk
a09a1a16a7 Merge pull request #780 from Azure/release_update/Release-37
update samples from Release-37 as a part of  SDK release
2020-02-07 21:52:34 -07:00
vizhur
9662505517 update samples from Release-37 as a part of SDK release 2020-02-08 04:49:27 +00:00
Harneet Virk
8e103c02ff Merge pull request #779 from Azure/release_update/Release-36
update samples from Release-36 as a part of  SDK release
2020-02-07 21:40:57 -07:00
vizhur
ecb5157add update samples from Release-36 as a part of SDK release 2020-02-08 04:35:14 +00:00
Shané Winner
d7d23d5e7c Update index.md 2020-02-05 22:41:22 -08:00
Harneet Virk
83a21ba53a update samples from Release-35 as a part of SDK release (#765)
Co-authored-by: vizhur <vizhur@live.com>
2020-02-05 20:03:41 -05:00
Harneet Virk
3c9cb89c1a update samples from Release-18 as a part of 1.1.0rc0 SDK experimental release (#760)
Co-authored-by: vizhur <vizhur@live.com>
2020-02-04 22:19:52 -05:00
Sheri Gilley
cca7c2e26f add cell metadata 2020-02-04 11:31:07 -06:00
Harneet Virk
e895d7c2bf update samples - test (#758)
Co-authored-by: vizhur <vizhur@live.com>
2020-01-31 15:19:58 -05:00
Shané Winner
3588eb9665 Update index.md 2020-01-23 15:46:43 -08:00
Harneet Virk
a09e726f31 update samples - test (#748)
Co-authored-by: vizhur <vizhur@live.com>
2020-01-23 16:50:29 -05:00
Shané Winner
4fb1d9ee5b Update index.md 2020-01-22 11:38:24 -08:00
Harneet Virk
b05ff80e9d update samples from Release-169 as a part of 1.0.85 SDK release (#742)
Co-authored-by: vizhur <vizhur@live.com>
2020-01-21 18:00:15 -05:00
Shané Winner
512630472b Update index.md 2020-01-08 14:52:23 -08:00
vizhur
ae1337fe70 Merge pull request #724 from Azure/release_update/Release-167
update samples from Release-167 as a part of 1.0.83 SDK release
2020-01-06 15:38:25 -05:00
vizhur
c95f970dc8 update samples from Release-167 as a part of 1.0.83 SDK release 2020-01-06 20:16:21 +00:00
Shané Winner
9b9d112719 Update index.md 2019-12-24 07:40:48 -08:00
vizhur
fe8fcd4b48 Merge pull request #712 from Azure/release_update/Release-31
update samples - test
2019-12-23 20:28:02 -05:00
vizhur
296ae01587 update samples - test 2019-12-24 00:42:48 +00:00
Shané Winner
8f4efe15eb Update index.md 2019-12-10 09:05:23 -08:00
vizhur
d179080467 Merge pull request #690 from Azure/release_update/Release-163
update samples from Release-163 as a part of 1.0.79 SDK release
2019-12-09 15:41:03 -05:00
vizhur
0040644e7a update samples from Release-163 as a part of 1.0.79 SDK release 2019-12-09 20:09:30 +00:00
Shané Winner
8aa04307fb Update index.md 2019-12-03 10:24:18 -08:00
Shané Winner
a525da4488 Update index.md 2019-11-27 13:08:21 -08:00
Shané Winner
e149565a8a Merge pull request #679 from Azure/release_update/Release-30
update samples - test
2019-11-27 13:05:00 -08:00
vizhur
75610ec31c update samples - test 2019-11-27 21:02:21 +00:00
Shané Winner
0c2c450b6b Update index.md 2019-11-25 14:34:48 -08:00
Shané Winner
0d548eabff Merge pull request #677 from Azure/release_update/Release-29
update samples - test
2019-11-25 14:31:50 -08:00
vizhur
e4029801e6 update samples - test 2019-11-25 22:24:09 +00:00
Shané Winner
156974ee7b Update index.md 2019-11-25 11:42:53 -08:00
Shané Winner
1f05157d24 Merge pull request #676 from Azure/release_update/Release-160
update samples from Release-160 as a part of 1.0.76 SDK release
2019-11-25 11:39:27 -08:00
vizhur
2214ea8616 update samples from Release-160 as a part of 1.0.76 SDK release 2019-11-25 19:28:19 +00:00
Sheri Gilley
b54b2566de Merge pull request #667 from Azure/sdk-codetest
remove deprecated auto_prepare_environment
2019-11-21 09:25:15 -06:00
Sheri Gilley
57b0f701f8 remove deprecated auto_prepare_environment 2019-11-20 17:28:44 -06:00
Shané Winner
d658c85208 Update index.md 2019-11-12 14:59:15 -08:00
vizhur
a5f627a9b6 Merge pull request #655 from Azure/release_update/Release-28
update samples - test
2019-11-12 17:11:45 -05:00
vizhur
a8b08bdff0 update samples - test 2019-11-12 21:53:12 +00:00
Shané Winner
0dc3f34b86 Update index.md 2019-11-11 14:49:44 -08:00
Shané Winner
9ba7d5e5bb Update index.md 2019-11-11 14:48:05 -08:00
Shané Winner
c6ad2f8ec0 Merge pull request #654 from Azure/release_update/Release-158
update samples from Release-158 as a part of 1.0.74 SDK release
2019-11-11 10:25:18 -08:00
vizhur
33d6def8c3 update samples from Release-158 as a part of 1.0.74 SDK release 2019-11-11 16:57:02 +00:00
Shané Winner
69d4344dff Update index.md 2019-11-04 10:09:41 -08:00
Shané Winner
34aeec1439 Update index.md 2019-11-04 10:08:10 -08:00
Shané Winner
a9b9ebbf7d Merge pull request #641 from Azure/release_update/Release-27
update samples - test
2019-11-04 10:02:25 -08:00
vizhur
41fa508d53 update samples - test 2019-11-04 17:57:28 +00:00
Shané Winner
e1bfa98844 Update index.md 2019-11-04 08:41:15 -08:00
Shané Winner
2bcee9aa20 Update index.md 2019-11-04 08:40:29 -08:00
Shané Winner
37541b1071 Merge pull request #638 from Azure/release_update/Release-26
update samples - test
2019-11-04 08:31:59 -08:00
Shané Winner
4aff1310a7 Merge branch 'master' into release_update/Release-26 2019-11-04 08:31:37 -08:00
Shané Winner
51ecb7c54f Update index.md 2019-11-01 10:38:46 -07:00
Shané Winner
4e7fc7c82c Update index.md 2019-11-01 10:36:02 -07:00
vizhur
4ed3f0767a update samples - test 2019-11-01 14:48:01 +00:00
vizhur
46ec74f8df Merge pull request #627 from jingyanwangms/jingywa/lightgbm-notebook
add Lightgbm Estimator notebook
2019-10-22 20:54:33 -04:00
Jingyan Wang
8d2e362a10 add Lightgbm notebook 2019-10-22 17:40:32 -07:00
vizhur
86c1b3d760 adding missing files for rapids 2019-10-21 12:20:15 -04:00
Shané Winner
41dc05952f Update index.md 2019-10-15 16:37:53 -07:00
vizhur
df2e08e4a3 Merge pull request #622 from Azure/release_update/Release-25
update samples - test
2019-10-15 18:34:28 -04:00
vizhur
828a976907 update samples - test 2019-10-15 22:01:55 +00:00
vizhur
1a373f11a0 Merge pull request #621 from Azure/ak/revert-db-overwrite
Revert automatic overwrite of databricks content
2019-10-15 16:07:37 -04:00
Akshaya Annavajhala (AK)
60de701207 revert overwrites 2019-10-15 12:33:31 -07:00
Akshaya Annavajhala (AK)
5841fa4a42 revert overwrites 2019-10-15 12:27:56 -07:00
Shané Winner
659fb7abc3 Merge pull request #619 from Azure/release_update/Release-153
update samples from Release-153 as a part of 1.0.69 SDK release
2019-10-14 15:39:40 -07:00
vizhur
2e404cfc3a update samples from Release-153 as a part of 1.0.69 SDK release 2019-10-14 22:30:58 +00:00
Shané Winner
5fcf4887bc Update index.md 2019-10-06 11:44:35 -07:00
Shané Winner
1e7f3117ae Update index.md 2019-10-06 11:44:01 -07:00
Shané Winner
bbb3f85da9 Update README.md 2019-10-06 11:33:56 -07:00
Shané Winner
c816dfb479 Update index.md 2019-10-06 11:29:58 -07:00
Shané Winner
8c128640b1 Update index.md 2019-10-06 11:28:34 -07:00
vizhur
4d2b937846 Merge pull request #600 from Azure/release_update/Release-24
Fix for Tensorflow 2.0 related Notebook Failures
2019-10-02 16:27:31 -04:00
vizhur
5492f52faf update samples - test 2019-10-02 20:23:54 +00:00
Shané Winner
735db9ebe7 Update index.md 2019-10-01 09:59:10 -07:00
Shané Winner
573030b990 Update README.md 2019-10-01 09:52:10 -07:00
Shané Winner
392a059000 Update index.md 2019-10-01 09:44:37 -07:00
Shané Winner
3580e54fbb Update index.md 2019-10-01 09:42:20 -07:00
Shané Winner
2017bcd716 Update index.md 2019-10-01 09:41:33 -07:00
Roope Astala
4a3f8e7025 Merge pull request #594 from Azure/release_update/Release-149
update samples from Release-149 as a part of 1.0.65 SDK release
2019-09-30 13:29:57 -04:00
vizhur
45880114db update samples from Release-149 as a part of 1.0.65 SDK release 2019-09-30 17:08:52 +00:00
Roope Astala
314bad72a4 Merge pull request #588 from skaarthik/rapids
updating to use AML base image and system managed dependencies
2019-09-25 07:44:31 -04:00
Kaarthik Sivashanmugam
f252308005 updating to use AML base image and system managed dependencies 2019-09-24 20:47:15 -07:00
Kaarthik Sivashanmugam
6622a6c5f2 Merge pull request #1 from Azure/master
merge latest changes from Azure/MLNB repo
2019-09-24 20:40:43 -07:00
Roope Astala
6b19e2f263 Merge pull request #587 from Azure/akshaya-a-patch-3
Update README.md to remove confusing reference
2019-09-24 16:13:16 -04:00
Akshaya Annavajhala
42fd4598cb Update README.md 2019-09-24 15:28:30 -04:00
Roope Astala
476d945439 Merge pull request #580 from akshaya-a/master
Add documentation on the preview ADB linking experience
2019-09-24 09:31:45 -04:00
Shané Winner
e96bb9bef2 Delete manage-runs.yml 2019-09-22 20:37:17 -07:00
Shané Winner
2be4a5e54d Delete manage-runs.ipynb 2019-09-22 20:37:07 -07:00
Shané Winner
247a25f280 Delete hello_with_delay.py 2019-09-22 20:36:50 -07:00
Shané Winner
5d9d8eade6 Delete hello_with_children.py 2019-09-22 20:36:39 -07:00
Shané Winner
dba978e42a Delete hello.py 2019-09-22 20:36:29 -07:00
Shané Winner
7f4101c33e Delete run_details.PNG 2019-09-22 20:36:12 -07:00
Shané Winner
62b0d5df69 Delete run_history.png 2019-09-22 20:36:01 -07:00
Shané Winner
f10b55a1bc Delete logging-api.ipynb 2019-09-22 20:35:47 -07:00
Shané Winner
da9e86635e Delete logging-api.yml 2019-09-22 20:35:36 -07:00
Shané Winner
9ca6388996 Delete datasets-diff.ipynb 2019-09-19 14:14:59 -07:00
Akshaya Annavajhala
3ce779063b address PR feedback 2019-09-18 15:48:42 -04:00
Akshaya Annavajhala
ce635ce4fe add the word mlflow 2019-09-18 13:25:41 -04:00
Akshaya Annavajhala
f08e68c8e9 add linking docs 2019-09-18 11:08:46 -04:00
Shané Winner
93a1d232db Update index.md 2019-09-17 10:00:57 -07:00
Shané Winner
923483528c Update index.md 2019-09-17 09:59:23 -07:00
Shané Winner
cbeacb2ab2 Delete sklearn_regression_model.pkl 2019-09-17 09:37:44 -07:00
Shané Winner
c928c50707 Delete score.py 2019-09-17 09:37:34 -07:00
Shané Winner
efb42bacf9 Delete register-model-deploy-local.ipynb 2019-09-17 09:37:26 -07:00
Shané Winner
d8f349a1ae Delete register-model-deploy-local-advanced.ipynb 2019-09-17 09:37:17 -07:00
Shané Winner
96a61fdc78 Delete myenv.yml 2019-09-17 09:37:08 -07:00
Shané Winner
ff8128f023 Delete helloworld.txt 2019-09-17 09:36:59 -07:00
Shané Winner
8260302a68 Delete dockerSharedDrive.JPG 2019-09-17 09:36:50 -07:00
Shané Winner
fbd7f4a55b Delete README.md 2019-09-17 09:36:41 -07:00
Shané Winner
d4e4206179 Delete helloworld.txt 2019-09-17 09:35:38 -07:00
Shané Winner
a98b918feb Delete model-register-and-deploy.ipynb 2019-09-17 09:35:29 -07:00
Shané Winner
890490ec70 Delete model-register-and-deploy.yml 2019-09-17 09:35:17 -07:00
Shané Winner
c068c9b979 Delete myenv.yml 2019-09-17 09:34:54 -07:00
Shané Winner
f334a3516f Delete score.py 2019-09-17 09:34:44 -07:00
Shané Winner
96248d8dff Delete sklearn_regression_model.pkl 2019-09-17 09:34:27 -07:00
Shané Winner
c42e865700 Delete README.md 2019-09-17 09:29:20 -07:00
vizhur
9233ce089a Merge pull request #577 from Azure/release_update/Release-146
update samples from Release-146 as a part of 1.0.62 SDK release
2019-09-16 19:44:43 -04:00
vizhur
6bb1e2a3e3 update samples from Release-146 as a part of 1.0.62 SDK release 2019-09-16 23:21:57 +00:00
Shané Winner
e1724c8a89 Merge pull request #573 from lostmygithubaccount/master
adding timeseries dataset example notebook
2019-09-16 11:00:30 -07:00
Shané Winner
446e0768cc Delete datasets-diff.ipynb 2019-09-16 10:53:16 -07:00
Cody Peterson
8a2f114a16 adding timeseries dataset example notebook 2019-09-13 08:30:26 -07:00
Shané Winner
80c0d4d30f Merge pull request #570 from trevorbye/master
new pipeline tutorial
2019-09-11 09:28:40 -07:00
Trevor Bye
e8f4708a5a adding index metadata 2019-09-11 09:24:41 -07:00
Trevor Bye
fbaeb84204 adding tutorial 2019-09-11 09:02:06 -07:00
Trevor Bye
da1fab0a77 removing dprep file from old deleted tutorial 2019-09-10 12:31:57 -07:00
Shané Winner
94d2890bb5 Update index.md 2019-09-06 06:37:35 -07:00
Shané Winner
4d1ec4f7d4 Update index.md 2019-09-06 06:30:54 -07:00
Shané Winner
ace3153831 Update index.md 2019-09-06 06:28:50 -07:00
Shané Winner
58bbfe57b2 Update index.md 2019-09-06 06:15:36 -07:00
vizhur
11ea00b1d9 Update index.md 2019-09-06 09:14:30 -04:00
Shané Winner
b81efca3e5 Update index.md 2019-09-06 06:13:03 -07:00
vizhur
d7ceb9bca2 Update index.md 2019-09-06 09:08:02 -04:00
Shané Winner
17730dc69a Merge pull request #564 from MayMSFT/patch-1
Update file-dataset-img-classification.ipynb
2019-09-04 13:31:08 -07:00
May Hu
3a029d48a2 Update file-dataset-img-classification.ipynb
made edit on the sdk version
2019-09-04 13:25:10 -07:00
vizhur
06d43956f3 Merge pull request #558 from Azure/release_update/Release-144
update samples from Release-144 as a part of 1.0.60 SDK release
2019-09-03 22:09:33 -04:00
vizhur
a1cb9b33a5 update samples from Release-144 as a part of 1.0.60 SDK release 2019-09-03 22:39:55 +00:00
Sheri Gilley
7db93bcb1d update comments 2019-01-22 17:18:19 -06:00
Sheri Gilley
fcbe925640 Merge branch 'sdk-codetest' of https://github.com/Azure/MachineLearningNotebooks into sdk-codetest 2019-01-07 13:06:12 -06:00
Sheri Gilley
bedfbd649e fix files 2019-01-07 13:06:02 -06:00
Sheri Gilley
fb760f648d Delete temp.py 2019-01-07 12:58:32 -06:00
Sheri Gilley
a9a0713d2f Delete donotupload.py 2019-01-07 12:57:58 -06:00
Sheri Gilley
c9d018b52c remove prepare environment 2019-01-07 12:56:54 -06:00
Sheri Gilley
53dbd0afcf hdi run config code 2019-01-07 11:29:40 -06:00
Sheri Gilley
e3a64b1f16 code for remote vm 2019-01-04 12:51:11 -06:00
Sheri Gilley
732eecfc7c update names 2019-01-04 12:45:28 -06:00
Sheri Gilley
6995c086ff change snippet names 2019-01-03 22:39:06 -06:00
Sheri Gilley
80bba4c7ae code for amlcompute section 2019-01-03 18:55:31 -06:00
Sheri Gilley
3c581b533f for local computer 2019-01-03 18:07:12 -06:00
Sheri Gilley
cc688caa4e change names 2019-01-03 08:53:49 -06:00
Sheri Gilley
da225e116e new code 2019-01-03 08:02:35 -06:00
Sheri Gilley
73c5d02880 Update quickstart.py 2018-12-17 12:23:03 -06:00
Sheri Gilley
e472b54f1b Update quickstart.py 2018-12-17 12:22:40 -06:00
Sheri Gilley
716c6d8bb1 add quickstart code 2018-11-06 11:27:58 -06:00
Sheri Gilley
23189c6f40 move folder 2018-10-17 16:24:46 -05:00
Sheri Gilley
361b57ed29 change all names to camelCase 2018-10-17 11:47:09 -05:00
Sheri Gilley
3f531fd211 try camelCase 2018-10-17 11:09:46 -05:00
Sheri Gilley
111f5e8d73 playing around 2018-10-17 10:46:33 -05:00
Sheri Gilley
96c59d5c2b testing 2018-10-17 09:56:04 -05:00
Sheri Gilley
ce3214b7c6 fix name 2018-10-16 17:33:24 -05:00
Sheri Gilley
53199d17de add delete 2018-10-16 16:54:08 -05:00
Sheri Gilley
54c883412c add test service 2018-10-16 16:49:41 -05:00
595 changed files with 81698 additions and 121546 deletions


@@ -1,30 +0,0 @@
---
name: Bug report
about: Create a report to help us improve
title: "[Notebook issue]"
labels: ''
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
Provide the following if applicable:
+ Your Python & SDK version
+ Python Scripts or the full notebook name
+ Pipeline definition
+ Environment definition
+ Example data
+ Any log files.
+ Run and Workspace Id
**To Reproduce**
Steps to reproduce the behavior:
1.
**Expected behavior**
A clear and concise description of what you expected to happen.
**Additional context**
Add any other context about the problem here.


@@ -1,43 +0,0 @@
---
name: Notebook issue
about: Describe your notebook issue
title: "[Notebook] DESCRIPTIVE TITLE"
labels: notebook
assignees: ''
---
### DESCRIPTION: Describe clearly + concisely
.
### REPRODUCIBLE: Steps
.
### EXPECTATION: Clear description
.
### CONFIG/ENVIRONMENT:
```Provide where applicable
## Your Python & SDK version:
## Environment definition:
## Notebook name or Python scripts:
## Run and Workspace Id:
## Pipeline definition:
## Example data:
## Any log files:
```

9
CODE_OF_CONDUCT.md Normal file

@@ -0,0 +1,9 @@
# Microsoft Open Source Code of Conduct
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
Resources:
- [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/)
- [Microsoft Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
- Contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with questions or concerns


@@ -28,7 +28,7 @@ git clone https://github.com/Azure/MachineLearningNotebooks.git
pip install azureml-sdk[notebooks,tensorboard]
# install model explainability component
pip install azureml-sdk[explain]
pip install azureml-sdk[interpret]
# install automated ml components
pip install azureml-sdk[automl]
@@ -86,7 +86,7 @@ If you need additional Azure ML SDK components, you can either modify the Docker
pip install azureml-sdk[automl]
# install the core SDK and model explainability component
pip install azureml-sdk[explain]
pip install azureml-sdk[interpret]
# install the core SDK and experimental components
pip install azureml-sdk[contrib]
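A quick, hedged way to confirm the extras installed correctly is to import them (this assumes the `[automl]` and `[interpret]` extras map to the `azureml.train.automl` and `azureml.interpret` import paths):

```python
# sanity check: imports assume the extras above are installed
import azureml.core
import azureml.train.automl   # provided by the [automl] extra
import azureml.interpret      # provided by the [interpret] extra
print(azureml.core.VERSION)
```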

102
README.md

@@ -1,79 +1,43 @@
---
page_type: sample
languages:
- python
products:
- azure
- azure-machine-learning-service
description: "With Azure Machine Learning service, learn to prep data, train, test, deploy, manage, and track machine learning models in a cloud-based environment."
---
# Azure Machine Learning Python SDK notebooks
# Azure Machine Learning service example notebooks
### **With the introduction of AzureML SDK v2, this samples repository for the v1 SDK is now deprecated and will not be monitored or updated. Users are encouraged to visit the [v2 SDK samples repository](https://github.com/Azure/azureml-examples) instead for up-to-date and enhanced examples of how to build, train, and deploy machine learning models with AzureML's newest features.**
This repository contains example notebooks demonstrating the [Azure Machine Learning](https://azure.microsoft.com/en-us/services/machine-learning-service/) Python SDK which allows you to build, train, deploy and manage machine learning solutions using Azure. The AML SDK allows you the choice of using local or cloud compute resources, while managing and maintaining the complete data science workflow from the cloud.
Welcome to the Azure Machine Learning Python SDK notebooks repository!
## Getting started
## Quick installation
```sh
pip install azureml-sdk
```
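To verify the installation, print the SDK version (the same check the configuration notebook later in this diff performs):

```python
import azureml.core
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
```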
Read more detailed instructions on [how to set up your environment](./NBSETUP.md) using Azure Notebook service, your own Jupyter notebook server, or Docker.
These notebooks are recommended for use in an Azure Machine Learning [Compute Instance](https://docs.microsoft.com/azure/machine-learning/concept-compute-instance), where you can run them without any additional set up.
## How to navigate and use the example notebooks?
If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, you should always run the [Configuration](./configuration.ipynb) notebook first when setting up a notebook library on a new machine or in a new environment. It configures your notebook library to connect to an Azure Machine Learning workspace, and sets up your workspace and compute to be used by many of the other examples.
However, the notebooks can be run in any development environment with the correct `azureml` packages installed.
If you want to...
* ...try out and explore Azure ML, start with image classification tutorials: [Part 1 (Training)](./tutorials/img-classification-part1-training.ipynb) and [Part 2 (Deployment)](./tutorials/img-classification-part2-deploy.ipynb).
* ...prepare your data and do automated machine learning, start with regression tutorials: [Part 1 (Data Prep)](./tutorials/regression-part1-data-prep.ipynb) and [Part 2 (Automated ML)](./tutorials/regression-part2-automated-ml.ipynb).
* ...learn about experimentation and tracking run history, first [train within Notebook](./how-to-use-azureml/training/train-within-notebook/train-within-notebook.ipynb), then try [training on remote VM](./how-to-use-azureml/training/train-on-remote-vm/train-on-remote-vm.ipynb) and [using logging APIs](./how-to-use-azureml/training/logging-api/logging-api.ipynb).
* ...train deep learning models at scale, first learn about [Machine Learning Compute](./how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.ipynb), and then try [distributed hyperparameter tuning](./how-to-use-azureml/training-with-deep-learning/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb) and [distributed training](./how-to-use-azureml/training-with-deep-learning/distributed-pytorch-with-horovod/distributed-pytorch-with-horovod.ipynb).
* ...deploy models as a realtime scoring service, first learn the basics by [training within Notebook and deploying to Azure Container Instance](./how-to-use-azureml/training/train-within-notebook/train-within-notebook.ipynb), then learn how to [register and manage models, and create Docker images](./how-to-use-azureml/deployment/register-model-create-image-deploy-service/register-model-create-image-deploy-service.ipynb), and [production deploy models on Azure Kubernetes Cluster](./how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks.ipynb).
* ...deploy models as a batch scoring service, first [train a model within Notebook](./how-to-use-azureml/training/train-within-notebook/train-within-notebook.ipynb), learn how to [register and manage models](./how-to-use-azureml/deployment/register-model-create-image-deploy-service/register-model-create-image-deploy-service.ipynb), then [create Machine Learning Compute for scoring compute](./how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.ipynb), and [use Machine Learning Pipelines to deploy your model](https://aka.ms/pl-batch-scoring).
* ...monitor your deployed models, learn about using [App Insights](./how-to-use-azureml/deployment/enable-app-insights-in-production-service/enable-app-insights-in-production-service.ipynb) and [model data collection](./how-to-use-azureml/deployment/enable-data-collection-for-models-in-aks/enable-data-collection-for-models-in-aks.ipynb).
## Tutorials
The [Tutorials](./tutorials) folder contains notebooks for the tutorials described in the [Azure Machine Learning documentation](https://aka.ms/aml-docs).
## How to use Azure ML
The [How to use Azure ML](./how-to-use-azureml) folder contains specific examples demonstrating the features of the Azure Machine Learning SDK
- [Training](./how-to-use-azureml/training) - Examples of how to build models using Azure ML's logging and execution capabilities on local and remote compute targets
- [Training with Deep Learning](./how-to-use-azureml/training-with-deep-learning) - Examples demonstrating how to build deep learning models using estimators and parameter sweeps
- [Manage Azure ML Service](./how-to-use-azureml/manage-azureml-service) - Examples of how to perform tasks such as authenticating against the Azure ML service in different ways.
- [Automated Machine Learning](./how-to-use-azureml/automated-machine-learning) - Examples using Automated Machine Learning to automatically generate optimal machine learning pipelines and models
- [Machine Learning Pipelines](./how-to-use-azureml/machine-learning-pipelines) - Examples showing how to create and use reusable pipelines for training and batch scoring
- [Deployment](./how-to-use-azureml/deployment) - Examples showing how to deploy and manage machine learning models and solutions
- [Azure Databricks](./how-to-use-azureml/azure-databricks) - Examples showing how to use Azure ML with Azure Databricks
- [Monitor Models](./how-to-use-azureml/monitor-models) - Examples showing how to enable model monitoring services such as DataDrift
---
## Documentation
* Quickstarts, end-to-end tutorials, and how-tos on the [official documentation site for Azure Machine Learning service](https://docs.microsoft.com/en-us/azure/machine-learning/service/).
* [Python SDK reference](https://docs.microsoft.com/en-us/python/api/overview/azure/ml/intro?view=azure-ml-py)
* Azure ML Data Prep SDK [overview](https://aka.ms/data-prep-sdk), [Python SDK reference](https://aka.ms/aml-data-prep-apiref), and [tutorials and how-tos](https://aka.ms/aml-data-prep-notebooks).
---
## Projects using Azure Machine Learning
Visit following repos to see projects contributed by Azure ML users:
- [AMLSamples](https://github.com/Azure/AMLSamples) A number of end-to-end examples, including face recognition, predictive maintenance, customer churn, and sentiment analysis.
- [Fine tune natural language processing models using Azure Machine Learning service](https://github.com/Microsoft/AzureML-BERT)
- [Fashion MNIST with Azure ML SDK](https://github.com/amynic/azureml-sdk-fashion)
## Data/Telemetry
This repository collects usage data and sends it to Microsoft to help improve our products and services. Read Microsoft's [privacy statement](https://privacy.microsoft.com/en-US/privacystatement) to learn more.
To opt out of tracking, please go to the raw markdown or .ipynb files and remove the following line of code:
Install the `azureml.core` Python package:
```sh
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/README.png)"
pip install azureml-core
```
This URL will be slightly different depending on the file.
![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/README.png)
Install additional packages as needed:
```sh
pip install azureml-mlflow
pip install azureml-dataset-runtime
pip install azureml-automl-runtime
pip install azureml-pipeline
pip install azureml-pipeline-steps
...
```
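Most notebooks then connect to a workspace; a minimal sketch, assuming a `config.json` downloaded from the portal sits next to the notebook:

```python
from azureml.core import Workspace

# reads subscription id, resource group, and workspace name from config.json
ws = Workspace.from_config()
print(ws.name, ws.location, ws.subscription_id, sep="\n")
```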
We recommend starting with one of the [quickstarts](tutorials/compute-instance-quickstarts).
## Contributing
This repository is a push-only mirror. Pull requests are ignored.
## Code of Conduct
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). Please see the [code of conduct](CODE_OF_CONDUCT.md) for details.
## Reference
- [Documentation](https://docs.microsoft.com/azure/machine-learning)

41
SECURITY.md Normal file

@@ -0,0 +1,41 @@
<!-- BEGIN MICROSOFT SECURITY.MD V0.0.7 BLOCK -->
## Security
Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/Microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), [Xamarin](https://github.com/xamarin), and [our GitHub organizations](https://opensource.microsoft.com/).
If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://aka.ms/opensource/security/definition), please report it to us as described below.
## Reporting Security Issues
**Please do not report security vulnerabilities through public GitHub issues.**
Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://aka.ms/opensource/security/create-report).
If you prefer to submit without logging in, send email to [secure@microsoft.com](mailto:secure@microsoft.com). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://aka.ms/opensource/security/pgpkey).
You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://aka.ms/opensource/security/msrc).
Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue:
* Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.)
* Full paths of source file(s) related to the manifestation of the issue
* The location of the affected source code (tag/branch/commit or direct URL)
* Any special configuration required to reproduce the issue
* Step-by-step instructions to reproduce the issue
* Proof-of-concept or exploit code (if possible)
* Impact of the issue, including how an attacker might exploit the issue
This information will help us triage your report more quickly.
If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://aka.ms/opensource/security/bounty) page for more details about our active programs.
## Preferred Languages
We prefer all communications to be in English.
## Policy
Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://aka.ms/opensource/security/cvd).
<!-- END MICROSOFT SECURITY.MD BLOCK -->


@@ -103,7 +103,7 @@
"source": [
"import azureml.core\n",
"\n",
"print(\"This notebook was created using version 1.0.57 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.59.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
@@ -214,7 +214,10 @@
"* You do not have permission to create a resource group if it's non-existing.\n",
"* You are not a subscription owner or contributor and no Azure ML workspaces have ever been created in this subscription\n",
"\n",
"If workspace creation fails, please work with your IT admin to provide you with the appropriate permissions or to provision the required resources."
"If workspace creation fails, please work with your IT admin to provide you with the appropriate permissions or to provision the required resources.\n",
"\n",
"**Note**: A Basic workspace is created by default. If you would like to create an Enterprise workspace, please specify sku = 'enterprise'.\n",
"Please visit our [pricing page](https://azure.microsoft.com/en-us/pricing/details/machine-learning/) for more details on our Enterprise edition.\n"
]
},
{
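The hunk below adds `sku = 'basic'` to the cell's `Workspace.create` call; assembled, the full call looks roughly like this sketch (the name, subscription, resource group, and region variables are defined earlier in the notebook):

```python
from azureml.core import Workspace

ws = Workspace.create(name=workspace_name,
                      subscription_id=subscription_id,
                      resource_group=resource_group,
                      location=workspace_region,
                      create_resource_group=True,
                      sku='basic',   # or 'enterprise' for the Enterprise edition
                      exist_ok=True)
ws.get_details()
```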
@@ -235,6 +238,7 @@
" resource_group = resource_group, \n",
" location = workspace_region,\n",
" create_resource_group = True,\n",
" sku = 'basic',\n",
" exist_ok = True)\n",
"ws.get_details()\n",
"\n",
@@ -250,6 +254,8 @@
"\n",
"Many of the sample notebooks use Azure ML managed compute (AmlCompute) to train models using a dynamically scalable pool of compute. In this section you will create default compute clusters for use by the other notebooks and any other operations you choose.\n",
"\n",
"> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.\n",
"\n",
"To create a cluster, you need to specify a compute configuration that specifies the type of machine to be used and the scalability behaviors. Then you choose a name for the cluster that is unique within the workspace that can be used to address the cluster later.\n",
"\n",
"The cluster parameters are:\n",
@@ -323,7 +329,7 @@
" print(\"Creating new gpu-cluster\")\n",
" \n",
" # Specify the configuration for the new cluster\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size=\"STANDARD_NC6\",\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size=\"Standard_NC6s_v3\",\n",
" min_nodes=0,\n",
" max_nodes=4)\n",
" # Create the cluster with the specified name and configuration\n",
@@ -357,13 +363,13 @@
"metadata": {
"authors": [
{
"name": "roastala"
"name": "ninhu"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"display_name": "Python 3.8 - AzureML",
"language": "python",
"name": "python36"
"name": "python38-azureml"
},
"language_info": {
"codemirror_mode": {


@@ -287,8 +287,6 @@ Notice how the parameters are modified when using the CPU-only mode.
The outputs of the script can be observed in the master notebook as the script is executed
![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/contrib/RAPIDS/README.png)


@@ -9,6 +9,13 @@
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/contrib/RAPIDS/azure-ml-with-nvidia-rapids/azure-ml-with-nvidia-rapids.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -20,7 +27,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The [RAPIDS](https://www.developer.nvidia.com/rapids) suite of software libraries from NVIDIA enables the execution of end-to-end data science and analytics pipelines entirely on GPUs. In many machine learning projects, a significant portion of the model training time is spent in setting up the data; this stage of the process is known as Extraction, Transformation and Loading, or ETL. By using the DataFrame API for ETL and GPU-capable ML algorithms in RAPIDS, data preparation and training models can be done in GPU-accelerated end-to-end pipelines without incurring serialization costs between the pipeline stages. This notebook demonstrates how to use NVIDIA RAPIDS to prepare data and train a model in Azure.\n",
" \n",
"In this notebook, we will do the following:\n",
" \n",
@@ -119,8 +126,10 @@
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# if a locally-saved configuration file for the workspace is not available, use the following to load workspace\n",
"# ws = Workspace(subscription_id=subscription_id, resource_group=resource_group, workspace_name=workspace_name)\n",
"\n",
"print('Workspace name: ' + ws.name, \n",
" 'Azure region: ' + ws.location, \n",
" 'Subscription id: ' + ws.subscription_id, \n",
@@ -161,11 +170,11 @@
"if gpu_cluster_name in ws.compute_targets:\n",
" gpu_cluster = ws.compute_targets[gpu_cluster_name]\n",
" if gpu_cluster and type(gpu_cluster) is AmlCompute:\n",
" print('found compute target. just use it. ' + gpu_cluster_name)\n",
" print('Found compute target. Will use {0} '.format(gpu_cluster_name))\n",
"else:\n",
" print(\"creating new cluster\")\n",
" # vm_size parameter below could be modified to one of the RAPIDS-supported VM types\n",
" provisioning_config = AmlCompute.provisioning_configuration(vm_size = \"Standard_NC6s_v2\", min_nodes=1, max_nodes = 1)\n",
" provisioning_config = AmlCompute.provisioning_configuration(vm_size = \"Standard_NC6s_v3\", min_nodes=1, max_nodes = 1)\n",
"\n",
" # create the cluster\n",
" gpu_cluster = ComputeTarget.create(ws, gpu_cluster_name, provisioning_config)\n",
@@ -179,13 +188,6 @@
"### Script to process data and train model"
]
},
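(Aside: assembled from the cluster hunk above, the create-or-reuse pattern is roughly the following sketch; `gpu_cluster_name` comes from an earlier cell.)

```python
from azureml.core.compute import AmlCompute, ComputeTarget

if gpu_cluster_name in ws.compute_targets:
    gpu_cluster = ws.compute_targets[gpu_cluster_name]
    print('Found compute target. Will use {0}'.format(gpu_cluster_name))
else:
    print('creating new cluster')
    # vm_size could be any RAPIDS-supported GPU VM type
    provisioning_config = AmlCompute.provisioning_configuration(vm_size='Standard_NC6s_v3',
                                                                min_nodes=1, max_nodes=1)
    gpu_cluster = ComputeTarget.create(ws, gpu_cluster_name, provisioning_config)
    gpu_cluster.wait_for_completion(show_output=True)
```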
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The _process_data.py_ script used in the step below is a slightly modified implementation of [RAPIDS E2E example](https://github.com/rapidsai/notebooks/blob/master/mortgage/E2E.ipynb)."
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -194,10 +196,7 @@
"source": [
"# copy process_data.py into the script folder\n",
"import shutil\n",
"shutil.copy('./process_data.py', os.path.join(scripts_folder, 'process_data.py'))\n",
"\n",
"with open(os.path.join(scripts_folder, './process_data.py'), 'r') as process_data_script:\n",
" print(process_data_script.read())"
"shutil.copy('./process_data.py', os.path.join(scripts_folder, 'process_data.py'))"
]
},
{
@@ -221,13 +220,6 @@
"### Downloading Data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<font color='red'>Important</font>: Python package progressbar2 is necessary to run the following cell. If it is not available in your environment where this notebook is running, please install it."
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -237,7 +229,6 @@
"import tarfile\n",
"import hashlib\n",
"from urllib.request import urlretrieve\n",
"from progressbar import ProgressBar\n",
"\n",
"def validate_downloaded_data(path):\n",
" if(os.path.isdir(path) and os.path.exists(path + '//names.csv')) :\n",
@@ -267,7 +258,7 @@
" url_format = 'http://rapidsai-data.s3-website.us-east-2.amazonaws.com/notebook-mortgage-data/{0}.tgz'\n",
" url = url_format.format(fileroot)\n",
" print(\"...Downloading file :{0}\".format(filename))\n",
" urlretrieve(url, filename,show_progress)\n",
" urlretrieve(url, filename)\n",
" pbar.finish()\n",
" print(\"...File :{0} finished downloading\".format(filename))\n",
" else:\n",
@@ -282,9 +273,7 @@
" so_far = 0\n",
" for member_info in members:\n",
" tar.extract(member_info,path=path)\n",
" show_progress(so_far, 1, numFiles)\n",
" so_far += 1\n",
" pbar.finish()\n",
" print(\"...All {0} files have been decompressed\".format(numFiles))\n",
" tar.close()"
]
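(In the upload hunk that follows, `ds` is typically the workspace's default datastore; a hedged sketch of how it is obtained:)

```python
from azureml.data.data_reference import DataReference

ds = ws.get_default_datastore()
# fileroot is defined earlier in the notebook
data_ref = DataReference(data_reference_name='data', datastore=ds,
                         path_on_datastore=fileroot)
```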
@@ -324,7 +313,9 @@
"\n",
"# download and uncompress data in a local directory before uploading to data store\n",
"# directory specified in src_dir parameter below should have the acq, perf directories with data and names.csv file\n",
"ds.upload(src_dir=path, target_path=fileroot, overwrite=True, show_progress=True)\n",
"\n",
"# ---->>>> UNCOMMENT THE BELOW LINE TO UPLOAD YOUR DATA IF NOT DONE SO ALREADY <<<<----\n",
"# ds.upload(src_dir=path, target_path=fileroot, overwrite=True, show_progress=True)\n",
"\n",
"# data already uploaded to the datastore\n",
"data_ref = DataReference(data_reference_name='data', datastore=ds, path_on_datastore=fileroot)"
@@ -360,7 +351,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The following code shows how to use an existing image from [Docker Hub](https://hub.docker.com/r/rapidsai/rapidsai/) that has a prebuilt conda environment named 'rapids' when creating a RunConfiguration. Note that this conda environment does not include the azureml-defaults package that is required for using AML functionality like metrics tracking, model management, etc. This package is automatically installed when you use the 'Specify package dependencies' option, which is why that is the recommended option for creating a RunConfiguration in AML."
"The following code shows how to install RAPIDS using conda. The `rapids.yml` file contains the list of packages necessary to run this tutorial. **NOTE:** The initial build of the image might take up to 20 minutes as the service needs to build and cache the new image; once the image is built, subsequent runs use the cached image and the overhead is minimal."
]
},
{
@@ -369,17 +360,13 @@
"metadata": {},
"outputs": [],
"source": [
"run_config = RunConfiguration()\n",
"cd = CondaDependencies(conda_dependencies_file_path='rapids.yml')\n",
"run_config = RunConfiguration(conda_dependencies=cd)\n",
"run_config.framework = 'python'\n",
"run_config.environment.python.user_managed_dependencies = True\n",
"run_config.environment.python.interpreter_path = '/conda/envs/rapids/bin/python'\n",
"run_config.target = gpu_cluster_name\n",
"run_config.environment.docker.enabled = True\n",
"run_config.environment.docker.gpu_support = True\n",
"run_config.environment.docker.base_image = \"rapidsai/rapidsai:cuda9.2-runtime-ubuntu18.04\"\n",
"# run_config.environment.docker.base_image_registry.address = '<registry_url>' # not required if the base_image is in Docker hub\n",
"# run_config.environment.docker.base_image_registry.username = '<user_name>' # needed only for private images\n",
"# run_config.environment.docker.base_image_registry.password = '<password>' # needed only for private images\n",
"run_config.environment.docker.base_image = \"mcr.microsoft.com/azureml/openmpi4.1.0-cuda11.1-cudnn8-ubuntu20.04\"\n",
"run_config.environment.spark.precache_packages = False\n",
"run_config.data_references={'data':data_ref.to_config()}"
]
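With this `RunConfiguration` in hand, the script is submitted as an experiment run; a minimal sketch (the experiment name is hypothetical; `scripts_folder` comes from an earlier cell):

```python
from azureml.core import Experiment, ScriptRunConfig

src = ScriptRunConfig(source_directory=scripts_folder,
                      script='process_data.py',
                      run_config=run_config)
run = Experiment(ws, 'rapids-mortgage').submit(src)  # hypothetical experiment name
run.wait_for_completion(show_output=True)
```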
@@ -388,14 +375,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Specify package dependencies"
"#### Using Docker"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following code shows how to list package dependencies in a conda environment definition file (rapids.yml) when creating a RunConfiguration"
"Alternatively, you can specify RAPIDS Docker image."
]
},
{
@@ -404,16 +391,17 @@
"metadata": {},
"outputs": [],
"source": [
"# cd = CondaDependencies(conda_dependencies_file_path='rapids.yml')\n",
"# run_config = RunConfiguration(conda_dependencies=cd)\n",
"# run_config = RunConfiguration()\n",
"# run_config.framework = 'python'\n",
"# run_config.environment.python.user_managed_dependencies = True\n",
"# run_config.environment.python.interpreter_path = '/conda/envs/rapids/bin/python'\n",
"# run_config.target = gpu_cluster_name\n",
"# run_config.environment.docker.enabled = True\n",
"# run_config.environment.docker.gpu_support = True\n",
"# run_config.environment.docker.base_image = \"<image>\"\n",
"# run_config.environment.docker.base_image_registry.address = '<registry_url>' # not required if the base_image is in Docker hub\n",
"# run_config.environment.docker.base_image_registry.username = '<user_name>' # needed only for private images\n",
"# run_config.environment.docker.base_image_registry.password = '<password>' # needed only for private images\n",
"# run_config.environment.docker.base_image = \"rapidsai/rapidsai:cuda9.2-runtime-ubuntu20.04\"\n",
"# # run_config.environment.docker.base_image_registry.address = '<registry_url>' # not required if the base_image is in Docker hub\n",
"# # run_config.environment.docker.base_image_registry.username = '<user_name>' # needed only for private images\n",
"# # run_config.environment.docker.base_image_registry.password = '<password>' # needed only for private images\n",
"# run_config.environment.spark.precache_packages = False\n",
"# run_config.data_references={'data':data_ref.to_config()}"
]
@@ -537,9 +525,9 @@
}
],
"kernelspec": {
"display_name": "Python 3.6",
"display_name": "Python 3.8 - AzureML",
"language": "python",
"name": "python36"
"name": "python38-azureml"
},
"language_info": {
"codemirror_mode": {
@@ -551,9 +539,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
"version": "3.6.8"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}


@@ -15,21 +15,6 @@ from glob import glob
import os
import argparse
def initialize_rmm_pool():
from librmm_cffi import librmm_config as rmm_cfg
rmm_cfg.use_pool_allocator = True
#rmm_cfg.initial_pool_size = 2<<30 # set to 2GiB. Default is 1/2 total GPU memory
import cudf
return cudf._gdf.rmm_initialize()
def initialize_rmm_no_pool():
from librmm_cffi import librmm_config as rmm_cfg
rmm_cfg.use_pool_allocator = False
import cudf
return cudf._gdf.rmm_initialize()
def run_dask_task(func, **kwargs):
task = func(**kwargs)
return task
@@ -207,26 +192,26 @@ def gpu_load_names(col_path):
def create_ever_features(gdf, **kwargs):
everdf = gdf[['loan_id', 'current_loan_delinquency_status']]
everdf = everdf.groupby('loan_id', method='hash').max()
everdf = everdf.groupby('loan_id', method='hash').max().reset_index()
del(gdf)
everdf['ever_30'] = (everdf['max_current_loan_delinquency_status'] >= 1).astype('int8')
everdf['ever_90'] = (everdf['max_current_loan_delinquency_status'] >= 3).astype('int8')
everdf['ever_180'] = (everdf['max_current_loan_delinquency_status'] >= 6).astype('int8')
everdf.drop_column('max_current_loan_delinquency_status')
everdf['ever_30'] = (everdf['current_loan_delinquency_status'] >= 1).astype('int8')
everdf['ever_90'] = (everdf['current_loan_delinquency_status'] >= 3).astype('int8')
everdf['ever_180'] = (everdf['current_loan_delinquency_status'] >= 6).astype('int8')
everdf.drop_column('current_loan_delinquency_status')
return everdf
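# --- illustrative aside (not part of the original script): the reshaped
# create_ever_features logic above maps directly onto pandas; toy data shown ---
import pandas as pd
_toy = pd.DataFrame({'loan_id': [1, 1, 2],
                     'current_loan_delinquency_status': [0, 4, 7]})
_ever = _toy.groupby('loan_id')['current_loan_delinquency_status'].max().reset_index()
_ever['ever_30'] = (_ever['current_loan_delinquency_status'] >= 1).astype('int8')
_ever['ever_90'] = (_ever['current_loan_delinquency_status'] >= 3).astype('int8')
_ever['ever_180'] = (_ever['current_loan_delinquency_status'] >= 6).astype('int8')
# loan 1 -> ever_30=1, ever_90=1, ever_180=0; loan 2 -> all three flags set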
def create_delinq_features(gdf, **kwargs):
delinq_gdf = gdf[['loan_id', 'monthly_reporting_period', 'current_loan_delinquency_status']]
del(gdf)
delinq_30 = delinq_gdf.query('current_loan_delinquency_status >= 1')[['loan_id', 'monthly_reporting_period']].groupby('loan_id', method='hash').min()
delinq_30['delinquency_30'] = delinq_30['min_monthly_reporting_period']
delinq_30.drop_column('min_monthly_reporting_period')
delinq_90 = delinq_gdf.query('current_loan_delinquency_status >= 3')[['loan_id', 'monthly_reporting_period']].groupby('loan_id', method='hash').min()
delinq_90['delinquency_90'] = delinq_90['min_monthly_reporting_period']
delinq_90.drop_column('min_monthly_reporting_period')
delinq_180 = delinq_gdf.query('current_loan_delinquency_status >= 6')[['loan_id', 'monthly_reporting_period']].groupby('loan_id', method='hash').min()
delinq_180['delinquency_180'] = delinq_180['min_monthly_reporting_period']
delinq_180.drop_column('min_monthly_reporting_period')
delinq_30 = delinq_gdf.query('current_loan_delinquency_status >= 1')[['loan_id', 'monthly_reporting_period']].groupby('loan_id', method='hash').min().reset_index()
delinq_30['delinquency_30'] = delinq_30['monthly_reporting_period']
delinq_30.drop_column('monthly_reporting_period')
delinq_90 = delinq_gdf.query('current_loan_delinquency_status >= 3')[['loan_id', 'monthly_reporting_period']].groupby('loan_id', method='hash').min().reset_index()
delinq_90['delinquency_90'] = delinq_90['monthly_reporting_period']
delinq_90.drop_column('monthly_reporting_period')
delinq_180 = delinq_gdf.query('current_loan_delinquency_status >= 6')[['loan_id', 'monthly_reporting_period']].groupby('loan_id', method='hash').min().reset_index()
delinq_180['delinquency_180'] = delinq_180['monthly_reporting_period']
delinq_180.drop_column('monthly_reporting_period')
del(delinq_gdf)
delinq_merge = delinq_30.merge(delinq_90, how='left', on=['loan_id'], type='hash')
delinq_merge['delinquency_90'] = delinq_merge['delinquency_90'].fillna(np.dtype('datetime64[ms]').type('1970-01-01').astype('datetime64[ms]'))
@@ -279,16 +264,15 @@ def create_joined_df(gdf, everdf, **kwargs):
def create_12_mon_features(joined_df, **kwargs):
testdfs = []
n_months = 12
for y in range(1, n_months + 1):
tmpdf = joined_df[['loan_id', 'timestamp_year', 'timestamp_month', 'delinquency_12', 'upb_12']]
tmpdf['josh_months'] = tmpdf['timestamp_year'] * 12 + tmpdf['timestamp_month']
tmpdf['josh_mody_n'] = ((tmpdf['josh_months'].astype('float64') - 24000 - y) / 12).floor()
tmpdf = tmpdf.groupby(['loan_id', 'josh_mody_n'], method='hash').agg({'delinquency_12': 'max','upb_12': 'min'})
tmpdf['delinquency_12'] = (tmpdf['max_delinquency_12']>3).astype('int32')
tmpdf['delinquency_12'] +=(tmpdf['min_upb_12']==0).astype('int32')
tmpdf.drop_column('max_delinquency_12')
tmpdf['upb_12'] = tmpdf['min_upb_12']
tmpdf.drop_column('min_upb_12')
tmpdf = tmpdf.groupby(['loan_id', 'josh_mody_n'], method='hash').agg({'delinquency_12': 'max','upb_12': 'min'}).reset_index()
tmpdf['delinquency_12'] = (tmpdf['delinquency_12']>3).astype('int32')
tmpdf['delinquency_12'] +=(tmpdf['upb_12']==0).astype('int32')
tmpdf['upb_12'] = tmpdf['upb_12']
tmpdf['timestamp_year'] = (((tmpdf['josh_mody_n'] * n_months) + 24000 + (y - 1)) / 12).floor().astype('int16')
tmpdf['timestamp_month'] = np.int8(y)
tmpdf.drop_column('josh_mody_n')
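# --- worked example of the 12-month bucketing above (hypothetical values;
# for the positive values used here, // matches .floor() of the float division) ---
_year, _month, _y = 2001, 5, 3
_josh_months = _year * 12 + _month                          # 24017
_josh_mody_n = (_josh_months - 24000 - _y) // 12            # bucket index 1
_ts_year = ((_josh_mody_n * 12) + 24000 + (_y - 1)) // 12   # recovers 2001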
@@ -329,6 +313,7 @@ def last_mile_cleaning(df, **kwargs):
'delinquency_30', 'delinquency_90', 'delinquency_180', 'upb_12',
'zero_balance_effective_date','foreclosed_after', 'disposition_date','timestamp'
]
for column in drop_list:
df.drop_column(column)
for col, dtype in df.dtypes.iteritems():
@@ -342,7 +327,6 @@ def last_mile_cleaning(df, **kwargs):
return df.to_arrow(preserve_index=False)
def main():
#print('XGBOOST_BUILD_DOC is ' + os.environ['XGBOOST_BUILD_DOC'])
parser = argparse.ArgumentParser("rapidssample")
parser.add_argument("--data_dir", type=str, help="location of data")
parser.add_argument("--num_gpu", type=int, help="Number of GPUs to use", default=1)
@@ -364,7 +348,6 @@ def main():
print('data_dir = {0}'.format(data_dir))
print('num_gpu = {0}'.format(num_gpu))
print('part_count = {0}'.format(part_count))
#part_count = part_count + 1 # adding one because the usage below is not inclusive
print('end_year = {0}'.format(end_year))
print('cpu_predictor = {0}'.format(cpu_predictor))
@@ -380,19 +363,17 @@ def main():
client
print(client.ncores())
# to download data for this notebook, visit https://rapidsai.github.io/demos/datasets/mortgage-data and update the following paths accordingly
acq_data_path = "{0}/acq".format(data_dir) #"/rapids/data/mortgage/acq"
perf_data_path = "{0}/perf".format(data_dir) #"/rapids/data/mortgage/perf"
col_names_path = "{0}/names.csv".format(data_dir) # "/rapids/data/mortgage/names.csv"
start_year = 2000
#end_year = 2000 # end_year is inclusive -- converted to parameter
#part_count = 2 # the number of data files to train against -- converted to parameter
client.run(initialize_rmm_pool)
client
print(client.ncores())
print('--->>> Workers used: {0}'.format(client.ncores()))
# NOTE: The ETL calculates additional features which are then dropped before creating the XGBoost DMatrix.
# This can be optimized to avoid calculating the dropped features.
print("Reading ...")
t1 = datetime.datetime.now()
gpu_dfs = []
@@ -414,14 +395,9 @@ def main():
wait(gpu_dfs)
t2 = datetime.datetime.now()
print("Reading time ...")
print(t2-t1)
print('len(gpu_dfs) is {0}'.format(len(gpu_dfs)))
print("Reading time: {0}".format(str(t2-t1)))
print('--->>> Number of data parts: {0}'.format(len(gpu_dfs)))
client.run(cudf._gdf.rmm_finalize)
client.run(initialize_rmm_no_pool)
client
print(client.ncores())
dxgb_gpu_params = {
'nround': 100,
'max_depth': 8,
@@ -438,7 +414,7 @@ def main():
'n_gpus': 1,
'distributed_dask': True,
'loss': 'ls',
'objective': 'gpu:reg:linear',
'objective': 'reg:squarederror',
'max_features': 'auto',
'criterion': 'friedman_mse',
'grow_policy': 'lossguide',
@@ -446,13 +422,13 @@ def main():
}
if cpu_predictor:
print('Training using CPUs')
print('\n---->>>> Training using CPUs <<<<----\n')
dxgb_gpu_params['predictor'] = 'cpu_predictor'
dxgb_gpu_params['tree_method'] = 'hist'
dxgb_gpu_params['objective'] = 'reg:linear'
else:
print('Training using GPUs')
print('\n---->>>> Training using GPUs <<<<----\n')
print('Training parameters are {0}'.format(dxgb_gpu_params))
@@ -482,13 +458,12 @@ def main():
gc.collect()
wait(gpu_dfs)
# TRAIN THE MODEL
labels = None
t1 = datetime.datetime.now()
bst = dxgb_gpu.train(client, dxgb_gpu_params, gpu_dfs, labels, num_boost_round=dxgb_gpu_params['nround'])
t2 = datetime.datetime.now()
print("Training time ...")
print(t2-t1)
print('str(bst) is {0}'.format(str(bst)))
print('\n---->>>> Training time: {0} <<<<----\n'.format(str(t2-t1)))
print('Exiting script')
if __name__ == '__main__':


@@ -1,35 +0,0 @@
name: rapids
channels:
- nvidia
- numba
- conda-forge
- rapidsai
- defaults
- pytorch
dependencies:
- arrow-cpp=0.12.0
- bokeh
- cffi=1.11.5
- cmake=3.12
- cuda92
- cython==0.29
- dask=1.1.1
- distributed=1.25.3
- faiss-gpu=1.5.0
- numba=0.42
- numpy=1.15.4
- nvstrings
- pandas=0.23.4
- pyarrow=0.12.0
- scikit-learn
- scipy
- cudf
- cuml
- python=3.6.2
- jupyterlab
- pip:
- file:/rapids/xgboost/python-package/dist/xgboost-0.81-py3-none-any.whl
- git+https://github.com/rapidsai/dask-xgboost@dask-cudf
- git+https://github.com/rapidsai/dask-cudf@master
- git+https://github.com/rapidsai/dask-cuda@master


@@ -4,14 +4,11 @@ Learn how to use Azure Machine Learning services for experimentation and model m
As a pre-requisite, run the [configuration Notebook](../configuration.ipynb) notebook first to set up your Azure ML Workspace. Then, run the notebooks in following recommended order.
* [train-within-notebook](./training/train-within-notebook): Train a model hile tracking run history, and learn how to deploy the model as web service to Azure Container Instance.
* [train-within-notebook](./training/train-within-notebook): Train a model while tracking run history, and learn how to deploy the model as web service to Azure Container Instance.
* [train-on-local](./training/train-on-local): Learn how to submit a run to local computer and use Azure ML managed run configuration.
* [train-on-amlcompute](./training/train-on-amlcompute): Use a 1-n node Azure ML managed compute cluster for remote runs on Azure CPU or GPU infrastructure.
* [train-on-remote-vm](./training/train-on-remote-vm): Use Data Science Virtual Machine as a target for remote runs.
* [logging-api](./track-and-monitor-experiments/logging-api): Learn about the details of logging metrics to run history.
* [register-model-create-image-deploy-service](./deployment/register-model-create-image-deploy-service): Learn about the details of model management.
* [production-deploy-to-aks](./deployment/production-deploy-to-aks): Deploy a model to production at scale on Azure Kubernetes Service.
* [enable-data-collection-for-models-in-aks](./deployment/enable-data-collection-for-models-in-aks): Learn about data collection APIs for a deployed model.
* [enable-app-insights-in-production-service](./deployment/enable-app-insights-in-production-service): Learn how to use App Insights with a production web service.
Find quickstarts, end-to-end tutorials, and how-tos on the [official documentation site for Azure Machine Learning service](https://docs.microsoft.com/en-us/azure/machine-learning/service/).


@@ -1,8 +1,8 @@
# Table of Contents
1. [Automated ML Introduction](#introduction)
1. [Setup using Azure Notebooks](#jupyter)
1. [Setup using Azure Databricks](#databricks)
1. [Setup using Compute Instances](#jupyter)
1. [Setup using a Local Conda environment](#localconda)
1. [Setup using Azure Databricks](#databricks)
1. [Automated ML SDK Sample Notebooks](#samples)
1. [Documentation](#documentation)
1. [Running using python command](#pythoncommand)
@@ -21,22 +21,14 @@ Below are the three execution environments supported by automated ML.
<a name="jupyter"></a>
## Setup using Azure Notebooks - Jupyter based notebooks in the Azure cloud
## Setup using Compute Instances - Jupyter-based notebooks from an Azure Virtual Machine
1. [![Azure Notebooks](https://notebooks.azure.com/launch.png)](https://aka.ms/aml-clone-azure-notebooks)
[Import sample notebooks ](https://aka.ms/aml-clone-azure-notebooks) into Azure Notebooks.
1. Follow the instructions in the [configuration](../../configuration.ipynb) notebook to create and connect to a workspace.
1. Open one of the sample notebooks.
<a name="databricks"></a>
## Setup using Azure Databricks
**NOTE**: Please create your Azure Databricks cluster as v4.x (high concurrency preferred) with **Python 3** (dropdown).
**NOTE**: You should at least have contributor access to your Azure subscription to run the notebook.
- Please remove the previous SDK version if there is any and install the latest SDK by installing **azureml-sdk[automl_databricks]** as a PyPi library in Azure Databricks workspace.
- You can find the detail Readme instructions at [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/azure-databricks).
- Download the sample notebook automl-databricks-local-01.ipynb from [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/azure-databricks) and import into the Azure databricks workspace.
- Attach the notebook to the cluster.
1. Open the [Azure ML portal](https://ml.azure.com)
1. Select Compute
1. Select Compute Instances
1. Click New
1. Type a Compute Name, select a Virtual Machine type and select a Virtual Machine size
1. Click Create
<a name="localconda"></a>
## Setup using a Local Conda environment
@@ -102,111 +94,99 @@ source activate azure_automl
jupyter notebook
```
<a name="databricks"></a>
## Setup using Azure Databricks
**NOTE**: Please create your Azure Databricks cluster as v7.1 (high concurrency preferred) with **Python 3** (dropdown).
**NOTE**: You should at least have contributor access to your Azure subscription to run the notebook.
- You can find the detailed Readme instructions at [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/azure-databricks/automl).
- Download the sample notebook automl-databricks-local-01.ipynb from [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/azure-databricks/automl) and import it into the Azure Databricks workspace.
- Attach the notebook to the cluster.
<a name="samples"></a>
# Automated ML SDK Sample Notebooks
- [auto-ml-classification.ipynb](classification/auto-ml-classification.ipynb)
- Dataset: scikit learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
- Simple example of using automated ML for classification
- Uses local compute for training
## Classification
- **Classify Credit Card Fraud**
- Dataset: [Kaggle's credit card fraud detection dataset](https://www.kaggle.com/mlg-ulb/creditcardfraud)
- **[Jupyter Notebook (remote run)](classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.ipynb)**
- run the experiment remotely on AML Compute cluster
- test the performance of the best model in the local environment
- **[Jupyter Notebook (local run)](local-run-classification-credit-card-fraud/auto-ml-classification-credit-card-fraud-local.ipynb)**
- run experiment in the local environment
- use Mimic Explainer for computing feature importance
- deploy the best model along with the explainer to an Azure Kubernetes (AKS) cluster, which will compute the raw and engineered feature importances at inference time
- **Predict Term Deposit Subscriptions in a Bank**
- Dataset: [UCI's bank marketing dataset](https://www.kaggle.com/janiobachmann/bank-marketing-dataset)
- **[Jupyter Notebook](classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb)**
- run experiment remotely on AML Compute cluster to generate ONNX compatible models
- view the featurization steps that were applied during training
- view feature importance for the best model
- download the best model in ONNX format and use it for inferencing using ONNXRuntime
- deploy the best model in PKL format to Azure Container Instance (ACI)
- **Predict Newsgroup based on Text from News Article**
- Dataset: [20 newsgroups text dataset](https://scikit-learn.org/0.19/datasets/twenty_newsgroups.html)
- **[Jupyter Notebook](classification-text-dnn/auto-ml-classification-text-dnn.ipynb)**
- AutoML highlights here include using deep neural networks (DNNs) to create embedded features from text data
- AutoML will use Bidirectional Encoder Representations from Transformers (BERT) when a GPU compute is used
- A Bidirectional Long Short-Term Memory network (BiLSTM) will be used when a CPU compute is used, thereby optimizing the choice of DNN
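All of the classification samples above share the same `AutoMLConfig` entry point; a minimal, hedged sketch for the credit-card fraud case (the dataset variable, label column, and compute target below are assumptions, not taken from the notebooks):

```python
from azureml.core import Experiment
from azureml.train.automl import AutoMLConfig

automl_config = AutoMLConfig(task='classification',
                             training_data=train_data,        # assumed TabularDataset
                             label_column_name='Class',       # assumed label column
                             primary_metric='average_precision_score_weighted',
                             compute_target=compute_target,   # assumed AmlCompute cluster
                             experiment_timeout_hours=1)
run = Experiment(ws, 'automl-credit-card-fraud').submit(automl_config, show_output=True)
```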
- [auto-ml-regression.ipynb](regression/auto-ml-regression.ipynb)
- Dataset: scikit learn's [diabetes dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_diabetes.html)
- Simple example of using automated ML for regression
- Uses local compute for training
## Regression
- **Predict Performance of Hardware Parts**
- Dataset: Hardware Performance Dataset
- **[Jupyter Notebook](regression/auto-ml-regression.ipynb)**
- run the experiment remotely on AML Compute cluster
- get best trained model for a different metric than the one the experiment was optimized for
- test the performance of the best model in the local environment
- **[Jupyter Notebook (advanced)](regression/auto-ml-regression.ipynb)**
- run the experiment remotely on AML Compute cluster
- customize featurization: override column purpose within the dataset, configure transformer parameters
- get best trained model for a different metric than the one the experiment was optimized for
- run a model explanation experiment on the remote cluster
- deploy the model along with the explainer and run online inferencing
- [auto-ml-remote-amlcompute.ipynb](remote-amlcompute/auto-ml-remote-amlcompute.ipynb)
- Dataset: scikit learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
- Example of using automated ML for classification using remote AmlCompute for training
- Parallel execution of iterations
- Async tracking of progress
- Cancelling individual iterations or entire run
- Retrieving models for any iteration or logged metric
- Specify automated ML settings as kwargs
- [auto-ml-missing-data-blacklist-early-termination.ipynb](missing-data-blacklist-early-termination/auto-ml-missing-data-blacklist-early-termination.ipynb)
- Dataset: scikit learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
- Blacklist certain pipelines
- Specify a target metric to indicate stopping criteria
- Handling Missing Data in the input
- [auto-ml-sparse-data-train-test-split.ipynb](sparse-data-train-test-split/auto-ml-sparse-data-train-test-split.ipynb)
- Dataset: Scikit learn's [20newsgroup](http://scikit-learn.org/stable/datasets/twenty_newsgroups.html)
- Handle sparse datasets
- Specify custom train and validation set
- [auto-ml-exploring-previous-runs.ipynb](exploring-previous-runs/auto-ml-exploring-previous-runs.ipynb)
- List all projects for the workspace
- List all automated ML Runs for a given project
- Get details for an automated ML Run (automated ML settings, run widget & all metrics)
- Download fitted pipeline for any iteration
- [auto-ml-classification-with-deployment.ipynb](classification-with-deployment/auto-ml-classification-with-deployment.ipynb)
- Dataset: scikit learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
- Simple example of using automated ML for classification
- Registering the model
- Creating Image and creating aci service
- Testing the aci service
- [auto-ml-sample-weight.ipynb](sample-weight/auto-ml-sample-weight.ipynb)
- How to specify sample_weight
- The difference that it makes to test results
- [auto-ml-subsampling-local.ipynb](subsampling/auto-ml-subsampling-local.ipynb)
- How to enable subsampling
- [auto-ml-dataset.ipynb](dataprep/auto-ml-dataset.ipynb)
- Using Dataset for reading data
- [auto-ml-dataset-remote-execution.ipynb](dataprep-remote-execution/auto-ml-dataset-remote-execution.ipynb)
- Using Dataset for reading data with remote execution
- [auto-ml-classification-with-whitelisting.ipynb](classification-with-whitelisting/auto-ml-classification-with-whitelisting.ipynb)
- Dataset: scikit learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
- Simple example of using automated ML for classification with whitelisting tensorflow models.
- Uses local compute for training
- [auto-ml-forecasting-energy-demand.ipynb](forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb)
- Dataset: [NYC energy demand data](forecasting-a/nyc_energy.csv)
- Example of using automated ML for training a forecasting model
- [auto-ml-forecasting-orange-juice-sales.ipynb](forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.ipynb)
- Dataset: [Dominick's grocery sales of orange juice](forecasting-b/dominicks_OJ.csv)
- Example of training an automated ML forecasting model on multiple time-series
- [auto-ml-classification-with-onnx.ipynb](classification-with-onnx/auto-ml-classification-with-onnx.ipynb)
- Dataset: scikit-learn's [iris dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_iris.html)
- Simple example of using automated ML for classification with ONNX models
- Uses local compute for training
- [auto-ml-remote-amlcompute-with-onnx.ipynb](remote-amlcompute-with-onnx/auto-ml-remote-amlcompute-with-onnx.ipynb)
- Dataset: scikit-learn's [iris dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_iris.html)
- Example of using automated ML for classification using remote AmlCompute for training
- Train the models with an ONNX-compatible configuration
- Parallel execution of iterations
- Async tracking of progress
- Cancelling individual iterations or entire run
- Retrieving the ONNX models and running inference with them
- [auto-ml-bank-marketing-subscribers-with-deployment.ipynb](bank-marketing-subscribers-with-deployment/auto-ml-bank-marketing-with-deployment.ipynb)
- Dataset: UCI's [bank marketing dataset](https://www.kaggle.com/janiobachmann/bank-marketing-dataset)
- Simple example of using automated ML for classification to predict term deposit subscriptions for a bank
- Uses Azure compute for training
- [auto-ml-creditcard-with-deployment.ipynb](credit-card-fraud-detection-with-deployment/auto-ml-creditcard-with-deployment.ipynb)
- Dataset: Kaggle's [credit card fraud detection dataset](https://www.kaggle.com/mlg-ulb/creditcardfraud)
- Simple example of using automated ML for classification to detect fraudulent credit card transactions
- Uses Azure compute for training
- [auto-ml-hardware-performance-with-deployment.ipynb](hardware-performance-prediction-with-deployment/auto-ml-hardware-performance-with-deployment.ipynb)
- Dataset: UCI's [computer hardware dataset](https://archive.ics.uci.edu/ml/datasets/Computer+Hardware)
- Simple example of using automated ML for regression to predict the performance of certain combinations of hardware components
- Uses Azure compute for training
- [auto-ml-concrete-strength-with-deployment.ipynb](predicting-concrete-strength-with-deployment/auto-ml-concrete-strength-with-deployment.ipynb)
- Dataset: UCI's [concrete compressive strength dataset](https://www.kaggle.com/pavanraj159/concrete-compressive-strength-data-set)
- Simple example of using automated ML for regression to predict the compressive strength of concrete based on different ingredient combinations and the quantities of those ingredients
- Uses Azure compute for training
## Time Series Forecasting
- **Forecast Energy Demand**
- Dataset: [NYC energy demand data](http://mis.nyiso.com/public/P-58Blist.htm)
- **[Jupyter Notebook](forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb)**
- run experiment remotely on AML Compute cluster
- use lags and rolling window features (a configuration sketch follows this list)
- view the featurization steps that were applied during training
- get the best model, use it to forecast on test data and compare the accuracy of predictions against real data
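A minimal sketch of the lag and rolling-window configuration mentioned above, using the v1 SDK's `ForecastingParameters`; the column names, horizon, and lag values here are placeholders rather than the notebook's exact settings:

```python
# Illustrative sketch only -- names and values are placeholders.
from azureml.automl.core.forecasting_parameters import ForecastingParameters
from azureml.train.automl import AutoMLConfig

forecasting_parameters = ForecastingParameters(
    time_column_name="timeStamp",       # assumed time column
    forecast_horizon=48,                # placeholder horizon
    target_lags=[12],                   # lag features on the target
    target_rolling_window_size=4,       # rolling-window aggregates
)

automl_config = AutoMLConfig(
    task="forecasting",
    training_data=train_data,           # assumed TabularDataset
    label_column_name="demand",         # assumed label column
    primary_metric="normalized_root_mean_squared_error",
    forecasting_parameters=forecasting_parameters,
    compute_target=compute_target,      # assumed AmlCompute cluster
)
```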
- **Forecast Orange Juice Sales (Multi-Series)**
- Dataset: [Dominick's grocery sales of orange juice](forecasting-orange-juice-sales/dominicks_OJ.csv)
- **[Jupyter Notebook](forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.ipynb)**
- run experiment remotely on AML Compute cluster
- customize time-series featurization, change column purpose and override transformer hyperparameters
- evaluate locally the performance of the generated best model
- deploy the best model as a webservice on Azure Container Instance (ACI)
- get online predictions from the deployed model
- **Forecast Demand of a Bike-Sharing Service**
- Dataset: [Bike demand data](forecasting-bike-share/bike-no.csv)
- **[Jupyter Notebook](forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb)**
- run experiment remotely on AML Compute cluster
- integrate holiday features
- run rolling forecast for test set that is longer than the forecast horizon
- compute metrics on the predictions from the remote forecast
- **The Forecast Function Interface**
- Dataset: Generated for sample purposes
- **[Jupyter Notebook](forecasting-forecast-function/auto-ml-forecasting-function.ipynb)**
- train a forecaster using a remote AML Compute cluster
- capabilities of the forecast function, e.g. forecasting farther into the horizon (a sketch follows this list)
- generate confidence intervals
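A minimal sketch of the forecast-function calls behind the items above, assuming `fitted_model` is the returned forecasting model and `X_test` holds the future rows to score (placeholder names):

```python
# Illustrative sketch only -- names are placeholders.
# Point forecasts for the requested window:
y_pred, X_trans = fitted_model.forecast(X_test)

# Quantile forecasts, which can back confidence intervals:
fitted_model.quantiles = [0.05, 0.5, 0.95]
quantile_df = fitted_model.forecast_quantiles(X_test)
```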
- **Forecast Beverage Production**
- Dataset: [Monthly beer production data](forecasting-beer-remote/Beer_no_valid_split_train.csv)
- **[Jupyter Notebook](forecasting-beer-remote/auto-ml-forecasting-beer-remote.ipynb)**
- train using a remote AML Compute cluster
- enable the DNN learning model
- forecast on a remote compute cluster and compare different model performance
- **Continuous Retraining with NOAA Weather Data**
- Dataset: [NOAA weather data from Azure Open Datasets](https://azure.microsoft.com/en-us/services/open-datasets/)
- **[Jupyter Notebook](continuous-retraining/auto-ml-continuous-retraining.ipynb)**
- continuously retrain a model using Pipelines and AutoML
- create a Pipeline to upload a time series dataset to an Azure blob
- create a Pipeline to run an AutoML experiment and register the best resulting model in the Workspace
- publish the created training pipeline and schedule it to run daily (a sketch follows this list)
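A rough sketch of the publish-and-schedule step, assuming `pipeline` is the training `Pipeline` built in the notebook and `ws` is the workspace (names are illustrative):

```python
# Illustrative sketch only -- names are placeholders.
from azureml.pipeline.core import Schedule, ScheduleRecurrence

published = pipeline.publish(name="retraining-pipeline",
                             description="Daily AutoML retraining")

recurrence = ScheduleRecurrence(frequency="Day", interval=1)
schedule = Schedule.create(ws,
                           name="daily-retraining",
                           pipeline_id=published.id,
                           experiment_name="retraining",
                           recurrence=recurrence)
```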
<a name="documentation"></a>
See [Configure automated machine learning experiments](https://docs.microsoft.com/azure/machine-learning/service/how-to-configure-auto-train) to learn more about the settings and features available for automated machine learning experiments.
@@ -227,7 +207,7 @@ The main code of the file must be indented so that it is under this condition.
## automl_setup fails
1. On Windows, make sure that you are running automl_setup from an Anaconda Prompt window rather than a regular cmd window. You can launch the "Anaconda Prompt" window by hitting the Start button and typing "Anaconda Prompt". If you don't see the application "Anaconda Prompt", you might not have conda or Miniconda installed. In that case, you can install it [here](https://conda.io/miniconda.html)
2. Check that you have conda 64-bit installed rather than 32-bit. You can check this with the command `conda info`. The `platform` should be `win-64` for Windows or `osx-64` for Mac.
3. Check that you have conda 4.4.10 or later. You can check the version with the command `conda -V`. If you have a previous version installed, you can update it using the command: `conda update conda`.
3. Check that you have conda 4.7.8 or later. You can check the version with the command `conda -V`. If you have a previous version installed, you can update it using the command: `conda update conda`.
4. On Linux, if the error is `gcc: error trying to exec 'cc1plus': execvp: No such file or directory`, install build essentials using the command `sudo apt-get install build-essential`.
5. Pass a new name as the first parameter to automl_setup so that it creates a new conda environment. You can view existing conda environments using `conda env list` and remove them with `conda env remove -n <environmentname>`.
@@ -245,6 +225,17 @@ If automl_setup_linux.sh fails on Ubuntu Linux with the error: `unable to execut
4) Check that the region is one of the supported regions: `eastus2`, `eastus`, `westcentralus`, `southeastasia`, `westeurope`, `australiaeast`, `westus2`, `southcentralus`
5) Check that you have access to the region using the Azure Portal.
## import AutoMLConfig fails after upgrade from before 1.0.76 to 1.0.76 or later
There were package changes in automated machine learning version 1.0.76, which require the previous version to be uninstalled before upgrading to the new version.
If you have manually upgraded from a version of automated machine learning before 1.0.76 to 1.0.76 or later, you may get the error:
`ImportError: cannot import name 'AutoMLConfig'`
This can be resolved by running:
`pip uninstall azureml-train-automl` and then
`pip install azureml-train-automl`
The automl_setup.cmd script does this automatically.
## workspace.from_config fails
If the call `ws = Workspace.from_config()` fails:
1) Make sure that you have run the `configuration.ipynb` notebook successfully.
@@ -267,6 +258,15 @@ You may check the version of tensorflow and uninstall as follows
2) enter `pip freeze` and look for `tensorflow`; if found, the listed version should be < 1.13
3) If the listed version is a not a supported version, `pip uninstall tensorflow` in the command shell and enter y for confirmation.
## KeyError: 'brand' when running AutoML on local compute or Azure Databricks cluster
If a new environment was created after 10 June 2020 using SDK 1.7.0 or lower, training may fail with the above error due to an update in the py-cpuinfo package. (Environments created on or before 10 June 2020 are unaffected, as are experiments run on remote compute, since cached training images are used.) To work around this issue, take either of the following steps:
1) Update the SDK version to 1.8.0 or higher (this will also downgrade py-cpuinfo to 5.0.0):
`pip install --upgrade azureml-sdk[automl]`
2) Downgrade the installed version of py-cpuinfo to 5.0.0:
`pip install py-cpuinfo==5.0.0`
## Remote run: DsvmCompute.create fails
There are several reasons why DsvmCompute.create can fail. The cause is usually stated in the error message, but you may have to look at the end of the message for the details. Some common reasons are:
1) `Compute name is invalid, it should start with a letter, be between 2 and 16 character, and only include letters (a-zA-Z), numbers (0-9) and \'-\'.` Note that underscore is not allowed in the name.
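The quoted naming rule can be checked locally before calling create; a minimal sketch (plain Python, not an SDK API):

```python
# Validates the documented compute-name rule: starts with a letter,
# 2-16 characters, only letters (a-zA-Z), numbers (0-9) and '-'.
import re

def is_valid_compute_name(name: str) -> bool:
    return re.fullmatch(r"[a-zA-Z][a-zA-Z0-9-]{1,15}", name) is not None

assert is_valid_compute_name("automlcl")
assert not is_valid_compute_name("my_cluster")  # underscore not allowed
```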

View File

@@ -1,25 +1,26 @@
name: azure_automl
channels:
- conda-forge
- pytorch
- main
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- pip
- python>=3.5.2,<3.6.8
- nb_conda
- matplotlib==2.1.0
- numpy>=1.11.0,<=1.16.2
- cython
- urllib3<1.24
- scipy>=1.0.0,<=1.1.0
- scikit-learn>=0.19.0,<=0.20.3
- pandas>=0.22.0,<=0.23.4
- py-xgboost<=0.80
- pyarrow>=0.11.0
# Azure ML only supports 3.8 and later.
- pip==22.3.1
- python>=3.10,<3.11
- holidays==0.29
- scipy==1.10.1
- tqdm==4.66.1
- pip:
# Required packages for AzureML execution, history, and data preparation.
- azureml-defaults
- azureml-train-automl
- azureml-widgets
- azureml-explain-model
- pandas_ml
- azureml-widgets~=1.59.0
- azureml-defaults~=1.59.0
- -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.59.0/validated_win32_requirements.txt [--no-deps]
- matplotlib==3.7.1
- xgboost==1.5.2
- prophet==1.1.4
- onnx==1.16.1
- setuptools-git==1.2
- spacy==3.7.4
- https://aka.ms/automl-resources/packages/en_core_web_sm-3.7.1.tar.gz

View File

@@ -0,0 +1,30 @@
name: azure_automl
channels:
- conda-forge
- pytorch
- main
dependencies:
# The python interpreter version.
# Azure ML only supports 3.7 and later.
- pip==22.3.1
- python>=3.10,<3.11
- matplotlib==3.7.1
- numpy>=1.21.6,<=1.23.5
- urllib3==1.26.7
- scipy==1.10.1
- scikit-learn==1.5.1
- holidays==0.29
- pytorch::pytorch=1.11.0
- cudatoolkit=10.1.243
- notebook
- pip:
# Required packages for AzureML execution, history, and data preparation.
- azureml-widgets~=1.59.0
- azureml-defaults~=1.59.0
- pytorch-transformers==1.0.0
- spacy==3.7.4
- xgboost==1.5.2
- prophet==1.1.4
- https://aka.ms/automl-resources/packages/en_core_web_sm-3.7.1.tar.gz
- -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.59.0/validated_linux_requirements.txt [--no-deps]

View File

@@ -1,26 +1,26 @@
name: azure_automl
channels:
- conda-forge
- pytorch
- main
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- pip
- nomkl
- python>=3.5.2,<3.6.8
- nb_conda
- matplotlib==2.1.0
- numpy>=1.11.0,<=1.16.2
- cython
- urllib3<1.24
- scipy>=1.0.0,<=1.1.0
- scikit-learn>=0.19.0,<=0.20.3
- pandas>=0.22.0,<0.23.0
- py-xgboost<=0.80
- pyarrow>=0.11.0
# Currently Azure ML only supports 3.7 and later.
- pip==22.3.1
- python>=3.10,<3.11
- numpy>=1.21.6,<=1.23.5
- scipy==1.10.1
- scikit-learn==1.5.1
- holidays==0.29
- pip:
# Required packages for AzureML execution, history, and data preparation.
- azureml-defaults
- azureml-train-automl
- azureml-widgets
- azureml-explain-model
- pandas_ml
- azureml-widgets~=1.59.0
- azureml-defaults~=1.59.0
- pytorch-transformers==1.0.0
- prophet==1.1.4
- xgboost==1.5.2
- spacy==3.7.4
- matplotlib==3.7.1
- https://aka.ms/automl-resources/packages/en_core_web_sm-3.7.1.tar.gz
- -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.59.0/validated_darwin_requirements.txt [--no-deps]

View File

@@ -6,21 +6,35 @@ set PIP_NO_WARN_SCRIPT_LOCATION=0
IF "%conda_env_name%"=="" SET conda_env_name="azure_automl"
IF "%automl_env_file%"=="" SET automl_env_file="automl_env.yml"
SET check_conda_version_script="check_conda_version.py"
IF NOT EXIST %automl_env_file% GOTO YmlMissing
IF "%CONDA_EXE%"=="" GOTO CondaMissing
IF NOT EXIST %check_conda_version_script% GOTO VersionCheckMissing
python "%check_conda_version_script%"
IF errorlevel 1 GOTO ErrorExit:
SET replace_version_script="replace_latest_version.ps1"
IF EXIST %replace_version_script% (
powershell -file %replace_version_script% %automl_env_file%
)
call conda activate %conda_env_name% 2>nul:
if not errorlevel 1 (
echo Upgrading azureml-sdk[automl,notebooks,explain] in existing conda environment %conda_env_name%
call pip install --upgrade azureml-sdk[automl,notebooks,explain]
echo Upgrading existing conda environment %conda_env_name%
call pip uninstall azureml-train-automl -y -q
call conda env update --name %conda_env_name% --file %automl_env_file%
if errorlevel 1 goto ErrorExit
) else (
call conda env create -f %automl_env_file% -n %conda_env_name%
)
python "%conda_prefix%\scripts\pywin32_postinstall.py" -install
call conda activate %conda_env_name% 2>nul:
if errorlevel 1 goto ErrorExit
@@ -53,6 +67,10 @@ echo If you are running an older version of Miniconda or Anaconda,
echo you can upgrade using the command: conda update conda
goto End
:VersionCheckMissing
echo File %check_conda_version_script% not found.
goto End
:YmlMissing
echo File %automl_env_file% not found.

View File

@@ -4,6 +4,7 @@ CONDA_ENV_NAME=$1
AUTOML_ENV_FILE=$2
OPTIONS=$3
PIP_NO_WARN_SCRIPT_LOCATION=0
CHECK_CONDA_VERSION_SCRIPT="check_conda_version.py"
if [ "$CONDA_ENV_NAME" == "" ]
then
@@ -12,7 +13,7 @@ fi
if [ "$AUTOML_ENV_FILE" == "" ]
then
AUTOML_ENV_FILE="automl_env.yml"
AUTOML_ENV_FILE="automl_env_linux.yml"
fi
if [ ! -f $AUTOML_ENV_FILE ]; then
@@ -20,10 +21,23 @@ if [ ! -f $AUTOML_ENV_FILE ]; then
exit 1
fi
if [ ! -f $CHECK_CONDA_VERSION_SCRIPT ]; then
echo "File $CHECK_CONDA_VERSION_SCRIPT not found"
exit 1
fi
python "$CHECK_CONDA_VERSION_SCRIPT"
if [ $? -ne 0 ]; then
exit 1
fi
sed -i 's/AZUREML-SDK-VERSION/latest/' $AUTOML_ENV_FILE
if source activate $CONDA_ENV_NAME 2> /dev/null
then
echo "Upgrading azureml-sdk[automl,notebooks,explain] in existing conda environment" $CONDA_ENV_NAME
pip install --upgrade azureml-sdk[automl,notebooks,explain] &&
echo "Upgrading existing conda environment" $CONDA_ENV_NAME
pip uninstall azureml-train-automl -y -q
conda env update --name $CONDA_ENV_NAME --file $AUTOML_ENV_FILE &&
jupyter nbextension uninstall --user --py azureml.widgets
else
conda env create -f $AUTOML_ENV_FILE -n $CONDA_ENV_NAME &&

View File

@@ -4,6 +4,7 @@ CONDA_ENV_NAME=$1
AUTOML_ENV_FILE=$2
OPTIONS=$3
PIP_NO_WARN_SCRIPT_LOCATION=0
CHECK_CONDA_VERSION_SCRIPT="check_conda_version.py"
if [ "$CONDA_ENV_NAME" == "" ]
then
@@ -20,10 +21,24 @@ if [ ! -f $AUTOML_ENV_FILE ]; then
exit 1
fi
if [ ! -f $CHECK_CONDA_VERSION_SCRIPT ]; then
echo "File $CHECK_CONDA_VERSION_SCRIPT not found"
exit 1
fi
python "$CHECK_CONDA_VERSION_SCRIPT"
if [ $? -ne 0 ]; then
exit 1
fi
sed -i '' 's/AZUREML-SDK-VERSION/latest/' $AUTOML_ENV_FILE
brew install libomp
if source activate $CONDA_ENV_NAME 2> /dev/null
then
echo "Upgrading azureml-sdk[automl,notebooks,explain] in existing conda environment" $CONDA_ENV_NAME
pip install --upgrade azureml-sdk[automl,notebooks,explain] &&
echo "Upgrading existing conda environment" $CONDA_ENV_NAME
pip uninstall azureml-train-automl -y -q
conda env update --name $CONDA_ENV_NAME --file $AUTOML_ENV_FILE &&
jupyter nbextension uninstall --user --py azureml.widgets
else
conda env create -f $AUTOML_ENV_FILE -n $CONDA_ENV_NAME &&

View File

@@ -0,0 +1,26 @@
from setuptools._vendor.packaging import version
import platform
try:
import conda
except Exception:
print('Failed to import conda.')
print('This setup is usually run from the base conda environment.')
print('You can activate the base environment using the command "conda activate base"')
exit(1)
architecture = platform.architecture()[0]
if architecture != "64bit":
print('This setup requires 64bit Anaconda or Miniconda. Found: ' + architecture)
exit(1)
minimumVersion = "4.7.8"
versionInvalid = (version.parse(conda.__version__) < version.parse(minimumVersion))
if versionInvalid:
print('Setup requires conda version ' + minimumVersion + ' or higher.')
print('You can use the command "conda update conda" to upgrade conda.')
exit(versionInvalid)

View File

@@ -1,718 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-bank-marketing/auto-ml-classification-bank-marketing.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning\n",
"_**Classification with Deployment using a Bank Marketing Dataset**_\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Train](#Train)\n",
"1. [Results](#Results)\n",
"1. [Deploy](#Deploy)\n",
"1. [Test](#Test)\n",
"1. [Acknowledgements](#Acknowledgements)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"\n",
"In this example we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy it to an Azure Container Instance (ACI). The classification goal is to predict if the client will subscribe to a term deposit with the bank.\n",
"\n",
"If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. \n",
"\n",
"In this notebook you will learn how to:\n",
"1. Create an experiment using an existing workspace.\n",
"2. Configure AutoML using `AutoMLConfig`.\n",
"3. Train the model using local compute.\n",
"4. Explore the results.\n",
"5. Register the model.\n",
"6. Create a container image.\n",
"7. Create an Azure Container Instance (ACI) service.\n",
"8. Test the ACI service."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"\n",
"from matplotlib import pyplot as plt\n",
"import pandas as pd\n",
"import os\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.core.dataset import Dataset\n",
"from azureml.train.automl import AutoMLConfig"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# choose a name for experiment\n",
"experiment_name = 'automl-classification-bmarketing'\n",
"# project folder\n",
"project_folder = './sample_projects/automl-classification-bankmarketing'\n",
"\n",
"experiment=Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"outputDf.T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create or Attach existing AmlCompute\n",
"You will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.\n",
"#### Creation of AmlCompute takes approximately 5 minutes. \n",
"If the AmlCompute with that name is already in your workspace this code will skip the creation process.\n",
"As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this article on the default limits and how to request more quota."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import AmlCompute\n",
"from azureml.core.compute import ComputeTarget\n",
"\n",
"# Choose a name for your cluster.\n",
"amlcompute_cluster_name = \"automlcl\"\n",
"\n",
"found = False\n",
"# Check if this compute target already exists in the workspace.\n",
"cts = ws.compute_targets\n",
"if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'AmlCompute':\n",
" found = True\n",
" print('Found existing compute target.')\n",
" compute_target = cts[amlcompute_cluster_name]\n",
" \n",
"if not found:\n",
" print('Creating a new compute target...')\n",
" provisioning_config = AmlCompute.provisioning_configuration(vm_size = \"STANDARD_D2_V2\", # for GPU, use \"STANDARD_NC6\"\n",
" #vm_priority = 'lowpriority', # optional\n",
" max_nodes = 6)\n",
"\n",
" # Create the cluster.\n",
" compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config)\n",
" \n",
"print('Checking cluster status...')\n",
"# Can poll for a minimum number of nodes and for a specific timeout.\n",
"# If no min_node_count is provided, it will use the scale settings for the cluster.\n",
"compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20)\n",
" \n",
"# For a more detailed view of current AmlCompute status, use get_status()."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Data\n",
"\n",
"Here load the data in the get_data() script to be utilized in azure compute. To do this first load all the necessary libraries and dependencies to set up paths for the data and to create the conda_Run_config."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if not os.path.isdir('data'):\n",
" os.mkdir('data')\n",
" \n",
"if not os.path.exists(project_folder):\n",
" os.makedirs(project_folder)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"import pkg_resources\n",
"\n",
"# create a new RunConfig object\n",
"conda_run_config = RunConfiguration(framework=\"python\")\n",
"\n",
"# Set compute target to AmlCompute\n",
"conda_run_config.target = compute_target\n",
"conda_run_config.environment.docker.enabled = True\n",
"\n",
"cd = CondaDependencies.create(conda_packages=['numpy','py-xgboost<=0.80'])\n",
"conda_run_config.environment.python.conda_dependencies = cd"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load Data\n",
"\n",
"Here we create the script to be run in azure comput for loading the data, we load the bank marketing dataset into X_train and y_train. Next X_train and y_train is returned for training the model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data = \"https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv\"\n",
"dataset = Dataset.Tabular.from_delimited_files(data)\n",
"X_train = dataset.drop_columns(columns=['y'])\n",
"y_train = dataset.keep_columns(columns=['y'], validate=True)\n",
"dataset.take(5).to_pandas_dataframe()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train\n",
"\n",
"Instantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**task**|classification or regression|\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n",
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
"|**n_cross_validations**|Number of cross validation splits.|\n",
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
"|**y**|(sparse) array-like, shape = [n_samples, ], Multi-class targets.|\n",
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|\n",
"\n",
"**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_settings = {\n",
" \"iteration_timeout_minutes\": 5,\n",
" \"iterations\": 10,\n",
" \"n_cross_validations\": 2,\n",
" \"primary_metric\": 'AUC_weighted',\n",
" \"preprocess\": True,\n",
" \"max_concurrent_iterations\": 5,\n",
" \"verbosity\": logging.INFO,\n",
"}\n",
"\n",
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" path = project_folder,\n",
" run_configuration=conda_run_config,\n",
" X = X_train,\n",
" y = y_train,\n",
" **automl_settings\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n",
"In this example, we specify `show_output = True` to print currently running iterations to the console."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"remote_run = experiment.submit(automl_config, show_output = True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"remote_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Widget for Monitoring Runs\n",
"\n",
"The widget will first report a \"loading\" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.\n",
"\n",
"**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"RunDetails(remote_run).show() "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deploy\n",
"\n",
"### Retrieve the Best Model\n",
"\n",
"Below we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = remote_run.get_output()"
]
},
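{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of the `get_output` overloads mentioned above; the metric name and iteration number are placeholders."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative only -- placeholder metric and iteration values:\n",
"# best_run_auc, model_auc = remote_run.get_output(metric='AUC_weighted')\n",
"# run_3, model_3 = remote_run.get_output(iteration=3)"
]
},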
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Register the Fitted Model for Deployment\n",
"If neither `metric` nor `iteration` are specified in the `register_model` call, the iteration with the best primary metric is registered."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"description = 'AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit'\n",
"tags = None\n",
"model = remote_run.register_model(description = description, tags = tags)\n",
"\n",
"print(remote_run.model_id) # This will be written to the script file later in the notebook."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create Scoring Script\n",
"The scoring script is required to generate the image for deployment. It contains the code to do the predictions on input data."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile score.py\n",
"import pickle\n",
"import json\n",
"import numpy\n",
"import azureml.train.automl\n",
"from sklearn.externals import joblib\n",
"from azureml.core.model import Model\n",
"\n",
"\n",
"def init():\n",
" global model\n",
" model_path = Model.get_model_path(model_name = '<<modelid>>') # this name is model.id of model that we want to deploy\n",
" # deserialize the model file back into a sklearn model\n",
" model = joblib.load(model_path)\n",
"\n",
"def run(rawdata):\n",
" try:\n",
" data = json.loads(rawdata)['data']\n",
" data = np.array(data)\n",
" result = model.predict(data)\n",
" except Exception as e:\n",
" result = str(e)\n",
" return json.dumps({\"error\": result})\n",
" return json.dumps({\"result\":result.tolist()})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create a YAML File for the Environment"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To ensure the fit results are consistent with the training results, the SDK dependency versions need to be the same as the environment that trains the model. Details about retrieving the versions can be found in notebook [12.auto-ml-retrieve-the-training-sdk-versions](12.auto-ml-retrieve-the-training-sdk-versions.ipynb)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"dependencies = remote_run.get_run_sdk_dependencies(iteration = 1)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"for p in ['azureml-train-automl', 'azureml-core']:\n",
" print('{}\\t{}'.format(p, dependencies[p]))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn','py-xgboost<=0.80'],\n",
" pip_packages=['azureml-train-automl'])\n",
"\n",
"conda_env_file_name = 'myenv.yml'\n",
"myenv.save_to_file('.', conda_env_file_name)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Substitute the actual version number in the environment file.\n",
"# This is not strictly needed in this notebook because the model should have been generated using the current SDK version.\n",
"# However, we include this in case this code is used on an experiment from a previous SDK version.\n",
"\n",
"with open(conda_env_file_name, 'r') as cefr:\n",
" content = cefr.read()\n",
"\n",
"with open(conda_env_file_name, 'w') as cefw:\n",
" cefw.write(content.replace(azureml.core.VERSION, dependencies['azureml-train-automl']))\n",
"\n",
"# Substitute the actual model id in the script file.\n",
"\n",
"script_file_name = 'score.py'\n",
"\n",
"with open(script_file_name, 'r') as cefr:\n",
" content = cefr.read()\n",
"\n",
"with open(script_file_name, 'w') as cefw:\n",
" cefw.write(content.replace('<<modelid>>', remote_run.model_id))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create a Container Image\n",
"\n",
"Next use Azure Container Instances for deploying models as a web service for quickly deploying and validating your model\n",
"or when testing a model that is under development."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.image import Image, ContainerImage\n",
"\n",
"image_config = ContainerImage.image_configuration(runtime= \"python\",\n",
" execution_script = script_file_name,\n",
" conda_file = conda_env_file_name,\n",
" tags = {'area': \"bmData\", 'type': \"automl_classification\"},\n",
" description = \"Image for automl classification sample\")\n",
"\n",
"image = Image.create(name = \"automlsampleimage\",\n",
" # this is the model object \n",
" models = [model],\n",
" image_config = image_config, \n",
" workspace = ws)\n",
"\n",
"image.wait_for_creation(show_output = True)\n",
"\n",
"if image.creation_state == 'Failed':\n",
" print(\"Image build log at: \" + image.image_build_log_uri)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Deploy the Image as a Web Service on Azure Container Instance\n",
"\n",
"Deploy an image that contains the model and other assets needed by the service."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.webservice import AciWebservice\n",
"\n",
"aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1, \n",
" memory_gb = 1, \n",
" tags = {'area': \"bmData\", 'type': \"automl_classification\"}, \n",
" description = 'sample service for Automl Classification')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.webservice import Webservice\n",
"\n",
"aci_service_name = 'automl-sample-bankmarketing'\n",
"print(aci_service_name)\n",
"aci_service = Webservice.deploy_from_image(deployment_config = aciconfig,\n",
" image = image,\n",
" name = aci_service_name,\n",
" workspace = ws)\n",
"aci_service.wait_for_deployment(True)\n",
"print(aci_service.state)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Delete a Web Service\n",
"\n",
"Deletes the specified web service."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#aci_service.delete()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Get Logs from a Deployed Web Service\n",
"\n",
"Gets logs from a deployed web service."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#aci_service.get_logs()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Test\n",
"\n",
"Now that the model is trained split our data in the same way the data was split for training (The difference here is the data is being split locally) and then run the test data through the trained model to get the predicted values."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Load the bank marketing datasets.\n",
"from numpy import array"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data = \"https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv\"\n",
"dataset = Dataset.Tabular.from_delimited_files(data)\n",
"X_test = dataset.drop_columns(columns=['y'])\n",
"y_test = dataset.keep_columns(columns=['y'], validate=True)\n",
"dataset.take(5).to_pandas_dataframe()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"X_test = X_test.to_pandas_dataframe()\n",
"y_test = y_test.to_pandas_dataframe()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"y_pred = fitted_model.predict(X_test)\n",
"actual = array(y_test)\n",
"actual = actual[:,0]\n",
"print(y_pred.shape, \" \", actual.shape)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Calculate metrics for the prediction\n",
"\n",
"Now visualize the data on a scatter plot to show what our truth (actual) values are compared to the predicted values \n",
"from the trained model that was returned."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%matplotlib notebook\n",
"test_pred = plt.scatter(actual, y_pred, color='b')\n",
"test_test = plt.scatter(actual, actual, color='g')\n",
"plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)\n",
"plt.show()"
]
},
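{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an added sketch, standard scikit-learn metrics give a numeric summary alongside the plot; `actual` and `y_pred` come from the cells above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Numeric summary of the predictions plotted above (illustrative addition).\n",
"from sklearn.metrics import accuracy_score, confusion_matrix\n",
"\n",
"print('Accuracy:', accuracy_score(actual, y_pred))\n",
"print(confusion_matrix(actual, y_pred))"
]
},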
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Acknowledgements"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This Bank Marketing dataset is made available under the Creative Commons (CCO: Public Domain) License: https://creativecommons.org/publicdomain/zero/1.0/. Any rights in individual contents of the database are licensed under the Database Contents License: https://creativecommons.org/publicdomain/zero/1.0/ and is available at: https://www.kaggle.com/janiobachmann/bank-marketing-dataset .\n",
"\n",
"_**Acknowledgements**_\n",
"This data set is originally available within the UCI Machine Learning Database: https://archive.ics.uci.edu/ml/datasets/bank+marketing\n",
"\n",
"[Moro et al., 2014] S. Moro, P. Cortez and P. Rita. A Data-Driven Approach to Predict the Success of Bank Telemarketing. Decision Support Systems, Elsevier, 62:22-31, June 2014"
]
}
],
"metadata": {
"authors": [
{
"name": "v-rasav"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,10 +0,0 @@
name: auto-ml-classification-bank-marketing
dependencies:
- pip:
- azureml-sdk
- azureml-defaults
- azureml-explain-model
- azureml-train-automl
- azureml-widgets
- matplotlib
- pandas_ml

View File

@@ -21,14 +21,13 @@
"metadata": {},
"source": [
"# Automated Machine Learning\n",
"_**Classification with Deployment using Credit Card Dataset**_\n",
"_**Classification of credit card fraudulent transactions on remote compute **_\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Train](#Train)\n",
"1. [Results](#Results)\n",
"1. [Deploy](#Deploy)\n",
"1. [Test](#Test)\n",
"1. [Acknowledgements](#Acknowledgements)"
]
@@ -39,19 +38,18 @@
"source": [
"## Introduction\n",
"\n",
"In this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem and deploy it to an Azure Container Instance (ACI). The classification goal is to predict if a creditcard transaction is or is not considered a fraudulent charge.\n",
"In this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.\n",
"\n",
"If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. \n",
"This notebook is using remote compute to train the model.\n",
"\n",
"If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. \n",
"\n",
"In this notebook you will learn how to:\n",
"1. Create an experiment using an existing workspace.\n",
"2. Configure AutoML using `AutoMLConfig`.\n",
"3. Train the model using local compute.\n",
"3. Train the model using remote compute.\n",
"4. Explore the results.\n",
"5. Register the model.\n",
"6. Create a container image.\n",
"7. Create an Azure Container Instance (ACI) service.\n",
"8. Test the ACI service."
"5. Test the fitted model."
]
},
{
@@ -60,7 +58,7 @@
"source": [
"## Setup\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
"As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
},
{
@@ -82,6 +80,13 @@
"from azureml.train.automl import AutoMLConfig"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This sample notebook may use features that are not available in previous versions of the Azure ML SDK."
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -91,22 +96,19 @@
"ws = Workspace.from_config()\n",
"\n",
"# choose a name for experiment\n",
"experiment_name = 'automl-classification-ccard'\n",
"# project folder\n",
"project_folder = './sample_projects/automl-classification-creditcard'\n",
"experiment_name = \"automl-classification-ccard-remote\"\n",
"\n",
"experiment=Experiment(ws, experiment_name)\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"output[\"Subscription ID\"] = ws.subscription_id\n",
"output[\"Workspace\"] = ws.name\n",
"output[\"Resource Group\"] = ws.resource_group\n",
"output[\"Location\"] = ws.location\n",
"output[\"Experiment Name\"] = experiment.name\n",
"output[\"SDK Version\"] = azureml.core.VERSION\n",
"pd.set_option(\"display.max_colwidth\", None)\n",
"outputDf = pd.DataFrame(data=output, index=[\"\"])\n",
"outputDf.T"
]
},
@@ -115,10 +117,13 @@
"metadata": {},
"source": [
"## Create or Attach existing AmlCompute\n",
"You will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.\n",
"A compute target is required to execute the Automated ML run. In this tutorial, you create AmlCompute as your training compute resource.\n",
"\n",
"> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.\n",
"\n",
"#### Creation of AmlCompute takes approximately 5 minutes. \n",
"If the AmlCompute with that name is already in your workspace this code will skip the creation process.\n",
"As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this article on the default limits and how to request more quota."
"As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota."
]
},
{
@@ -127,78 +132,29 @@
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import AmlCompute\n",
"from azureml.core.compute import ComputeTarget\n",
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# Choose a name for your cluster.\n",
"amlcompute_cluster_name = \"automlcl\"\n",
"# Choose a name for your CPU cluster\n",
"cpu_cluster_name = \"cpu-cluster-1\"\n",
"\n",
"found = False\n",
"# Check if this compute target already exists in the workspace.\n",
"cts = ws.compute_targets\n",
"if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'AmlCompute':\n",
" found = True\n",
" print('Found existing compute target.')\n",
" compute_target = cts[amlcompute_cluster_name]\n",
" \n",
"if not found:\n",
" print('Creating a new compute target...')\n",
" provisioning_config = AmlCompute.provisioning_configuration(vm_size = \"STANDARD_D2_V2\", # for GPU, use \"STANDARD_NC6\"\n",
" #vm_priority = 'lowpriority', # optional\n",
" max_nodes = 6)\n",
"\n",
" # Create the cluster.\n",
" compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config)\n",
" \n",
"print('Checking cluster status...')\n",
"# Can poll for a minimum number of nodes and for a specific timeout.\n",
"# If no min_node_count is provided, it will use the scale settings for the cluster.\n",
"compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20)\n",
"\n",
"# For a more detailed view of current AmlCompute status, use get_status()."
"# Verify that cluster does not exist already\n",
"try:\n",
" compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)\n",
" print(\"Found existing cluster, use it.\")\n",
"except ComputeTargetException:\n",
" compute_config = AmlCompute.provisioning_configuration(\n",
" vm_size=\"STANDARD_DS12_V2\", max_nodes=6\n",
" )\n",
" compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)\n",
"compute_target.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Data\n",
"\n",
"Here load the data in the get_data script to be utilized in azure compute. To do this, first load all the necessary libraries and dependencies to set up paths for the data and to create the conda_run_config."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if not os.path.isdir('data'):\n",
" os.mkdir('data')\n",
" \n",
"if not os.path.exists(project_folder):\n",
" os.makedirs(project_folder)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"import pkg_resources\n",
"\n",
"# create a new RunConfig object\n",
"conda_run_config = RunConfiguration(framework=\"python\")\n",
"\n",
"# Set compute target to AmlCompute\n",
"conda_run_config.target = compute_target\n",
"conda_run_config.environment.docker.enabled = True\n",
"\n",
"cd = CondaDependencies.create(conda_packages=['numpy','py-xgboost<=0.80'])\n",
"conda_run_config.environment.python.conda_dependencies = cd"
"# Data"
]
},
{
@@ -207,21 +163,21 @@
"source": [
"### Load Data\n",
"\n",
"Here create the script to be run in azure compute for loading the data, load the credit card dataset into cards and store the Class column (y) in the y variable and store the remaining data in the x variable. Next split the data using random_split and return X_train and y_train for training the model."
"Load the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"name": "load-data"
},
"outputs": [],
"source": [
"data = \"https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv\"\n",
"dataset = Dataset.Tabular.from_delimited_files(data)\n",
"X = dataset.drop_columns(columns=['Class'])\n",
"y = dataset.keep_columns(columns=['Class'], validate=True)\n",
"X_train, X_test = X.random_split(percentage=0.8, seed=223)\n",
"y_train, y_test = y.random_split(percentage=0.8, seed=223)"
"training_data, validation_data = dataset.random_split(percentage=0.8, seed=223)\n",
"label_column_name = \"Class\""
]
},
{
@@ -236,55 +192,46 @@
"|-|-|\n",
"|**task**|classification or regression|\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n",
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
"|**enable_early_stopping**|Stop the run if the metric score is not showing improvement.|\n",
"|**n_cross_validations**|Number of cross validation splits.|\n",
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
"|**y**|(sparse) array-like, shape = [n_samples, ], Multi-class targets.|\n",
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|\n",
"|**training_data**|Input dataset, containing both features and label column.|\n",
"|**label_column_name**|The name of the label column.|\n",
"\n",
"**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### If you would like to see even better results increase \"iteration_time_out minutes\" to 10+ mins and increase \"iterations\" to a minimum of 30"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"name": "automl-config"
},
"outputs": [],
"source": [
"automl_settings = {\n",
" \"iteration_timeout_minutes\": 5,\n",
" \"iterations\": 10,\n",
" \"n_cross_validations\": 2,\n",
" \"primary_metric\": 'average_precision_score_weighted',\n",
" \"preprocess\": True,\n",
" \"max_concurrent_iterations\": 5,\n",
" \"n_cross_validations\": 3,\n",
" \"primary_metric\": \"average_precision_score_weighted\",\n",
" \"enable_early_stopping\": True,\n",
" \"max_concurrent_iterations\": 2, # This is a limit for testing purpose, please increase it as per cluster size\n",
" \"experiment_timeout_hours\": 0.25, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ablity to find the best model possible\n",
" \"verbosity\": logging.INFO,\n",
"}\n",
"\n",
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors_20190417.log',\n",
" path = project_folder,\n",
" run_configuration=conda_run_config,\n",
" X = X_train,\n",
" y = y_train,\n",
" **automl_settings\n",
" )"
"automl_config = AutoMLConfig(\n",
" task=\"classification\",\n",
" debug_log=\"automl_errors.log\",\n",
" compute_target=compute_target,\n",
" training_data=training_data,\n",
" label_column_name=label_column_name,\n",
" **automl_settings,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n",
"In this example, we specify `show_output = True` to print currently running iterations to the console."
"Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous."
]
},
{
@@ -293,7 +240,7 @@
"metadata": {},
"outputs": [],
"source": [
"remote_run = experiment.submit(automl_config, show_output = True)"
"remote_run = experiment.submit(automl_config, show_output=False)"
]
},
{
@@ -302,7 +249,9 @@
"metadata": {},
"outputs": [],
"source": [
"remote_run"
"# If you need to retrieve a run that already started, use the following code\n",
"# from azureml.train.automl.run import AutoMLRun\n",
"# remote_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>')"
]
},
{
@@ -326,22 +275,45 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"tags": [
"widget-rundetails-sample"
]
},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"RunDetails(remote_run).show() "
"\n",
"RunDetails(remote_run).show()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"remote_run.wait_for_completion(show_output=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deploy\n",
"#### Explain model\n",
"\n",
"Automated ML models can be explained and visualized using the SDK Explainability library. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Analyze results\n",
"\n",
"### Retrieve the Best Model\n",
"\n",
"Below we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
"Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
]
},
{
@@ -350,260 +322,23 @@
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = remote_run.get_output()"
"best_run, fitted_model = remote_run.get_output()\n",
"fitted_model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Register the Fitted Model for Deployment\n",
"If neither `metric` nor `iteration` are specified in the `register_model` call, the iteration with the best primary metric is registered."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"description = 'AutoML Model'\n",
"tags = None\n",
"model = remote_run.register_model(description = description, tags = tags)\n",
"\n",
"print(remote_run.model_id) # This will be written to the script file later in the notebook."
"#### Print the properties of the model\n",
"The fitted_model is a python object and you can read the different properties of the object.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create Scoring Script\n",
"The scoring script is required to generate the image for deployment. It contains the code to do the predictions on input data."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile score.py\n",
"import pickle\n",
"import json\n",
"import numpy\n",
"import azureml.train.automl\n",
"from sklearn.externals import joblib\n",
"from azureml.core.model import Model\n",
"\n",
"def init():\n",
" global model\n",
" model_path = Model.get_model_path(model_name = '<<modelid>>') # this name is model.id of model that we want to deploy\n",
" # deserialize the model file back into a sklearn model\n",
" model = joblib.load(model_path)\n",
"\n",
"def run(rawdata):\n",
" try:\n",
" data = json.loads(rawdata)['data']\n",
" data = numpy.array(data)\n",
" result = model.predict(data)\n",
" except Exception as e:\n",
" result = str(e)\n",
" return json.dumps({\"error\": result})\n",
" return json.dumps({\"result\":result.tolist()})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create a YAML File for the Environment"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To ensure the fit results are consistent with the training results, the SDK dependency versions need to be the same as the environment that trains the model. Details about retrieving the versions can be found in notebook [12.auto-ml-retrieve-the-training-sdk-versions](12.auto-ml-retrieve-the-training-sdk-versions.ipynb)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"dependencies = remote_run.get_run_sdk_dependencies(iteration = 1)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"for p in ['azureml-train-automl', 'azureml-core']:\n",
" print('{}\\t{}'.format(p, dependencies[p]))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn','py-xgboost<=0.80'],\n",
" pip_packages=['azureml-train-automl'])\n",
"\n",
"conda_env_file_name = 'myenv.yml'\n",
"myenv.save_to_file('.', conda_env_file_name)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Substitute the actual version number in the environment file.\n",
"# This is not strictly needed in this notebook because the model should have been generated using the current SDK version.\n",
"# However, we include this in case this code is used on an experiment from a previous SDK version.\n",
"\n",
"with open(conda_env_file_name, 'r') as cefr:\n",
" content = cefr.read()\n",
"\n",
"with open(conda_env_file_name, 'w') as cefw:\n",
" cefw.write(content.replace(azureml.core.VERSION, dependencies['azureml-train-automl']))\n",
"\n",
"# Substitute the actual model id in the script file.\n",
"\n",
"script_file_name = 'score.py'\n",
"\n",
"with open(script_file_name, 'r') as cefr:\n",
" content = cefr.read()\n",
"\n",
"with open(script_file_name, 'w') as cefw:\n",
" cefw.write(content.replace('<<modelid>>', remote_run.model_id))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create a Container Image\n",
"\n",
"Next use Azure Container Instances for deploying models as a web service for quickly deploying and validating your model\n",
"or when testing a model that is under development."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.image import Image, ContainerImage\n",
"\n",
"image_config = ContainerImage.image_configuration(runtime= \"python\",\n",
" execution_script = script_file_name,\n",
" conda_file = conda_env_file_name,\n",
" tags = {'area': \"cards\", 'type': \"automl_classification\"},\n",
" description = \"Image for automl classification sample\")\n",
"\n",
"image = Image.create(name = \"automlsampleimage\",\n",
" # this is the model object \n",
" models = [model],\n",
" image_config = image_config, \n",
" workspace = ws)\n",
"\n",
"image.wait_for_creation(show_output = True)\n",
"\n",
"if image.creation_state == 'Failed':\n",
" print(\"Image build log at: \" + image.image_build_log_uri)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Deploy the Image as a Web Service on Azure Container Instance\n",
"\n",
"Deploy an image that contains the model and other assets needed by the service."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.webservice import AciWebservice\n",
"\n",
"aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1, \n",
" memory_gb = 1, \n",
" tags = {'area': \"cards\", 'type': \"automl_classification\"}, \n",
" description = 'sample service for Automl Classification')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.webservice import Webservice\n",
"\n",
"aci_service_name = 'automl-sample-creditcard'\n",
"print(aci_service_name)\n",
"aci_service = Webservice.deploy_from_image(deployment_config = aciconfig,\n",
" image = image,\n",
" name = aci_service_name,\n",
" workspace = ws)\n",
"aci_service.wait_for_deployment(True)\n",
"print(aci_service.state)"
]
},
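{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once the deployment succeeds, the service can be scored directly. A minimal sketch, assuming the `X_test_df` frame built in the Test section below; the payload matches the `{'data': [...]}` format that `score.py` expects:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"\n",
"# Illustrative sketch: score two sample rows against the deployed service.\n",
"# test_sample = json.dumps({'data': X_test_df[:2].values.tolist()})\n",
"# response = aci_service.run(input_data=test_sample)\n",
"# print(response)"
]
},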
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Delete a Web Service\n",
"\n",
"Deletes the specified web service."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#aci_service.delete()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Get Logs from a Deployed Web Service\n",
"\n",
"Gets logs from a deployed web service."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#aci_service.get_logs()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Test\n",
"## Test the fitted model\n",
"\n",
"Now that the model is trained, split the data in the same way the data was split for training (The difference here is the data is being split locally) and then run the test data through the trained model to get the predicted values."
]
@@ -614,9 +349,13 @@
"metadata": {},
"outputs": [],
"source": [
"#Randomly select and test\n",
"X_test = X_test.to_pandas_dataframe()\n",
"y_test = y_test.to_pandas_dataframe()\n"
"# convert the test data to dataframe\n",
"X_test_df = validation_data.drop_columns(\n",
" columns=[label_column_name]\n",
").to_pandas_dataframe()\n",
"y_test_df = validation_data.keep_columns(\n",
" columns=[label_column_name], validate=True\n",
").to_pandas_dataframe()"
]
},
{
@@ -625,7 +364,8 @@
"metadata": {},
"outputs": [],
"source": [
"y_pred = fitted_model.predict(X_test)\n",
"# call the predict functions on the model\n",
"y_pred = fitted_model.predict(X_test_df)\n",
"y_pred"
]
},
@@ -645,14 +385,31 @@
"metadata": {},
"outputs": [],
"source": [
"#Randomly select and test\n",
"# Plot outputs\n",
"%matplotlib notebook\n",
"test_pred = plt.scatter(y_test, y_pred, color='b')\n",
"test_test = plt.scatter(y_test, y_test, color='g')\n",
"plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)\n",
"plt.show()\n",
"\n"
"from sklearn.metrics import confusion_matrix\n",
"import numpy as np\n",
"import itertools\n",
"\n",
"cf = confusion_matrix(y_test_df.values, y_pred)\n",
"plt.imshow(cf, cmap=plt.cm.Blues, interpolation=\"nearest\")\n",
"plt.colorbar()\n",
"plt.title(\"Confusion Matrix\")\n",
"plt.xlabel(\"Predicted\")\n",
"plt.ylabel(\"Actual\")\n",
"class_labels = [\"False\", \"True\"]\n",
"tick_marks = np.arange(len(class_labels))\n",
"plt.xticks(tick_marks, class_labels)\n",
"plt.yticks([-0.5, 0, 1, 1.5], [\"\", \"False\", \"True\", \"\"])\n",
"# plotting text value inside cells\n",
"thresh = cf.max() / 2.0\n",
"for i, j in itertools.product(range(cf.shape[0]), range(cf.shape[1])):\n",
" plt.text(\n",
" j,\n",
" i,\n",
" format(cf[i, j], \"d\"),\n",
" horizontalalignment=\"center\",\n",
" color=\"white\" if cf[i, j] > thresh else \"black\",\n",
" )\n",
"plt.show()"
]
},
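{
"cell_type": "markdown",
"metadata": {},
"source": [
"Beyond the confusion matrix, scikit-learn's `classification_report` summarizes per-class precision, recall, and F1 for the same predictions; a minimal sketch:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.metrics import classification_report\n",
"\n",
"# Per-class precision, recall and F1 for the predicted labels\n",
"print(classification_report(y_test_df.values.ravel(), y_pred))"
]
},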
{
@@ -668,28 +425,56 @@
"source": [
"This Credit Card fraud Detection dataset is made available under the Open Database License: http://opendatacommons.org/licenses/odbl/1.0/. Any rights in individual contents of the database are licensed under the Database Contents License: http://opendatacommons.org/licenses/dbcl/1.0/ and is available at: https://www.kaggle.com/mlg-ulb/creditcardfraud\n",
"\n",
"The dataset has been collected and analysed during a research collaboration of Worldline and the Machine Learning Group (http://mlg.ulb.ac.be) of ULB (Universit\u00c3\u00a9 Libre de Bruxelles) on big data mining and fraud detection.\n",
"More details on current and past projects on related topics are available on https://www.researchgate.net/project/Fraud-detection-5 and the page of the DefeatFraud project\n",
"\n",
"The dataset has been collected and analysed during a research collaboration of Worldline and the Machine Learning Group (http://mlg.ulb.ac.be) of ULB (Universit\u00c3\u00a9 Libre de Bruxelles) on big data mining and fraud detection. More details on current and past projects on related topics are available on https://www.researchgate.net/project/Fraud-detection-5 and the page of the DefeatFraud project\n",
"Please cite the following works: \n",
"\u00e2\u20ac\u00a2\tAndrea Dal Pozzolo, Olivier Caelen, Reid A. Johnson and Gianluca Bontempi. Calibrating Probability with Undersampling for Unbalanced Classification. In Symposium on Computational Intelligence and Data Mining (CIDM), IEEE, 2015\n",
"\u00e2\u20ac\u00a2\tDal Pozzolo, Andrea; Caelen, Olivier; Le Borgne, Yann-Ael; Waterschoot, Serge; Bontempi, Gianluca. Learned lessons in credit card fraud detection from a practitioner perspective, Expert systems with applications,41,10,4915-4928,2014, Pergamon\n",
"\u00e2\u20ac\u00a2\tDal Pozzolo, Andrea; Boracchi, Giacomo; Caelen, Olivier; Alippi, Cesare; Bontempi, Gianluca. Credit card fraud detection: a realistic modeling and a novel learning strategy, IEEE transactions on neural networks and learning systems,29,8,3784-3797,2018,IEEE\n",
"o\tDal Pozzolo, Andrea Adaptive Machine learning for credit card fraud detection ULB MLG PhD thesis (supervised by G. Bontempi)\n",
"\u00e2\u20ac\u00a2\tCarcillo, Fabrizio; Dal Pozzolo, Andrea; Le Borgne, Yann-A\u00c3\u00abl; Caelen, Olivier; Mazzer, Yannis; Bontempi, Gianluca. Scarff: a scalable framework for streaming credit card fraud detection with Spark, Information fusion,41, 182-194,2018,Elsevier\n",
"\u00e2\u20ac\u00a2\tCarcillo, Fabrizio; Le Borgne, Yann-A\u00c3\u00abl; Caelen, Olivier; Bontempi, Gianluca. Streaming active learning strategies for real-life credit card fraud detection: assessment and visualization, International Journal of Data Science and Analytics, 5,4,285-300,2018,Springer International Publishing"
"Please cite the following works:\n",
"\n",
"Andrea Dal Pozzolo, Olivier Caelen, Reid A. Johnson and Gianluca Bontempi. Calibrating Probability with Undersampling for Unbalanced Classification. In Symposium on Computational Intelligence and Data Mining (CIDM), IEEE, 2015\n",
"\n",
"Dal Pozzolo, Andrea; Caelen, Olivier; Le Borgne, Yann-Ael; Waterschoot, Serge; Bontempi, Gianluca. Learned lessons in credit card fraud detection from a practitioner perspective, Expert systems with applications,41,10,4915-4928,2014, Pergamon\n",
"\n",
"Dal Pozzolo, Andrea; Boracchi, Giacomo; Caelen, Olivier; Alippi, Cesare; Bontempi, Gianluca. Credit card fraud detection: a realistic modeling and a novel learning strategy, IEEE transactions on neural networks and learning systems,29,8,3784-3797,2018,IEEE\n",
"\n",
"Dal Pozzolo, Andrea Adaptive Machine learning for credit card fraud detection ULB MLG PhD thesis (supervised by G. Bontempi)\n",
"\n",
"Carcillo, Fabrizio; Dal Pozzolo, Andrea; Le Borgne, Yann-A\u00c3\u00abl; Caelen, Olivier; Mazzer, Yannis; Bontempi, Gianluca. Scarff: a scalable framework for streaming credit card fraud detection with Spark, Information fusion,41, 182-194,2018,Elsevier\n",
"\n",
"Carcillo, Fabrizio; Le Borgne, Yann-A\u00c3\u00abl; Caelen, Olivier; Bontempi, Gianluca. Streaming active learning strategies for real-life credit card fraud detection: assessment and visualization, International Journal of Data Science and Analytics, 5,4,285-300,2018,Springer International Publishing\n",
"\n",
"Bertrand Lebichot, Yann-A\u00c3\u00abl Le Borgne, Liyun He, Frederic Obl\u00c3\u00a9, Gianluca Bontempi Deep-Learning Domain Adaptation Techniques for Credit Cards Fraud Detection, INNSBDDL 2019: Recent Advances in Big Data and Deep Learning, pp 78-88, 2019\n",
"\n",
"Fabrizio Carcillo, Yann-A\u00c3\u00abl Le Borgne, Olivier Caelen, Frederic Obl\u00c3\u00a9, Gianluca Bontempi Combining Unsupervised and Supervised Learning in Credit Card Fraud Detection Information Sciences, 2019"
]
}
],
"metadata": {
"authors": [
{
"name": "v-rasav"
"name": "ratanase"
}
],
"category": "tutorial",
"compute": [
"AML Compute"
],
"datasets": [
"Creditcard"
],
"deployment": [
"None"
],
"exclude_from_index": false,
"file_extension": ".py",
"framework": [
"None"
],
"friendly_name": "Classification of credit card fraudulent transactions using Automated ML",
"index_order": 5,
"kernelspec": {
"display_name": "Python 3.6",
"display_name": "Python 3.8 - AzureML",
"language": "python",
"name": "python36"
"name": "python38-azureml"
},
"language_info": {
"codemirror_mode": {
@@ -702,7 +487,17 @@
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
}
},
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"tags": [
"remote_run",
"AutomatedML"
],
"task": "Classification",
"version": "3.6.7"
},
"nbformat": 4,
"nbformat_minor": 2

View File

@@ -1,10 +0,0 @@
name: auto-ml-classification-credit-card-fraud
dependencies:
- pip:
- azureml-sdk
- azureml-defaults
- azureml-explain-model
- azureml-train-automl
- azureml-widgets
- matplotlib
- pandas_ml

View File

@@ -1,510 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-with-deployment/auto-ml-classification-with-deployment.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning\n",
"_**Classification with Deployment**_\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Train](#Train)\n",
"1. [Deploy](#Deploy)\n",
"1. [Test](#Test)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"\n",
"In this example we use the scikit learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) to showcase how you can use AutoML for a simple classification problem and deploy it to an Azure Container Instance (ACI).\n",
"\n",
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you will learn how to:\n",
"1. Create an experiment using an existing workspace.\n",
"2. Configure AutoML using `AutoMLConfig`.\n",
"3. Train the model using local compute.\n",
"4. Explore the results.\n",
"5. Register the model.\n",
"6. Create a container image.\n",
"7. Create an Azure Container Instance (ACI) service.\n",
"8. Test the ACI service."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"import logging\n",
"\n",
"from matplotlib import pyplot as plt\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# choose a name for experiment\n",
"experiment_name = 'automl-classification-deployment'\n",
"# project folder\n",
"project_folder = './sample_projects/automl-classification-deployment'\n",
"\n",
"experiment=Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"outputDf.T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train\n",
"\n",
"Instantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**task**|classification or regression|\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n",
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
"|**n_cross_validations**|Number of cross validation splits.|\n",
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
"|**y**|(sparse) array-like, shape = [n_samples, ], Multi-class targets.|\n",
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"digits = datasets.load_digits()\n",
"X_train = digits.data[10:,:]\n",
"y_train = digits.target[10:]\n",
"\n",
"automl_config = AutoMLConfig(task = 'classification',\n",
" name = experiment_name,\n",
" debug_log = 'automl_errors.log',\n",
" primary_metric = 'AUC_weighted',\n",
" iteration_timeout_minutes = 20,\n",
" iterations = 10,\n",
" verbosity = logging.INFO,\n",
" X = X_train, \n",
" y = y_train,\n",
" path = project_folder)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n",
"In this example, we specify `show_output = True` to print currently running iterations to the console."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = experiment.submit(automl_config, show_output = True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deploy\n",
"\n",
"### Retrieve the Best Model\n",
"\n",
"Below we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = local_run.get_output()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Register the Fitted Model for Deployment\n",
"If neither `metric` nor `iteration` are specified in the `register_model` call, the iteration with the best primary metric is registered."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"description = 'AutoML Model'\n",
"tags = None\n",
"model = local_run.register_model(description = description, tags = tags)\n",
"\n",
"print(local_run.model_id) # This will be written to the script file later in the notebook."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create Scoring Script"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile score.py\n",
"import pickle\n",
"import json\n",
"import numpy\n",
"import azureml.train.automl\n",
"from sklearn.externals import joblib\n",
"from azureml.core.model import Model\n",
"\n",
"\n",
"def init():\n",
" global model\n",
" model_path = Model.get_model_path(model_name = '<<modelid>>') # this name is model.id of model that we want to deploy\n",
" # deserialize the model file back into a sklearn model\n",
" model = joblib.load(model_path)\n",
"\n",
"def run(rawdata):\n",
" try:\n",
" data = json.loads(rawdata)['data']\n",
" data = numpy.array(data)\n",
" result = model.predict(data)\n",
" except Exception as e:\n",
" result = str(e)\n",
" return json.dumps({\"error\": result})\n",
" return json.dumps({\"result\":result.tolist()})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create a YAML File for the Environment"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To ensure the fit results are consistent with the training results, the SDK dependency versions need to be the same as the environment that trains the model. The following cells create a file, myenv.yml, which specifies the dependencies from the run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"experiment = Experiment(ws, experiment_name)\n",
"ml_run = AutoMLRun(experiment = experiment, run_id = local_run.id)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"dependencies = ml_run.get_run_sdk_dependencies(iteration = 7)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"for p in ['azureml-train-automl', 'azureml-core']:\n",
" print('{}\\t{}'.format(p, dependencies[p]))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.conda_dependencies import CondaDependencies\n",
"\n",
"myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn','py-xgboost<=0.80'],\n",
" pip_packages=['azureml-train-automl'])\n",
"\n",
"conda_env_file_name = 'myenv.yml'\n",
"myenv.save_to_file('.', conda_env_file_name)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Substitute the actual version number in the environment file.\n",
"# This is not strictly needed in this notebook because the model should have been generated using the current SDK version.\n",
"# However, we include this in case this code is used on an experiment from a previous SDK version.\n",
"\n",
"with open(conda_env_file_name, 'r') as cefr:\n",
" content = cefr.read()\n",
"\n",
"with open(conda_env_file_name, 'w') as cefw:\n",
" cefw.write(content.replace(azureml.core.VERSION, dependencies['azureml-train-automl']))\n",
"\n",
"# Substitute the actual model id in the script file.\n",
"\n",
"script_file_name = 'score.py'\n",
"\n",
"with open(script_file_name, 'r') as cefr:\n",
" content = cefr.read()\n",
"\n",
"with open(script_file_name, 'w') as cefw:\n",
" cefw.write(content.replace('<<modelid>>', local_run.model_id))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create a Container Image"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.image import Image, ContainerImage\n",
"\n",
"image_config = ContainerImage.image_configuration(runtime= \"python\",\n",
" execution_script = script_file_name,\n",
" conda_file = conda_env_file_name,\n",
" tags = {'area': \"digits\", 'type': \"automl_classification\"},\n",
" description = \"Image for automl classification sample\")\n",
"\n",
"image = Image.create(name = \"automlsampleimage\",\n",
" # this is the model object \n",
" models = [model],\n",
" image_config = image_config, \n",
" workspace = ws)\n",
"\n",
"image.wait_for_creation(show_output = True)\n",
"\n",
"if image.creation_state == 'Failed':\n",
" print(\"Image build log at: \" + image.image_build_log_uri)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Deploy the Image as a Web Service on Azure Container Instance"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.webservice import AciWebservice\n",
"\n",
"aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1, \n",
" memory_gb = 1, \n",
" tags = {'area': \"digits\", 'type': \"automl_classification\"}, \n",
" description = 'sample service for Automl Classification')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.webservice import Webservice\n",
"\n",
"aci_service_name = 'automl-sample-01'\n",
"print(aci_service_name)\n",
"aci_service = Webservice.deploy_from_image(deployment_config = aciconfig,\n",
" image = image,\n",
" name = aci_service_name,\n",
" workspace = ws)\n",
"aci_service.wait_for_deployment(True)\n",
"print(aci_service.state)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Delete a Web Service"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#aci_service.delete()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Get Logs from a Deployed Web Service"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#aci_service.get_logs()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Test"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#Randomly select digits and test\n",
"digits = datasets.load_digits()\n",
"X_test = digits.data[:10, :]\n",
"y_test = digits.target[:10]\n",
"images = digits.images[:10]\n",
"\n",
"for index in np.random.choice(len(y_test), 3, replace = False):\n",
" print(index)\n",
" test_sample = json.dumps({'data':X_test[index:index + 1].tolist()})\n",
" predicted = aci_service.run(input_data = test_sample)\n",
" label = y_test[index]\n",
" predictedDict = json.loads(predicted)\n",
" title = \"Label value = %d Predicted value = %s \" % ( label,predictedDict['result'][0])\n",
" fig = plt.figure(1, figsize = (3,3))\n",
" ax1 = fig.add_axes((0,0,.8,.8))\n",
" ax1.set_title(title)\n",
" plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest')\n",
" plt.show()"
]
}
],
"metadata": {
"authors": [
{
"name": "savitam"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,8 +0,0 @@
name: auto-ml-classification-with-deployment
dependencies:
- pip:
- azureml-sdk
- azureml-train-automl
- azureml-widgets
- matplotlib
- pandas_ml

View File

@@ -1,381 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-with-onnx/auto-ml-classification-with-onnx.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning\n",
"_**Classification with Local Compute**_\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Data](#Data)\n",
"1. [Train](#Train)\n",
"1. [Results](#Results)\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"\n",
"In this example we use the scikit-learn's [iris dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_iris.html) to showcase how you can use AutoML for a simple classification problem.\n",
"\n",
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
"\n",
"Please find the ONNX related documentations [here](https://github.com/onnx/onnx).\n",
"\n",
"In this notebook you will learn how to:\n",
"1. Create an `Experiment` in an existing `Workspace`.\n",
"2. Configure AutoML using `AutoMLConfig`.\n",
"3. Train the model using local compute with ONNX compatible config on.\n",
"4. Explore the results and save the ONNX model.\n",
"5. Inference with the ONNX model."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"\n",
"from matplotlib import pyplot as plt\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"from sklearn.model_selection import train_test_split\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig, constants"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# Choose a name for the experiment and specify the project folder.\n",
"experiment_name = 'automl-classification-onnx'\n",
"project_folder = './sample_projects/automl-classification-onnx'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace Name'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"outputDf.T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Data\n",
"\n",
"This uses scikit-learn's [load_iris](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_iris.html) method."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"iris = datasets.load_iris()\n",
"X_train, X_test, y_train, y_test = train_test_split(iris.data, \n",
" iris.target, \n",
" test_size=0.2, \n",
" random_state=0)\n",
"\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Ensure the x_train and x_test are pandas DataFrame."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Convert the X_train and X_test to pandas DataFrame and set column names,\n",
"# This is needed for initializing the input variable names of ONNX model, \n",
"# and the prediction with the ONNX model using the inference helper.\n",
"X_train = pd.DataFrame(X_train, columns=['c1', 'c2', 'c3', 'c4'])\n",
"X_test = pd.DataFrame(X_test, columns=['c1', 'c2', 'c3', 'c4'])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train\n",
"\n",
"Instantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.\n",
"\n",
"**Note:** Set the parameter enable_onnx_compatible_models=True, if you also want to generate the ONNX compatible models. Please note, the forecasting task and TensorFlow models are not ONNX compatible yet.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**task**|classification or regression|\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n",
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
"|**y**|(sparse) array-like, shape = [n_samples, ], Multi-class targets.|\n",
"|**enable_onnx_compatible_models**|Enable the ONNX compatible models in the experiment.|\n",
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Set the preprocess=True, currently the InferenceHelper only supports this mode."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" primary_metric = 'AUC_weighted',\n",
" iteration_timeout_minutes = 60,\n",
" iterations = 10,\n",
" verbosity = logging.INFO, \n",
" X = X_train, \n",
" y = y_train,\n",
" preprocess=True,\n",
" enable_onnx_compatible_models=True,\n",
" path = project_folder)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n",
"In this example, we specify `show_output = True` to print currently running iterations to the console."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = experiment.submit(automl_config, show_output = True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Widget for Monitoring Runs\n",
"\n",
"The widget will first report a \"loading\" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.\n",
"\n",
"**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"RunDetails(local_run).show() "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best ONNX Model\n",
"\n",
"Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.\n",
"\n",
"Set the parameter return_onnx_model=True to retrieve the best ONNX model, instead of the Python model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, onnx_mdl = local_run.get_output(return_onnx_model=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Save the best ONNX model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.automl.core.onnx_convert import OnnxConverter\n",
"onnx_fl_path = \"./best_model.onnx\"\n",
"OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)"
]
},
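{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check (a sketch, assuming the `onnx` package is available in the environment), the saved file can be reloaded and validated with the ONNX checker:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sketch: reload the saved model and run the ONNX checker over it.\n",
"# import onnx\n",
"# reloaded_mdl = onnx.load(onnx_fl_path)\n",
"# onnx.checker.check_model(reloaded_mdl)"
]
},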
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Predict with the ONNX model, using onnxruntime package"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import sys\n",
"import json\n",
"from azureml.automl.core.onnx_convert import OnnxConvertConstants\n",
"\n",
"if sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion:\n",
" python_version_compatible = True\n",
"else:\n",
" python_version_compatible = False\n",
"\n",
"try:\n",
" import onnxruntime\n",
" from azureml.automl.core.onnx_convert import OnnxInferenceHelper \n",
" onnxrt_present = True\n",
"except ImportError:\n",
" onnxrt_present = False\n",
"\n",
"def get_onnx_res(run):\n",
" res_path = 'onnx_resource.json'\n",
" run.download_file(name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path)\n",
" with open(res_path) as f:\n",
" onnx_res = json.load(f)\n",
" return onnx_res\n",
"\n",
"if onnxrt_present and python_version_compatible: \n",
" mdl_bytes = onnx_mdl.SerializeToString()\n",
" onnx_res = get_onnx_res(best_run)\n",
"\n",
" onnxrt_helper = OnnxInferenceHelper(mdl_bytes, onnx_res)\n",
" pred_onnx, pred_prob_onnx = onnxrt_helper.predict(X_test)\n",
"\n",
" print(pred_onnx)\n",
" print(pred_prob_onnx)\n",
"else:\n",
" if not python_version_compatible:\n",
" print('Please use Python version 3.6 or 3.7 to run the inference helper.') \n",
" if not onnxrt_present:\n",
" print('Please install the onnxruntime package to do the prediction with ONNX model.')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"authors": [
{
"name": "savitam"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,9 +0,0 @@
name: auto-ml-classification-with-onnx
dependencies:
- pip:
- azureml-sdk
- azureml-train-automl
- azureml-widgets
- matplotlib
- pandas_ml
- onnxruntime

View File

@@ -1,399 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-with-whitelisting/auto-ml-classification-with-whitelisting.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning\n",
"_**Classification using whitelist models**_\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Data](#Data)\n",
"1. [Train](#Train)\n",
"1. [Results](#Results)\n",
"1. [Test](#Test)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"\n",
"In this example we use the scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for a simple classification problem.\n",
"\n",
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
"This notebooks shows how can automl can be trained on a selected list of models, see the readme.md for the models.\n",
"This trains the model exclusively on tensorflow based models.\n",
"\n",
"In this notebook you will learn how to:\n",
"1. Create an `Experiment` in an existing `Workspace`.\n",
"2. Configure AutoML using `AutoMLConfig`.\n",
"3. Train the model on a whilelisted models using local compute. \n",
"4. Explore the results.\n",
"5. Test the best fitted model."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#Note: This notebook will install tensorflow if not already installed in the enviornment..\n",
"import logging\n",
"\n",
"from matplotlib import pyplot as plt\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"import sys\n",
"whitelist_models=[\"LightGBM\"]\n",
"if \"3.7\" != sys.version[0:3]:\n",
" try:\n",
" import tensorflow as tf1\n",
" except ImportError:\n",
" from pip._internal import main\n",
" main(['install', 'tensorflow>=1.10.0,<=1.12.0'])\n",
" logging.getLogger().setLevel(logging.ERROR)\n",
" whitelist_models=[\"TensorFlowLinearClassifier\", \"TensorFlowDNN\"]\n",
"\n",
"from azureml.train.automl import AutoMLConfig"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# Choose a name for the experiment and specify the project folder.\n",
"experiment_name = 'automl-local-whitelist'\n",
"project_folder = './sample_projects/automl-local-whitelist'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace Name'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"outputDf.T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Data\n",
"\n",
"This uses scikit-learn's [load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) method."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"digits = datasets.load_digits()\n",
"\n",
"# Exclude the first 100 rows from training so that they can be used for test.\n",
"X_train = digits.data[100:,:]\n",
"y_train = digits.target[100:]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train\n",
"\n",
"Instantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**task**|classification or regression|\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>balanced_accuracy</i><br><i>average_precision_score_weighted</i><br><i>precision_score_weighted</i>|\n",
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
"|**n_cross_validations**|Number of cross validation splits.|\n",
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
"|**y**|(sparse) array-like, shape = [n_samples, ], Multi-class targets.|\n",
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|\n",
"|**whitelist_models**|List of models that AutoML should use. The possible values are listed [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#configure-your-experiment-settings).|"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" primary_metric = 'AUC_weighted',\n",
" iteration_timeout_minutes = 60,\n",
" iterations = 10,\n",
" verbosity = logging.INFO,\n",
" X = X_train, \n",
" y = y_train,\n",
" enable_tf=True,\n",
" whitelist_models=whitelist_models,\n",
" path = project_folder)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n",
"In this example, we specify `show_output = True` to print currently running iterations to the console."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = experiment.submit(automl_config, show_output = True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Widget for Monitoring Runs\n",
"\n",
"The widget will first report a \"loading\" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.\n",
"\n",
"**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"RunDetails(local_run).show() "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"#### Retrieve All Child Runs\n",
"You can also use SDK methods to fetch all the child runs and see individual metrics that we log."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"children = list(local_run.get_children())\n",
"metricslist = {}\n",
"for run in children:\n",
" properties = run.get_properties()\n",
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}\n",
" metricslist[int(properties['iteration'])] = metrics\n",
"\n",
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
"rundata"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model\n",
"\n",
"Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = local_run.get_output()\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Best Model Based on Any Other Metric\n",
"Show the run and the model that has the smallest `log_loss` value:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"lookup_metric = \"log_loss\"\n",
"best_run, fitted_model = local_run.get_output(metric = lookup_metric)\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Model from a Specific Iteration\n",
"Show the run and the model from the third iteration:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"iteration = 3\n",
"third_run, third_model = local_run.get_output(iteration = iteration)\n",
"print(third_run)\n",
"print(third_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Test\n",
"\n",
"#### Load Test Data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"digits = datasets.load_digits()\n",
"X_test = digits.data[:10, :]\n",
"y_test = digits.target[:10]\n",
"images = digits.images[:10]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Testing Our Best Fitted Model\n",
"We will try to predict 2 digits and see how our model works."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Randomly select digits and test.\n",
"for index in np.random.choice(len(y_test), 2, replace = False):\n",
" print(index)\n",
" predicted = fitted_model.predict(X_test[index:index + 1])[0]\n",
" label = y_test[index]\n",
" title = \"Label value = %d Predicted value = %d \" % (label, predicted)\n",
" fig = plt.figure(1, figsize = (3,3))\n",
" ax1 = fig.add_axes((0,0,.8,.8))\n",
" ax1.set_title(title)\n",
" plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest')\n",
" plt.show()"
]
}
],
"metadata": {
"authors": [
{
"name": "savitam"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,8 +0,0 @@
name: auto-ml-classification-with-whitelisting
dependencies:
- pip:
- azureml-sdk
- azureml-train-automl
- azureml-widgets
- matplotlib
- pandas_ml

View File

@@ -1,486 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification/auto-ml-classification.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning\n",
"_**Classification with Local Compute**_\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Data](#Data)\n",
"1. [Train](#Train)\n",
"1. [Results](#Results)\n",
"1. [Test](#Test)\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"\n",
"In this example we use the scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for a simple classification problem.\n",
"\n",
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you will learn how to:\n",
"1. Create an `Experiment` in an existing `Workspace`.\n",
"2. Configure AutoML using `AutoMLConfig`.\n",
"3. Train the model using local compute.\n",
"4. Explore the results.\n",
"5. Test the best fitted model."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"\n",
"from matplotlib import pyplot as plt\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Accessing the Azure ML workspace requires authentication with Azure.\n",
"\n",
"The default authentication is interactive authentication using the default tenant. Executing the `ws = Workspace.from_config()` line in the cell below will prompt for authentication the first time that it is run.\n",
"\n",
"If you have multiple Azure tenants, you can specify the tenant by replacing the `ws = Workspace.from_config()` line in the cell below with the following:\n",
"\n",
"```\n",
"from azureml.core.authentication import InteractiveLoginAuthentication\n",
"auth = InteractiveLoginAuthentication(tenant_id = 'mytenantid')\n",
"ws = Workspace.from_config(auth = auth)\n",
"```\n",
"\n",
"If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the `ws = Workspace.from_config()` line in the cell below with the following:\n",
"\n",
"```\n",
"from azureml.core.authentication import ServicePrincipalAuthentication\n",
"auth = auth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')\n",
"ws = Workspace.from_config(auth = auth)\n",
"```\n",
"For more details, see [aka.ms/aml-notebook-auth](http://aka.ms/aml-notebook-auth)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# Choose a name for the experiment and specify the project folder.\n",
"experiment_name = 'automl-classification'\n",
"project_folder = './sample_projects/automl-classification'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace Name'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"outputDf.T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Data\n",
"\n",
"This uses scikit-learn's [load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) method."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"digits = datasets.load_digits()\n",
"\n",
"# Exclude the first 100 rows from training so that they can be used for test.\n",
"X_train = digits.data[100:,:]\n",
"y_train = digits.target[100:]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train\n",
"\n",
"Instantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**task**|classification or regression|\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n",
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
"|**y**|(sparse) array-like, shape = [n_samples, ], Multi-class targets.|\n",
"|**n_cross_validations**|Number of cross validation splits.|\n",
"|\n",
"\n",
"Automated machine learning trains multiple machine learning pipelines. Each pipelines training is known as an iteration.\n",
"* You can specify a maximum number of iterations using the `iterations` parameter.\n",
"* You can specify a maximum time for the run using the `experiment_timeout_minutes` parameter.\n",
"* If you specify neither the `iterations` nor the `experiment_timeout_minutes`, automated ML keeps running iterations while it continues to see improvements in the scores.\n",
"\n",
"The following example doesn't specify `iterations` or `experiment_timeout_minutes` and so runs until the scores stop improving.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_config = AutoMLConfig(task = 'classification',\n",
" primary_metric = 'AUC_weighted',\n",
" X = X_train, \n",
" y = y_train,\n",
" n_cross_validations = 3)"
]
},
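{
"cell_type": "markdown",
"metadata": {},
"source": [
"For comparison, a bounded run would pass the limits described above explicitly (a sketch; the values are placeholders, not recommendations):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sketch: the same configuration with explicit limits.\n",
"# automl_config = AutoMLConfig(task = 'classification',\n",
"#                              primary_metric = 'AUC_weighted',\n",
"#                              X = X_train,\n",
"#                              y = y_train,\n",
"#                              n_cross_validations = 3,\n",
"#                              iterations = 10,\n",
"#                              experiment_timeout_minutes = 15)"
]
},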
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n",
"In this example, we specify `show_output = True` to print currently running iterations to the console."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = experiment.submit(automl_config, show_output = True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, you can continue an interrupted local run by calling `continue_experiment` without the `iterations` parameter, or run more iterations for a completed run by specifying the `iterations` parameter:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = local_run.continue_experiment(X = X_train, \n",
" y = y_train, \n",
" show_output = True,\n",
" iterations = 5)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Widget for Monitoring Runs\n",
"\n",
"The widget will first report a \"loading\" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.\n",
"\n",
"**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"widget-rundetails-sample"
]
},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"RunDetails(local_run).show() "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"#### Retrieve All Child Runs\n",
"You can also use SDK methods to fetch all the child runs and see individual metrics that we log."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"children = list(local_run.get_children())\n",
"metricslist = {}\n",
"for run in children:\n",
" properties = run.get_properties()\n",
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}\n",
" metricslist[int(properties['iteration'])] = metrics\n",
"\n",
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
"rundata"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model\n",
"\n",
"Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = local_run.get_output()\n",
"print(best_run)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Print the properties of the model\n",
"The fitted_model is a python object and you can read the different properties of the object.\n",
"The following shows printing hyperparameters for each step in the pipeline."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from pprint import pprint\n",
"\n",
"def print_model(model, prefix=\"\"):\n",
" for step in model.steps:\n",
" print(prefix + step[0])\n",
" if hasattr(step[1], 'estimators') and hasattr(step[1], 'weights'):\n",
" pprint({'estimators': list(e[0] for e in step[1].estimators), 'weights': step[1].weights})\n",
" print()\n",
" for estimator in step[1].estimators:\n",
" print_model(estimator[1], estimator[0]+ ' - ')\n",
" elif hasattr(step[1], '_base_learners') and hasattr(step[1], '_meta_learner'):\n",
" print(\"\\nMeta Learner\")\n",
" pprint(step[1]._meta_learner)\n",
" print()\n",
" for estimator in step[1]._base_learners:\n",
" print_model(estimator[1], estimator[0]+ ' - ')\n",
" else:\n",
" pprint(step[1].get_params())\n",
" print()\n",
" \n",
"print_model(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Best Model Based on Any Other Metric\n",
"Show the run and the model that has the smallest `log_loss` value:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"lookup_metric = \"log_loss\"\n",
"best_run, fitted_model = local_run.get_output(metric = lookup_metric)\n",
"print(best_run)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print_model(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Model from a Specific Iteration\n",
"Show the run and the model from the third iteration:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"iteration = 3\n",
"third_run, third_model = local_run.get_output(iteration = iteration)\n",
"print(third_run)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print_model(third_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Test \n",
"\n",
"#### Load Test Data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"digits = datasets.load_digits()\n",
"X_test = digits.data[:10, :]\n",
"y_test = digits.target[:10]\n",
"images = digits.images[:10]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Testing Our Best Fitted Model\n",
"We will try to predict 2 digits and see how our model works."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Randomly select digits and test.\n",
"for index in np.random.choice(len(y_test), 2, replace = False):\n",
" print(index)\n",
" predicted = fitted_model.predict(X_test[index:index + 1])[0]\n",
" label = y_test[index]\n",
" title = \"Label value = %d Predicted value = %d \" % (label, predicted)\n",
" fig = plt.figure(1, figsize = (3,3))\n",
" ax1 = fig.add_axes((0,0,.8,.8))\n",
" ax1.set_title(title)\n",
" plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest')\n",
" plt.show()"
]
}
],
"metadata": {
"authors": [
{
"name": "savitam"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,8 +0,0 @@
name: auto-ml-classification
dependencies:
- pip:
- azureml-sdk
- azureml-train-automl
- azureml-widgets
- matplotlib
- pandas_ml

View File

@@ -0,0 +1,602 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/continuous-retraining/auto-ml-continuous-retraining.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning \n",
"**Continuous retraining using Pipelines and Time-Series TabularDataset**\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"2. [Setup](#Setup)\n",
"3. [Compute](#Compute)\n",
"4. [Run Configuration](#Run-Configuration)\n",
"5. [Data Ingestion Pipeline](#Data-Ingestion-Pipeline)\n",
"6. [Training Pipeline](#Training-Pipeline)\n",
"7. [Publish Retraining Pipeline and Schedule](#Publish-Retraining-Pipeline-and-Schedule)\n",
"8. [Test Retraining](#Test-Retraining)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"In this example we use AutoML and Pipelines to enable contious retraining of a model based on updates to the training dataset. We will create two pipelines, the first one to demonstrate a training dataset that gets updated over time. We leverage time-series capabilities of `TabularDataset` to achieve this. The second pipeline utilizes pipeline `Schedule` to trigger continuous retraining. \n",
"Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.\n",
"In this notebook you will learn how to:\n",
"* Create an Experiment in an existing Workspace.\n",
"* Configure AutoML using AutoMLConfig.\n",
"* Create data ingestion pipeline to update a time-series based TabularDataset\n",
"* Create training pipeline to prepare data, run AutoML, register the model and setup pipeline triggers.\n",
"\n",
"## Setup\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"\n",
"from matplotlib import pyplot as plt\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This sample notebook may use features that are not available in previous versions of the Azure ML SDK."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Accessing the Azure ML workspace requires authentication with Azure.\n",
"\n",
"The default authentication is interactive authentication using the default tenant. Executing the ws = Workspace.from_config() line in the cell below will prompt for authentication the first time that it is run.\n",
"\n",
"If you have multiple Azure tenants, you can specify the tenant by replacing the ws = Workspace.from_config() line in the cell below with the following:\n",
"```\n",
"from azureml.core.authentication import InteractiveLoginAuthentication\n",
"auth = InteractiveLoginAuthentication(tenant_id = 'mytenantid')\n",
"ws = Workspace.from_config(auth = auth)\n",
"```\n",
"If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the ws = Workspace.from_config() line in the cell below with the following:\n",
"```\n",
"from azureml.core.authentication import ServicePrincipalAuthentication\n",
"auth = auth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')\n",
"ws = Workspace.from_config(auth = auth)\n",
"```\n",
"For more details, see aka.ms/aml-notebook-auth"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"dstor = ws.get_default_datastore()\n",
"\n",
"# Choose a name for the run history container in the workspace.\n",
"experiment_name = \"retrain-noaaweather\"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output[\"Subscription ID\"] = ws.subscription_id\n",
"output[\"Workspace\"] = ws.name\n",
"output[\"Resource Group\"] = ws.resource_group\n",
"output[\"Location\"] = ws.location\n",
"output[\"Run History Name\"] = experiment_name\n",
"output[\"SDK Version\"] = azureml.core.VERSION\n",
"pd.set_option(\"display.max_colwidth\", None)\n",
"outputDf = pd.DataFrame(data=output, index=[\"\"])\n",
"outputDf.T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Compute \n",
"\n",
"#### Create or Attach existing AmlCompute\n",
"\n",
"You will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.\n",
"\n",
"> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.\n",
"\n",
"#### Creation of AmlCompute takes approximately 5 minutes. \n",
"If the AmlCompute with that name is already in your workspace this code will skip the creation process.\n",
"As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# Choose a name for your CPU cluster\n",
"amlcompute_cluster_name = \"cont-cluster\"\n",
"\n",
"# Verify that cluster does not exist already\n",
"try:\n",
" compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)\n",
" print(\"Found existing cluster, use it.\")\n",
"except ComputeTargetException:\n",
" compute_config = AmlCompute.provisioning_configuration(\n",
" vm_size=\"STANDARD_DS12_V2\", max_nodes=4\n",
" )\n",
" compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)\n",
"compute_target.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Run Configuration"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.runconfig import CondaDependencies, RunConfiguration\n",
"\n",
"# create a new RunConfig object\n",
"conda_run_config = RunConfiguration(framework=\"python\")\n",
"\n",
"# Set compute target to AmlCompute\n",
"conda_run_config.target = compute_target\n",
"\n",
"conda_run_config.environment.docker.enabled = True\n",
"\n",
"cd = CondaDependencies.create(\n",
" pip_packages=[\n",
" \"azureml-sdk[automl]\",\n",
" \"applicationinsights\",\n",
" \"azureml-opendatasets\",\n",
" \"azureml-defaults\",\n",
" ],\n",
" conda_packages=[\"numpy==1.19.5\"],\n",
" pin_sdk_version=False,\n",
")\n",
"conda_run_config.environment.python.conda_dependencies = cd\n",
"\n",
"print(\"run config is ready\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Data Ingestion Pipeline \n",
"For this demo, we will use NOAA weather data from [Azure Open Datasets](https://azure.microsoft.com/services/open-datasets/). You can replace this with your own dataset, or you can skip this pipeline if you already have a time-series based `TabularDataset`.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# The name and target column of the Dataset to create\n",
"dataset = \"NOAA-Weather-DS4\"\n",
"target_column_name = \"temperature\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"### Upload Data Step\n",
"The data ingestion pipeline has a single step with a script to query the latest weather data and upload it to the blob store. During the first run, the script will create and register a time-series based `TabularDataset` with the past one week of weather data. For each subsequent run, the script will create a partition in the blob store by querying NOAA for new weather data since the last modified time of the dataset (`dataset.data_changed_time`) and creating a data.csv file."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.pipeline.core import Pipeline, PipelineParameter\n",
"from azureml.pipeline.steps import PythonScriptStep\n",
"\n",
"ds_name = PipelineParameter(name=\"ds_name\", default_value=dataset)\n",
"upload_data_step = PythonScriptStep(\n",
" script_name=\"upload_weather_data.py\",\n",
" allow_reuse=False,\n",
" name=\"upload_weather_data\",\n",
" arguments=[\"--ds_name\", ds_name],\n",
" compute_target=compute_target,\n",
" runconfig=conda_run_config,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Submit Pipeline Run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data_pipeline = Pipeline(\n",
" description=\"pipeline_with_uploaddata\", workspace=ws, steps=[upload_data_step]\n",
")\n",
"data_pipeline_run = experiment.submit(\n",
" data_pipeline, pipeline_parameters={\"ds_name\": dataset}\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data_pipeline_run.wait_for_completion(show_output=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Training Pipeline\n",
"### Prepare Training Data Step\n",
"\n",
"Script to check if new data is available since the model was last trained. If no new data is available, we cancel the remaining pipeline steps. We need to set allow_reuse flag to False to allow the pipeline to run even when inputs don't change. We also need the name of the model to check the time the model was last trained."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.pipeline.core import PipelineData\n",
"\n",
"# The model name with which to register the trained model in the workspace.\n",
"model_name = PipelineParameter(\"model_name\", default_value=\"noaaweatherds\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data_prep_step = PythonScriptStep(\n",
" script_name=\"check_data.py\",\n",
" allow_reuse=False,\n",
" name=\"check_data\",\n",
" arguments=[\"--ds_name\", ds_name, \"--model_name\", model_name],\n",
" compute_target=compute_target,\n",
" runconfig=conda_run_config,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Dataset\n",
"\n",
"train_ds = Dataset.get_by_name(ws, dataset)\n",
"train_ds = train_ds.drop_columns([\"partition_date\"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### AutoMLStep\n",
"Create an AutoMLConfig and a training step."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.pipeline.steps import AutoMLStep\n",
"\n",
"automl_settings = {\n",
" \"iteration_timeout_minutes\": 10,\n",
" \"experiment_timeout_hours\": 0.25,\n",
" \"n_cross_validations\": 3,\n",
" \"primary_metric\": \"r2_score\",\n",
" \"max_concurrent_iterations\": 3,\n",
" \"max_cores_per_iteration\": -1,\n",
" \"verbosity\": logging.INFO,\n",
" \"enable_early_stopping\": True,\n",
"}\n",
"\n",
"automl_config = AutoMLConfig(\n",
" task=\"regression\",\n",
" debug_log=\"automl_errors.log\",\n",
" path=\".\",\n",
" compute_target=compute_target,\n",
" training_data=train_ds,\n",
" label_column_name=target_column_name,\n",
" **automl_settings,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.pipeline.core import PipelineData, TrainingOutput\n",
"\n",
"metrics_output_name = \"metrics_output\"\n",
"best_model_output_name = \"best_model_output\"\n",
"\n",
"metrics_data = PipelineData(\n",
" name=\"metrics_data\",\n",
" datastore=dstor,\n",
" pipeline_output_name=metrics_output_name,\n",
" training_output=TrainingOutput(type=\"Metrics\"),\n",
")\n",
"model_data = PipelineData(\n",
" name=\"model_data\",\n",
" datastore=dstor,\n",
" pipeline_output_name=best_model_output_name,\n",
" training_output=TrainingOutput(type=\"Model\"),\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_step = AutoMLStep(\n",
" name=\"automl_module\",\n",
" automl_config=automl_config,\n",
" outputs=[metrics_data, model_data],\n",
" allow_reuse=False,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Register Model Step\n",
"Script to register the model to the workspace. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"register_model_step = PythonScriptStep(\n",
" script_name=\"register_model.py\",\n",
" name=\"register_model\",\n",
" allow_reuse=False,\n",
" arguments=[\n",
" \"--model_name\",\n",
" model_name,\n",
" \"--model_path\",\n",
" model_data,\n",
" \"--ds_name\",\n",
" ds_name,\n",
" ],\n",
" inputs=[model_data],\n",
" compute_target=compute_target,\n",
" runconfig=conda_run_config,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Submit Pipeline Run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"training_pipeline = Pipeline(\n",
" description=\"training_pipeline\",\n",
" workspace=ws,\n",
" steps=[data_prep_step, automl_step, register_model_step],\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"training_pipeline_run = experiment.submit(\n",
" training_pipeline,\n",
" pipeline_parameters={\"ds_name\": dataset, \"model_name\": \"noaaweatherds\"},\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"training_pipeline_run.wait_for_completion(show_output=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Publish Retraining Pipeline and Schedule\n",
"Once we are happy with the pipeline, we can publish the training pipeline to the workspace and create a schedule to trigger on blob change. The schedule polls the blob store where the data is being uploaded and runs the retraining pipeline if there is a data change. A new version of the model will be registered to the workspace once the run is complete."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pipeline_name = \"Retraining-Pipeline-NOAAWeather\"\n",
"\n",
"published_pipeline = training_pipeline.publish(\n",
" name=pipeline_name, description=\"Pipeline that retrains AutoML model\"\n",
")\n",
"\n",
"published_pipeline"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.pipeline.core import Schedule\n",
"\n",
"schedule = Schedule.create(\n",
" workspace=ws,\n",
" name=\"RetrainingSchedule\",\n",
" pipeline_parameters={\"ds_name\": dataset, \"model_name\": \"noaaweatherds\"},\n",
" pipeline_id=published_pipeline.id,\n",
" experiment_name=experiment_name,\n",
" datastore=dstor,\n",
" wait_for_provisioning=True,\n",
" polling_interval=1440,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Test Retraining\n",
"Here we setup the data ingestion pipeline to run on a schedule, to verify that the retraining pipeline runs as expected. \n",
"\n",
"Note: \n",
"* Azure NOAA Weather data is updated daily and retraining will not trigger if there is no new data available. \n",
"* Depending on the polling interval set in the schedule, the retraining may take some time trigger after data ingestion pipeline completes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pipeline_name = \"DataIngestion-Pipeline-NOAAWeather\"\n",
"\n",
"published_pipeline = training_pipeline.publish(\n",
" name=pipeline_name, description=\"Pipeline that updates NOAAWeather Dataset\"\n",
")\n",
"\n",
"published_pipeline"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.pipeline.core import Schedule\n",
"\n",
"schedule = Schedule.create(\n",
" workspace=ws,\n",
" name=\"RetrainingSchedule-DataIngestion\",\n",
" pipeline_parameters={\"ds_name\": dataset},\n",
" pipeline_id=published_pipeline.id,\n",
" experiment_name=experiment_name,\n",
" datastore=dstor,\n",
" wait_for_provisioning=True,\n",
" polling_interval=1440,\n",
")"
]
}
],
"metadata": {
"authors": [
{
"name": "vivijay"
}
],
"kernelspec": {
"display_name": "Python 3.8 - AzureML",
"language": "python",
"name": "python38-azureml"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -0,0 +1,49 @@
import argparse
import os
import azureml.core
from datetime import datetime
import pandas as pd
import pytz
from azureml.core import Dataset, Model
from azureml.core.run import Run, _OfflineRun
from azureml.core import Workspace
run = Run.get_context()
ws = None
if type(run) == _OfflineRun:
ws = Workspace.from_config()
else:
ws = run.experiment.workspace
print("Check for new data.")
parser = argparse.ArgumentParser("split")
parser.add_argument("--ds_name", help="input dataset name")
parser.add_argument("--model_name", help="name of the deployed model")
args = parser.parse_args()
print("Argument 1(ds_name): %s" % args.ds_name)
print("Argument 2(model_name): %s" % args.model_name)
# Get the latest registered model
try:
model = Model(ws, args.model_name)
last_train_time = model.created_time
print("Model was last trained on {0}.".format(last_train_time))
except Exception:
print("Could not get last model train time.")
last_train_time = datetime.min.replace(tzinfo=pytz.UTC)
train_ds = Dataset.get_by_name(ws, args.ds_name)
dataset_changed_time = train_ds.data_changed_time.replace(tzinfo=pytz.UTC)
print("dataset_changed_time=" + str(dataset_changed_time))
print("last_train_time=" + str(last_train_time))
if not dataset_changed_time > last_train_time:
print("Cancelling run since there is no new data.")
run.parent.cancel()
else:
# New data is available since the model was last trained
print("Dataset was last updated on {0}. Retraining...".format(dataset_changed_time))

View File

@@ -0,0 +1,35 @@
from azureml.core.model import Model, Dataset
from azureml.core.run import Run, _OfflineRun
from azureml.core import Workspace
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--model_name")
parser.add_argument("--model_path")
parser.add_argument("--ds_name")
args = parser.parse_args()
print("Argument 1(model_name): %s" % args.model_name)
print("Argument 2(model_path): %s" % args.model_path)
print("Argument 3(ds_name): %s" % args.ds_name)
run = Run.get_context()
ws = None
if type(run) == _OfflineRun:
ws = Workspace.from_config()
else:
ws = run.experiment.workspace
train_ds = Dataset.get_by_name(ws, args.ds_name)
datasets = [(Dataset.Scenario.TRAINING, train_ds)]
# Register model with training dataset
model = Model.register(
workspace=ws,
model_path=args.model_path,
model_name=args.model_name,
datasets=datasets,
)
print("Registered version {0} of model {1}".format(model.version, model.name))

View File

@@ -0,0 +1,161 @@
import argparse
import os
from datetime import datetime
from dateutil.relativedelta import relativedelta
import pandas as pd
import traceback
from azureml.core import Dataset
from azureml.core.run import Run, _OfflineRun
from azureml.core import Workspace
from azureml.opendatasets import NoaaIsdWeather
run = Run.get_context()
ws = None
if type(run) == _OfflineRun:
ws = Workspace.from_config()
else:
ws = run.experiment.workspace
usaf_list = [
"725724",
"722149",
"723090",
"722159",
"723910",
"720279",
"725513",
"725254",
"726430",
"720381",
"723074",
"726682",
"725486",
"727883",
"723177",
"722075",
"723086",
"724053",
"725070",
"722073",
"726060",
"725224",
"725260",
"724520",
"720305",
"724020",
"726510",
"725126",
"722523",
"703333",
"722249",
"722728",
"725483",
"722972",
"724975",
"742079",
"727468",
"722193",
"725624",
"722030",
"726380",
"720309",
"722071",
"720326",
"725415",
"724504",
"725665",
"725424",
"725066",
]
def get_noaa_data(start_time, end_time):
columns = [
"usaf",
"wban",
"datetime",
"latitude",
"longitude",
"elevation",
"windAngle",
"windSpeed",
"temperature",
"stationName",
"p_k",
]
isd = NoaaIsdWeather(start_time, end_time, cols=columns)
noaa_df = isd.to_pandas_dataframe()
df_filtered = noaa_df[noaa_df["usaf"].isin(usaf_list)]
    df_filtered = df_filtered.reset_index(drop=True)
print(
"Received {0} rows of training data between {1} and {2}".format(
df_filtered.shape[0], start_time, end_time
)
)
return df_filtered
print("Check for new data and prepare the data")
parser = argparse.ArgumentParser("split")
parser.add_argument("--ds_name", help="name of the Dataset to update")
args = parser.parse_args()
print("Argument 1(ds_name): %s" % args.ds_name)
dstor = ws.get_default_datastore()
register_dataset = False
end_time = datetime.utcnow()
try:
ds = Dataset.get_by_name(ws, args.ds_name)
end_time_last_slice = ds.data_changed_time.replace(tzinfo=None)
print("Dataset {0} last updated on {1}".format(args.ds_name, end_time_last_slice))
except Exception:
print(traceback.format_exc())
print(
"Dataset with name {0} not found, registering new dataset.".format(args.ds_name)
)
register_dataset = True
end_time = datetime(2021, 5, 1, 0, 0)
end_time_last_slice = end_time - relativedelta(weeks=2)
try:
train_df = get_noaa_data(end_time_last_slice, end_time)
except Exception as ex:
print("get_noaa_data failed:", ex)
train_df = None
if train_df is not None and train_df.size > 0:
print(
"Received {0} rows of new data after {1}.".format(
train_df.shape[0], end_time_last_slice
)
)
folder_name = "{}/{:04d}/{:02d}/{:02d}/{:02d}/{:02d}/{:02d}".format(
args.ds_name,
end_time.year,
end_time.month,
end_time.day,
end_time.hour,
end_time.minute,
end_time.second,
)
file_path = "{0}/data.csv".format(folder_name)
# Add a new partition to the registered dataset
os.makedirs(folder_name, exist_ok=True)
train_df.to_csv(file_path, index=False)
dstor.upload_files(
files=[file_path], target_path=folder_name, overwrite=True, show_progress=True
)
else:
print("No new data since {0}.".format(end_time_last_slice))
if register_dataset:
ds = Dataset.Tabular.from_delimited_files(
dstor.path("{}/**/*.csv".format(args.ds_name)),
partition_format="/{partition_date:yyyy/MM/dd/HH/mm/ss}/data.csv",
)
ds.register(ws, name=args.ds_name)

View File

@@ -1,509 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/dataprep-remote-execution/auto-ml-dataprep-remote-execution.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning\n",
"_**Load Data using `TabularDataset` for Remote Execution (AmlCompute)**_\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Data](#Data)\n",
"1. [Train](#Train)\n",
"1. [Results](#Results)\n",
"1. [Test](#Test)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"In this example we showcase how you can use AzureML Dataset to load data for AutoML.\n",
"\n",
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you will learn how to:\n",
"1. Create a `TabularDataset` pointing to the training data.\n",
"2. Pass the `TabularDataset` to AutoML for a remote run."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"\n",
"import pandas as pd\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.core.dataset import Dataset\n",
"from azureml.train.automl import AutoMLConfig"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# choose a name for experiment\n",
"experiment_name = 'automl-dataset-remote-bai'\n",
"# project folder\n",
"project_folder = './sample_projects/automl-dataprep-remote-bai'\n",
" \n",
"experiment = Experiment(ws, experiment_name)\n",
" \n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace Name'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"outputDf.T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# The data referenced here was a 1MB simple random sample of the Chicago Crime data into a local temporary directory.\n",
"example_data = 'https://dprepdata.blob.core.windows.net/demo/crime0-random.csv'\n",
"dataset = Dataset.Tabular.from_delimited_files(example_data)\n",
"dataset.take(5).to_pandas_dataframe()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Review the data\n",
"\n",
"You can peek the result of a `TabularDataset` at any range using `skip(i)` and `take(j).to_pandas_dataframe()`. Doing so evaluates only `j` records, which makes it fast even against large datasets.\n",
"\n",
"`TabularDataset` objects are immutable and are composed of a list of subsetting transformations (optional)."
]
},
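{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, the following cell (the offsets are arbitrary) previews a small slice without evaluating the full dataset:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Lazily skip the first 100 records, then materialize only the next 5.\n",
"dataset.skip(100).take(5).to_pandas_dataframe()"
]
},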
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"X = dataset.drop_columns(columns=['Primary Type', 'FBI Code'])\n",
"y = dataset.keep_columns(columns=['Primary Type'], validate=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train\n",
"\n",
"This creates a general AutoML settings object applicable for both local and remote runs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_settings = {\n",
" \"iteration_timeout_minutes\" : 10,\n",
" \"iterations\" : 2,\n",
" \"primary_metric\" : 'AUC_weighted',\n",
" \"preprocess\" : True,\n",
" \"verbosity\" : logging.INFO\n",
"}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create or Attach an AmlCompute cluster"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import AmlCompute\n",
"from azureml.core.compute import ComputeTarget\n",
"\n",
"# Choose a name for your cluster.\n",
"amlcompute_cluster_name = \"automlc2\"\n",
"\n",
"found = False\n",
"\n",
"# Check if this compute target already exists in the workspace.\n",
"\n",
"cts = ws.compute_targets\n",
"if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'AmlCompute':\n",
" found = True\n",
" print('Found existing compute target.')\n",
" compute_target = cts[amlcompute_cluster_name]\n",
"\n",
"if not found:\n",
" print('Creating a new compute target...')\n",
" provisioning_config = AmlCompute.provisioning_configuration(vm_size = \"STANDARD_D2_V2\", # for GPU, use \"STANDARD_NC6\"\n",
" #vm_priority = 'lowpriority', # optional\n",
" max_nodes = 6)\n",
"\n",
" # Create the cluster.\\n\",\n",
" compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config)\n",
"\n",
"print('Checking cluster status...')\n",
"# Can poll for a minimum number of nodes and for a specific timeout.\n",
"# If no min_node_count is provided, it will use the scale settings for the cluster.\n",
"compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20)\n",
"\n",
"# For a more detailed view of current AmlCompute status, use get_status()."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"import pkg_resources\n",
"\n",
"# create a new RunConfig object\n",
"conda_run_config = RunConfiguration(framework=\"python\")\n",
"\n",
"# Set compute target to AmlCompute\n",
"conda_run_config.target = compute_target\n",
"conda_run_config.environment.docker.enabled = True\n",
"\n",
"cd = CondaDependencies.create(conda_packages=['numpy','py-xgboost<=0.80'])\n",
"conda_run_config.environment.python.conda_dependencies = cd"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Pass Data with `TabularDataset` Objects\n",
"\n",
"The `TabularDataset` objects captured above can also be passed to the `submit` method for a remote run. AutoML will serialize the `TabularDataset` object and send it to the remote compute target. The `TabularDataset` will not be evaluated locally."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" path = project_folder,\n",
" run_configuration=conda_run_config,\n",
" X = X,\n",
" y = y,\n",
" **automl_settings)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"remote_run = experiment.submit(automl_config, show_output = True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"remote_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Pre-process cache cleanup\n",
"The preprocess data gets cache at user default file store. When the run is completed the cache can be cleaned by running below cell"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"remote_run.clean_preprocessor_cache()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Cancelling Runs\n",
"You can cancel ongoing remote runs using the `cancel` and `cancel_iteration` functions."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Cancel the ongoing experiment and stop scheduling new iterations.\n",
"# remote_run.cancel()\n",
"\n",
"# Cancel iteration 1 and move onto iteration 2.\n",
"# remote_run.cancel_iteration(1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Widget for Monitoring Runs\n",
"\n",
"The widget will first report a \"loading\" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.\n",
"\n",
"**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"RunDetails(remote_run).show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Retrieve All Child Runs\n",
"You can also use SDK methods to fetch all the child runs and see individual metrics that we log."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"children = list(remote_run.get_children())\n",
"metricslist = {}\n",
"for run in children:\n",
" properties = run.get_properties()\n",
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}\n",
" metricslist[int(properties['iteration'])] = metrics\n",
" \n",
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
"rundata"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model\n",
"\n",
"Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = remote_run.get_output()\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Best Model Based on Any Other Metric\n",
"Show the run and the model that has the smallest `log_loss` value:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"lookup_metric = \"log_loss\"\n",
"best_run, fitted_model = remote_run.get_output(metric = lookup_metric)\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Model from a Specific Iteration\n",
"Show the run and the model from the first iteration:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"iteration = 0\n",
"best_run, fitted_model = remote_run.get_output(iteration = iteration)\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Test\n",
"\n",
"#### Load Test Data\n",
"For the test data, it should have the same preparation step as the train data. Otherwise it might get failed at the preprocessing step."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"dataset_test = Dataset.Tabular.from_delimited_files(path='https://dprepdata.blob.core.windows.net/demo/crime0-test.csv')\n",
"\n",
"df_test = dataset_test.to_pandas_dataframe()\n",
"df_test = df_test[pd.notnull(df_test['Primary Type'])]\n",
"\n",
"y_test = df_test[['Primary Type']]\n",
"X_test = df_test.drop(['Primary Type', 'FBI Code'], axis=1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Testing Our Best Fitted Model\n",
"We will use confusion matrix to see how our model works."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from pandas_ml import ConfusionMatrix\n",
"\n",
"ypred = fitted_model.predict(X_test)\n",
"\n",
"cm = ConfusionMatrix(y_test['Primary Type'], ypred)\n",
"\n",
"print(cm)\n",
"\n",
"cm.plot()"
]
}
],
"metadata": {
"authors": [
{
"name": "savitam"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,10 +0,0 @@
name: auto-ml-dataset-remote-execution
dependencies:
- pip:
- azureml-sdk
- azureml-defaults
- azureml-explain-model
- azureml-train-automl
- azureml-widgets
- matplotlib
- pandas_ml

View File

@@ -1,402 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/dataprep/auto-ml-dataprep.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning\n",
"_**Load Data using `TabularDataset` for Local Execution**_\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Data](#Data)\n",
"1. [Train](#Train)\n",
"1. [Results](#Results)\n",
"1. [Test](#Test)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"In this example we showcase how you can use AzureML Dataset to load data for AutoML.\n",
"\n",
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you will learn how to:\n",
"1. Create a `TabularDataset` pointing to the training data.\n",
"2. Pass the `TabularDataset` to AutoML for a local run."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"\n",
"import pandas as pd\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.core.dataset import Dataset\n",
"from azureml.train.automl import AutoMLConfig"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
" \n",
"# choose a name for experiment\n",
"experiment_name = 'automl-dataset-local'\n",
"# project folder\n",
"project_folder = './sample_projects/automl-dataset-local'\n",
" \n",
"experiment = Experiment(ws, experiment_name)\n",
" \n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace Name'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"outputDf.T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# The data referenced here was a 1MB simple random sample of the Chicago Crime data into a local temporary directory.\n",
"example_data = 'https://dprepdata.blob.core.windows.net/demo/crime0-random.csv'\n",
"dataset = Dataset.Tabular.from_delimited_files(example_data)\n",
"dataset.take(5).to_pandas_dataframe()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Review the data\n",
"\n",
"You can peek the result of a `TabularDataset` at any range using `skip(i)` and `take(j).to_pandas_dataframe()`. Doing so evaluates only `j` records, which makes it fast even against large datasets.\n",
"\n",
"`TabularDataset` objects are immutable and are composed of a list of subsetting transformations (optional)."
]
},
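{
"cell_type": "markdown",
"metadata": {},
"source": [
"The cell below sketches this with arbitrary offsets:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Only 5 records are evaluated, regardless of the dataset's size.\n",
"dataset.skip(100).take(5).to_pandas_dataframe()"
]
},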
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"X = dataset.drop_columns(columns=['Primary Type', 'FBI Code'])\n",
"y = dataset.keep_columns(columns=['Primary Type'], validate=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train\n",
"\n",
"This creates a general AutoML settings object applicable for both local and remote runs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_settings = {\n",
" \"iteration_timeout_minutes\" : 10,\n",
" \"iterations\" : 2,\n",
" \"primary_metric\" : 'AUC_weighted',\n",
" \"preprocess\" : True,\n",
" \"verbosity\" : logging.INFO\n",
"}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Pass Data with `TabularDataset` Objects\n",
"\n",
"The `TabularDataset` objects captured above can be passed to the `submit` method for a local run. AutoML will retrieve the results from the `TabularDataset` for model training."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" X = X,\n",
" y = y,\n",
" **automl_settings)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = experiment.submit(automl_config, show_output = True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Widget for Monitoring Runs\n",
"\n",
"The widget will first report a \"loading\" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.\n",
"\n",
"**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"RunDetails(local_run).show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Retrieve All Child Runs\n",
"You can also use SDK methods to fetch all the child runs and see individual metrics that we log."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"children = list(local_run.get_children())\n",
"metricslist = {}\n",
"for run in children:\n",
" properties = run.get_properties()\n",
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}\n",
" metricslist[int(properties['iteration'])] = metrics\n",
" \n",
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
"rundata"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model\n",
"\n",
"Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = local_run.get_output()\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Best Model Based on Any Other Metric\n",
"Show the run and the model that has the smallest `log_loss` value:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"lookup_metric = \"log_loss\"\n",
"best_run, fitted_model = local_run.get_output(metric = lookup_metric)\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Model from a Specific Iteration\n",
"Show the run and the model from the first iteration:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"iteration = 0\n",
"best_run, fitted_model = local_run.get_output(iteration = iteration)\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Test\n",
"\n",
"#### Load Test Data\n",
"For the test data, it should have the same preparation step as the train data. Otherwise it might get failed at the preprocessing step."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"dataset_test = Dataset.Tabular.from_delimited_files(path='https://dprepdata.blob.core.windows.net/demo/crime0-test.csv')\n",
"\n",
"df_test = dataset_test.to_pandas_dataframe()\n",
"df_test = df_test[pd.notnull(df_test['Primary Type'])]\n",
"\n",
"y_test = df_test[['Primary Type']]\n",
"X_test = df_test.drop(['Primary Type', 'FBI Code'], axis=1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Testing Our Best Fitted Model\n",
"We will use confusion matrix to see how our model works."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from pandas_ml import ConfusionMatrix\n",
"\n",
"ypred = fitted_model.predict(X_test)\n",
"\n",
"cm = ConfusionMatrix(y_test['Primary Type'], ypred)\n",
"\n",
"print(cm)\n",
"\n",
"cm.plot()"
]
}
],
"metadata": {
"authors": [
{
"name": "savitam"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,8 +0,0 @@
name: auto-ml-dataset
dependencies:
- pip:
- azureml-sdk
- azureml-train-automl
- azureml-widgets
- matplotlib
- pandas_ml

View File

@@ -0,0 +1,92 @@
# Experimental Notebooks for Automated ML
Notebooks listed in this folder leverage experimental features. Namespaces or function signatures may change in future SDK releases. The notebooks published here will reflect the latest supported APIs. All of these notebooks can run on a client-only installation of the Automated ML SDK.
The client-only installation doesn't contain any of the machine learning libraries, such as scikit-learn, xgboost, or tensorflow, making it much faster to install and less likely to conflict with packages in an existing environment. However, since the ML libraries are not available locally, models cannot be downloaded and loaded directly in the client. To replace the functionality of having models locally, these notebooks also demonstrate the ModelProxy feature, which lets you submit a predict/forecast run to the training environment; see the sketch below.
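A minimal sketch of the ModelProxy flow, assuming `best_run` is a completed AutoML child run and `test_dataset` is a `TabularDataset` (both names are placeholders; return types can vary across SDK versions):
```
from azureml.train.automl.model_proxy import ModelProxy

# Submit the prediction to the remote training environment;
# no local ML libraries are needed on the client.
model_proxy = ModelProxy(best_run)
predictions = model_proxy.predict(test_dataset)
```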
<a name="localconda"></a>
## Setup using a Local Conda environment
To run these notebook on your own notebook server, use these installation instructions.
The instructions below will install everything you need and then start a Jupyter notebook.
If you would like to use a lighter-weight version of the client that does not install all of the machine learning libraries locally, you can leverage the [experimental notebooks.](experimental/README.md)
### 1. Install mini-conda from [here](https://conda.io/miniconda.html), choose 64-bit Python 3.8 or higher.
- **Note**: if you already have conda installed, you can keep using it but it should be version 4.4.10 or later (as shown by: conda -V). If you have a previous version installed, you can update it using the command: conda update conda.
There's no need to install mini-conda specifically.
### 2. Downloading the sample notebooks
- Download the sample notebooks from [GitHub](https://github.com/Azure/MachineLearningNotebooks) as zip and extract the contents to a local directory. The automated ML sample notebooks are in the "automated-machine-learning" folder.
### 3. Setup a new conda environment
The **automl_setup_thin_client** script creates a new conda environment, installs the necessary packages, configures the widget and starts a Jupyter notebook. It takes the conda environment name as an optional parameter. The default conda environment name is azure_automl_experimental. The exact command depends on the operating system. See the specific sections below for Windows, Mac and Linux. It can take about 10 minutes to execute.
Packages installed by the **automl_setup_thin_client** script:
<ul><li>python</li><li>nb_conda</li><li>matplotlib</li><li>numpy</li><li>cython</li><li>urllib3</li><li>pandas</li><li>azureml-sdk</li><li>azureml-widgets</li><li>pandas-ml</li></ul>
For more details refer to the [automl_env_thin_client.yml](./automl_env_thin_client.yml)
## Windows
Start an **Anaconda Prompt** window, cd to the **how-to-use-azureml/automated-machine-learning/experimental** folder where the sample notebooks were extracted and then run:
```
automl_setup_thin_client
```
## Mac
Install "Command line developer tools" if it is not already installed (you can use the command: `xcode-select --install`).
Start a Terminal window, cd to the **how-to-use-azureml/automated-machine-learning/experimental** folder where the sample notebooks were extracted and then run:
```
bash automl_setup_thin_client_mac.sh
```
## Linux
cd to the **how-to-use-azureml/automated-machine-learning/experimental** folder where the sample notebooks were extracted and then run:
```
bash automl_setup_thin_client_linux.sh
```
### 4. Running configuration.ipynb
- Before running any samples, you first need to run the configuration notebook. Open the [configuration](../../configuration.ipynb) notebook.
- Execute the cells in the notebook to register the Machine Learning Services resource provider and create a workspace. (*instructions in notebook*)
### 5. Running Samples
- Please make sure you use the Python [conda env:azure_automl_experimental] kernel when trying the sample Notebooks.
- Follow the instructions in the individual notebooks to explore various features in automated ML.
### 6. Starting jupyter notebook manually
To start your Jupyter notebook manually, use:
```
conda activate azure_automl_experimental
jupyter notebook
```
or on Mac or Linux:
```
source activate azure_automl_experimental
jupyter notebook
```
<a name="samples"></a>
# Automated ML SDK Sample Notebooks
- [auto-ml-regression-model-proxy.ipynb](regression-model-proxy/auto-ml-regression-model-proxy.ipynb)
- Dataset: Hardware Performance Dataset
- Simple example of using automated ML for regression
- Uses azure compute for training
- Uses ModelProxy for submitting prediction to training environment on azure compute
<a name="documentation"></a>
See [Configure automated machine learning experiments](https://docs.microsoft.com/azure/machine-learning/service/how-to-configure-auto-train) to learn more about the settings and features available for automated machine learning experiments.
<a name="pythoncommand"></a>
# Running using python command
Jupyter notebook provides a File / Download as / Python (.py) option for saving the notebook as a Python file.
You can then run this file using the python command.
However, on Windows the file needs to be modified before it can be run.
The following condition must be added to the main code in the file:
if __name__ == "__main__":
The main code of the file must be indented so that it is under this condition.
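For example, the exported file's structure becomes (a sketch; the body is the code exported from your notebook):
```
if __name__ == "__main__":
    # Indented main code from the exported .py file goes here.
    ...
```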

View File

@@ -0,0 +1,346 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/custom-model-training-from-autofeaturization-run.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning - Codegen for AutoFeaturization \n",
"_**Autofeaturization of credit card fraudulent transactions dataset on remote compute and codegen functionality**_\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Data](#Data)\n",
"1. [Autofeaturization](#Autofeaturization)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id='Introduction'></a>\n",
"## Introduction"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Autofeaturization** lets you run an AutoML experiment to only featurize the datasets. These datasets along with the transformer are stored in AML Storage and linked to the run which can later be retrieved and used to train models. \n",
"\n",
"**To run Autofeaturization, set the number of iterations to zero and featurization as auto.**\n",
"\n",
"Please refer to [Autofeaturization and custom model training](../autofeaturization-custom-model-training/custom-model-training-from-autofeaturization-run.ipynb) for more details on the same.\n",
"\n",
"[Codegen](https://github.com/Azure/automl-codegen-preview) is a feature, which when enabled, provides a user with the script of the underlying functionality and a notebook to tweak inputs or code and rerun the same.\n",
"\n",
"In this example we use the credit card fraudulent transactions dataset to showcase how you can use AutoML for autofeaturization and further how you can enable the `Codegen` feature.\n",
"\n",
"This notebook is using remote compute to complete the featurization.\n",
"\n",
"If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../configuration.ipynb) notebook first if you haven't already, to establish your connection to the AzureML Workspace. \n",
"\n",
"Here you will learn how to create an autofeaturization experiment using an existing workspace with codegen feature enabled."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id='Setup'></a>\n",
"## Setup\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"import pandas as pd\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.core.dataset import Dataset\n",
"from azureml.train.automl import AutoMLConfig"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This sample notebook may use features that are not available in previous versions of the Azure ML SDK."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.59.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# choose a name for experiment\n",
"experiment_name = 'automl-autofeaturization-ccard-codegen-remote'\n",
"\n",
"experiment=Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Experiment Name'] = experiment.name\n",
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"outputDf.T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create or Attach existing AmlCompute\n",
"A compute target is required to execute the Automated ML run. In this tutorial, you create AmlCompute as your training compute resource.\n",
"\n",
"> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.\n",
"\n",
"#### Creation of AmlCompute takes approximately 5 minutes. \n",
"If the AmlCompute with that name is already in your workspace this code will skip the creation process.\n",
"As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# Choose a name for your CPU cluster\n",
"cpu_cluster_name = \"cpu-codegen\"\n",
"\n",
"# Verify that cluster does not exist already\n",
"try:\n",
" compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)\n",
" print('Found existing cluster, use it.')\n",
"except ComputeTargetException:\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2',\n",
" max_nodes=6)\n",
" compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)\n",
"\n",
"compute_target.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id='Data'></a>\n",
"## Data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load Data\n",
"\n",
"Load the credit card fraudulent transactions dataset from a CSV file, containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. \n",
"\n",
"Here the autofeaturization run will featurize the training data passed in."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### Training Dataset"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"training_data = \"https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard_train.csv\"\n",
"training_dataset = Dataset.Tabular.from_delimited_files(training_data) # Tabular dataset\n",
"\n",
"label_column_name = 'Class' # output label"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id='Autofeaturization'></a>\n",
"## AutoFeaturization\n",
"\n",
"Instantiate an AutoMLConfig object. This defines the settings and data used to run the autofeaturization experiment.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**task**|classification or regression or forecasting|\n",
"|**training_data**|Input training dataset, containing both features and label column.|\n",
"|**iterations**|For an autofeaturization run, iterations will be 0.|\n",
"|**featurization**|For an autofeaturization run, featurization can be 'auto' or 'custom'.|\n",
"|**label_column_name**|The name of the label column.|\n",
"|**enable_code_generation**|For enabling codegen for the run, value would be True|\n",
"\n",
"**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" iterations = 0, # autofeaturization run can be triggered by setting iterations to 0\n",
" compute_target = compute_target,\n",
" training_data = training_dataset,\n",
" label_column_name = label_column_name,\n",
" featurization = 'auto',\n",
" verbosity = logging.INFO,\n",
" enable_code_generation = True # enable codegen\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Call the `submit` method on the experiment object and pass the run configuration. Depending on the data this can run for a while. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"remote_run = experiment.submit(automl_config, show_output = False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Widget for Monitoring Runs\n",
"\n",
"The widget will first report a \"loading\" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.\n",
"\n",
"**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"RunDetails(remote_run).show()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"remote_run.wait_for_completion(show_output=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Codegen Script and Notebook"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Codegen script and notebook can be found under the `Outputs + logs` section from the details page of the remote run. Please check for the `autofeaturization_notebook.ipynb` under `/outputs/generated_code`. To modify the featurization code, open `script.py` and make changes. The codegen notebook can be run with the same environment configuration as the above AutoML run."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Experiment Complete!"
]
}
],
"metadata": {
"authors": [
{
"name": "bhavanatumma"
}
],
"interpreter": {
"hash": "adb464b67752e4577e3dc163235ced27038d19b7d88def00d75d1975bde5d9ab"
},
"kernelspec": {
"display_name": "Python 3.8 - AzureML",
"language": "python",
"name": "python38-azureml"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
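As an aside, once the run above completes, the codegen artifacts can also be fetched programmatically instead of through the portal. A minimal sketch, assuming `remote_run` is the completed run and using the same `Run.download_files` API that appears later in these samples:
```
# Sketch: download the generated code artifacts from the completed run.
codegen_path = "outputs/generated_code/"
remote_run.download_files(prefix=codegen_path, output_directory="./generated_code")
# ./generated_code should now contain script.py and autofeaturization_notebook.ipynb.
```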


@@ -0,0 +1,729 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/custom-model-training-from-autofeaturization-run.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning - AutoFeaturization (Part 1)\n",
"_**Autofeaturization of credit card fraudulent transactions dataset on remote compute**_\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Data](#Data)\n",
"1. [Autofeaturization](#Autofeaturization)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id='Introduction'></a>\n",
"## Introduction"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Autofeaturization is a new feature to let you as the user run an AutoML experiment to only featurize the datasets. These datasets along with the transformer will be stored in the experiment which can later be retrieved and used to train models, either via AutoML or custom training. \n",
"\n",
"**To run Autofeaturization, pass in zero iterations and featurization as auto. This will featurize the datasets and terminate the experiment. Training will not occur.**\n",
"\n",
"*Limitations - Sparse data cannot be supported at the moment. Any dataset that has extensive categorical data might be featurized into sparse data which will not be allowed as input to AutoML. Efforts are underway to support sparse data and will be updated soon.* \n",
"\n",
"In this example we use the credit card fraudulent transactions dataset to showcase how you can use AutoML for autofeaturization. The goal is to clean and featurize the training dataset.\n",
"\n",
"This notebook is using remote compute to complete the featurization.\n",
"\n",
"If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../configuration.ipynb) notebook first if you haven't already, to establish your connection to the AzureML Workspace. \n",
"\n",
"In the below steps, you will learn how to:\n",
"1. Create an autofeaturization experiment using an existing workspace.\n",
"2. View the featurized datasets and transformer"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id='Setup'></a>\n",
"## Setup\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"import pandas as pd\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.core.dataset import Dataset\n",
"from azureml.train.automl import AutoMLConfig"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This sample notebook may use features that are not available in previous versions of the Azure ML SDK."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.59.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# choose a name for experiment\n",
"experiment_name = 'automl-autofeaturization-ccard-remote'\n",
"\n",
"experiment=Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Experiment Name'] = experiment.name\n",
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"outputDf.T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create or Attach existing AmlCompute\n",
"A compute target is required to execute the Automated ML run. In this tutorial, you create AmlCompute as your training compute resource.\n",
"\n",
"> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.\n",
"\n",
"#### Creation of AmlCompute takes approximately 5 minutes. \n",
"If the AmlCompute with that name is already in your workspace this code will skip the creation process.\n",
"As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# Choose a name for your CPU cluster\n",
"cpu_cluster_name = \"cpu-cluster\"\n",
"\n",
"# Verify that cluster does not exist already\n",
"try:\n",
" compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)\n",
" print('Found existing cluster, use it.')\n",
"except ComputeTargetException:\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2',\n",
" max_nodes=6)\n",
" compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)\n",
"\n",
"compute_target.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id='Data'></a>\n",
"## Data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load Data\n",
"\n",
"Load the credit card fraudulent transactions dataset from a CSV file, containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. \n",
"\n",
"Here the autofeaturization run will featurize the training data passed in."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### Training Dataset"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"training_data = \"https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard_train.csv\"\n",
"training_dataset = Dataset.Tabular.from_delimited_files(training_data) # Tabular dataset\n",
"\n",
"label_column_name = 'Class' # output label"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id='Autofeaturization'></a>\n",
"## AutoFeaturization\n",
"\n",
"Instantiate an AutoMLConfig object. This defines the settings and data used to run the autofeaturization experiment.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**task**|classification or regression|\n",
"|**training_data**|Input training dataset, containing both features and label column.|\n",
"|**iterations**|For an autofeaturization run, iterations will be 0.|\n",
"|**featurization**|For an autofeaturization run, featurization will be 'auto'.|\n",
"|**label_column_name**|The name of the label column.|\n",
"\n",
"**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" iterations = 0, # autofeaturization run can be triggered by setting iterations to 0\n",
" compute_target = compute_target,\n",
" training_data = training_dataset,\n",
" label_column_name = label_column_name,\n",
" featurization = 'auto',\n",
" verbosity = logging.INFO\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Call the `submit` method on the experiment object and pass the run configuration. Depending on the data this can run for a while. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"remote_run = experiment.submit(automl_config, show_output = False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Transformer and Featurized Datasets\n",
"The given datasets have been featurized and stored under `Outputs + logs` from the details page of the remote run. The structure is shown below. The featurized dataset is stored under `/outputs/featurization/data` and the transformer is saved under `/outputs/featurization/pipeline` \n",
"\n",
"Below you will learn how to refer to the data saved in your run and retrieve the same."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Featurized Data](https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/autofeaturization_img.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Widget for Monitoring Runs\n",
"\n",
"The widget will first report a \"loading\" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.\n",
"\n",
"**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"RunDetails(remote_run).show()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"remote_run.wait_for_completion(show_output=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning - AutoFeaturization (Part 2)\n",
"_**Training using a custom model with the featurized data from Autofeaturization run of credit card fraudulent transactions dataset**_\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Data Setup](#DataSetup)\n",
"1. [Autofeaturization Data](#AutofeaturizationData)\n",
"1. [Train](#Train)\n",
"1. [Results](#Results)\n",
"1. [Test](#Test)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id='Introduction'></a>\n",
"## Introduction\n",
"\n",
"Here we use the featurized dataset saved in the above run to showcase how you can perform custom training by using the transformer from an autofeaturization run to transform validation / test datasets. \n",
"\n",
"The goal is to use autofeaturized run data and transformer to transform and run a custom training experiment independently\n",
"\n",
"In the below steps, you will learn how to:\n",
"1. Read transformer from a completed autofeaturization run and transform data\n",
"2. Pull featurized data from a completed autofeaturization run\n",
"3. Run a custom training experiment with the above data\n",
"4. Check results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id='DataSetup'></a>\n",
"## Data Setup"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We will load the featurized training data and also load the transformer from the above autofeaturized run. This transformer can then be used to transform the test data to check the accuracy of the custom model after training."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load Test Data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"load test dataset from CSV and split into X and y columns to featurize with the transformer going forward."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"test_data = \"https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard_test.csv\"\n",
"\n",
"test_dataset = pd.read_csv(test_data)\n",
"label_column_name = 'Class'\n",
"\n",
"X_test_data = test_dataset[test_dataset.columns.difference([label_column_name])]\n",
"y_test_data = test_dataset[label_column_name].values\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load data_transformer from the above remote run artifact"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### (Method 1)\n",
"\n",
"Method 1 allows you to read the transformer from the remote storage."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import mlflow\n",
"mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())\n",
"\n",
"# Set uri to fetch data transformer from remote parent run.\n",
"artifact_path = \"/outputs/featurization/pipeline/\"\n",
"uri = \"runs:/\" + remote_run.id + artifact_path\n",
"\n",
"print(uri)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### (Method 2)\n",
"\n",
"Method 2 downloads the transformer to the local directory and then can be used to transform the data. Uncomment to use."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"''' import pathlib\n",
"\n",
"# Download the transformer to the local directory\n",
"transformers_file_path = \"/outputs/featurization/pipeline/\"\n",
"local_path = \"./transformer\"\n",
"remote_run.download_files(prefix=transformers_file_path, output_directory=local_path, batch_size=500)\n",
"\n",
"path = pathlib.Path(\"transformer\") \n",
"path = str(path.absolute()) + transformers_file_path\n",
"str_uri = \"file:///\" + path\n",
"\n",
"print(str_uri) '''"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Transform Data"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"**Note:** Not all datasets produce a y_transformer. The dataset used in the current notebook requires a transformer as the y column data is categorical. \n",
"\n",
"We will go ahead and download the mlflow transformer model and use it to transform test data that can be used for further experimentation below. To run the commented code, make sure the environment requirement is satisfied. You can go ahead and create the environment from the `conda.yaml` file under `/outputs/featurization/pipeline/` and run the given code in it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"''' from azureml.automl.core.shared.constants import Transformers\n",
"\n",
"transformers = mlflow.sklearn.load_model(uri) # Using method 1\n",
"data_transformers = transformers.get_transformers()\n",
"x_transformer = data_transformers[Transformers.X_TRANSFORMER]\n",
"y_transformer = data_transformers[Transformers.Y_TRANSFORMER]\n",
"\n",
"X_test = x_transformer.transform(X_test_data)\n",
"y_test = y_transformer.transform(y_test_data) '''"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Run the following cell to see the featurization summary of X and y transformers. Uncomment to use. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"''' X_data_summary = x_transformer.get_featurization_summary(is_user_friendly=False)\n",
"\n",
"summary_df = pd.DataFrame.from_records(X_data_summary)\n",
"summary_df '''"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load Datastore\n",
"\n",
"The below data store holds the featurized datasets, hence we load and access the data. Check the path and file names according to the saved structure in your experiment `Outputs + logs` as seen in <i>Autofeaturization Part 1</i>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.datastore import Datastore\n",
"\n",
"ds = Datastore.get(ws, \"workspaceartifactstore\")\n",
"experiment_loc = \"ExperimentRun/dcid.\" + remote_run.id\n",
"\n",
"remote_data_path = \"/outputs/featurization/data/\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id='AutofeaturizationData'></a>\n",
"## Autofeaturization Data\n",
"\n",
"We will load the training data from the previously completed Autofeaturization experiment. The resulting featurized dataframe can be passed into the custom model for training. Here we are saving the file to local from the experiment storage and reading the data."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"train_data_file_path = \"full_training_dataset.df.parquet\"\n",
"local_data_path = \"./data/\" + train_data_file_path\n",
"\n",
"remote_run.download_file(remote_data_path + train_data_file_path, local_data_path)\n",
"\n",
"full_training_data = pd.read_parquet(local_data_path)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Another way to load the data is to go to the above autofeaturization experiment and check for the featurized dataset ids under `Output datasets`. Uncomment and replace them accordingly below, to use."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# train_data = Dataset.get_by_id(ws, 'cb4418ee-bac4-45ac-b055-600653bdf83a') # replace the featurized full_training_dataset id\n",
"# full_training_data = train_data.to_pandas_dataframe()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Training Data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We are dropping the y column and weights column from the featurized training dataset."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"Y_COLUMN = \"automl_y\"\n",
"SW_COLUMN = \"automl_weights\"\n",
"\n",
"X_train = full_training_data[full_training_data.columns.difference([Y_COLUMN, SW_COLUMN])]\n",
"y_train = full_training_data[Y_COLUMN].values\n",
"sample_weight = full_training_data[SW_COLUMN].values"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id='Train'></a>\n",
"## Train"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Here we are passing our training data to the lightgbm classifier, any custom model can be used with your data. Let us first install lightgbm."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"! pip install lightgbm"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import lightgbm as lgb\n",
"\n",
"model = lgb.LGBMClassifier(learning_rate=0.08,max_depth=-5,random_state=42)\n",
"model.fit(X_train, y_train, sample_weight=sample_weight)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Once training is done, the test data obtained after transforming from the above downloaded transformer can be used to calculate the accuracy "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print('Training accuracy {:.4f}'.format(model.score(X_train, y_train)))\n",
"\n",
"# Uncomment below to test the model on test data \n",
"# print('Testing accuracy {:.4f}'.format(model.score(X_test, y_test)))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id='Results'></a>\n",
"## Analyze results\n",
"\n",
"### Retrieve the Model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id='Test'></a>\n",
"## Test the fitted model\n",
"\n",
"Now that the model is trained, split the data in the same way the data was split for training (The difference here is the data is being split locally) and then run the test data through the trained model to get the predicted values."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Uncomment below to test the model on test data\n",
"# y_pred = model.predict(X_test)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Experiment Complete!"
]
}
],
"metadata": {
"authors": [
{
"name": "bhavanatumma"
}
],
"interpreter": {
"hash": "adb464b67752e4577e3dc163235ced27038d19b7d88def00d75d1975bde5d9ab"
},
"kernelspec": {
"display_name": "Python 3.8 - AzureML",
"language": "python",
"name": "python38-azureml"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
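As a follow-up to the test cells above: once the commented transformer cells have been run, predictions come back in the transformed label space. A minimal sketch of mapping them back, assuming `y_transformer` behaves like a scikit-learn LabelEncoder (an assumption, since the transformer type depends on the dataset):
```
# Sketch: predict on transformed test data and recover the original labels.
# Assumes X_test, model and y_transformer from the notebook above exist.
y_pred = model.predict(X_test)
y_pred_labels = y_transformer.inverse_transform(y_pred)  # LabelEncoder-like API assumed
print(y_pred_labels[:10])
```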


@@ -0,0 +1,63 @@
@echo off
set conda_env_name=%1
set automl_env_file=%2
set options=%3
set PIP_NO_WARN_SCRIPT_LOCATION=0
IF "%conda_env_name%"=="" SET conda_env_name="azure_automl_experimental"
IF "%automl_env_file%"=="" SET automl_env_file="automl_thin_client_env.yml"
IF NOT EXIST %automl_env_file% GOTO YmlMissing
IF "%CONDA_EXE%"=="" GOTO CondaMissing
call conda activate %conda_env_name% 2>nul:
if not errorlevel 1 (
echo Upgrading existing conda environment %conda_env_name%
call pip uninstall azureml-train-automl -y -q
call conda env update --name %conda_env_name% --file %automl_env_file%
if errorlevel 1 goto ErrorExit
) else (
call conda env create -f %automl_env_file% -n %conda_env_name%
)
call conda activate %conda_env_name% 2>nul:
if errorlevel 1 goto ErrorExit
call python -m ipykernel install --user --name %conda_env_name% --display-name "Python (%conda_env_name%)"
REM azureml.widgets is now installed as part of the pip install under the conda env.
REM Removing the old user install so that the notebooks will use the latest widget.
call jupyter nbextension uninstall --user --py azureml.widgets
echo.
echo.
echo ***************************************
echo * AutoML setup completed successfully *
echo ***************************************
IF NOT "%options%"=="nolaunch" (
echo.
echo Starting jupyter notebook - please run the configuration notebook
echo.
jupyter notebook --log-level=50 --notebook-dir='..\..'
)
goto End
:CondaMissing
echo Please run this script from an Anaconda Prompt window.
echo You can start an Anaconda Prompt window by
echo typing Anaconda Prompt on the Start menu.
echo If you don't see the Anaconda Prompt app, install Miniconda.
echo If you are running an older version of Miniconda or Anaconda,
echo you can upgrade using the command: conda update conda
goto End
:YmlMissing
echo File %automl_env_file% not found.
:ErrorExit
echo Install failed
:End
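For reference, the script takes three optional positional arguments: the conda environment name, the environment yml file, and an options flag. From an Anaconda Prompt (the script file name below is assumed, since the diff does not show the file path):
```
REM Create or upgrade a custom environment without auto-launching Jupyter
automl_setup_thin_client.cmd my_automl_env automl_thin_client_env.yml nolaunch
```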


@@ -0,0 +1,53 @@
#!/bin/bash
CONDA_ENV_NAME=$1
AUTOML_ENV_FILE=$2
OPTIONS=$3
PIP_NO_WARN_SCRIPT_LOCATION=0
if [ "$CONDA_ENV_NAME" == "" ]
then
CONDA_ENV_NAME="azure_automl_experimental"
fi
if [ "$AUTOML_ENV_FILE" == "" ]
then
AUTOML_ENV_FILE="automl_thin_client_env.yml"
fi
if [ ! -f $AUTOML_ENV_FILE ]; then
echo "File $AUTOML_ENV_FILE not found"
exit 1
fi
if source activate $CONDA_ENV_NAME 2> /dev/null
then
echo "Upgrading existing conda environment" $CONDA_ENV_NAME
pip uninstall azureml-train-automl -y -q
conda env update --name $CONDA_ENV_NAME --file $AUTOML_ENV_FILE &&
jupyter nbextension uninstall --user --py azureml.widgets
else
conda env create -f $AUTOML_ENV_FILE -n $CONDA_ENV_NAME &&
source activate $CONDA_ENV_NAME &&
python -m ipykernel install --user --name $CONDA_ENV_NAME --display-name "Python ($CONDA_ENV_NAME)" &&
jupyter nbextension uninstall --user --py azureml.widgets &&
echo "" &&
echo "" &&
echo "***************************************" &&
echo "* AutoML setup completed successfully *" &&
echo "***************************************" &&
if [ "$OPTIONS" != "nolaunch" ]
then
echo "" &&
echo "Starting jupyter notebook - please run the configuration notebook" &&
echo "" &&
jupyter notebook --log-level=50 --notebook-dir '../..'
fi
fi
if [ $? -gt 0 ]
then
echo "Installation failed"
fi


@@ -0,0 +1,55 @@
#!/bin/bash
CONDA_ENV_NAME=$1
AUTOML_ENV_FILE=$2
OPTIONS=$3
PIP_NO_WARN_SCRIPT_LOCATION=0
if [ "$CONDA_ENV_NAME" == "" ]
then
CONDA_ENV_NAME="azure_automl_experimental"
fi
if [ "$AUTOML_ENV_FILE" == "" ]
then
AUTOML_ENV_FILE="automl_thin_client_env_mac.yml"
fi
if [ ! -f $AUTOML_ENV_FILE ]; then
echo "File $AUTOML_ENV_FILE not found"
exit 1
fi
if source activate $CONDA_ENV_NAME 2> /dev/null
then
echo "Upgrading existing conda environment" $CONDA_ENV_NAME
pip uninstall azureml-train-automl -y -q
conda env update --name $CONDA_ENV_NAME --file $AUTOML_ENV_FILE &&
jupyter nbextension uninstall --user --py azureml.widgets
else
conda env create -f $AUTOML_ENV_FILE -n $CONDA_ENV_NAME &&
source activate $CONDA_ENV_NAME &&
conda install lightgbm -c conda-forge -y &&
python -m ipykernel install --user --name $CONDA_ENV_NAME --display-name "Python ($CONDA_ENV_NAME)" &&
jupyter nbextension uninstall --user --py azureml.widgets &&
echo "" &&
echo "" &&
echo "***************************************" &&
echo "* AutoML setup completed successfully *" &&
echo "***************************************" &&
if [ "$OPTIONS" != "nolaunch" ]
then
echo "" &&
echo "Starting jupyter notebook - please run the configuration notebook" &&
echo "" &&
jupyter notebook --log-level=50 --notebook-dir '../..'
fi
fi
if [ $? -gt 0 ]
then
echo "Installation failed"
fi


@@ -0,0 +1,15 @@
name: azure_automl_experimental
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.7.0 and later.
- pip<=22.3.1
- python>=3.7.0,<3.11
- pip:
# Required packages for AzureML execution, history, and data preparation.
- azureml-defaults
- azureml-sdk
- azureml-widgets
- azureml-mlflow
- pandas
- mlflow
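If you prefer not to use the setup scripts, this environment can be created by hand; a minimal sketch, assuming conda is installed and using the default names from the scripts above:
```
conda env create -f automl_thin_client_env.yml -n azure_automl_experimental
conda activate azure_automl_experimental
python -m ipykernel install --user --name azure_automl_experimental --display-name "Python (azure_automl_experimental)"
```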


@@ -0,0 +1,24 @@
name: azure_automl_experimental
channels:
- conda-forge
- main
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.7.0 and later.
- pip<=20.2.4
- nomkl
- python>=3.7.0,<3.11
- urllib3==1.26.7
- PyJWT < 2.0.0
- numpy>=1.21.6,<=1.22.3
- pip:
# Required packages for AzureML execution, history, and data preparation.
- azure-core==1.24.1
- azure-identity==1.7.0
- azureml-defaults
- azureml-sdk
- azureml-widgets
- azureml-mlflow
- pandas
- mlflow


@@ -0,0 +1,470 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/experimental/regression-model-proxy/auto-ml-regression-model-proxy.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning\n",
"_**Regression with Aml Compute**_\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Data](#Data)\n",
"1. [Train](#Train)\n",
"1. [Results](#Results)\n",
"1. [Test](#Test)\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"In this example we use an experimental feature, Model Proxy, to do a predict on the best generated model without downloading the model locally. The prediction will happen on same compute and environment that was used to train the model. This feature is currently in the experimental state, which means that the API is prone to changing, please make sure to run on the latest version of this notebook if you face any issues.\n",
"This notebook will also leverage MLFlow for saving models, allowing for more portability of the resulting models. See https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-mlflow for more details around MLFlow is AzureML.\n",
"\n",
"If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. \n",
"\n",
"In this notebook you will learn how to:\n",
"1. Create an `Experiment` in an existing `Workspace`.\n",
"2. Configure AutoML using `AutoMLConfig`.\n",
"3. Train the model using remote compute.\n",
"4. Explore the results.\n",
"5. Test the best fitted model."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"\n",
"import json\n",
"\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.core.dataset import Dataset\n",
"from azureml.train.automl import AutoMLConfig"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This sample notebook may use features that are not available in previous versions of the Azure ML SDK."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.59.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# Choose a name for the experiment.\n",
"experiment_name = 'automl-regression-model-proxy'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Run History Name'] = experiment_name\n",
"output"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Using AmlCompute\n",
"You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for your AutoML run. In this tutorial, you use `AmlCompute` as your training compute resource."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# Choose a name for your CPU cluster\n",
"# Try to ensure that the cluster name is unique across the notebooks\n",
"cpu_cluster_name = \"reg-model-proxy\"\n",
"\n",
"# Verify that cluster does not exist already\n",
"try:\n",
" compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)\n",
" print('Found existing cluster, use it.')\n",
"except ComputeTargetException:\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2',\n",
" max_nodes=4)\n",
" compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)\n",
"\n",
"compute_target.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Data\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load Data\n",
"Load the hardware dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data = \"https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/machineData.csv\"\n",
"dataset = Dataset.Tabular.from_delimited_files(data)\n",
"\n",
"# Split the dataset into train and test datasets\n",
"train_data, test_data = dataset.random_split(percentage=0.8, seed=223)\n",
"\n",
"label = \"ERP\"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The split data will be used in the remote compute by ModelProxy and locally to compare results.\n",
"So, we need to persist the split data to avoid descrepencies from different package versions in the local and remote."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ds = ws.get_default_datastore()\n",
"\n",
"train_data = Dataset.Tabular.register_pandas_dataframe(\n",
" train_data.to_pandas_dataframe(), target=(ds, \"machineTrainData\"), name=\"train_data\")\n",
"\n",
"test_data = Dataset.Tabular.register_pandas_dataframe(\n",
" test_data.to_pandas_dataframe(), target=(ds, \"machineTestData\"), name=\"test_data\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train\n",
"\n",
"Instantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**task**|classification, regression or forecasting|\n",
"|**primary_metric**|This is the metric that you want to optimize. Regression supports the following primary metrics: <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i>|\n",
"|**n_cross_validations**|Number of cross validation splits.|\n",
"|**training_data**|(sparse) array-like, shape = [n_samples, n_features]|\n",
"|**label_column_name**|(sparse) array-like, shape = [n_samples, ], targets values.|\n",
"\n",
"**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"automlconfig-remarks-sample"
]
},
"outputs": [],
"source": [
"automl_settings = {\n",
" \"n_cross_validations\": 3,\n",
" \"primary_metric\": 'r2_score',\n",
" \"enable_early_stopping\": True, \n",
" \"experiment_timeout_hours\": 0.3, #for real scenarios we recommend a timeout of at least one hour \n",
" \"max_concurrent_iterations\": 4,\n",
" \"max_cores_per_iteration\": -1,\n",
" \"verbosity\": logging.INFO,\n",
" \"save_mlflow\": True,\n",
"}\n",
"\n",
"automl_config = AutoMLConfig(task = 'regression',\n",
" compute_target = compute_target,\n",
" training_data = train_data,\n",
" label_column_name = label,\n",
" **automl_settings\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Call the `submit` method on the experiment object and pass the run configuration. Execution of remote runs is asynchronous. Depending on the data and the number of iterations this can run for a while. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"remote_run = experiment.submit(automl_config, show_output = False)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# If you need to retrieve a run that already started, use the following code\n",
"#from azureml.train.automl.run import AutoMLRun\n",
"#remote_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"remote_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Results"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"remote_run.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Child Run\n",
"\n",
"Below we select the best pipeline from our iterations. The `get_best_child` method returns the best run. Overloads on `get_best_child` allow you to retrieve the best run for *any* logged metric."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run = remote_run.get_best_child()\n",
"print(best_run)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Show hyperparameters\n",
"Show the model pipeline used for the best run with its hyperparameters.\n",
"For ensemble pipelines it shows the iterations and algorithms that are ensembled."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run_properties = best_run.get_details()['properties']\n",
"pipeline_script = json.loads(run_properties['pipeline_script'])\n",
"print(json.dumps(pipeline_script, indent = 1)) \n",
"\n",
"if 'ensembled_iterations' in run_properties:\n",
" print(\"\")\n",
" print(\"Ensembled Iterations\")\n",
" print(run_properties['ensembled_iterations'])\n",
" \n",
"if 'ensembled_algorithms' in run_properties:\n",
" print(\"\")\n",
" print(\"Ensembled Algorithms\")\n",
" print(run_properties['ensembled_algorithms'])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Best Child Run Based on Any Other Metric\n",
"Show the run and the model that has the smallest `root_mean_squared_error` value (which turned out to be the same as the one with largest `spearman_correlation` value):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"lookup_metric = \"root_mean_squared_error\"\n",
"best_run = remote_run.get_best_child(metric = lookup_metric)\n",
"print(best_run)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"y_test = test_data.keep_columns('ERP')\n",
"test_data = test_data.drop_columns('ERP')\n",
"\n",
"\n",
"y_train = train_data.keep_columns('ERP')\n",
"train_data = train_data.drop_columns('ERP')\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Creating ModelProxy for submitting prediction runs to the training environment.\n",
"We will create a ModelProxy for the best child run, which will allow us to submit a run that does the prediction in the training environment. Unlike the local client, which can have different versions of some libraries, the training environment will have all the compatible libraries for the model already."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.train.automl.model_proxy import ModelProxy\n",
"best_model_proxy = ModelProxy(best_run)\n",
"y_pred_train = best_model_proxy.predict(train_data)\n",
"y_pred_test = best_model_proxy.predict(test_data)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Exploring results"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"y_pred_train = y_pred_train.to_pandas_dataframe().values.flatten()\n",
"y_train = y_train.to_pandas_dataframe().values.flatten()\n",
"y_residual_train = y_train - y_pred_train\n",
"\n",
"y_pred_test = y_pred_test.to_pandas_dataframe().values.flatten()\n",
"y_test = y_test.to_pandas_dataframe().values.flatten()\n",
"y_residual_test = y_test - y_pred_test\n",
"print(y_residual_train)\n",
"print(y_residual_test)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"authors": [
{
"name": "sekrupa"
}
],
"categories": [
"how-to-use-azureml",
"automated-machine-learning"
],
"kernelspec": {
"display_name": "Python 3.8 - AzureML",
"language": "python",
"name": "python38-azureml"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.2"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
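To turn the residuals above into summary numbers, a minimal sketch using scikit-learn (assuming it is installed in the local environment):
```
# Sketch: summarize test-set error from y_test and y_pred_test above.
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

rmse = np.sqrt(mean_squared_error(y_test, y_pred_test))
print("Test RMSE: {:.4f}".format(rmse))
print("Test R2:   {:.4f}".format(r2_score(y_test, y_pred_test)))
```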


@@ -1,349 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/exploring-previous-runs/auto-ml-exploring-previous-runs.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning\n",
"_**Exploring Previous Runs**_\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Explore](#Explore)\n",
"1. [Download](#Download)\n",
"1. [Register](#Register)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"In this example we present some examples on navigating previously executed runs. We also show how you can download a fitted model for any previous run.\n",
"\n",
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you will learn how to:\n",
"1. List all experiments in a workspace.\n",
"2. List all AutoML runs in an experiment.\n",
"3. Get details for an AutoML run, including settings, run widget, and all metrics.\n",
"4. Download a fitted pipeline for any iteration."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"import json\n",
"\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl.run import AutoMLRun"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Explore"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### List Experiments"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"experiment_list = Experiment.list(workspace=ws)\n",
"\n",
"summary_df = pd.DataFrame(index = ['No of Runs'])\n",
"for experiment in experiment_list:\n",
" automl_runs = list(experiment.get_runs(type='automl'))\n",
" summary_df[experiment.name] = [len(automl_runs)]\n",
" \n",
"pd.set_option('display.max_colwidth', -1)\n",
"summary_df.T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### List runs for an experiment\n",
"Set `experiment_name` to any experiment name from the result of the Experiment.list cell to load the AutoML runs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"experiment_name = 'automl-local-classification' # Replace this with any project name from previous cell.\n",
"\n",
"proj = ws.experiments[experiment_name]\n",
"summary_df = pd.DataFrame(index = ['Type', 'Status', 'Primary Metric', 'Iterations', 'Compute', 'Name'])\n",
"automl_runs = list(proj.get_runs(type='automl'))\n",
"automl_runs_project = []\n",
"for run in automl_runs:\n",
" properties = run.get_properties()\n",
" tags = run.get_tags()\n",
" amlsettings = json.loads(properties['AMLSettingsJsonString'])\n",
" if 'iterations' in tags:\n",
" iterations = tags['iterations']\n",
" else:\n",
" iterations = properties['num_iterations']\n",
" summary_df[run.id] = [amlsettings['task_type'], run.get_details()['status'], properties['primary_metric'], iterations, properties['target'], amlsettings['name']]\n",
" if run.get_details()['status'] == 'Completed':\n",
" automl_runs_project.append(run.id)\n",
" \n",
"from IPython.display import HTML\n",
"projname_html = HTML(\"<h3>{}</h3>\".format(proj.name))\n",
"\n",
"from IPython.display import display\n",
"display(projname_html)\n",
"display(summary_df.T)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Get details for a run\n",
"\n",
"Copy the project name and run id from the previous cell output to find more details on a particular run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run_id = automl_runs_project[0] # Replace with your own run_id from above run ids\n",
"assert (run_id in summary_df.keys()), \"Run id not found! Please set run id to a value from above run ids\"\n",
"\n",
"from azureml.widgets import RunDetails\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"ml_run = AutoMLRun(experiment = experiment, run_id = run_id)\n",
"\n",
"summary_df = pd.DataFrame(index = ['Type', 'Status', 'Primary Metric', 'Iterations', 'Compute', 'Name', 'Start Time', 'End Time'])\n",
"properties = ml_run.get_properties()\n",
"tags = ml_run.get_tags()\n",
"status = ml_run.get_details()\n",
"amlsettings = json.loads(properties['AMLSettingsJsonString'])\n",
"if 'iterations' in tags:\n",
" iterations = tags['iterations']\n",
"else:\n",
" iterations = properties['num_iterations']\n",
"start_time = None\n",
"if 'startTimeUtc' in status:\n",
" start_time = status['startTimeUtc']\n",
"end_time = None\n",
"if 'endTimeUtc' in status:\n",
" end_time = status['endTimeUtc']\n",
"summary_df[ml_run.id] = [amlsettings['task_type'], status['status'], properties['primary_metric'], iterations, properties['target'], amlsettings['name'], start_time, end_time]\n",
"display(HTML('<h3>Runtime Details</h3>'))\n",
"display(summary_df)\n",
"\n",
"#settings_df = pd.DataFrame(data = amlsettings, index = [''])\n",
"display(HTML('<h3>AutoML Settings</h3>'))\n",
"display(amlsettings)\n",
"\n",
"display(HTML('<h3>Iterations</h3>'))\n",
"RunDetails(ml_run).show() \n",
"\n",
"all_metrics = ml_run.get_metrics(recursive=True)\n",
"metricslist = {}\n",
"for run_id, metrics in all_metrics.items():\n",
" iteration = int(run_id.split('_')[-1])\n",
" float_metrics = {k: v for k, v in metrics.items() if isinstance(v, float)}\n",
" metricslist[iteration] = float_metrics\n",
"\n",
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
"display(HTML('<h3>Metrics</h3>'))\n",
"display(rundata)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Download"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Download the Best Model for Any Given Metric"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"metric = 'AUC_weighted' # Replace with a metric name.\n",
"best_run, fitted_model = ml_run.get_output(metric = metric)\n",
"fitted_model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Download the Model for Any Given Iteration"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"iteration = 1 # Replace with an iteration number.\n",
"best_run, fitted_model = ml_run.get_output(iteration = iteration)\n",
"fitted_model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Register"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Register fitted model for deployment\n",
"If neither `metric` nor `iteration` are specified in the `register_model` call, the iteration with the best primary metric is registered."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"description = 'AutoML Model'\n",
"tags = None\n",
"ml_run.register_model(description = description, tags = tags)\n",
"print(ml_run.model_id) # Use this id to deploy the model as a web service in Azure."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Register the Best Model for Any Given Metric"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"metric = 'AUC_weighted' # Replace with a metric name.\n",
"description = 'AutoML Model'\n",
"tags = None\n",
"ml_run.register_model(description = description, tags = tags, metric = metric)\n",
"print(ml_run.model_id) # Use this id to deploy the model as a web service in Azure."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Register the Model for Any Given Iteration"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"iteration = 1 # Replace with an iteration number.\n",
"description = 'AutoML Model'\n",
"tags = None\n",
"ml_run.register_model(description = description, tags = tags, iteration = iteration)\n",
"print(ml_run.model_id) # Use this id to deploy the model as a web service in Azure."
]
}
],
"metadata": {
"authors": [
{
"name": "savitam"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -1,8 +0,0 @@
name: auto-ml-exploring-previous-runs
dependencies:
- pip:
- azureml-sdk
- azureml-train-automl
- azureml-widgets
- matplotlib
- pandas_ml

Binary file not shown (new image, 22 KiB).


@@ -0,0 +1,174 @@
from typing import Any, Dict, Optional, List
import argparse
import json
import os
import re
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages
from azureml.automl.core.shared import constants
from azureml.automl.core.shared.types import GrainType
from azureml.automl.runtime.shared.score import scoring
GRAIN = "time_series_id"
BACKTEST_ITER = "backtest_iteration"
ACTUALS = "actual_level"
PREDICTIONS = "predicted_level"
ALL_GRAINS = "all_sets"
FORECASTS_FILE = "forecast.csv"
SCORES_FILE = "scores.csv"
PLOTS_FILE = "plots_fcst_vs_actual.pdf"
RE_INVALID_SYMBOLS = re.compile("[: ]")
def _compute_metrics(df: pd.DataFrame, metrics: List[str]):
"""
Compute metrics for one data frame.
:param df: The data frame which contains the actual_level and predicted_level columns.
:param metrics: The list of metric names to compute.
:return: The data frame with two columns - metric_name and metric.
"""
scores = scoring.score_regression(
y_test=df[ACTUALS], y_pred=df[PREDICTIONS], metrics=metrics
)
metrics_df = pd.DataFrame(list(scores.items()), columns=["metric_name", "metric"])
metrics_df.sort_values(["metric_name"], inplace=True)
metrics_df.reset_index(drop=True, inplace=True)
return metrics_df
def _format_grain_name(grain: GrainType) -> str:
"""
Convert grain name to string.
:param grain: the grain name.
:return: the string representation of the given grain.
"""
if not isinstance(grain, tuple) and not isinstance(grain, list):
return str(grain)
grain = list(map(str, grain))
return "|".join(grain)
def compute_all_metrics(
fcst_df: pd.DataFrame,
ts_id_colnames: List[str],
metric_names: Optional[List[str]] = None,
):
"""
Calculate metrics per grain.
:param fcst_df: forecast data frame. Must contain 2 columns: 'actual_level' and 'predicted_level'
:param ts_id_colnames: list of grain column names (may be empty)
:param metric_names: (optional) the list of metric names to compute; defaults to the scalar regression set
:return: the data frame with metrics per grain, plus overall metrics marked as 'all_sets'
"""
if not metric_names:
metric_names = list(constants.Metric.SCALAR_REGRESSION_SET)
if ts_id_colnames is None:
ts_id_colnames = []
metrics_list = []
if ts_id_colnames:
for grain, df in fcst_df.groupby(ts_id_colnames):
one_grain_metrics_df = _compute_metrics(df, metric_names)
one_grain_metrics_df[GRAIN] = _format_grain_name(grain)
metrics_list.append(one_grain_metrics_df)
# overall metrics
one_grain_metrics_df = _compute_metrics(fcst_df, metric_names)
one_grain_metrics_df[GRAIN] = ALL_GRAINS
metrics_list.append(one_grain_metrics_df)
# collect into a data frame
return pd.concat(metrics_list)
def _draw_one_plot(
df: pd.DataFrame,
time_column_name: str,
grain_column_names: List[str],
pdf: PdfPages,
) -> None:
"""
Draw the single plot.
:param df: The data frame with the data to build plot.
:param time_column_name: The name of a time column.
:param grain_column_names: The name of grain columns.
:param pdf: The pdf backend used to render the plot.
"""
fig, _ = plt.subplots(figsize=(20, 10))
df = df.set_index(time_column_name)
plt.plot(df[[ACTUALS, PREDICTIONS]])
plt.xticks(rotation=45)
iteration = df[BACKTEST_ITER].iloc[0]
if grain_column_names:
grain_name = [df[grain].iloc[0] for grain in grain_column_names]
plt.title(f"Time series ID: {_format_grain_name(grain_name)} {iteration}")
plt.legend(["actual", "forecast"])
pdf.savefig(fig)
plt.close(fig)
def calculate_scores_and_build_plots(
input_dir: str, output_dir: str, automl_settings: Dict[str, Any]
):
os.makedirs(output_dir, exist_ok=True)
grains = automl_settings.get(
constants.TimeSeries.TIME_SERIES_ID_COLUMN_NAMES,
automl_settings.get(constants.TimeSeries.GRAIN_COLUMN_NAMES, None),
)
time_column_name = automl_settings.get(constants.TimeSeries.TIME_COLUMN_NAME)
if grains is None:
grains = []
if isinstance(grains, str):
grains = [grains]
while BACKTEST_ITER in grains:
grains.remove(BACKTEST_ITER)
dfs = []
for fle in os.listdir(input_dir):
file_path = os.path.join(input_dir, fle)
if os.path.isfile(file_path) and file_path.endswith(".csv"):
df_iter = pd.read_csv(file_path, parse_dates=[time_column_name])
for _, iteration in df_iter.groupby(BACKTEST_ITER):
dfs.append(iteration)
forecast_df = pd.concat(dfs, sort=False, ignore_index=True)
# To make sure plots are in order, sort the predictions by grain and iteration.
ts_index = grains + [BACKTEST_ITER]
forecast_df.sort_values(by=ts_index, inplace=True)
pdf = PdfPages(os.path.join(output_dir, PLOTS_FILE))
for _, one_forecast in forecast_df.groupby(ts_index):
_draw_one_plot(one_forecast, time_column_name, grains, pdf)
pdf.close()
forecast_df.to_csv(os.path.join(output_dir, FORECASTS_FILE), index=False)
# Remove np.NaN and np.inf from the prediction and actuals data.
forecast_df.replace([np.inf, -np.inf], np.nan, inplace=True)
forecast_df.dropna(subset=[ACTUALS, PREDICTIONS], inplace=True)
metrics = compute_all_metrics(forecast_df, grains + [BACKTEST_ITER])
metrics.to_csv(os.path.join(output_dir, SCORES_FILE), index=False)
if __name__ == "__main__":
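# Example invocation (paths are illustrative placeholders):
#   python score.py --forecasts <forecasts_dir> --output-dir <output_dir>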
args = {"forecasts": "--forecasts", "scores_out": "--output-dir"}
parser = argparse.ArgumentParser("Parsing input arguments.")
for argname, arg in args.items():
parser.add_argument(arg, dest=argname, required=True)
parsed_args, _ = parser.parse_known_args()
input_dir = parsed_args.forecasts
output_dir = parsed_args.scores_out
with open(
os.path.join(
os.path.dirname(os.path.realpath(__file__)), "automl_settings.json"
)
) as json_file:
automl_settings = json.load(json_file)
calculate_scores_and_build_plots(input_dir, output_dir, automl_settings)


@@ -0,0 +1,779 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-backtest-many-models/auto-ml-forecasting-backtest-many-models.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Many Models with Backtesting - Automated ML\n",
"**_Backtest many models time series forecasts with Automated Machine Learning_**\n",
"\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For this notebook we are using a synthetic dataset to demonstrate the back testing in many model scenario. This allows us to check historical performance of AutoML on a historical data. To do that we step back on the backtesting period by the data set several times and split the data to train and test sets. Then these data sets are used for training and evaluation of model.<br>\n",
"\n",
"Thus, it is a quick way of evaluating AutoML as if it was in production. Here, we do not test historical performance of a particular model, for this see the [notebook](../forecasting-backtest-single-model/auto-ml-forecasting-backtest-single-model.ipynb). Instead, the best model for every backtest iteration can be different since AutoML chooses the best model for a given training set.\n",
"\n",
"![Backtesting](Backtesting.png)\n",
"\n",
"**NOTE: There are limits on how many runs we can do in parallel per workspace, and we currently recommend to set the parallelism to maximum of 320 runs per experiment per workspace. If users want to have more parallelism and increase this limit they might encounter Too Many Requests errors (HTTP 429).**"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Prerequisites\n",
"You'll need to create a compute Instance by following [these](https://learn.microsoft.com/en-us/azure/machine-learning/v1/how-to-create-manage-compute-instance?tabs=python) instructions."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1.0 Set up workspace, datastore, experiment"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1613003526897
}
},
"outputs": [],
"source": [
"import os\n",
"\n",
"import azureml.core\n",
"from azureml.core import Workspace, Datastore\n",
"import numpy as np\n",
"import pandas as pd\n",
"\n",
"from pandas.tseries.frequencies import to_offset\n",
"\n",
"# Set up your workspace\n",
"ws = Workspace.from_config()\n",
"ws.get_details()\n",
"\n",
"# Set up your datastores\n",
"dstore = ws.get_default_datastore()\n",
"\n",
"output = {}\n",
"output[\"SDK version\"] = azureml.core.VERSION\n",
"output[\"Subscription ID\"] = ws.subscription_id\n",
"output[\"Workspace\"] = ws.name\n",
"output[\"Resource Group\"] = ws.resource_group\n",
"output[\"Location\"] = ws.location\n",
"output[\"Default datastore name\"] = dstore.name\n",
"output[\"SDK Version\"] = azureml.core.VERSION\n",
"pd.set_option(\"display.max_colwidth\", None)\n",
"outputDf = pd.DataFrame(data=output, index=[\"\"])\n",
"outputDf.T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This notebook is compatible with Azure ML SDK version 1.35.1 or later."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Choose an experiment"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1613003540729
}
},
"outputs": [],
"source": [
"from azureml.core import Experiment\n",
"\n",
"experiment = Experiment(ws, \"automl-many-models-backtest\")\n",
"\n",
"print(\"Experiment name: \" + experiment.name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2.0 Data\n",
"\n",
"#### 2.1 Data generation\n",
"For this notebook we will generate the artificial data set with two [time series IDs](https://docs.microsoft.com/en-us/python/api/azureml-automl-core/azureml.automl.core.forecasting_parameters.forecastingparameters?view=azure-ml-py). Then we will generate backtest folds and will upload it to the default BLOB storage and create a [TabularDataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabular_dataset.tabulardataset?view=azure-ml-py)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# simulate data: 2 grains - 700\n",
"TIME_COLNAME = \"date\"\n",
"TARGET_COLNAME = \"value\"\n",
"TIME_SERIES_ID_COLNAME = \"ts_id\"\n",
"\n",
"sample_size = 700\n",
"# Set the random seed for reproducibility of results.\n",
"np.random.seed(20)\n",
"X1 = pd.DataFrame(\n",
" {\n",
" TIME_COLNAME: pd.date_range(start=\"2018-01-01\", periods=sample_size),\n",
" TARGET_COLNAME: np.random.normal(loc=100, scale=20, size=sample_size),\n",
" TIME_SERIES_ID_COLNAME: \"ts_A\",\n",
" }\n",
")\n",
"X2 = pd.DataFrame(\n",
" {\n",
" TIME_COLNAME: pd.date_range(start=\"2018-01-01\", periods=sample_size),\n",
" TARGET_COLNAME: np.random.normal(loc=100, scale=20, size=sample_size),\n",
" TIME_SERIES_ID_COLNAME: \"ts_B\",\n",
" }\n",
")\n",
"\n",
"X = pd.concat([X1, X2], ignore_index=True, sort=False)\n",
"print(\"Simulated dataset contains {} rows \\n\".format(X.shape[0]))\n",
"X.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we will generate 8 backtesting folds with backtesting period of 7 days and with the same forecasting horizon. We will add the column \"backtest_iteration\", which will identify the backtesting period by the last training date."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"offset_type = \"7D\"\n",
"NUMBER_OF_BACKTESTS = 8 # number of train/test sets to generate\n",
"\n",
"dfs_train = []\n",
"dfs_test = []\n",
"for ts_id, df_one in X.groupby(TIME_SERIES_ID_COLNAME):\n",
"\n",
" data_end = df_one[TIME_COLNAME].max()\n",
"\n",
" for i in range(NUMBER_OF_BACKTESTS):\n",
" train_cutoff_date = data_end - to_offset(offset_type)\n",
" df_one = df_one.copy()\n",
" df_one[\"backtest_iteration\"] = \"iteration_\" + str(train_cutoff_date)\n",
" train = df_one[df_one[TIME_COLNAME] <= train_cutoff_date]\n",
" test = df_one[\n",
" (df_one[TIME_COLNAME] > train_cutoff_date)\n",
" & (df_one[TIME_COLNAME] <= data_end)\n",
" ]\n",
" data_end = train[TIME_COLNAME].max()\n",
" dfs_train.append(train)\n",
" dfs_test.append(test)\n",
"\n",
"X_train = pd.concat(dfs_train, sort=False, ignore_index=True)\n",
"X_test = pd.concat(dfs_test, sort=False, ignore_index=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 2.2 Create the Tabular Data Set.\n",
"\n",
"A Datastore is a place where data can be stored that is then made accessible to a compute either by means of mounting or copying the data to the compute target.\n",
"\n",
"Please refer to [Datastore](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.datastore(class)?view=azure-ml-py) documentation on how to access data from Datastore.\n",
"\n",
"In this next step, we will upload the data and create a TabularDataset."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.data.dataset_factory import TabularDatasetFactory\n",
"\n",
"ds = ws.get_default_datastore()\n",
"# Upload saved data to the default data store.\n",
"train_data = TabularDatasetFactory.register_pandas_dataframe(\n",
" X_train, target=(ds, \"data_mm\"), name=\"data_train\"\n",
")\n",
"test_data = TabularDatasetFactory.register_pandas_dataframe(\n",
" X_test, target=(ds, \"data_mm\"), name=\"data_test\"\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 3.0 Build the training pipeline\n",
"Now that the dataset, WorkSpace, and datastore are set up, we can put together a pipeline for training.\n",
"\n",
"> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Choose a compute target\n",
"\n",
"You will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.\n",
"\n",
"\\*\\*Creation of AmlCompute takes approximately 5 minutes.**\n",
"\n",
"If the AmlCompute with that name is already in your workspace this code will skip the creation process. As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this [article](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-manage-quotas) on the default limits and how to request more quota."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1613007037308
}
},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"\n",
"# Name your cluster\n",
"compute_name = \"backtest-mm\"\n",
"\n",
"\n",
"if compute_name in ws.compute_targets:\n",
" compute_target = ws.compute_targets[compute_name]\n",
" if compute_target and type(compute_target) is AmlCompute:\n",
" print(\"Found compute target: \" + compute_name)\n",
"else:\n",
" print(\"Creating a new compute target...\")\n",
" provisioning_config = AmlCompute.provisioning_configuration(\n",
" vm_size=\"STANDARD_DS12_V2\", max_nodes=6\n",
" )\n",
" # Create the compute target\n",
" compute_target = ComputeTarget.create(ws, compute_name, provisioning_config)\n",
"\n",
" # Can poll for a minimum number of nodes and for a specific timeout.\n",
" # If no min node count is provided it will use the scale settings for the cluster\n",
" compute_target.wait_for_completion(\n",
" show_output=True, min_node_count=None, timeout_in_minutes=20\n",
" )\n",
"\n",
" # For a more detailed view of current cluster status, use the 'status' property\n",
" print(compute_target.status.serialize())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Set up training parameters\n",
"\n",
"We need to provide ``ForecastingParameters``, ``AutoMLConfig`` and ``ManyModelsTrainParameters`` objects. For the forecasting task we also need to define several settings including the name of the time column, the maximum forecast horizon, and the partition column name(s) definition.\n",
"\n",
"#### ``ForecastingParameters`` arguments\n",
"| Property | Description|\n",
"| :--------------- | :------------------- |\n",
"| **forecast_horizon** | The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly). Periods are inferred from your data. |\n",
"| **time_column_name** | The name of your time column. |\n",
"| **time_series_id_column_names** | The column names used to uniquely identify timeseries in data that has multiple rows with the same timestamp. |\n",
"| **cv_step_size** | Number of periods between two consecutive cross-validation folds. The default value is \\\"auto\\\", in which case AutoMl determines the cross-validation step size automatically, if a validation set is not provided. Or users could specify an integer value. |\n",
"\n",
"#### ``AutoMLConfig`` arguments\n",
"| Property | Description|\n",
"| :--------------- | :------------------- |\n",
"| **task** | forecasting |\n",
"| **primary_metric** | This is the metric that you want to optimize.<br> Forecasting supports the following primary metrics <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i> |\n",
"| **blocked_models** | Blocked models won't be used by AutoML. |\n",
"| **iteration_timeout_minutes** | Maximum amount of time in minutes that the model can train. This is optional but provides customers with greater control on exit criteria. |\n",
"| **iterations** | Number of models to train. This is optional but provides customers with greater control on exit criteria. |\n",
"| **experiment_timeout_hours** | Maximum amount of time in hours that each experiment can take before it terminates. This is optional but provides customers with greater control on exit criteria. **It does not control the overall timeout for the pipeline run, instead controls the timeout for each training run per partitioned time series.** |\n",
"| **label_column_name** | The name of the label column. |\n",
"| **n_cross_validations** | Number of cross validation splits. The default value is \\\"auto\\\", in which case AutoMl determines the number of cross-validations automatically, if a validation set is not provided. Or users could specify an integer value. Rolling Origin Validation is used to split time-series in a temporally consistent way. |\n",
"| **enable_early_stopping** | Flag to enable early termination if the primary metric is no longer improving. |\n",
"| **enable_engineered_explanations** | Engineered feature explanations will be downloaded if enable_engineered_explanations flag is set to True. By default it is set to False to save storage space. |\n",
"| **track_child_runs** | Flag to disable tracking of child runs. Only best run is tracked if the flag is set to False (this includes the model and metrics of the run). |\n",
"| **pipeline_fetch_max_batch_size** | Determines how many pipelines (training algorithms) to fetch at a time for training, this helps reduce throttling when training at large scale. |\n",
"\n",
"\n",
"#### ``ManyModelsTrainParameters`` arguments\n",
"| Property | Description|\n",
"| :--------------- | :------------------- |\n",
"| **automl_settings** | The ``AutoMLConfig`` object defined above. |\n",
"| **partition_column_names** | The names of columns used to group your models. For timeseries, the groups must not split up individual time-series. That is, each group must contain one or more whole time-series. |"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1613007061544
}
},
"outputs": [],
"source": [
"from azureml.train.automl.runtime._many_models.many_models_parameters import (\n",
" ManyModelsTrainParameters,\n",
")\n",
"from azureml.automl.core.forecasting_parameters import ForecastingParameters\n",
"from azureml.train.automl.automlconfig import AutoMLConfig\n",
"\n",
"partition_column_names = [TIME_SERIES_ID_COLNAME, \"backtest_iteration\"]\n",
"\n",
"forecasting_parameters = ForecastingParameters(\n",
" time_column_name=TIME_COLNAME,\n",
" forecast_horizon=6,\n",
" time_series_id_column_names=partition_column_names,\n",
" cv_step_size=\"auto\",\n",
")\n",
"\n",
"automl_settings = AutoMLConfig(\n",
" task=\"forecasting\",\n",
" primary_metric=\"normalized_root_mean_squared_error\",\n",
" iteration_timeout_minutes=10,\n",
" iterations=15,\n",
" experiment_timeout_hours=0.25,\n",
" label_column_name=TARGET_COLNAME,\n",
" n_cross_validations=\"auto\", # Feel free to set to a small integer (>=2) if runtime is an issue.\n",
" track_child_runs=False,\n",
" forecasting_parameters=forecasting_parameters,\n",
")\n",
"\n",
"\n",
"mm_paramters = ManyModelsTrainParameters(\n",
" automl_settings=automl_settings, partition_column_names=partition_column_names\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Set up many models pipeline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Parallel run step is leveraged to train multiple models at once. To configure the ParallelRunConfig you will need to determine the appropriate number of workers and nodes for your use case. The process_count_per_node is based off the number of cores of the compute VM. The node_count will determine the number of master nodes to use, increasing the node count will speed up the training process.\n",
"\n",
"| Property | Description|\n",
"| :--------------- | :------------------- |\n",
"| **experiment** | The experiment used for training. |\n",
"| **train_data** | The file dataset to be used as input to the training run. |\n",
"| **node_count** | The number of compute nodes to be used for running the user script. We recommend to start with 3 and increase the node_count if the training time is taking too long. |\n",
"| **process_count_per_node** | Process count per node, we recommend 2:1 ratio for number of cores: number of processes per node. eg. If node has 16 cores then configure 8 or less process count per node or optimal performance. |\n",
"| **train_pipeline_parameters** | The set of configuration parameters defined in the previous section. |\n",
"| **run_invocation_timeout** | Maximum amount of time in seconds that the ``ParallelRunStep`` class is allowed. This is optional but provides customers with greater control on exit criteria. This must be greater than ``experiment_timeout_hours`` by at least 300 seconds. |\n",
"\n",
"Calling this method will create a new aggregated dataset which is generated dynamically on pipeline execution.\n",
"\n",
"**Note**: Total time taken for the **training step** in the pipeline to complete = $ \\frac{t}{ p \\times n } \\times ts $\n",
"where,\n",
"- $ t $ is time taken for training one partition (can be viewed in the training logs)\n",
"- $ p $ is ``process_count_per_node``\n",
"- $ n $ is ``node_count``\n",
"- $ ts $ is total number of partitions in time series based on ``partition_column_names``"
]
},
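{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the formula concrete, the next cell is a small illustrative calculation (a sketch only, not part of the pipeline); the time per partition is an assumed value that you would read from your own training logs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sketch only: estimate the training step duration from the formula above.\n",
"t = 5  # assumed minutes to train one partition (read this from the training logs)\n",
"p = 2  # process_count_per_node, as configured below\n",
"n = 2  # node_count, as configured below\n",
"ts = 16  # partitions: 2 time series IDs x 8 backtest iterations\n",
"print(f\"Estimated training step time: {t / (p * n) * ts:.0f} minutes\")"
]
},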
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.contrib.automl.pipeline.steps import AutoMLPipelineBuilder\n",
"\n",
"\n",
"training_pipeline_steps = AutoMLPipelineBuilder.get_many_models_train_steps(\n",
" experiment=experiment,\n",
" train_data=train_data,\n",
" compute_target=compute_target,\n",
" node_count=2,\n",
" process_count_per_node=2,\n",
" run_invocation_timeout=1200,\n",
" train_pipeline_parameters=mm_paramters,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.pipeline.core import Pipeline\n",
"\n",
"training_pipeline = Pipeline(ws, steps=training_pipeline_steps)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Submit the pipeline to run\n",
"Next we submit our pipeline to run. The whole training pipeline takes about 20 minutes using a STANDARD_DS12_V2 VM with our current ParallelRunConfig setting."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"training_run = experiment.submit(training_pipeline)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"training_run.wait_for_completion(show_output=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Check the run status, if training_run is in completed state, continue to next section. Otherwise, check the portal for failures."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 4.0 Backtesting\n",
"Now that we selected the best AutoML model for each backtest fold, we will use these models to generate the forecasts and compare with the actuals."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Set up output dataset for inference data\n",
"Output of inference can be represented as [OutputFileDatasetConfig](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.output_dataset_config.outputdatasetconfig?view=azure-ml-py) object and OutputFileDatasetConfig can be registered as a dataset. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.data import OutputFileDatasetConfig\n",
"\n",
"output_inference_data_ds = OutputFileDatasetConfig(\n",
" name=\"many_models_inference_output\",\n",
" destination=(dstore, \"backtesting/inference_data/\"),\n",
").register_on_complete(name=\"backtesting_data_ds\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For many models we need to provide the ManyModelsInferenceParameters object.\n",
"\n",
"#### ``ManyModelsInferenceParameters`` arguments\n",
"| Property | Description|\n",
"| :--------------- | :------------------- |\n",
"| **partition_column_names** | List of column names that identifies groups. |\n",
"| **target_column_name** | \\[Optional] Column name only if the inference dataset has the target. |\n",
"| **time_column_name** | \\[Optional] Time column name only if it is timeseries. |\n",
"| **inference_type** | \\[Optional] Which inference method to use on the model. Possible values are 'forecast', 'predict_proba', and 'predict'. |\n",
"| **forecast_mode** | \\[Optional] The type of forecast to be used, either 'rolling' or 'recursive'; defaults to 'recursive'. |\n",
"| **step** | \\[Optional] Number of periods to advance the forecasting window in each iteration **(for rolling forecast only)**; defaults to 1. |\n",
"\n",
"#### ``get_many_models_batch_inference_steps`` arguments\n",
"| Property | Description|\n",
"| :--------------- | :------------------- |\n",
"| **experiment** | The experiment used for inference run. |\n",
"| **inference_data** | The data to use for inferencing. It should be the same schema as used for training.\n",
"| **compute_target** | The compute target that runs the inference pipeline. |\n",
"| **node_count** | The number of compute nodes to be used for running the user script. We recommend to start with the number of cores per node (varies by compute sku). |\n",
"| **process_count_per_node** | \\[Optional] The number of processes per node. By default it's 2 (should be at most half of the number of cores in a single node of the compute cluster that will be used for the experiment).\n",
"| **inference_pipeline_parameters** | \\[Optional] The ``ManyModelsInferenceParameters`` object defined above. |\n",
"| **append_row_file_name** | \\[Optional] The name of the output file (optional, default value is 'parallel_run_step.txt'). Supports 'txt' and 'csv' file extension. A 'txt' file extension generates the output in 'txt' format with space as separator without column names. A 'csv' file extension generates the output in 'csv' format with comma as separator and with column names. |\n",
"| **train_run_id** | \\[Optional] The run id of the **training pipeline**. By default it is the latest successful training pipeline run in the experiment. |\n",
"| **train_experiment_name** | \\[Optional] The train experiment that contains the train pipeline. This one is only needed when the train pipeline is not in the same experiement as the inference pipeline. |\n",
"| **run_invocation_timeout** | \\[Optional] Maximum amount of time in seconds that the ``ParallelRunStep`` class is allowed. This is optional but provides customers with greater control on exit criteria. |\n",
"| **output_datastore** | \\[Optional] The ``Datastore`` or ``OutputDatasetConfig`` to be used for output. If specified any pipeline output will be written to that location. If unspecified the default datastore will be used. |\n",
"| **arguments** | \\[Optional] Arguments to be passed to inference script. Possible argument is '--forecast_quantiles' followed by quantile values. |"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.contrib.automl.pipeline.steps import AutoMLPipelineBuilder\n",
"from azureml.train.automl.runtime._many_models.many_models_parameters import (\n",
" ManyModelsInferenceParameters,\n",
")\n",
"\n",
"mm_parameters = ManyModelsInferenceParameters(\n",
" partition_column_names=partition_column_names,\n",
" time_column_name=TIME_COLNAME,\n",
" target_column_name=TARGET_COLNAME,\n",
")\n",
"\n",
"output_file_name = \"parallel_run_step.csv\"\n",
"\n",
"inference_steps = AutoMLPipelineBuilder.get_many_models_batch_inference_steps(\n",
" experiment=experiment,\n",
" inference_data=test_data,\n",
" node_count=2,\n",
" process_count_per_node=2,\n",
" compute_target=compute_target,\n",
" run_invocation_timeout=300,\n",
" output_datastore=output_inference_data_ds,\n",
" train_run_id=training_run.id,\n",
" train_experiment_name=training_run.experiment.name,\n",
" inference_pipeline_parameters=mm_parameters,\n",
" append_row_file_name=output_file_name,\n",
")"
]
},
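{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a sketch only (not used by the rest of this notebook): the parameters table above also documents a rolling evaluation mode, configured via ``forecast_mode`` and ``step``. Verify the exact behavior against your SDK version before relying on it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: a rolling evaluation instead of the default recursive forecast.\n",
"mm_parameters_rolling = ManyModelsInferenceParameters(\n",
"    partition_column_names=partition_column_names,\n",
"    time_column_name=TIME_COLNAME,\n",
"    target_column_name=TARGET_COLNAME,\n",
"    forecast_mode=\"rolling\",\n",
"    step=1,  # advance the rolling window by one period per iteration\n",
")"
]
},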
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.pipeline.core import Pipeline\n",
"\n",
"inference_pipeline = Pipeline(ws, steps=inference_steps)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"inference_run = experiment.submit(inference_pipeline)\n",
"inference_run.wait_for_completion(show_output=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 5.0 Retrieve results and calculate metrics\n",
"\n",
"The pipeline returns one file with the predictions for each times series ID and outputs the result to the forecasting_output Blob container. The details of the blob container is listed in 'forecasting_output.txt' under Outputs+logs. \n",
"\n",
"The next code snippet does the following:\n",
"1. Downloads the contents of the output folder that is passed in the parallel run step \n",
"2. Reads the parallel_run_step.txt file that has the predictions as pandas dataframe \n",
"3. Saves the table in csv format and \n",
"4. Displays the top 10 rows of the predictions"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.contrib.automl.pipeline.steps.utilities import get_output_from_mm_pipeline\n",
"\n",
"PREDICTION_COLNAME = \"Predictions\"\n",
"forecasting_results_name = \"forecasting_results\"\n",
"forecasting_output_name = \"many_models_inference_output\"\n",
"forecast_file = get_output_from_mm_pipeline(\n",
" inference_run, forecasting_results_name, forecasting_output_name, output_file_name\n",
")\n",
"df = pd.read_csv(forecast_file, parse_dates=[0])\n",
"print(\n",
" \"Prediction has \", df.shape[0], \" rows. Here the first 10 rows are being displayed.\"\n",
")\n",
"# Save the csv file to read it in the next step.\n",
"df.rename(\n",
" columns={TARGET_COLNAME: \"actual_level\", PREDICTION_COLNAME: \"predicted_level\"},\n",
" inplace=True,\n",
")\n",
"df.to_csv(os.path.join(forecasting_results_name, \"forecast.csv\"), index=False)\n",
"df.head(10)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## View metrics\n",
"We will read in the obtained results and run the helper script, which will generate metrics and create the plots of predicted versus actual values."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from assets.score import calculate_scores_and_build_plots\n",
"\n",
"backtesting_results = \"backtesting_mm_results\"\n",
"os.makedirs(backtesting_results, exist_ok=True)\n",
"calculate_scores_and_build_plots(\n",
" forecasting_results_name,\n",
" backtesting_results,\n",
" automl_settings.as_serializable_dict(),\n",
")\n",
"pd.DataFrame({\"File\": os.listdir(backtesting_results)})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The directory contains a set of files with results:\n",
"- forecast.csv contains forecasts for all backtest iterations. The backtest_iteration column contains iteration identifier with the last training date as a suffix\n",
"- scores.csv contains all metrics. If data set contains several time series, the metrics are given for all combinations of time series id and iterations, as well as scores for all iterations and time series ids, which are marked as \"all_sets\"\n",
"- plots_fcst_vs_actual.pdf contains the predictions vs forecast plots for each iteration and, eash time series is saved as separate plot.\n",
"\n",
"For demonstration purposes we will display the table of metrics for one of the time series with ID \"ts0\". We will create the utility function, which will build the table with metrics."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def get_metrics_for_ts(all_metrics, ts):\n",
" \"\"\"\n",
" Get the metrics for the time series with ID ts and return it as pandas data frame.\n",
"\n",
" :param all_metrics: The table with all the metrics.\n",
" :param ts: The ID of a time series of interest.\n",
" :return: The pandas DataFrame with metrics for one time series.\n",
" \"\"\"\n",
" results_df = None\n",
" for ts_id, one_series in all_metrics.groupby(\"time_series_id\"):\n",
" if not ts_id.startswith(ts):\n",
" continue\n",
" iteration = ts_id.split(\"|\")[-1]\n",
" df = one_series[[\"metric_name\", \"metric\"]]\n",
" df.rename({\"metric\": iteration}, axis=1, inplace=True)\n",
" df.set_index(\"metric_name\", inplace=True)\n",
" if results_df is None:\n",
" results_df = df\n",
" else:\n",
" results_df = results_df.merge(\n",
" df, how=\"inner\", left_index=True, right_index=True\n",
" )\n",
" results_df.sort_index(axis=1, inplace=True)\n",
" return results_df\n",
"\n",
"\n",
"metrics_df = pd.read_csv(os.path.join(backtesting_results, \"scores.csv\"))\n",
"ts = \"ts_A\"\n",
"get_metrics_for_ts(metrics_df, ts)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Forecast vs actuals plots."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from IPython.display import IFrame\n",
"\n",
"IFrame(\"./backtesting_mm_results/plots_fcst_vs_actual.pdf\", width=800, height=300)"
]
}
],
"metadata": {
"authors": [
{
"name": "jialiu"
}
],
"categories": [
"how-to-use-azureml",
"automated-machine-learning"
],
"kernelspec": {
"display_name": "Python 3.8 - AzureML",
"language": "python",
"name": "python38-azureml"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.5"
},
"vscode": {
"interpreter": {
"hash": "6bd77c88278e012ef31757c15997a7bea8c943977c43d6909403c00ae11d43ca"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}

Binary file not shown (new image, 22 KiB).


@@ -0,0 +1,45 @@
import argparse
import os
import pandas as pd
from azureml.core import Run
# Parse the arguments.
args = {
"step_size": "--step-size",
"step_number": "--step-number",
"time_column_name": "--time-column-name",
"time_series_id_column_names": "--time-series-id-column-names",
"out_dir": "--output-dir",
}
parser = argparse.ArgumentParser("Parsing input arguments.")
for argname, arg in args.items():
parser.add_argument(arg, dest=argname, required=True)
parsed_args, _ = parser.parse_known_args()
step_number = int(parsed_args.step_number)
step_size = int(parsed_args.step_size)
# Create the working directory to store the temporary csv files.
working_dir = parsed_args.out_dir
os.makedirs(working_dir, exist_ok=True)
# Set input and output
script_run = Run.get_context()
input_dataset = script_run.input_datasets["training_data"]
X_train = input_dataset.to_pandas_dataframe()
# Split the data.
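# For example (assumed values, for illustration): with step_size=7 and
# step_number=3, fold 0 keeps all rows, fold 1 drops the last 7 rows of each
# series, and fold 2 drops the last 14, giving progressively earlier cutoffs.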
for i in range(step_number):
file_name = os.path.join(working_dir, "backtest_{}.csv".format(i))
if parsed_args.time_series_id_column_names:
dfs = []
for _, one_series in X_train.groupby([parsed_args.time_series_id_column_names]):
one_series = one_series.sort_values(
by=[parsed_args.time_column_name], inplace=False
)
dfs.append(one_series.iloc[: len(one_series) - step_size * i])
pd.concat(dfs, sort=False, ignore_index=True).to_csv(file_name, index=False)
else:
X_train.sort_values(by=[parsed_args.time_column_name], inplace=True)
X_train.iloc[: len(X_train) - step_size * i].to_csv(file_name, index=False)


@@ -0,0 +1,178 @@
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
"""The batch script needed for back testing of models using PRS."""
import argparse
import json
import logging
import os
import pickle
import re
import pandas as pd
from azureml.core.experiment import Experiment
from azureml.core.model import Model
from azureml.core.run import Run
from azureml.automl.core.shared import constants
from azureml.automl.runtime.shared.score import scoring
from azureml.train.automl import AutoMLConfig
RE_INVALID_SYMBOLS = re.compile(r"[:\s]")
model_name = None
target_column_name = None
current_step_run = None
output_dir = None
logger = logging.getLogger(__name__)
def _get_automl_settings():
with open(
os.path.join(
os.path.dirname(os.path.realpath(__file__)), "automl_settings.json"
)
) as json_file:
return json.load(json_file)
def init():
global model_name
global target_column_name
global output_dir
global automl_settings
global model_uid
global forecast_quantiles
logger.info("Initialization of the run.")
parser = argparse.ArgumentParser("Parsing input arguments.")
parser.add_argument("--output-dir", dest="out", required=True)
parser.add_argument("--model-name", dest="model", default=None)
parser.add_argument("--model-uid", dest="model_uid", default=None)
parser.add_argument(
"--forecast_quantiles",
nargs="*",
type=float,
help="forecast quantiles list",
default=None,
)
parsed_args, _ = parser.parse_known_args()
model_name = parsed_args.model
automl_settings = _get_automl_settings()
target_column_name = automl_settings.get("label_column_name")
output_dir = parsed_args.out
model_uid = parsed_args.model_uid
forecast_quantiles = parsed_args.forecast_quantiles
os.makedirs(output_dir, exist_ok=True)
os.environ["AUTOML_IGNORE_PACKAGE_VERSION_INCOMPATIBILITIES".lower()] = "True"
def get_run():
global current_step_run
if current_step_run is None:
current_step_run = Run.get_context()
return current_step_run
def run_backtest(data_input_name: str, file_name: str, experiment: Experiment):
"""Re-train the model and return metrics."""
data_input = pd.read_csv(
data_input_name,
parse_dates=[automl_settings[constants.TimeSeries.TIME_COLUMN_NAME]],
)
print(data_input.head())
if not automl_settings.get(constants.TimeSeries.GRAIN_COLUMN_NAMES):
# There are no grains (time series ID columns).
data_input.sort_values(
[automl_settings[constants.TimeSeries.TIME_COLUMN_NAME]], inplace=True
)
X_train = data_input.iloc[: -automl_settings["max_horizon"]]
y_train = X_train.pop(target_column_name).values
X_test = data_input.iloc[-automl_settings["max_horizon"] :]
y_test = X_test.pop(target_column_name).values
else:
# The data contain grains.
dfs_train = []
dfs_test = []
for _, one_series in data_input.groupby(
automl_settings.get(constants.TimeSeries.GRAIN_COLUMN_NAMES)
):
one_series.sort_values(
[automl_settings[constants.TimeSeries.TIME_COLUMN_NAME]], inplace=True
)
dfs_train.append(one_series.iloc[: -automl_settings["max_horizon"]])
dfs_test.append(one_series.iloc[-automl_settings["max_horizon"] :])
X_train = pd.concat(dfs_train, sort=False, ignore_index=True)
y_train = X_train.pop(target_column_name).values
X_test = pd.concat(dfs_test, sort=False, ignore_index=True)
y_test = X_test.pop(target_column_name).values
last_training_date = str(
X_train[automl_settings[constants.TimeSeries.TIME_COLUMN_NAME]].max()
)
if file_name:
# If a file name is provided, load the model and retrain it on the backtest data.
with open(file_name, "rb") as fp:
fitted_model = pickle.load(fp)
fitted_model.fit(X_train, y_train)
else:
# We will run the experiment and select the best model.
X_train[target_column_name] = y_train
automl_config = AutoMLConfig(training_data=X_train, **automl_settings)
automl_run = current_step_run.submit_child(automl_config, show_output=True)
best_run, fitted_model = automl_run.get_output()
# As we have generated models, we need to register them for future use.
description = "Backtest model example"
tags = {"last_training_date": last_training_date, "experiment": experiment.name}
if model_uid:
tags["model_uid"] = model_uid
automl_run.register_model(
model_name=best_run.properties["model_name"],
description=description,
tags=tags,
)
print(f"The model {best_run.properties['model_name']} was registered.")
# The 0.5 quantile is the point forecast we treat as the prediction, so make sure it is always requested.
if forecast_quantiles:
if 0.5 not in forecast_quantiles:
forecast_quantiles.append(0.5)
fitted_model.quantiles = forecast_quantiles
x_pred = fitted_model.forecast_quantiles(X_test)
x_pred["actual_level"] = y_test
x_pred["backtest_iteration"] = f"iteration_{last_training_date}"
x_pred.rename({0.5: "predicted_level"}, axis=1, inplace=True)
date_safe = RE_INVALID_SYMBOLS.sub("_", last_training_date)
x_pred.to_csv(os.path.join(output_dir, f"iteration_{date_safe}.csv"), index=False)
return x_pred
def run(input_files):
"""Run the script"""
logger.info("Running mini batch.")
ws = get_run().experiment.workspace
file_name = None
if model_name:
models = Model.list(ws, name=model_name)
cloud_model = None
if models:
for one_mod in models:
if cloud_model is None or one_mod.version > cloud_model.version:
logger.info(
"Using existing model from the workspace. Model version: {}".format(
one_mod.version
)
)
cloud_model = one_mod
file_name = cloud_model.download(exist_ok=True)
forecasts = []
logger.info("Running backtest.")
for input_file in input_files:
forecasts.append(run_backtest(input_file, file_name, get_run().experiment))
return pd.concat(forecasts)


@@ -0,0 +1,171 @@
from typing import Any, Dict, Optional, List
import argparse
import json
import os
import re
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages
from azureml.automl.core.shared import constants
from azureml.automl.core.shared.types import GrainType
from azureml.automl.runtime.shared.score import scoring
GRAIN = "time_series_id"
BACKTEST_ITER = "backtest_iteration"
ACTUALS = "actual_level"
PREDICTIONS = "predicted_level"
ALL_GRAINS = "all_sets"
FORECASTS_FILE = "forecast.csv"
SCORES_FILE = "scores.csv"
PLOTS_FILE = "plots_fcst_vs_actual.pdf"
RE_INVALID_SYMBOLS = re.compile("[: ]")
def _compute_metrics(df: pd.DataFrame, metrics: List[str]):
"""
Compute metrics for one data frame.
:param df: The data frame which contains actual_level and predicted_level columns.
:param metrics: The list of metric names to compute.
:return: The data frame with two columns - metric_name and metric.
"""
scores = scoring.score_regression(
y_test=df[ACTUALS], y_pred=df[PREDICTIONS], metrics=metrics
)
metrics_df = pd.DataFrame(list(scores.items()), columns=["metric_name", "metric"])
metrics_df.sort_values(["metric_name"], inplace=True)
metrics_df.reset_index(drop=True, inplace=True)
return metrics_df
def _format_grain_name(grain: GrainType) -> str:
"""
Convert grain name to string.
:param grain: the grain name.
:return: the string representation of the given grain.
"""
if not isinstance(grain, tuple) and not isinstance(grain, list):
return str(grain)
grain = list(map(str, grain))
return "|".join(grain)
def compute_all_metrics(
fcst_df: pd.DataFrame,
ts_id_colnames: List[str],
metric_names: Optional[List[str]] = None,
):
"""
Calculate metrics per grain.
:param fcst_df: forecast data frame. Must contain 2 columns: 'actual_level' and 'predicted_level'
:param ts_id_colnames: list of grain column names (may be empty)
:param metric_names: (optional) the list of metric names to compute; defaults to the scalar regression set
:return: the data frame with metrics per grain, plus overall metrics marked as 'all_sets'
"""
if not metric_names:
metric_names = list(constants.Metric.SCALAR_REGRESSION_SET)
if ts_id_colnames is None:
ts_id_colnames = []
metrics_list = []
if ts_id_colnames:
for grain, df in fcst_df.groupby(ts_id_colnames):
one_grain_metrics_df = _compute_metrics(df, metric_names)
one_grain_metrics_df[GRAIN] = _format_grain_name(grain)
metrics_list.append(one_grain_metrics_df)
# overall metrics
one_grain_metrics_df = _compute_metrics(fcst_df, metric_names)
one_grain_metrics_df[GRAIN] = ALL_GRAINS
metrics_list.append(one_grain_metrics_df)
# collect into a data frame
return pd.concat(metrics_list)
def _draw_one_plot(
df: pd.DataFrame,
time_column_name: str,
grain_column_names: List[str],
pdf: PdfPages,
) -> None:
"""
Draw the single plot.
:param df: The data frame with the data to build plot.
:param time_column_name: The name of a time column.
:param grain_column_names: The name of grain columns.
:param pdf: The pdf backend used to render the plot.
"""
fig, _ = plt.subplots(figsize=(20, 10))
df = df.set_index(time_column_name)
plt.plot(df[[ACTUALS, PREDICTIONS]])
plt.xticks(rotation=45)
iteration = df[BACKTEST_ITER].iloc[0]
if grain_column_names:
grain_name = [df[grain].iloc[0] for grain in grain_column_names]
plt.title(f"Time series ID: {_format_grain_name(grain_name)} {iteration}")
plt.legend(["actual", "forecast"])
pdf.savefig(fig)
plt.close(fig)
def calculate_scores_and_build_plots(
input_dir: str, output_dir: str, automl_settings: Dict[str, Any]
):
os.makedirs(output_dir, exist_ok=True)
grains = automl_settings.get(constants.TimeSeries.GRAIN_COLUMN_NAMES)
time_column_name = automl_settings.get(constants.TimeSeries.TIME_COLUMN_NAME)
if grains is None:
grains = []
if isinstance(grains, str):
grains = [grains]
while BACKTEST_ITER in grains:
grains.remove(BACKTEST_ITER)
dfs = []
for fle in os.listdir(input_dir):
file_path = os.path.join(input_dir, fle)
if os.path.isfile(file_path) and file_path.endswith(".csv"):
df_iter = pd.read_csv(file_path, parse_dates=[time_column_name])
for _, iteration in df_iter.groupby(BACKTEST_ITER):
dfs.append(iteration)
forecast_df = pd.concat(dfs, sort=False, ignore_index=True)
# To make sure plots are in order, sort the predictions by grain and iteration.
ts_index = grains + [BACKTEST_ITER]
forecast_df.sort_values(by=ts_index, inplace=True)
pdf = PdfPages(os.path.join(output_dir, PLOTS_FILE))
for _, one_forecast in forecast_df.groupby(ts_index):
_draw_one_plot(one_forecast, time_column_name, grains, pdf)
pdf.close()
forecast_df.to_csv(os.path.join(output_dir, FORECASTS_FILE), index=False)
# Remove np.NaN and np.inf from the prediction and actuals data.
forecast_df.replace([np.inf, -np.inf], np.nan, inplace=True)
forecast_df.dropna(subset=[ACTUALS, PREDICTIONS], inplace=True)
metrics = compute_all_metrics(forecast_df, grains + [BACKTEST_ITER])
metrics.to_csv(os.path.join(output_dir, SCORES_FILE), index=False)
if __name__ == "__main__":
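# Example invocation (paths are illustrative placeholders):
#   python score.py --forecasts <forecasts_dir> --output-dir <output_dir>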
args = {"forecasts": "--forecasts", "scores_out": "--output-dir"}
parser = argparse.ArgumentParser("Parsing input arguments.")
for argname, arg in args.items():
parser.add_argument(arg, dest=argname, required=True)
parsed_args, _ = parser.parse_known_args()
input_dir = parsed_args.forecasts
output_dir = parsed_args.scores_out
with open(
os.path.join(
os.path.dirname(os.path.realpath(__file__)), "automl_settings.json"
)
) as json_file:
automl_settings = json.load(json_file)
calculate_scores_and_build_plots(input_dir, output_dir, automl_settings)


@@ -0,0 +1,729 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License.\n",
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/auto-ml-forecasting-backtest-single-model.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated MachineLearning\n",
"_**The model backtesting**_\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"2. [Setup](#Setup)\n",
"3. [Data](#Data)\n",
"4. [Prepare remote compute and data.](#prepare_remote)\n",
"5. [Create the configuration for AutoML backtesting](#train)\n",
"6. [Backtest AutoML](#backtest_automl)\n",
"7. [View metrics](#Metrics)\n",
"8. [Backtest the best model](#backtest_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"Model backtesting is used to evaluate its performance on historical data. To do that we step back on the backtesting period by the data set several times and split the data to train and test sets. Then these data sets are used for training and evaluation of model.<br>\n",
"This notebook is intended to demonstrate backtesting on a single model, this is the best solution for small data sets with a few or one time series in it. For scenarios where we would like to choose the best AutoML model for every backtest iteration, please see [AutoML Forecasting Backtest Many Models Example](../forecasting-backtest-many-models/auto-ml-forecasting-backtest-many-models.ipynb) notebook.\n",
"![Backtesting](Backtesting.png)\n",
"This notebook demonstrates two ways of backtesting:\n",
"- AutoML backtesting: we will train separate AutoML models for historical data\n",
"- Model backtesting: from the first run we will select the best model trained on the most recent data, retrain it on the past data and evaluate."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import numpy as np\n",
"import pandas as pd\n",
"import shutil\n",
"\n",
"import azureml.core\n",
"from azureml.core import Experiment, Model, Workspace"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This notebook is compatible with Azure ML SDK version 1.35.1 or later."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As part of the setup you have already created a <b>Workspace</b>."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"output = {}\n",
"output[\"Subscription ID\"] = ws.subscription_id\n",
"output[\"Workspace\"] = ws.name\n",
"output[\"SKU\"] = ws.sku\n",
"output[\"Resource Group\"] = ws.resource_group\n",
"output[\"Location\"] = ws.location\n",
"output[\"SDK Version\"] = azureml.core.VERSION\n",
"pd.set_option(\"display.max_colwidth\", None)\n",
"outputDf = pd.DataFrame(data=output, index=[\"\"])\n",
"outputDf.T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Data\n",
"For the demonstration purposes we will simulate one year of daily data. To do this we need to specify the following parameters: time column name, time series ID column names and label column name. Our intention is to forecast for two weeks ahead."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"TIME_COLUMN_NAME = \"date\"\n",
"TIME_SERIES_ID_COLUMN_NAMES = \"time_series_id\"\n",
"LABEL_COLUMN_NAME = \"y\"\n",
"FORECAST_HORIZON = 14\n",
"FREQUENCY = \"D\"\n",
"\n",
"\n",
"def simulate_timeseries_data(\n",
" train_len: int,\n",
" test_len: int,\n",
" time_column_name: str,\n",
" target_column_name: str,\n",
" time_series_id_column_name: str,\n",
" time_series_number: int = 1,\n",
" freq: str = \"H\",\n",
"):\n",
" \"\"\"\n",
" Return the time series of designed length.\n",
"\n",
" :param train_len: The length of training data (one series).\n",
" :type train_len: int\n",
" :param test_len: The length of testing data (one series).\n",
" :type test_len: int\n",
" :param time_column_name: The desired name of a time column.\n",
" :type time_column_name: str\n",
" :param time_series_number: The number of time series in the data set.\n",
" :type time_series_number: int\n",
" :param freq: The frequency string representing pandas offset.\n",
" see https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html\n",
" :type freq: str\n",
" :returns: the tuple of train and test data sets.\n",
" :rtype: tuple\n",
"\n",
" \"\"\"\n",
" data_train = [] # type: List[pd.DataFrame]\n",
" data_test = [] # type: List[pd.DataFrame]\n",
" data_length = train_len + test_len\n",
" for i in range(time_series_number):\n",
" X = pd.DataFrame(\n",
" {\n",
" time_column_name: pd.date_range(\n",
" start=\"2000-01-01\", periods=data_length, freq=freq\n",
" ),\n",
" target_column_name: np.arange(data_length).astype(float)\n",
" + np.random.rand(data_length)\n",
" + i * 5,\n",
" \"ext_predictor\": np.asarray(range(42, 42 + data_length)),\n",
" time_series_id_column_name: np.repeat(\"ts{}\".format(i), data_length),\n",
" }\n",
" )\n",
" data_train.append(X[:train_len])\n",
" data_test.append(X[train_len:])\n",
" train = pd.concat(data_train)\n",
" label_train = train.pop(target_column_name).values\n",
" test = pd.concat(data_test)\n",
" label_test = test.pop(target_column_name).values\n",
" return train, label_train, test, label_test\n",
"\n",
"\n",
"n_test_periods = FORECAST_HORIZON\n",
"n_train_periods = 365\n",
"X_train, y_train, X_test, y_test = simulate_timeseries_data(\n",
" train_len=n_train_periods,\n",
" test_len=n_test_periods,\n",
" time_column_name=TIME_COLUMN_NAME,\n",
" target_column_name=LABEL_COLUMN_NAME,\n",
" time_series_id_column_name=TIME_SERIES_ID_COLUMN_NAMES,\n",
" time_series_number=2,\n",
" freq=FREQUENCY,\n",
")\n",
"X_train[LABEL_COLUMN_NAME] = y_train"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's see what the training data looks like."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"X_train.tail()"
]
},
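{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, we can also plot the simulated series. This is a minimal sketch, assuming matplotlib is available in the notebook environment; it is not required for the rest of the notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from matplotlib import pyplot as plt\n",
"\n",
"# Plot each simulated series separately so the per-series offset is visible.\n",
"for ts, one_series in X_train.groupby(TIME_SERIES_ID_COLUMN_NAMES):\n",
"    plt.plot(one_series[TIME_COLUMN_NAME], one_series[LABEL_COLUMN_NAME], label=ts)\n",
"plt.legend()\n",
"plt.title(\"Simulated training series\")\n",
"plt.show()"
]
},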
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Prepare remote compute and data. <a id=\"prepare_remote\"></a>\n",
"The [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace), is paired with the storage account, which contains the default data store. We will use it to upload the artificial data and create [tabular dataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.data.dataset_factory import TabularDatasetFactory\n",
"\n",
"ds = ws.get_default_datastore()\n",
"# Upload saved data to the default data store.\n",
"train_data = TabularDatasetFactory.register_pandas_dataframe(\n",
" X_train, target=(ds, \"data\"), name=\"data_backtest\"\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You will need to create a compute target for backtesting. In this [tutorial](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute), you create AmlCompute as your training compute resource.\n",
"\n",
"> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# Choose a name for your CPU cluster\n",
"amlcompute_cluster_name = \"backtest-cluster\"\n",
"\n",
"# Verify that cluster does not exist already\n",
"try:\n",
" compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)\n",
" print(\"Found existing cluster, use it.\")\n",
"except ComputeTargetException:\n",
" compute_config = AmlCompute.provisioning_configuration(\n",
" vm_size=\"STANDARD_DS12_V2\", max_nodes=6\n",
" )\n",
" compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)\n",
"\n",
"compute_target.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create the configuration for AutoML backtesting <a id=\"train\"></a>\n",
"\n",
"This dictionary defines the AutoML and many models settings. For this forecasting task we need to define several settings including the name of the time column, the maximum forecast horizon, and the partition column name definition.\n",
"\n",
"| Property | Description|\n",
"| :--------------- | :------------------- |\n",
"| **task** | forecasting |\n",
"| **primary_metric** | This is the metric that you want to optimize.<br> Forecasting supports the following primary metrics <br><i>normalized_root_mean_squared_error</i><br><i>normalized_mean_absolute_error</i> |\n",
"| **iteration_timeout_minutes** | Maximum amount of time in minutes that the model can train. This is optional but provides customers with greater control on exit criteria. |\n",
"| **iterations** | Number of models to train. This is optional but provides customers with greater control on exit criteria. |\n",
"| **experiment_timeout_hours** | Maximum amount of time in hours that the experiment can take before it terminates. This is optional but provides customers with greater control on exit criteria. |\n",
"| **label_column_name** | The name of the label column. |\n",
"| **max_horizon** | The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly). Periods are inferred from your data. |\n",
"| **n_cross_validations** | Number of cross validation splits. The default value is \"auto\", in which case AutoMl determines the number of cross-validations automatically, if a validation set is not provided. Or users could specify an integer value. Rolling Origin Validation is used to split time-series in a temporally consistent way. |\n",
"|**cv_step_size**|Number of periods between two consecutive cross-validation folds. The default value is \"auto\", in which case AutoMl determines the cross-validation step size automatically, if a validation set is not provided. Or users could specify an integer value.\n",
"| **time_column_name** | The name of your time column. |\n",
"| **grain_column_names** | The column names used to uniquely identify timeseries in data that has multiple rows with the same timestamp. |"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_settings = {\n",
" \"task\": \"forecasting\",\n",
" \"primary_metric\": \"normalized_root_mean_squared_error\",\n",
" \"iteration_timeout_minutes\": 10, # This needs to be changed based on the dataset. We ask customer to explore how long training is taking before settings this value\n",
" \"iterations\": 15,\n",
" \"experiment_timeout_hours\": 1, # This also needs to be changed based on the dataset. For larger data set this number needs to be bigger.\n",
" \"label_column_name\": LABEL_COLUMN_NAME,\n",
" \"n_cross_validations\": \"auto\", # Feel free to set to a small integer (>=2) if runtime is an issue.\n",
" \"cv_step_size\": \"auto\",\n",
" \"time_column_name\": TIME_COLUMN_NAME,\n",
" \"max_horizon\": FORECAST_HORIZON,\n",
" \"track_child_runs\": False,\n",
" \"grain_column_names\": TIME_SERIES_ID_COLUMN_NAMES,\n",
"}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Backtest AutoML <a id=\"backtest_automl\"></a>\n",
"First we set backtesting parameters: we will step back by 30 days and will make 5 such steps; for each step we will forecast for next two weeks."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# The number of periods to step back on each backtest iteration.\n",
"BACKTESTING_PERIOD = 30\n",
"# The number of times we will back test the model.\n",
"NUMBER_OF_BACKTESTS = 5"
]
},
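{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the fold layout concrete, the sketch below computes the last training date and forecast window for each backtest iteration under these settings. This is only an illustration of the intent; the actual splitting is performed by the pipeline's data split step."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Each iteration steps back BACKTESTING_PERIOD days from the end of the\n",
"# training data and forecasts the next FORECAST_HORIZON days.\n",
"last_date = X_train[TIME_COLUMN_NAME].max()\n",
"for i in range(NUMBER_OF_BACKTESTS):\n",
"    train_end = last_date - pd.Timedelta(days=i * BACKTESTING_PERIOD)\n",
"    print(\n",
"        f\"Iteration {i}: train through {train_end.date()}, \"\n",
"        f\"forecast {(train_end + pd.Timedelta(days=1)).date()} \"\n",
"        f\"to {(train_end + pd.Timedelta(days=FORECAST_HORIZON)).date()}\"\n",
"    )"
]
},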
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To train AutoML on backtesting folds we will use the [Azure Machine Learning pipeline](https://docs.microsoft.com/en-us/azure/machine-learning/concept-ml-pipelines). It will generate backtest folds, then train model for each of them and calculate the accuracy metrics. To run pipeline, you also need to create an <b>Experiment</b>. An Experiment corresponds to a prediction problem you are trying to solve (here, it is a forecasting), while a Run corresponds to a specific approach to the problem."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from uuid import uuid1\n",
"\n",
"from pipeline_helper import get_backtest_pipeline\n",
"\n",
"pipeline_exp = Experiment(ws, \"automl-backtesting\")\n",
"\n",
"# We will create the unique identifier to mark our models.\n",
"model_uid = str(uuid1())\n",
"\n",
"pipeline = get_backtest_pipeline(\n",
" experiment=pipeline_exp,\n",
" dataset=train_data,\n",
" # The STANDARD_DS12_V2 has 4 vCPU per node, we will set 2 process per node to be safe.\n",
" process_per_node=2,\n",
" # The maximum number of nodes for our compute is 6.\n",
" node_count=6,\n",
" compute_target=compute_target,\n",
" automl_settings=automl_settings,\n",
" step_size=BACKTESTING_PERIOD,\n",
" step_number=NUMBER_OF_BACKTESTS,\n",
" model_uid=model_uid,\n",
" forecast_quantiles=[0.025, 0.975], # Optional\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Run the pipeline and wait for results."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pipeline_run = pipeline_exp.submit(pipeline)\n",
"pipeline_run.wait_for_completion(show_output=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"After the run is complete, we can download the results. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"metrics_output = pipeline_run.get_pipeline_output(\"results\")\n",
"metrics_output.download(\"backtest_metrics\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## View metrics<a id=\"Metrics\"></a>\n",
"To distinguish these metrics from the model backtest, which we will obtain in the next section, we will move the directory with metrics out of the backtest_metrics and will remove the parent folder. We will create the utility function for that."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def copy_scoring_directory(new_name):\n",
" scores_path = os.path.join(\"backtest_metrics\", \"azureml\")\n",
" directory_list = [os.path.join(scores_path, d) for d in os.listdir(scores_path)]\n",
" latest_file = max(directory_list, key=os.path.getctime)\n",
" print(\n",
" f\"The output directory {latest_file} was created on {pd.Timestamp(os.path.getctime(latest_file), unit='s')} GMT.\"\n",
" )\n",
" shutil.move(os.path.join(latest_file, \"results\"), new_name)\n",
" shutil.rmtree(\"backtest_metrics\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Move the directory and list its contents."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"copy_scoring_directory(\"automl_backtest\")\n",
"pd.DataFrame({\"File\": os.listdir(\"automl_backtest\")})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The directory contains a set of files with results:\n",
"- forecast.csv contains forecasts for all backtest iterations. The backtest_iteration column contains iteration identifier with the last training date as a suffix\n",
"- scores.csv contains all metrics. If data set contains several time series, the metrics are given for all combinations of time series id and iterations, as well as scores for all iterations and time series id are marked as \"all_sets\"\n",
"- plots_fcst_vs_actual.pdf contains the predictions vs forecast plots for each iteration and time series.\n",
"\n",
"For demonstration purposes we will display the table of metrics for one of the time series with ID \"ts0\". Again, we will create the utility function, which will be re used in model backtesting."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def get_metrics_for_ts(all_metrics, ts):\n",
" \"\"\"\n",
" Get the metrics for the time series with ID ts and return it as pandas data frame.\n",
"\n",
" :param all_metrics: The table with all the metrics.\n",
" :param ts: The ID of a time series of interest.\n",
" :return: The pandas DataFrame with metrics for one time series.\n",
" \"\"\"\n",
" results_df = None\n",
" for ts_id, one_series in all_metrics.groupby(\"time_series_id\"):\n",
" if not ts_id.startswith(ts):\n",
" continue\n",
" iteration = ts_id.split(\"|\")[-1]\n",
" df = one_series[[\"metric_name\", \"metric\"]]\n",
" df.rename({\"metric\": iteration}, axis=1, inplace=True)\n",
" df.set_index(\"metric_name\", inplace=True)\n",
" if results_df is None:\n",
" results_df = df\n",
" else:\n",
" results_df = results_df.merge(\n",
" df, how=\"inner\", left_index=True, right_index=True\n",
" )\n",
" results_df.sort_index(axis=1, inplace=True)\n",
" return results_df\n",
"\n",
"\n",
"metrics_df = pd.read_csv(os.path.join(\"automl_backtest\", \"scores.csv\"))\n",
"ts_id = \"ts0\"\n",
"get_metrics_for_ts(metrics_df, ts_id)"
]
},
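{
"cell_type": "markdown",
"metadata": {},
"source": [
"Besides scores.csv, we can also peek at the raw forecasts. The minimal sketch below loads forecast.csv and counts the rows produced by each backtest iteration; it assumes the backtest_iteration column described above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"forecast_df = pd.read_csv(os.path.join(\"automl_backtest\", \"forecast.csv\"))\n",
"forecast_df.groupby(\"backtest_iteration\").size()"
]
},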
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Forecast vs actuals plots."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from IPython.display import IFrame\n",
"\n",
"IFrame(\"./automl_backtest/plots_fcst_vs_actual.pdf\", width=800, height=300)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# <font color='blue'>Backtest the best model</font> <a id=\"backtest_model\"></a>\n",
"\n",
"For model backtesting we will use the same parameters we used to backtest AutoML. All the models, we have obtained in the previous run were registered in our workspace. To identify the model, each was assigned a tag with the last trainig date."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model_list = Model.list(ws, tags=[[\"experiment\", \"automl-backtesting\"]])\n",
"model_data = {\"name\": [], \"last_training_date\": []}\n",
"for model in model_list:\n",
" if (\n",
" \"last_training_date\" not in model.tags\n",
" or \"model_uid\" not in model.tags\n",
" or model.tags[\"model_uid\"] != model_uid\n",
" ):\n",
" continue\n",
" model_data[\"name\"].append(model.name)\n",
" model_data[\"last_training_date\"].append(\n",
" pd.Timestamp(model.tags[\"last_training_date\"])\n",
" )\n",
"df_models = pd.DataFrame(model_data)\n",
"df_models.sort_values([\"last_training_date\"], inplace=True)\n",
"df_models.reset_index(inplace=True, drop=True)\n",
"df_models"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We will backtest the model trained on the most recet data."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model_name = df_models[\"name\"].iloc[-1]\n",
"model_name"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrain the models.\n",
"Assemble the pipeline, which will retrain the best model from AutoML run on historical data."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pipeline_exp = Experiment(ws, \"model-backtesting\")\n",
"\n",
"pipeline = get_backtest_pipeline(\n",
" experiment=pipeline_exp,\n",
" dataset=train_data,\n",
" # The STANDARD_DS12_V2 has 4 vCPU per node, we will set 2 process per node to be safe.\n",
" process_per_node=2,\n",
" # The maximum number of nodes for our compute is 6.\n",
" node_count=6,\n",
" compute_target=compute_target,\n",
" automl_settings=automl_settings,\n",
" step_size=BACKTESTING_PERIOD,\n",
" step_number=NUMBER_OF_BACKTESTS,\n",
" model_name=model_name,\n",
" forecast_quantiles=[0.025, 0.975],\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Launch the backtesting pipeline."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pipeline_run = pipeline_exp.submit(pipeline)\n",
"pipeline_run.wait_for_completion(show_output=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The metrics are stored in the pipeline output named \"score\". The next code will download the table with metrics."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"metrics_output = pipeline_run.get_pipeline_output(\"results\")\n",
"metrics_output.download(\"backtest_metrics\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Again, we will copy the data files from the downloaded directory, but in this case we will call the folder \"model_backtest\"; it will contain the same files as the one for AutoML backtesting."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"copy_scoring_directory(\"model_backtest\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, we will display the metrics."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model_metrics_df = pd.read_csv(os.path.join(\"model_backtest\", \"scores.csv\"))\n",
"get_metrics_for_ts(model_metrics_df, ts_id)"
]
},
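{
"cell_type": "markdown",
"metadata": {},
"source": [
"With metrics for both the AutoML backtest and the model backtest in hand, a short sketch like the one below can put them side by side for the same time series. It assumes the metrics_df and model_metrics_df frames computed above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Compare the two backtests side by side for one time series.\n",
"pd.concat(\n",
"    {\n",
"        \"automl\": get_metrics_for_ts(metrics_df, ts_id),\n",
"        \"model\": get_metrics_for_ts(model_metrics_df, ts_id),\n",
"    },\n",
"    axis=1,\n",
")"
]
},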
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Forecast vs actuals plots."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from IPython.display import IFrame\n",
"\n",
"IFrame(\"./model_backtest/plots_fcst_vs_actual.pdf\", width=800, height=300)"
]
}
],
"metadata": {
"authors": [
{
"name": "jialiu"
}
],
"category": "tutorial",
"compute": [
"Remote"
],
"datasets": [
"None"
],
"deployment": [
"None"
],
"exclude_from_index": false,
"framework": [
"Azure ML AutoML"
],
"kernelspec": {
"display_name": "Python 3.8 - AzureML",
"language": "python",
"name": "python38-azureml"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.5"
},
"vscode": {
"interpreter": {
"hash": "6bd77c88278e012ef31757c15997a7bea8c943977c43d6909403c00ae11d43ca"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}


@@ -0,0 +1,169 @@
from typing import Any, Dict, Optional
import os
import azureml.train.automl.runtime._hts.hts_runtime_utilities as hru
from azureml._restclient.jasmine_client import JasmineClient
from azureml.contrib.automl.pipeline.steps import utilities
from azureml.core import RunConfiguration
from azureml.core.compute import ComputeTarget
from azureml.core.experiment import Experiment
from azureml.data import LinkTabularOutputDatasetConfig, TabularDataset
from azureml.pipeline.core import Pipeline, PipelineData, PipelineParameter
from azureml.pipeline.steps import ParallelRunConfig, ParallelRunStep, PythonScriptStep
from azureml.train.automl.constants import Scenarios
from azureml.data.dataset_consumption_config import DatasetConsumptionConfig

PROJECT_FOLDER = "assets"
SETTINGS_FILE = "automl_settings.json"


def get_backtest_pipeline(
experiment: Experiment,
dataset: TabularDataset,
process_per_node: int,
node_count: int,
compute_target: ComputeTarget,
automl_settings: Dict[str, Any],
step_size: int,
step_number: int,
model_name: Optional[str] = None,
model_uid: Optional[str] = None,
forecast_quantiles: Optional[list] = None,
) -> Pipeline:
"""
:param experiment: The experiment used to run the pipeline.
:param dataset: Tabular data set to be used for model training.
:param process_per_node: The number of processes per node. Generally it should be the number of cores
on the node divided by two.
:param node_count: The number of nodes to be used.
:param compute_target: The compute target to be used to run the pipeline.
:param model_name: The name of a model to be back tested.
:param automl_settings: The dictionary with automl settings.
:param step_size: The number of periods to step back in backtesting.
:param step_number: The number of backtesting iterations.
:param model_uid: The uid to mark models from this run of the experiment.
:param forecast_quantiles: The forecast quantiles that are required in the inference.
:return: The pipeline to be used for model retraining.
**Note:** The output will be uploaded in the pipeline output
called 'results'.
"""
jasmine_client = JasmineClient(
service_context=experiment.workspace.service_context,
experiment_name=experiment.name,
experiment_id=experiment.id,
)
env = jasmine_client.get_curated_environment(
scenario=Scenarios.AUTOML,
enable_dnn=False,
enable_gpu=False,
compute=compute_target,
compute_sku=experiment.workspace.compute_targets.get(
compute_target.name
).vm_size,
)
data_results = PipelineData(
name="results", datastore=None, pipeline_output_name="results"
)
############################################################
# Split the data set using python script.
############################################################
run_config = RunConfiguration()
run_config.docker.use_docker = True
run_config.environment = env
utilities.set_environment_variables_for_run(run_config)
split_data = PipelineData(name="split_data_output", datastore=None).as_dataset()
split_step = PythonScriptStep(
name="split_data_for_backtest",
script_name="data_split.py",
inputs=[dataset.as_named_input("training_data")],
outputs=[split_data],
source_directory=PROJECT_FOLDER,
arguments=[
"--step-size",
step_size,
"--step-number",
step_number,
"--time-column-name",
automl_settings.get("time_column_name"),
"--time-series-id-column-names",
automl_settings.get("grain_column_names"),
"--output-dir",
split_data,
],
runconfig=run_config,
compute_target=compute_target,
allow_reuse=False,
)
############################################################
# We will do the backtest in a parallel run step.
############################################################
settings_path = os.path.join(PROJECT_FOLDER, SETTINGS_FILE)
hru.dump_object_to_json(automl_settings, settings_path)
mini_batch_size = PipelineParameter(name="batch_size_param", default_value=str(1))
back_test_config = ParallelRunConfig(
source_directory=PROJECT_FOLDER,
entry_script="retrain_models.py",
mini_batch_size=mini_batch_size,
error_threshold=-1,
output_action="append_row",
append_row_file_name="outputs.txt",
compute_target=compute_target,
environment=env,
process_count_per_node=process_per_node,
run_invocation_timeout=3600,
node_count=node_count,
)
utilities.set_environment_variables_for_run(back_test_config)
forecasts = PipelineData(name="forecasts", datastore=None)
if model_name:
parallel_step_name = "{}-backtest".format(model_name.replace("_", "-"))
else:
parallel_step_name = "AutoML-backtest"
prs_args = [
"--target_column_name",
automl_settings.get("label_column_name"),
"--output-dir",
forecasts,
]
if model_name is not None:
prs_args.append("--model-name")
prs_args.append(model_name)
if model_uid is not None:
prs_args.append("--model-uid")
prs_args.append(model_uid)
if forecast_quantiles:
prs_args.append("--forecast_quantiles")
prs_args.extend(forecast_quantiles)
backtest_prs = ParallelRunStep(
name=parallel_step_name,
parallel_run_config=back_test_config,
arguments=prs_args,
inputs=[split_data],
output=forecasts,
allow_reuse=False,
)
############################################################
# Then we collect the output and return it as scores output.
############################################################
collection_step = PythonScriptStep(
name="score",
script_name="score.py",
inputs=[forecasts.as_mount()],
outputs=[data_results],
source_directory=PROJECT_FOLDER,
arguments=["--forecasts", forecasts, "--output-dir", data_results],
runconfig=run_config,
compute_target=compute_target,
allow_reuse=False,
)
# Build and return the pipeline.
return Pipeline(
workspace=experiment.workspace,
steps=[split_step, backtest_prs, collection_step],
)
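

# Example usage (a sketch, not executed here): assemble and submit a backtest
# pipeline, assuming a configured workspace, an existing compute target, a
# registered tabular dataset, and an AutoML settings dictionary like the ones
# created in the accompanying notebook.
#
#     experiment = Experiment(workspace, "automl-backtesting")
#     pipeline = get_backtest_pipeline(
#         experiment=experiment,
#         dataset=train_data,
#         process_per_node=2,
#         node_count=6,
#         compute_target=compute_target,
#         automl_settings=automl_settings,
#         step_size=30,
#         step_number=5,
#     )
#     run = experiment.submit(pipeline)
#     run.wait_for_completion(show_output=False)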


@@ -16,6 +16,13 @@
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-bike-share/auto-ml-forecasting-bike-share.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<font color=\"red\" size=\"5\"><strong>!Important!</strong> </br>This notebook is outdated and is not supported by the AutoML Team. Please use the supported version ([link](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-task-bike-share)).</font>"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -26,8 +33,10 @@
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Compute](#Compute)\n",
"1. [Data](#Data)\n",
"1. [Train](#Train)\n",
"1. [Featurization](#Featurization)\n",
"1. [Evaluate](#Evaluate)"
]
},
@@ -40,7 +49,7 @@
"\n",
"AutoML highlights here include built-in holiday featurization, accessing engineered feature names, and working with the `forecast` function. Please also look at the additional forecasting notebooks, which document lagging, rolling windows, forecast quantiles, other ways to use the forecast function, and forecaster deployment.\n",
"\n",
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
"Make sure you have executed the [configuration notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/configuration.ipynb) before running this notebook.\n",
"\n",
"Notebook synopsis:\n",
"1. Creating an Experiment in an existing Workspace\n",
@@ -56,28 +65,42 @@
"## Setup\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1680248038565
}
},
"outputs": [],
"source": [
"import json\n",
"import logging\n",
"from datetime import datetime\n",
"\n",
"import azureml.core\n",
"import numpy as np\n",
"import pandas as pd\n",
"from azureml.automl.core.featurization import FeaturizationConfig\n",
"from azureml.core import Dataset, Experiment, Workspace\n",
"from azureml.train.automl import AutoMLConfig"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This notebook is compatible with Azure ML SDK version 1.35.0 or later."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"import pandas as pd\n",
"import numpy as np\n",
"import logging\n",
"import warnings\n",
"\n",
"from pandas.tseries.frequencies import to_offset\n",
"\n",
"# Squash warning messages for cleaner output in the notebook\n",
"warnings.showwarning = lambda *args, **kwargs: None\n",
"\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.train.automl import AutoMLConfig\n",
"from matplotlib import pyplot as plt\n",
"from sklearn.metrics import mean_absolute_error, mean_squared_error"
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
{
@@ -96,22 +119,20 @@
"ws = Workspace.from_config()\n",
"\n",
"# choose a name for the run history container in the workspace\n",
"experiment_name = 'automl-bikeshareforecasting'\n",
"# project folder\n",
"project_folder = './sample_projects/automl-local-bikeshareforecasting'\n",
"experiment_name = \"automl-bikeshareforecasting\"\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Run History Name'] = experiment_name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"output[\"Subscription ID\"] = ws.subscription_id\n",
"output[\"Workspace\"] = ws.name\n",
"output[\"SKU\"] = ws.sku\n",
"output[\"Resource Group\"] = ws.resource_group\n",
"output[\"Location\"] = ws.location\n",
"output[\"Run History Name\"] = experiment_name\n",
"output[\"SDK Version\"] = azureml.core.VERSION\n",
"pd.set_option(\"display.max_colwidth\", None)\n",
"outputDf = pd.DataFrame(data=output, index=[\"\"])\n",
"outputDf.T"
]
},
@@ -119,8 +140,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Data\n",
"Read bike share demand data from file, and preview data."
"## Compute\n",
"You will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.\n",
"\n",
"> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.\n",
"\n",
"#### Creation of AmlCompute takes approximately 5 minutes. \n",
"If the AmlCompute with that name is already in your workspace this code will skip the creation process.\n",
"As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota."
]
},
{
@@ -129,23 +156,36 @@
"metadata": {},
"outputs": [],
"source": [
"data = pd.read_csv('bike-no.csv', parse_dates=['date'])\n",
"data.head()"
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# Choose a name for your cluster.\n",
"amlcompute_cluster_name = \"bike-cluster\"\n",
"\n",
"# Verify that cluster does not exist already\n",
"try:\n",
" compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)\n",
" print(\"Found existing cluster, use it.\")\n",
"except ComputeTargetException:\n",
" compute_config = AmlCompute.provisioning_configuration(\n",
" vm_size=\"STANDARD_DS12_V2\", max_nodes=4\n",
" )\n",
" compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)\n",
"\n",
"compute_target.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Data\n",
"\n",
"Let's set up what we know about the dataset. \n",
"\n",
"**Target column** is what we want to forecast.\n",
"\n",
"**Time column** is the time axis along which to predict.\n",
"\n",
"**Grain** is another word for an individual time series in your dataset. Grains are identified by values of the columns listed `grain_column_names`, for example \"store\" and \"item\" if your data has multiple time series of sales, one series for each combination of store and item sold.\n",
"\n",
"This dataset has only one time series. Please see the [orange juice notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales) for an example of a multi-time series dataset."
"**Time column** is the time axis along which to predict."
]
},
{
@@ -154,39 +194,161 @@
"metadata": {},
"outputs": [],
"source": [
"target_column_name = 'cnt'\n",
"time_column_name = 'date'\n",
"grain_column_names = []"
"target_column_name = \"cnt\"\n",
"time_column_name = \"date\""
]
},
{
"cell_type": "markdown",
"metadata": {
"nteract": {
"transient": {
"deleting": false
}
}
},
"source": [
"You are now ready to load the historical bike share data. We will load the CSV file into a plain pandas DataFrame."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"jupyter": {
"outputs_hidden": false,
"source_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
}
},
"outputs": [],
"source": [
"all_data = pd.read_csv(\"bike-no.csv\", parse_dates=[time_column_name])\n",
"\n",
"# Drop the columns 'casual' and 'registered' as these columns are a breakdown of the total and therefore a leak.\n",
"all_data.drop([\"casual\", \"registered\"], axis=1, inplace=True)"
]
},
{
"cell_type": "markdown",
"metadata": {
"nteract": {
"transient": {
"deleting": false
}
}
},
"source": [
"### Split the data\n",
"\n",
"The first split we make is into train and test sets. Note we are splitting on time. Data before 9/1 will be used for training, and data after and including 9/1 will be used for testing."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1680247376789
},
"jupyter": {
"outputs_hidden": false,
"source_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
}
},
"outputs": [],
"source": [
"# select data that occurs before a specified date\n",
"train = all_data[all_data[time_column_name] <= pd.Timestamp(\"2012-08-31\")].copy()\n",
"test = all_data[all_data[time_column_name] >= pd.Timestamp(\"2012-09-01\")].copy()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Split the data\n",
"### Upload data to datastore\n",
"\n",
"The first split we make is into train and test sets. Note we are splitting on time."
"The [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with the storage account, which contains the default data store. We will use it to upload the bike share data and create [tabular dataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"jupyter": {
"outputs_hidden": false,
"source_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
}
},
"outputs": [],
"source": [
"train = data[data[time_column_name] < '2012-09-01']\n",
"test = data[data[time_column_name] >= '2012-09-01']\n",
"from azureml.data.dataset_factory import TabularDatasetFactory\n",
"\n",
"X_train = train.copy()\n",
"y_train = X_train.pop(target_column_name).values\n",
"datastore = ws.get_default_datastore()\n",
"\n",
"X_test = test.copy()\n",
"y_test = X_test.pop(target_column_name).values\n",
"train_dataset = TabularDatasetFactory.register_pandas_dataframe(\n",
" train, target=(datastore, \"dataset/\"), name=\"bike_no_train\"\n",
")\n",
"\n",
"print(X_train.shape)\n",
"print(y_train.shape)\n",
"print(X_test.shape)\n",
"print(y_test.shape)"
"test_dataset = TabularDatasetFactory.register_pandas_dataframe(\n",
" test, target=(datastore, \"dataset/\"), name=\"bike_no_test\"\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Forecasting Parameters\n",
"To define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**time_column_name**|The name of your time column.|\n",
"|**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).|\n",
"|**country_or_region_for_holidays**|The country/region used to generate holiday features. These should be ISO 3166 two-letter country/region codes (i.e. 'US', 'GB').|\n",
"|**target_lags**|The target_lags specifies how far back we will construct the lags of the target variable.|\n",
"|**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects) for more information.\n",
"|**cv_step_size**|Number of periods between two consecutive cross-validation folds. The default value is \"auto\", in which case AutoMl determines the cross-validation step size automatically, if a validation set is not provided. Or users could specify an integer value."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train\n",
"\n",
"Instantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**task**|forecasting|\n",
"|**primary_metric**|This is the metric that you want to optimize.<br> Forecasting supports the following primary metrics <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i>\n",
"|**blocked_models**|Models in blocked_models won't be used by AutoML. All supported models can be found at [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).|\n",
"|**experiment_timeout_hours**|Experimentation timeout in hours.|\n",
"|**training_data**|Input dataset, containing both features and label column.|\n",
"|**label_column_name**|The name of the label column.|\n",
"|**compute_target**|The remote compute for training.|\n",
"|**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection. The default value is \"auto\", in which case AutoMl determines the number of cross-validations automatically, if a validation set is not provided. Or users could specify an integer value.\n",
"|**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.|\n",
"|**forecasting_parameters**|A class that holds all the forecasting related parameters.|\n",
"\n",
"This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results."
]
},
{
@@ -204,28 +366,15 @@
"metadata": {},
"outputs": [],
"source": [
"max_horizon = 14"
"forecast_horizon = 14"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train\n",
"\n",
"Instantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**task**|forecasting|\n",
"|**primary_metric**|This is the metric that you want to optimize.<br> Forecasting supports the following primary metrics <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i>\n",
"|**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data|\n",
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
"|**y**|(sparse) array-like, shape = [n_samples, ], targets values.|\n",
"|**n_cross_validations**|Number of cross validation splits.|\n",
"|**country_or_region**|The country/region used to generate holiday features. These should be ISO 3166 two-letter country/region codes (i.e. 'US', 'GB').|\n",
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder. "
"### Convert prediction type to integer\n",
"The featurization configuration can be used to change the default prediction type from decimal numbers to integer. This customization can be used in the scenario when the target column is expected to contain whole values as the number of rented bikes per day."
]
},
{
@@ -234,33 +383,58 @@
"metadata": {},
"outputs": [],
"source": [
"automl_settings = {\n",
" 'time_column_name': time_column_name,\n",
" 'max_horizon': max_horizon,\n",
" # knowing the country/region allows Automated ML to bring in holidays\n",
" 'country_or_region': 'US',\n",
" 'target_lags': 1,\n",
" # these columns are a breakdown of the total and therefore a leak\n",
" 'drop_column_names': ['casual', 'registered']\n",
"}\n",
"featurization_config = FeaturizationConfig()\n",
"# Force the target column, to be integer type.\n",
"featurization_config.add_prediction_transform_type(\"Integer\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Config AutoML"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.automl.core.forecasting_parameters import ForecastingParameters\n",
"\n",
"automl_config = AutoMLConfig(task='forecasting', \n",
" primary_metric='normalized_root_mean_squared_error',\n",
" iterations=10,\n",
" iteration_timeout_minutes=5,\n",
" X=X_train,\n",
" y=y_train,\n",
" n_cross_validations=3, \n",
" path=project_folder,\n",
"forecasting_parameters = ForecastingParameters(\n",
" time_column_name=time_column_name,\n",
" forecast_horizon=forecast_horizon,\n",
" country_or_region_for_holidays=\"US\", # set country_or_region will trigger holiday featurizer\n",
" target_lags=\"auto\", # use heuristic based lag setting\n",
" freq=\"D\", # Set the forecast frequency to be daily\n",
" cv_step_size=\"auto\",\n",
")\n",
"\n",
"automl_config = AutoMLConfig(\n",
" task=\"forecasting\",\n",
" primary_metric=\"normalized_root_mean_squared_error\",\n",
" featurization=featurization_config,\n",
" blocked_models=[\"ExtremeRandomTrees\"],\n",
" experiment_timeout_hours=0.3,\n",
" training_data=train_dataset,\n",
" label_column_name=target_column_name,\n",
" compute_target=compute_target,\n",
" enable_early_stopping=True,\n",
" n_cross_validations=\"auto\", # Feel free to set to a small integer (>=2) if runtime is an issue.\n",
" max_concurrent_iterations=4,\n",
" max_cores_per_iteration=-1,\n",
" verbosity=logging.INFO,\n",
" **automl_settings)"
" forecasting_parameters=forecasting_parameters,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We will now run the experiment, starting with 10 iterations of model search. The experiment can be continued for more iterations if more accurate results are required. You will see the currently running iterations printing to the console."
"We will now run the experiment, you can go to Azure ML portal to view the run details. "
]
},
{
@@ -269,14 +443,7 @@
"metadata": {},
"outputs": [],
"source": [
"local_run = experiment.submit(automl_config, show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Displaying the run objects gives you links to the visual tools in the Azure Portal. Go try them!"
"remote_run = experiment.submit(automl_config, show_output=False)"
]
},
{
@@ -285,15 +452,15 @@
"metadata": {},
"outputs": [],
"source": [
"local_run"
"remote_run.wait_for_completion()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model\n",
"Below we select the best pipeline from our iterations. The get_output method on automl_classifier returns the best run and the fitted model for the last fit invocation. There are overloads on get_output that allow you to retrieve the best run and fitted model for any logged metric or a particular iteration."
"### Retrieve the Best Run details\n",
"Below we retrieve the best Run object from among all the runs in the experiment."
]
},
{
@@ -302,17 +469,17 @@
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = local_run.get_output()\n",
"fitted_model.steps"
"best_run = remote_run.get_best_child()\n",
"best_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### View the engineered names for featurized data\n",
"## Featurization\n",
"\n",
"You can accees the engineered feature names generated in time-series featurization. Note that a number of named holiday periods are represented. We recommend that you have at least one year of data when using this feature to ensure that all yearly holidays are captured in the training featurization."
"We can look at the engineered feature names generated in time-series featurization via. the JSON file named 'engineered_feature_names.json' under the run outputs. Note that a number of named holiday periods are represented. We recommend that you have at least one year of data when using this feature to ensure that all yearly holidays are captured in the training featurization."
]
},
{
@@ -321,7 +488,14 @@
"metadata": {},
"outputs": [],
"source": [
"fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names()"
"# Download the JSON file locally\n",
"best_run.download_file(\n",
" \"outputs/engineered_feature_names.json\", \"engineered_feature_names.json\"\n",
")\n",
"with open(\"engineered_feature_names.json\", \"r\") as f:\n",
" records = json.load(f)\n",
"\n",
"records"
]
},
{
@@ -345,10 +519,26 @@
"metadata": {},
"outputs": [],
"source": [
"# Get the featurization summary as a list of JSON\n",
"featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary()\n",
"# View the featurization summary as a pandas dataframe\n",
"pd.DataFrame.from_records(featurization_summary)"
"# Download the featurization summary JSON file locally\n",
"best_run.download_file(\n",
" \"outputs/featurization_summary.json\", \"featurization_summary.json\"\n",
")\n",
"\n",
"# Render the JSON as a pandas DataFrame\n",
"with open(\"featurization_summary.json\", \"r\") as f:\n",
" records = json.load(f)\n",
"fs = pd.DataFrame.from_records(records)\n",
"\n",
"# View a summary of the featurization\n",
"fs[\n",
" [\n",
" \"RawFeatureName\",\n",
" \"TypeDetected\",\n",
" \"Dropped\",\n",
" \"EngineeredFeatureCount\",\n",
" \"Transformations\",\n",
" ]\n",
"]"
]
},
{
@@ -362,9 +552,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"We now use the best fitted model from the AutoML Run to make forecasts for the test set. \n",
"We now use the best fitted model from the AutoML Run to make forecasts for the test set. We will do batch scoring on the test dataset which should have the same schema as training dataset.\n",
"\n",
"We always score on the original dataset whose schema matches the training set schema."
"The scoring will run on a remote compute. In this example, it will reuse the training compute."
]
},
{
@@ -373,16 +563,15 @@
"metadata": {},
"outputs": [],
"source": [
"X_test.head()"
"test_experiment = Experiment(ws, experiment_name + \"_test\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We now define some functions for aligning output to input and for producing rolling forecasts over the full test set. As previously stated, the forecast horizon of 14 days is shorter than the length of the test set - which is about 120 days. To get predictions over the full test set, we iterate over the test set, making forecasts 14 days at a time and combining the results. We also make sure that each 14-day forecast uses up-to-date actuals - the current context - to construct lag features. \n",
"\n",
"It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows."
"### Retrieving forecasts from the model\n",
"To run the forecast on the remote compute we will use a helper script: forecasting_script. This script contains the utility methods which will be used by the remote estimator. We copy the script to the project folder to upload it to remote compute."
]
},
{
@@ -391,99 +580,19 @@
"metadata": {},
"outputs": [],
"source": [
"def align_outputs(y_predicted, X_trans, X_test, y_test, predicted_column_name='predicted',\n",
" horizon_colname='horizon_origin'):\n",
" \"\"\"\n",
" Demonstrates how to get the output aligned to the inputs\n",
" using pandas indexes. Helps understand what happened if\n",
" the output's shape differs from the input shape, or if\n",
" the data got re-sorted by time and grain during forecasting.\n",
" \n",
" Typical causes of misalignment are:\n",
" * we predicted some periods that were missing in actuals -> drop from eval\n",
" * model was asked to predict past max_horizon -> increase max horizon\n",
" * data at start of X_test was needed for lags -> provide previous periods\n",
" \"\"\"\n",
" df_fcst = pd.DataFrame({predicted_column_name : y_predicted,\n",
" horizon_colname: X_trans[horizon_colname]})\n",
" # y and X outputs are aligned by forecast() function contract\n",
" df_fcst.index = X_trans.index\n",
" \n",
" # align original X_test to y_test \n",
" X_test_full = X_test.copy()\n",
" X_test_full[target_column_name] = y_test\n",
"import os\n",
"import shutil\n",
"\n",
" # X_test_full's index does not include origin, so reset for merge\n",
" df_fcst.reset_index(inplace=True)\n",
" X_test_full = X_test_full.reset_index().drop(columns='index')\n",
" together = df_fcst.merge(X_test_full, how='right')\n",
" \n",
" # drop rows where prediction or actuals are nan \n",
" # happens because of missing actuals \n",
" # or at edges of time due to lags/rolling windows\n",
" clean = together[together[[target_column_name, predicted_column_name]].notnull().all(axis=1)]\n",
" return(clean)\n",
"\n",
"def do_rolling_forecast(fitted_model, X_test, y_test, max_horizon, freq='D'):\n",
" \"\"\"\n",
" Produce forecasts on a rolling origin over the given test set.\n",
" \n",
" Each iteration makes a forecast for the next 'max_horizon' periods \n",
" with respect to the current origin, then advances the origin by the horizon time duration. \n",
" The prediction context for each forecast is set so that the forecaster uses \n",
" the actual target values prior to the current origin time for constructing lag features.\n",
" \n",
" This function returns a concatenated DataFrame of rolling forecasts.\n",
" \"\"\"\n",
" df_list = []\n",
" origin_time = X_test[time_column_name].min()\n",
" while origin_time <= X_test[time_column_name].max():\n",
" # Set the horizon time - end date of the forecast\n",
" horizon_time = origin_time + max_horizon * to_offset(freq)\n",
" \n",
" # Extract test data from an expanding window up-to the horizon \n",
" expand_wind = (X_test[time_column_name] < horizon_time)\n",
" X_test_expand = X_test[expand_wind]\n",
" y_query_expand = np.zeros(len(X_test_expand)).astype(np.float)\n",
" y_query_expand.fill(np.NaN)\n",
" \n",
" if origin_time != X_test[time_column_name].min():\n",
" # Set the context by including actuals up-to the origin time\n",
" test_context_expand_wind = (X_test[time_column_name] < origin_time)\n",
" context_expand_wind = (X_test_expand[time_column_name] < origin_time)\n",
" y_query_expand[context_expand_wind] = y_test[test_context_expand_wind]\n",
" \n",
" # Make a forecast out to the maximum horizon\n",
" y_fcst, X_trans = fitted_model.forecast(X_test_expand, y_query_expand)\n",
" \n",
" # Align forecast with test set for dates within the current rolling window \n",
" trans_tindex = X_trans.index.get_level_values(time_column_name)\n",
" trans_roll_wind = (trans_tindex >= origin_time) & (trans_tindex < horizon_time)\n",
" test_roll_wind = expand_wind & (X_test[time_column_name] >= origin_time)\n",
" df_list.append(align_outputs(y_fcst[trans_roll_wind], X_trans[trans_roll_wind],\n",
" X_test[test_roll_wind], y_test[test_roll_wind]))\n",
" \n",
" # Advance the origin time\n",
" origin_time = horizon_time\n",
" \n",
" return pd.concat(df_list, ignore_index=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df_all = do_rolling_forecast(fitted_model, X_test, y_test, max_horizon)\n",
"df_all"
"script_folder = os.path.join(os.getcwd(), \"forecast\")\n",
"os.makedirs(script_folder, exist_ok=True)\n",
"shutil.copy(\"forecasting_script.py\", script_folder)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We now calculate some error metrics for the forecasts and vizualize the predictions vs. the actuals."
"For brevity, we have created a function called run_forecast that submits the test data to the best model determined during the training run and retrieves forecasts. The test set is longer than the forecast horizon specified at train time, so the forecasting script uses a so-called rolling evaluation to generate predictions over the whole test set. A rolling evaluation iterates the forecaster over the test set, using the actuals in the test set to make lag features as needed. "
]
},
{
@@ -492,23 +601,12 @@
"metadata": {},
"outputs": [],
"source": [
"def APE(actual, pred):\n",
" \"\"\"\n",
" Calculate absolute percentage error.\n",
" Returns a vector of APE values with same length as actual/pred.\n",
" \"\"\"\n",
" return 100*np.abs((actual - pred)/actual)\n",
"from run_forecast import run_rolling_forecast\n",
"\n",
"def MAPE(actual, pred):\n",
" \"\"\"\n",
" Calculate mean absolute percentage error.\n",
" Remove NA and values where actual is close to zero\n",
" \"\"\"\n",
" not_na = ~(np.isnan(actual) | np.isnan(pred))\n",
" not_zero = ~np.isclose(actual, 0.0)\n",
" actual_safe = actual[not_na & not_zero]\n",
" pred_safe = pred[not_na & not_zero]\n",
" return np.mean(APE(actual_safe, pred_safe))"
"remote_run = run_rolling_forecast(\n",
" test_experiment, compute_target, best_run, test_dataset, target_column_name\n",
")\n",
"remote_run"
]
},
{
@@ -517,18 +615,111 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"Simple forecasting model\")\n",
"rmse = np.sqrt(mean_squared_error(df_all[target_column_name], df_all['predicted']))\n",
"print(\"[Test Data] \\nRoot Mean squared error: %.2f\" % rmse)\n",
"mae = mean_absolute_error(df_all[target_column_name], df_all['predicted'])\n",
"print('mean_absolute_error score: %.2f' % mae)\n",
"print('MAPE: %.2f' % MAPE(df_all[target_column_name], df_all['predicted']))\n",
"remote_run.wait_for_completion(show_output=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Download the prediction result for metrics calculation\n",
"The test data with predictions are saved in artifact outputs/predictions.csv. You can download it and calculation some error metrics for the forecasts and vizualize the predictions vs. the actuals."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"remote_run.download_file(\"outputs/predictions.csv\", \"predictions.csv\")\n",
"fcst_df = pd.read_csv(\"predictions.csv\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that the rolling forecast can contain multiple predictions for each date, each from a different forecast origin. For example, consider 2012-09-05:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"fcst_df[fcst_df.date == \"2012-09-05\"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here, the forecast origin refers to the latest date of actuals available for a given forecast. The earliest origin in the rolling forecast, 2012-08-31, is the last day in the training data. For origin date 2012-09-01, the forecasts use actual recorded counts from the training data *and* the actual count recorded on 2012-09-01. Note that the model is not retrained for origin dates later than 2012-08-31, but the values for model features, such as lagged values of daily count, are updated.\n",
"\n",
"Let's calculate the metrics over all rolling forecasts:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.automl.core.shared import constants\n",
"from azureml.automl.runtime.shared.score import scoring\n",
"from sklearn.metrics import mean_absolute_error, mean_squared_error\n",
"\n",
"# use automl metrics module\n",
"scores = scoring.score_regression(\n",
" y_test=fcst_df[target_column_name],\n",
" y_pred=fcst_df[\"predicted\"],\n",
" metrics=list(constants.Metric.SCALAR_REGRESSION_SET),\n",
")\n",
"\n",
"print(\"[Test data scores]\\n\")\n",
"for key, value in scores.items():\n",
" print(\"{}: {:.3f}\".format(key, value))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For more details on what metrics are included and how they are calculated, please refer to [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#regressionforecasting-metrics). You could also calculate residuals, like described [here](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#residuals).\n",
"\n",
"The rolling forecast metric values are very high in comparison to the validation metrics reported by the AutoML job. What's going on here? We will investigate in the following cells!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Forecast versus actuals plot\n",
"We will plot predictions and actuals on a time series plot. Since there are many forecasts for each date, we select the 14-day-ahead forecast from each forecast origin for our comparison."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from matplotlib import pyplot as plt\n",
"\n",
"# Plot outputs\n",
"%matplotlib inline\n",
"test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')\n",
"test_test = plt.scatter(y_test, y_test, color='g')\n",
"plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)\n",
"\n",
"fcst_df_h14 = (\n",
" fcst_df.groupby(\"forecast_origin\", as_index=False)\n",
" .last()\n",
" .drop(columns=[\"forecast_origin\"])\n",
")\n",
"fcst_df_h14.set_index(time_column_name, inplace=True)\n",
"plt.plot(fcst_df_h14[[target_column_name, \"predicted\"]])\n",
"plt.xticks(rotation=45)\n",
"plt.title(f\"Predicted vs. Actuals\")\n",
"plt.legend([\"actual\", \"14-day-ahead forecast\"])\n",
"plt.show()"
]
},
@@ -536,7 +727,15 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The MAPE seems high; it is being skewed by an actual with a small absolute value. For a more informative evaluation, we can calculate the metrics by forecast horizon:"
"Looking at the plot, there are two clear issues:\n",
"1. An anomalously low count value on October 29th, 2012.\n",
"2. End-of-year holidays (Thanksgiving and Christmas) in late November and late December.\n",
"\n",
"What happened on Oct. 29th, 2012? That day, Hurricane Sandy brought severe storm surge flooding to the east coast of the United States, particularly around New York City. This is certainly an anomalous event that the model did not account for!\n",
"\n",
"As for the late year holidays, the model apparently did not learn to account for the full reduction of bike share rentals on these major holidays. The training data covers 2011 and early 2012, so the model fit only had access to a single occurrence of these holidays. This makes it challenging to resolve holiday effects; however, a larger AutoML model search may result in a better model that is more holiday-aware.\n",
"\n",
"If we filter the predictions prior to the Thanksgiving holiday and remove the anomalous day of 2012-10-29, the metrics are closer to validation levels:"
]
},
{
@@ -545,49 +744,49 @@
"metadata": {},
"outputs": [],
"source": [
"df_all.groupby('horizon_origin').apply(\n",
" lambda df: pd.Series({'MAPE': MAPE(df[target_column_name], df['predicted']),\n",
" 'RMSE': np.sqrt(mean_squared_error(df[target_column_name], df['predicted'])),\n",
" 'MAE': mean_absolute_error(df[target_column_name], df['predicted'])}))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It's also interesting to see the distributions of APE (absolute percentage error) by horizon. On a log scale, the outlying APE in the horizon-3 group is clear."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df_all_APE = df_all.assign(APE=APE(df_all[target_column_name], df_all['predicted']))\n",
"APEs = [df_all_APE[df_all['horizon_origin'] == h].APE.values for h in range(1, max_horizon + 1)]\n",
"date_filter = (fcst_df.date != \"2012-10-29\") & (fcst_df.date < \"2012-11-22\")\n",
"scores = scoring.score_regression(\n",
" y_test=fcst_df[date_filter][target_column_name],\n",
" y_pred=fcst_df[date_filter][\"predicted\"],\n",
" metrics=list(constants.Metric.SCALAR_REGRESSION_SET),\n",
")\n",
"\n",
"%matplotlib inline\n",
"plt.boxplot(APEs)\n",
"plt.yscale('log')\n",
"plt.xlabel('horizon')\n",
"plt.ylabel('APE (%)')\n",
"plt.title('Absolute Percentage Errors by Forecast Horizon')\n",
"\n",
"plt.show()"
"print(\"[Test data scores (filtered)]\\n\")\n",
"for key, value in scores.items():\n",
" print(\"{}: {:.3f}\".format(key, value))"
]
}
],
"metadata": {
"authors": [
{
"name": "erwright"
"name": "jialiu"
}
],
"category": "tutorial",
"compute": [
"Remote"
],
"datasets": [
"BikeShare"
],
"deployment": [
"None"
],
"exclude_from_index": false,
"file_extension": ".py",
"framework": [
"Azure ML AutoML"
],
"friendly_name": "Forecasting BikeShare Demand",
"index_order": 1,
"kernel_info": {
"name": "python38-azureml"
},
"kernelspec": {
"display_name": "Python 3.6",
"display_name": "Python 3.8 - AzureML",
"language": "python",
"name": "python36"
"name": "python38-azureml"
},
"language_info": {
"codemirror_mode": {
@@ -599,9 +798,31 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.8"
"version": "3.8.10"
},
"microsoft": {
"ms_spell_check": {
"ms_spell_check_language": "en"
}
},
"mimetype": "text/x-python",
"name": "python",
"npconvert_exporter": "python",
"nteract": {
"version": "nteract-front-end@1.0.0"
},
"pygments_lexer": "ipython3",
"tags": [
"Forecasting"
],
"task": "Forecasting",
"version": 3,
"vscode": {
"interpreter": {
"hash": "6bd77c88278e012ef31757c15997a7bea8c943977c43d6909403c00ae11d43ca"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}

View File

@@ -1,9 +0,0 @@
name: auto-ml-forecasting-bike-share
dependencies:
- pip:
- azureml-sdk
- azureml-train-automl
- azureml-widgets
- matplotlib
- pandas_ml
- statsmodels

View File

@@ -0,0 +1,53 @@
import argparse
from azureml.core import Dataset, Run
import joblib
parser = argparse.ArgumentParser()
parser.add_argument(
"--target_column_name",
type=str,
dest="target_column_name",
help="Target Column Name",
)
parser.add_argument(
"--test_dataset", type=str, dest="test_dataset", help="Test Dataset"
)
args = parser.parse_args()
target_column_name = args.target_column_name
test_dataset_id = args.test_dataset
run = Run.get_context()
ws = run.experiment.workspace
# get the input dataset by id
test_dataset = Dataset.get_by_id(ws, id=test_dataset_id)
X_test_df = (
test_dataset.drop_columns(columns=[target_column_name])
.to_pandas_dataframe()
.reset_index(drop=True)
)
y_test_df = (
test_dataset.with_timestamp_columns(None)
.keep_columns(columns=[target_column_name])
.to_pandas_dataframe()
)
fitted_model = joblib.load("model.pkl")
X_rf = fitted_model.rolling_forecast(X_test_df, y_test_df.values, step=1)
# Add predictions, actuals, and horizon relative to rolling origin to the test feature data
assign_dict = {
fitted_model.forecast_origin_column_name: "forecast_origin",
fitted_model.forecast_column_name: "predicted",
fitted_model.actual_column_name: target_column_name,
}
X_rf.rename(columns=assign_dict, inplace=True)
file_name = "outputs/predictions.csv"
X_rf.to_csv(file_name, header=True)
# Upload the predictions into artifacts
run.upload_file(name=file_name, path_or_stream=file_name)
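# A minimal sketch, not part of this script: once the run completes, the uploaded
# predictions can be retrieved and inspected locally. `remote_run_test` is a
# hypothetical handle to a completed run of this script.
import pandas as pd
remote_run_test.wait_for_completion(show_output=False)
remote_run_test.download_file("outputs/predictions.csv", "predictions.csv")
fcst_df = pd.read_csv("predictions.csv", parse_dates=["date"])
print(fcst_df.head())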

View File

@@ -0,0 +1,22 @@
import pandas as pd
import numpy as np
def APE(actual, pred):
"""
Calculate absolute percentage error.
Returns a vector of APE values with same length as actual/pred.
"""
return 100 * np.abs((actual - pred) / actual)
def MAPE(actual, pred):
"""
Calculate mean absolute percentage error.
Removes NaN values and entries where the actual is close to zero.
"""
not_na = ~(np.isnan(actual) | np.isnan(pred))
not_zero = ~np.isclose(actual, 0.0)
actual_safe = actual[not_na & not_zero]
pred_safe = pred[not_na & not_zero]
return np.mean(APE(actual_safe, pred_safe))
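# A small usage sketch of the helpers above, not part of the original module.
# numpy is already imported as np at the top of this file.
actual = np.array([100.0, 0.0, 50.0, np.nan, 200.0])
pred = np.array([110.0, 5.0, 45.0, 60.0, 190.0])
print(APE(actual[[0, 2, 4]], pred[[0, 2, 4]]))  # elementwise errors: [10. 10. 5.]
print(MAPE(actual, pred))  # NaN and near-zero actuals are excluded: ~8.33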

View File

@@ -0,0 +1,40 @@
from azureml.core import ScriptRunConfig
def run_rolling_forecast(
test_experiment,
compute_target,
train_run,
test_dataset,
target_column_name,
inference_folder="./forecast",
):
train_run.download_file("outputs/model.pkl", inference_folder + "/model.pkl")
inference_env = train_run.get_environment()
config = ScriptRunConfig(
source_directory=inference_folder,
script="forecasting_script.py",
arguments=[
"--target_column_name",
target_column_name,
"--test_dataset",
test_dataset.as_named_input(test_dataset.name),
],
compute_target=compute_target,
environment=inference_env,
)
run = test_experiment.submit(
config,
tags={
"training_run_id": train_run.id,
"run_algorithm": train_run.properties["run_algorithm"],
"valid_score": train_run.properties["score"],
"primary_metric": train_run.properties["primary_metric"],
},
)
run.log("run_algorithm", run.tags["run_algorithm"])
return run
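# A usage sketch, assuming `test_experiment`, `compute_target`, `best_run`,
# `test_dataset`, and `target_column_name` were created as in the accompanying
# notebook (the names here are illustrative).
remote_run_test = run_rolling_forecast(
    test_experiment, compute_target, best_run, test_dataset, target_column_name
)
remote_run_test.wait_for_completion(show_output=False)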

View File

@@ -1,10 +0,0 @@
name: auto-ml-forecasting-energy-demand
dependencies:
- pip:
- azureml-sdk
- azureml-train-automl
- azureml-widgets
- matplotlib
- pandas_ml
- statsmodels
- azureml-explain-model

View File

@@ -0,0 +1,61 @@
"""
This is the script that is executed on the compute instance. It relies
on the model.pkl file which is uploaded along with this script to the
compute instance.
"""
import argparse
from azureml.core import Dataset, Run
import joblib
from pandas.tseries.frequencies import to_offset
parser = argparse.ArgumentParser()
parser.add_argument(
"--target_column_name",
type=str,
dest="target_column_name",
help="Target Column Name",
)
parser.add_argument(
"--test_dataset", type=str, dest="test_dataset", help="Test Dataset"
)
args = parser.parse_args()
target_column_name = args.target_column_name
test_dataset_id = args.test_dataset
run = Run.get_context()
ws = run.experiment.workspace
# get the input dataset by id
test_dataset = Dataset.get_by_id(ws, id=test_dataset_id)
X_test = test_dataset.to_pandas_dataframe().reset_index(drop=True)
y_test = X_test.pop(target_column_name).values
# generate forecast
fitted_model = joblib.load("model.pkl")
# Default quantiles; 0.025 and 0.975 bound a 95% prediction interval
quantiles = [0.025, 0.5, 0.975]
predicted_column_name = "predicted"
PI = "prediction_interval"
fitted_model.quantiles = quantiles
pred_quantiles = fitted_model.forecast_quantiles(X_test)
pred_quantiles[PI] = pred_quantiles[[min(quantiles), max(quantiles)]].apply(
lambda x: "[{}, {}]".format(x[0], x[1]), axis=1
)
X_test[target_column_name] = y_test
X_test[PI] = pred_quantiles[PI]
X_test[predicted_column_name] = pred_quantiles[0.5]
# drop rows where prediction or actuals are nan
# happens because of missing actuals
# or at edges of time due to lags/rolling windows
clean = X_test[
X_test[[target_column_name, predicted_column_name]].notnull().all(axis=1)
]
file_name = "outputs/predictions.csv"
clean.to_csv(file_name, header=True, index=False)
# Upload the predictions into artifacts
run.upload_file(name=file_name, path_or_stream=file_name)
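# A minimal post-processing sketch, not part of this script: after downloading
# predictions.csv, the stringified prediction_interval column can be split back
# into numeric bounds.
import pandas as pd
df = pd.read_csv("predictions.csv")
bounds = df["prediction_interval"].str.strip("[]").str.split(",", expand=True)
df["pi_lower"] = bounds[0].astype(float)
df["pi_upper"] = bounds[1].astype(float)
print(df[["predicted", "pi_lower", "pi_upper"]].head())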

View File

@@ -0,0 +1,49 @@
import os
import shutil
from azureml.core import ScriptRunConfig
def run_remote_inference(
test_experiment,
compute_target,
train_run,
test_dataset,
target_column_name,
inference_folder="./forecast",
):
# Create a local directory to copy the model.pkl and forecasting_script.py files into.
# These files will be uploaded to and executed on the compute instance.
os.makedirs(inference_folder, exist_ok=True)
shutil.copy("forecasting_script.py", inference_folder)
train_run.download_file(
"outputs/model.pkl", os.path.join(inference_folder, "model.pkl")
)
inference_env = train_run.get_environment()
config = ScriptRunConfig(
source_directory=inference_folder,
script="forecasting_script.py",
arguments=[
"--target_column_name",
target_column_name,
"--test_dataset",
test_dataset.as_named_input(test_dataset.name),
],
compute_target=compute_target,
environment=inference_env,
)
run = test_experiment.submit(
config,
tags={
"training_run_id": train_run.id,
"run_algorithm": train_run.properties["run_algorithm"],
"valid_score": train_run.properties["score"],
"primary_metric": train_run.properties["primary_metric"],
},
)
run.log("run_algorithm", run.tags["run_algorithm"])
return run

View File

@@ -0,0 +1,935 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning\n",
"\n",
"#### Forecasting away from training data\n",
"\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"2. [Setup](#Setup)\n",
"3. [Data](#Data)\n",
"4. [Prepare remote compute and data.](#prepare_remote)\n",
"4. [Create the configuration and train a forecaster](#train)\n",
"5. [Forecasting from the trained model](#forecasting)\n",
"6. [Forecasting away from training data](#forecasting_away)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"This notebook demonstrates the full interface of the `forecast()` function. \n",
"\n",
"The best known and most frequent usage of `forecast` enables forecasting on test sets that immediately follows training data. \n",
"\n",
"However, in many use cases it is necessary to continue using the model for some time before retraining it. This happens especially in **high frequency forecasting** when forecasts need to be made more frequently than the model can be retrained. Examples are in Internet of Things and predictive cloud resource scaling.\n",
"\n",
"Here we show how to use the `forecast()` function when a time gap exists between training data and prediction period.\n",
"\n",
"Terminology:\n",
"* forecast origin: the last period when the target value is known\n",
"* forecast periods(s): the period(s) for which the value of the target is desired.\n",
"* lookback: how many past periods (before forecast origin) the model function depends on. The larger of number of lags and length of rolling window.\n",
"* prediction context: `lookback` periods immediately preceding the forecast origin\n",
"\n",
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/automl-forecasting-function.png)"
]
},
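{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, with the target lags [1, 2, 3] used later in this notebook and no rolling window features, the lookback works out to 3 periods; a minimal sketch:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"lags = [1, 2, 3]\n",
"rolling_window_size = 0 # this configuration uses no rolling window features\n",
"lookback = max(max(lags), rolling_window_size)\n",
"print(lookback)"
]
},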
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Please make sure you have followed the [configuration notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/configuration.ipynb) so that your ML workspace information is saved in the config file."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import pandas as pd\n",
"import numpy as np\n",
"import logging\n",
"import warnings\n",
"\n",
"import azureml.core\n",
"from azureml.core.dataset import Dataset\n",
"from pandas.tseries.frequencies import to_offset\n",
"from azureml.core.compute import AmlCompute\n",
"from azureml.core.compute import ComputeTarget\n",
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"\n",
"# Squash warning messages for cleaner output in the notebook\n",
"warnings.showwarning = lambda *args, **kwargs: None\n",
"\n",
"np.set_printoptions(precision=4, suppress=True, linewidth=120)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This notebook is compatible with Azure ML SDK version 1.35.0 or later."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.workspace import Workspace\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.train.automl import AutoMLConfig\n",
"\n",
"ws = Workspace.from_config()\n",
"\n",
"# choose a name for the run history container in the workspace\n",
"experiment_name = \"automl-forecast-function-demo\"\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output[\"Subscription ID\"] = ws.subscription_id\n",
"output[\"Workspace\"] = ws.name\n",
"output[\"SKU\"] = ws.sku\n",
"output[\"Resource Group\"] = ws.resource_group\n",
"output[\"Location\"] = ws.location\n",
"output[\"Run History Name\"] = experiment_name\n",
"output[\"SDK Version\"] = azureml.core.VERSION\n",
"pd.set_option(\"display.max_colwidth\", None)\n",
"outputDf = pd.DataFrame(data=output, index=[\"\"])\n",
"outputDf.T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Data\n",
"For the demonstration purposes we will generate the data artificially and use them for the forecasting."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"TIME_COLUMN_NAME = \"date\"\n",
"TIME_SERIES_ID_COLUMN_NAME = \"time_series_id\"\n",
"TARGET_COLUMN_NAME = \"y\"\n",
"\n",
"\n",
"def get_timeseries(\n",
" train_len: int,\n",
" test_len: int,\n",
" time_column_name: str,\n",
" target_column_name: str,\n",
" time_series_id_column_name: str,\n",
" time_series_number: int = 1,\n",
" freq: str = \"H\",\n",
"):\n",
" \"\"\"\n",
" Return the time series of designed length.\n",
"\n",
" :param train_len: The length of training data (one series).\n",
" :type train_len: int\n",
" :param test_len: The length of testing data (one series).\n",
" :type test_len: int\n",
" :param time_column_name: The desired name of a time column.\n",
" :type time_column_name: str\n",
" :param time_series_number: The number of time series in the data set.\n",
" :type time_series_number: int\n",
" :param freq: The frequency string representing pandas offset.\n",
" see https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html\n",
" :type freq: str\n",
" :returns: the tuple of train and test data sets.\n",
" :rtype: tuple\n",
"\n",
" \"\"\"\n",
" data_train = [] # type: List[pd.DataFrame]\n",
" data_test = [] # type: List[pd.DataFrame]\n",
" data_length = train_len + test_len\n",
" for i in range(time_series_number):\n",
" X = pd.DataFrame(\n",
" {\n",
" time_column_name: pd.date_range(\n",
" start=\"2000-01-01\", periods=data_length, freq=freq\n",
" ),\n",
" target_column_name: np.arange(data_length).astype(float)\n",
" + np.random.rand(data_length)\n",
" + i * 5,\n",
" \"ext_predictor\": np.asarray(range(42, 42 + data_length)),\n",
" time_series_id_column_name: np.repeat(\"ts{}\".format(i), data_length),\n",
" }\n",
" )\n",
" data_train.append(X[:train_len])\n",
" data_test.append(X[train_len:])\n",
" X_train = pd.concat(data_train)\n",
" y_train = X_train.pop(target_column_name).values\n",
" X_test = pd.concat(data_test)\n",
" y_test = X_test.pop(target_column_name).values\n",
" return X_train, y_train, X_test, y_test\n",
"\n",
"\n",
"n_test_periods = 6\n",
"n_train_periods = 30\n",
"X_train, y_train, X_test, y_test = get_timeseries(\n",
" train_len=n_train_periods,\n",
" test_len=n_test_periods,\n",
" time_column_name=TIME_COLUMN_NAME,\n",
" target_column_name=TARGET_COLUMN_NAME,\n",
" time_series_id_column_name=TIME_SERIES_ID_COLUMN_NAME,\n",
" time_series_number=2,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's see what the training data looks like."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"X_train.tail()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# plot the example time series\n",
"import matplotlib.pyplot as plt\n",
"\n",
"whole_data = X_train.copy()\n",
"target_label = \"y\"\n",
"whole_data[target_label] = y_train\n",
"for g in whole_data.groupby(\"time_series_id\"):\n",
" plt.plot(g[1][\"date\"].values, g[1][\"y\"].values, label=g[0])\n",
"plt.legend()\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Prepare remote compute and data. <a id=\"prepare_remote\"></a>\n",
"The [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace), is paired with the storage account, which contains the default data store. We will use it to upload the artificial data and create [tabular dataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# We need to save thw artificial data and then upload them to default workspace datastore.\n",
"DATA_PATH = \"fc_fn_data\"\n",
"DATA_PATH_X = \"{}/data_train.csv\".format(DATA_PATH)\n",
"if not os.path.isdir(\"data\"):\n",
" os.mkdir(\"data\")\n",
"pd.DataFrame(whole_data).to_csv(\"data/data_train.csv\", index=False)\n",
"# Upload saved data to the default data store.\n",
"ds = ws.get_default_datastore()\n",
"ds.upload(src_dir=\"./data\", target_path=DATA_PATH, overwrite=True, show_progress=True)\n",
"train_data = Dataset.Tabular.from_delimited_files(path=ds.path(DATA_PATH_X))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.\n",
"\n",
"> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# Choose a name for your CPU cluster\n",
"amlcompute_cluster_name = \"fcfn-cluster\"\n",
"\n",
"# Verify that cluster does not exist already\n",
"try:\n",
" compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)\n",
" print(\"Found existing cluster, use it.\")\n",
"except ComputeTargetException:\n",
" compute_config = AmlCompute.provisioning_configuration(\n",
" vm_size=\"STANDARD_DS12_V2\", max_nodes=6\n",
" )\n",
" compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)\n",
"\n",
"compute_target.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create the configuration and train a forecaster <a id=\"train\"></a>\n",
"First generate the configuration, in which we:\n",
"* Set metadata columns: target, time column and time-series id column names.\n",
"* Validate our data using cross validation with rolling window method.\n",
"* Set normalized root mean squared error as a metric to select the best model.\n",
"* Set early termination to True, so the iterations through the models will stop when no improvements in accuracy score will be made.\n",
"* Set limitations on the length of experiment run to 15 minutes.\n",
"* Finally, we set the task to be forecasting.\n",
"* We apply the lag lead operator to the target value i.e. we use the previous values as a predictor for the future ones.\n",
"* [Optional] Forecast frequency parameter (freq) represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects) for more information."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.automl.core.forecasting_parameters import ForecastingParameters\n",
"\n",
"lags = [1, 2, 3]\n",
"forecast_horizon = n_test_periods\n",
"forecasting_parameters = ForecastingParameters(\n",
" time_column_name=TIME_COLUMN_NAME,\n",
" forecast_horizon=forecast_horizon,\n",
" time_series_id_column_names=[TIME_SERIES_ID_COLUMN_NAME],\n",
" target_lags=lags,\n",
" freq=\"H\", # Set the forecast frequency to be hourly,\n",
" cv_step_size=\"auto\",\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Run the model selection and training process. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.workspace import Workspace\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.train.automl import AutoMLConfig\n",
"\n",
"\n",
"automl_config = AutoMLConfig(\n",
" task=\"forecasting\",\n",
" debug_log=\"automl_forecasting_function.log\",\n",
" primary_metric=\"normalized_root_mean_squared_error\",\n",
" experiment_timeout_hours=0.25,\n",
" enable_early_stopping=True,\n",
" training_data=train_data,\n",
" compute_target=compute_target,\n",
" n_cross_validations=\"auto\", # Feel free to set to a small integer (>=2) if runtime is an issue.\n",
" verbosity=logging.INFO,\n",
" max_concurrent_iterations=4,\n",
" max_cores_per_iteration=-1,\n",
" label_column_name=target_label,\n",
" forecasting_parameters=forecasting_parameters,\n",
")\n",
"\n",
"remote_run = experiment.submit(automl_config, show_output=False)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"remote_run.wait_for_completion()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Retrieve the best model to use it further.\n",
"_, fitted_model = remote_run.get_output()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Forecasting from the trained model <a id=\"forecasting\"></a>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this section we will review the `forecast` interface for two main scenarios: forecasting right after the training data, and the more complex interface for forecasting when there is a gap (in the time sense) between training and testing data."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### X_train is directly followed by the X_test\n",
"\n",
"Let's first consider the case when the prediction period immediately follows the training data. This is typical in scenarios where we have the time to retrain the model every time we wish to forecast. Forecasts that are made on daily and slower cadence typically fall into this category. Retraining the model every time benefits the accuracy because the most recent data is often the most informative.\n",
"\n",
"![Forecasting after training](forecast_function_at_train.png)\n",
"\n",
"We use `X_test` as a **forecast request** to generate the predictions."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Typical path: X_test is known, forecast all upcoming periods"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# The data set contains hourly data, the training set ends at 01/02/2000 at 05:00\n",
"\n",
"# These are predictions we are asking the model to make (does not contain thet target column y),\n",
"# for 6 periods beginning with 2000-01-02 06:00, which immediately follows the training data\n",
"X_test"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"y_pred_no_gap, xy_nogap = fitted_model.forecast(X_test)\n",
"\n",
"# xy_nogap contains the predictions in the _automl_target_col column.\n",
"# Those same numbers are output in y_pred_no_gap\n",
"xy_nogap"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Confidence intervals"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Forecasting model may be used for the prediction of forecasting intervals by running ```forecast_quantiles()```. \n",
"This method accepts the same parameters as forecast()."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"quantiles = fitted_model.forecast_quantiles(X_test)\n",
"quantiles"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Distribution forecasts\n",
"\n",
"Often the figure of interest is not just the point prediction, but the prediction at some quantile of the distribution. \n",
"This arises when the forecast is used to control some kind of inventory, for example of grocery items or virtual machines for a cloud service. In such case, the control point is usually something like \"we want the item to be in stock and not run out 99% of the time\". This is called a \"service level\". Here is how you get quantile forecasts."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# specify which quantiles you would like\n",
"fitted_model.quantiles = [0.01, 0.5, 0.95]\n",
"# use forecast_quantiles function, not the forecast() one\n",
"y_pred_quantiles = fitted_model.forecast_quantiles(X_test)\n",
"\n",
"# quantile forecasts returned in a Dataframe along with the time and time series id columns\n",
"y_pred_quantiles"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Destination-date forecast: \"just do something\"\n",
"\n",
"In some scenarios, the X_test is not known. The forecast is likely to be weak, because it is missing contemporaneous predictors, which we will need to impute. If you still wish to predict forward under the assumption that the last known values will be carried forward, you can forecast out to \"destination date\". The destination date still needs to fit within the forecast horizon from training."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# We will take the destination date as a last date in the test set.\n",
"dest = max(X_test[TIME_COLUMN_NAME])\n",
"y_pred_dest, xy_dest = fitted_model.forecast(forecast_destination=dest)\n",
"\n",
"# This form also shows how we imputed the predictors which were not given. (Not so well! Use with caution!)\n",
"xy_dest"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Forecasting away from training data <a id=\"forecasting_away\"></a>\n",
"\n",
"Suppose we trained a model, some time passed, and now we want to apply the model without re-training. If the model \"looks back\" -- uses previous values of the target -- then we somehow need to provide those values to the model.\n",
"\n",
"![Forecasting after training](forecast_function_away_from_train.png)\n",
"\n",
"The notion of forecast origin comes into play: the forecast origin is **the last period for which we have seen the target value**. This applies per time-series, so each time-series can have a different forecast origin. \n",
"\n",
"The part of data before the forecast origin is the **prediction context**. To provide the context values the model needs when it looks back, we pass definite values in `y_test` (aligned with corresponding times in `X_test`)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# generate the same kind of test data we trained on,\n",
"# but now make the train set much longer, so that the test set will be in the future\n",
"X_context, y_context, X_away, y_away = get_timeseries(\n",
" train_len=42, # train data was 30 steps long\n",
" test_len=4,\n",
" time_column_name=TIME_COLUMN_NAME,\n",
" target_column_name=TARGET_COLUMN_NAME,\n",
" time_series_id_column_name=TIME_SERIES_ID_COLUMN_NAME,\n",
" time_series_number=2,\n",
")\n",
"\n",
"# end of the data we trained on\n",
"print(X_train.groupby(TIME_SERIES_ID_COLUMN_NAME)[TIME_COLUMN_NAME].max())\n",
"# start of the data we want to predict on\n",
"print(X_away.groupby(TIME_SERIES_ID_COLUMN_NAME)[TIME_COLUMN_NAME].min())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"There is a gap of 12 hours between end of training and beginning of `X_away`. (It looks like 13 because all timestamps point to the start of the one hour periods.) Using only `X_away` will fail without adding context data for the model to consume."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"try:\n",
" y_pred_away, xy_away = fitted_model.forecast(X_away)\n",
" xy_away\n",
"except Exception as e:\n",
" print(e)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"How should we read that eror message? The forecast origin is at the last time the model saw an actual value of `y` (the target). That was at the end of the training data! The model is attempting to forecast from the end of training data. But the requested forecast periods are past the forecast horizon. We need to provide a define `y` value to establish the forecast origin.\n",
"\n",
"We will use this helper function to take the required amount of context from the data preceding the testing data. It's definition is intentionally simplified to keep the idea in the clear."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def make_forecasting_query(\n",
" fulldata, time_column_name, target_column_name, forecast_origin, horizon, lookback\n",
"):\n",
"\n",
" \"\"\"\n",
" This function will take the full dataset, and create the query\n",
" to predict all values of the time series from the `forecast_origin`\n",
" forward for the next `horizon` horizons. Context from previous\n",
" `lookback` periods will be included.\n",
"\n",
"\n",
"\n",
" fulldata: pandas.DataFrame a time series dataset. Needs to contain X and y.\n",
" time_column_name: string which column (must be in fulldata) is the time axis\n",
" target_column_name: string which column (must be in fulldata) is to be forecast\n",
" forecast_origin: datetime type the last time we (pretend to) have target values\n",
" horizon: timedelta how far forward, in time units (not periods)\n",
" lookback: timedelta how far back does the model look\n",
"\n",
" Example:\n",
"\n",
"\n",
" ```\n",
"\n",
" forecast_origin = pd.to_datetime(\"2012-09-01\") + pd.DateOffset(days=5) # forecast 5 days after end of training\n",
" print(forecast_origin)\n",
"\n",
" X_query, y_query = make_forecasting_query(data,\n",
" forecast_origin = forecast_origin,\n",
" horizon = pd.DateOffset(days=7), # 7 days into the future\n",
" lookback = pd.DateOffset(days=1), # model has lag 1 period (day)\n",
" )\n",
"\n",
" ```\n",
" \"\"\"\n",
"\n",
" X_past = fulldata[\n",
" (fulldata[time_column_name] > forecast_origin - lookback)\n",
" & (fulldata[time_column_name] <= forecast_origin)\n",
" ]\n",
"\n",
" X_future = fulldata[\n",
" (fulldata[time_column_name] > forecast_origin)\n",
" & (fulldata[time_column_name] <= forecast_origin + horizon)\n",
" ]\n",
"\n",
" y_past = X_past.pop(target_column_name).values.astype(float)\n",
" y_future = X_future.pop(target_column_name).values.astype(float)\n",
"\n",
" # Now take y_future and turn it into question marks\n",
" y_query = y_future.copy().astype(float) # because sometimes life hands you an int\n",
" y_query.fill(np.NaN)\n",
"\n",
" print(\"X_past is \" + str(X_past.shape) + \" - shaped\")\n",
" print(\"X_future is \" + str(X_future.shape) + \" - shaped\")\n",
" print(\"y_past is \" + str(y_past.shape) + \" - shaped\")\n",
" print(\"y_query is \" + str(y_query.shape) + \" - shaped\")\n",
"\n",
" X_pred = pd.concat([X_past, X_future])\n",
" y_pred = np.concatenate([y_past, y_query])\n",
" return X_pred, y_pred"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's see where the context data ends - it ends, by construction, just before the testing data starts."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(\n",
" X_context.groupby(TIME_SERIES_ID_COLUMN_NAME)[TIME_COLUMN_NAME].agg(\n",
" [\"min\", \"max\", \"count\"]\n",
" )\n",
")\n",
"print(\n",
" X_away.groupby(TIME_SERIES_ID_COLUMN_NAME)[TIME_COLUMN_NAME].agg(\n",
" [\"min\", \"max\", \"count\"]\n",
" )\n",
")\n",
"X_context.tail(5)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Since the length of the lookback is 3,\n",
"# we need to add 3 periods from the context to the request\n",
"# so that the model has the data it needs\n",
"\n",
"# Put the X and y back together for a while.\n",
"# They like each other and it makes them happy.\n",
"X_context[TARGET_COLUMN_NAME] = y_context\n",
"X_away[TARGET_COLUMN_NAME] = y_away\n",
"fulldata = pd.concat([X_context, X_away])\n",
"\n",
"# forecast origin is the last point of data, which is one 1-hr period before test\n",
"forecast_origin = X_away[TIME_COLUMN_NAME].min() - pd.DateOffset(hours=1)\n",
"# it is indeed the last point of the context\n",
"assert forecast_origin == X_context[TIME_COLUMN_NAME].max()\n",
"print(\"Forecast origin: \" + str(forecast_origin))\n",
"\n",
"# the model uses lags and rolling windows to look back in time\n",
"n_lookback_periods = max(lags)\n",
"lookback = pd.DateOffset(hours=n_lookback_periods)\n",
"\n",
"horizon = pd.DateOffset(hours=forecast_horizon)\n",
"\n",
"# now make the forecast query from context (refer to figure)\n",
"X_pred, y_pred = make_forecasting_query(\n",
" fulldata, TIME_COLUMN_NAME, TARGET_COLUMN_NAME, forecast_origin, horizon, lookback\n",
")\n",
"\n",
"# show the forecast request aligned\n",
"X_show = X_pred.copy()\n",
"X_show[TARGET_COLUMN_NAME] = y_pred\n",
"X_show"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that the forecast origin is at 17:00 for both time-series, and periods from 18:00 are to be forecast."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Now everything works\n",
"y_pred_away, xy_away = fitted_model.forecast(X_pred, y_pred)\n",
"\n",
"# show the forecast aligned\n",
"X_show = xy_away.reset_index()\n",
"# without the generated features\n",
"X_show[[\"date\", \"time_series_id\", \"ext_predictor\", \"_automl_target_col\"]]\n",
"# prediction is in _automl_target_col"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Forecasting farther than the forecast horizon <a id=\"recursive forecasting\"></a>\n",
"When the forecast destination, or the latest date in the prediction data frame, is farther into the future than the specified forecast horizon, the forecaster must be iteratively applied. Here, we advance the forecast origin on each iteration over the prediction window, predicting `max_horizon` periods ahead on each iteration. There are two choices for the context data to use as the forecaster advances into the prediction window:\n",
"\n",
"1. We can use forecasted values from previous iterations (recursive forecast),\n",
"2. We can use known, actual values of the target if they are available (rolling forecast).\n",
"\n",
"The first method is useful in a true forecasting scenario when we do not yet know the actual target values while the second is useful in an evaluation scenario where we want to compute accuracy metrics for the `max_horizon`-period-ahead forecaster over a long test set. We refer to the first as a **recursive forecast** since we apply the forecaster recursively over the prediction window and the second as a **rolling forecast** since we roll forward over known actuals.\n",
"\n",
"### Recursive forecasting\n",
"By default, the `forecast()` function will make point predictions out to the later date using a recursive operation mode. Internally, the method recursively applies the regular forecaster to generate context so that we can forecast further into the future. \n",
"\n",
"To illustrate the use-case and operation of recursive forecasting, we'll consider an example with a single time-series where the forecasting period directly follows the training period and is twice as long as the forecasting horizon given at training time.\n",
"\n",
"![Recursive_forecast_overview](recursive_forecast_overview_small.png)\n",
"\n",
"Internally, we apply the forecaster in an iterative manner and finish the forecast task in two interations. In the first iteration, we apply the forecaster and get the prediction for the first forecast-horizon periods (y_pred1). In the second iteraction, y_pred1 is used as the context to produce the prediction for the next forecast-horizon periods (y_pred2). The combination of (y_pred1 and y_pred2) gives the results for the total forecast periods. \n",
"\n",
"A caveat: forecast accuracy will likely be worse the farther we predict into the future since errors are compounded with recursive application of the forecaster.\n",
"\n",
"![Recursive_forecast_iter1](recursive_forecast_iter1.png)\n",
"![Recursive_forecast_iter2](recursive_forecast_iter2.png)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# generate the same kind of test data we trained on, but with a single time-series and test period twice as long\n",
"# as the forecast_horizon.\n",
"_, _, X_test_long, y_test_long = get_timeseries(\n",
" train_len=n_train_periods,\n",
" test_len=forecast_horizon * 2,\n",
" time_column_name=TIME_COLUMN_NAME,\n",
" target_column_name=TARGET_COLUMN_NAME,\n",
" time_series_id_column_name=TIME_SERIES_ID_COLUMN_NAME,\n",
" time_series_number=1,\n",
")\n",
"\n",
"print(X_test_long.groupby(TIME_SERIES_ID_COLUMN_NAME)[TIME_COLUMN_NAME].min())\n",
"print(X_test_long.groupby(TIME_SERIES_ID_COLUMN_NAME)[TIME_COLUMN_NAME].max())"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# forecast() function will invoke the recursive forecast method internally.\n",
"y_pred_long, X_trans_long = fitted_model.forecast(X_test_long)\n",
"y_pred_long"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# What forecast() function does in this case is equivalent to iterating it twice over the test set as the following.\n",
"y_pred1, _ = fitted_model.forecast(X_test_long[:forecast_horizon])\n",
"y_pred_all, _ = fitted_model.forecast(\n",
" X_test_long, np.concatenate((y_pred1, np.full(forecast_horizon, np.nan)))\n",
")\n",
"np.array_equal(y_pred_all, y_pred_long)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Rolling forecasts\n",
"A rolling forecast is a similar concept to the recursive forecasts described above except that we use known actual values of the target for our context data. We have provided a different, public method for this called `rolling_forecast`. In addition to test data and actuals (`X_test` and `y_test`), `rolling_forecast` also accepts an optional `step` parameter that controls how far the origin advances on each iteration. The recursive forecast mode uses a fixed step of `max_horizon` while `rolling_forecast` defaults to a step size of 1, but can be set to any integer from 1 to `max_horizon`, inclusive.\n",
"\n",
"Let's see what the rolling forecast looks like on the long test set with the step set to 1:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"X_rf = fitted_model.rolling_forecast(X_test_long, y_test_long, step=1)\n",
"X_rf.head(n=12)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Notice that `rolling_forecast` has returned a single DataFrame containing all results and has generated some new columns: `_automl_forecast_origin`, `_automl_forecast_y`, and `_automl_actual_y`. These are the origin date for each forecast, the forecasted value and the actual value, respectively. Note that \"y\" in the forecast and actual column names will generally be replaced by the target column name supplied to AutoML.\n",
"\n",
"The output above shows forecasts for two prediction windows, the first with origin at the end of the training set and the second including the first observation in the test set (2000-01-01 06:00:00). Since the forecast windows overlap, there are multiple forecasts for most dates which are associated with different origin dates."
]
},
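{
"cell_type": "markdown",
"metadata": {},
"source": [
"A sketch of summarizing the overlapping forecasts by horizon, assuming the default column names shown above (here the target column is literally \"y\") and the hourly frequency of this dataset:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"X_rf_eval = X_rf.copy()\n",
"# horizon in periods: hours between each forecast date and its origin\n",
"X_rf_eval[\"horizon\"] = (\n",
" X_rf_eval[TIME_COLUMN_NAME] - X_rf_eval[\"_automl_forecast_origin\"]\n",
").dt.total_seconds() / 3600\n",
"mape_by_horizon = X_rf_eval.groupby(\"horizon\").apply(\n",
" lambda g: np.mean(\n",
" 100 * np.abs((g[\"_automl_actual_y\"] - g[\"_automl_forecast_y\"]) / g[\"_automl_actual_y\"])\n",
" )\n",
")\n",
"print(mape_by_horizon)"
]
},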
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Confidence interval and distributional forecasts\n",
"AutoML cannot currently estimate forecast errors beyond the forecast horizon set during training, so the `forecast_quantiles()` function will return missing values for quantiles not equal to 0.5 beyond the forecast horizon. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"fitted_model.forecast_quantiles(X_test_long)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Similarly with the simple senarios illustrated above, forecasting farther than the forecast horizon in other senarios like 'multiple time-series', 'Destination-date forecast', and 'forecast away from the training data' are also automatically handled by the `forecast()` function. "
]
}
],
"metadata": {
"authors": [
{
"name": "jialiu"
}
],
"category": "tutorial",
"compute": [
"Remote"
],
"datasets": [
"None"
],
"deployment": [
"None"
],
"exclude_from_index": false,
"framework": [
"Azure ML AutoML"
],
"friendly_name": "Forecasting away from training data",
"index_order": 3,
"kernelspec": {
"display_name": "Python 3.8 - AzureML",
"language": "python",
"name": "python38-azureml"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.13"
},
"tags": [
"Forecasting",
"Confidence Intervals"
],
"task": "Forecasting",
"vscode": {
"interpreter": {
"hash": "6bd77c88278e012ef31757c15997a7bea8c943977c43d6909403c00ae11d43ca"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}

Binary image file added (69 KiB)

Binary image file added (28 KiB)

Binary image file added (61 KiB)

View File

@@ -0,0 +1,710 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-github-dau/auto-ml-forecasting-github-dau.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<font color=\"red\" size=\"5\"><strong>!Important!</strong> </br>This notebook is outdated and is not supported by the AutoML Team. Please use the supported version ([link](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-github-dau)).</font>"
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"# Automated Machine Learning\n",
"**Github DAU Forecasting**\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Data](#Data)\n",
"1. [Train](#Train)\n",
"1. [Evaluate](#Evaluate)"
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"## Introduction\n",
"This notebook demonstrates demand forecasting for Github Daily Active Users Dataset using AutoML.\n",
"\n",
"AutoML highlights here include using Deep Learning forecasts, Arima, Prophet, Remote Execution and Remote Inferencing, and working with the `forecast` function. Please also look at the additional forecasting notebooks, which document lagging, rolling windows, forecast quantiles, other ways to use the forecast function, and forecaster deployment.\n",
"\n",
"Make sure you have executed the [configuration](https://github.com/Azure/MachineLearningNotebooks/blob/master/configuration.ipynb) before running this notebook.\n",
"\n",
"Notebook synopsis:\n",
"\n",
"1. Creating an Experiment in an existing Workspace\n",
"2. Configuration and remote run of AutoML for a time-series model exploring DNNs\n",
"4. Evaluating the fitted model using a rolling test "
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"## Setup\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"outputs": [],
"source": [
"import os\n",
"import azureml.core\n",
"import pandas as pd\n",
"import numpy as np\n",
"import logging\n",
"import warnings\n",
"\n",
"from pandas.tseries.frequencies import to_offset\n",
"\n",
"# Squash warning messages for cleaner output in the notebook\n",
"warnings.showwarning = lambda *args, **kwargs: None\n",
"\n",
"from azureml.core import Workspace, Experiment, Dataset\n",
"from azureml.train.automl import AutoMLConfig\n",
"from matplotlib import pyplot as plt\n",
"from sklearn.metrics import mean_absolute_error, mean_squared_error\n",
"from azureml.train.estimator import Estimator"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This notebook is compatible with Azure ML SDK version 1.35.0 or later."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"As part of the setup you have already created a <b>Workspace</b>. To run AutoML, you also need to create an <b>Experiment</b>. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# choose a name for the run history container in the workspace\n",
"experiment_name = \"github-remote-cpu\"\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output[\"Subscription ID\"] = ws.subscription_id\n",
"output[\"Workspace\"] = ws.name\n",
"output[\"Resource Group\"] = ws.resource_group\n",
"output[\"Location\"] = ws.location\n",
"output[\"Run History Name\"] = experiment_name\n",
"output[\"SDK Version\"] = azureml.core.VERSION\n",
"pd.set_option(\"display.max_colwidth\", None)\n",
"outputDf = pd.DataFrame(data=output, index=[\"\"])\n",
"outputDf.T"
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"### Using AmlCompute\n",
"You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for your AutoML run. In this tutorial, you use `AmlCompute` as your training compute resource.\n",
"\n",
"> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# Choose a name for your CPU cluster\n",
"cpu_cluster_name = \"github-cluster\"\n",
"\n",
"# Verify that cluster does not exist already\n",
"try:\n",
" compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)\n",
" print(\"Found existing cluster, use it.\")\n",
"except ComputeTargetException:\n",
" compute_config = AmlCompute.provisioning_configuration(\n",
" vm_size=\"STANDARD_DS12_V2\", max_nodes=4\n",
" )\n",
" compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)\n",
"\n",
"compute_target.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"## Data\n",
"Read Github DAU data from file, and preview data."
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"Let's set up what we know about the dataset. \n",
"\n",
"**Target column** is what we want to forecast.\n",
"\n",
"**Time column** is the time axis along which to predict.\n",
"\n",
"**Time series identifier columns** are identified by values of the columns listed `time_series_id_column_names`, for example \"store\" and \"item\" if your data has multiple time series of sales, one series for each combination of store and item sold.\n",
"\n",
"**Forecast frequency (freq)** This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects) for more information.\n",
"\n",
"This dataset has only one time series. Please see the [orange juice notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales) for an example of a multi-time series dataset."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"outputs": [],
"source": [
"import pandas as pd\n",
"from pandas import DataFrame\n",
"from pandas import Grouper\n",
"from pandas import concat\n",
"from pandas.plotting import register_matplotlib_converters\n",
"\n",
"register_matplotlib_converters()\n",
"plt.figure(figsize=(20, 10))\n",
"plt.tight_layout()\n",
"\n",
"plt.subplot(2, 1, 1)\n",
"plt.title(\"Github Daily Active User By Year\")\n",
"df = pd.read_csv(\"github_dau_2011-2018_train.csv\", parse_dates=True, index_col=\"date\")\n",
"test_df = pd.read_csv(\n",
" \"github_dau_2011-2018_test.csv\", parse_dates=True, index_col=\"date\"\n",
")\n",
"plt.plot(df)\n",
"\n",
"plt.subplot(2, 1, 2)\n",
"plt.title(\"Github Daily Active User By Month\")\n",
"groups = df.groupby(df.index.month)\n",
"months = concat([DataFrame(x[1].values) for x in groups], axis=1)\n",
"months = DataFrame(months)\n",
"months.columns = range(1, 49)\n",
"months.boxplot()\n",
"\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"outputs": [],
"source": [
"target_column_name = \"count\"\n",
"time_column_name = \"date\"\n",
"time_series_id_column_names = []\n",
"freq = \"D\" # Daily data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Split Training data into Train and Validation set and Upload to Datastores"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"outputs": [],
"source": [
"from helper import split_fraction_by_grain\n",
"from helper import split_full_for_forecasting\n",
"\n",
"train, valid = split_full_for_forecasting(df, time_column_name)\n",
"\n",
"# Reset index to create a Tabualr Dataset.\n",
"train.reset_index(inplace=True)\n",
"valid.reset_index(inplace=True)\n",
"test_df.reset_index(inplace=True)\n",
"\n",
"datastore = ws.get_default_datastore()\n",
"train_dataset = Dataset.Tabular.register_pandas_dataframe(\n",
" train, target=(datastore, \"dataset/\"), name=\"Github_DAU_train\"\n",
")\n",
"valid_dataset = Dataset.Tabular.register_pandas_dataframe(\n",
" valid, target=(datastore, \"dataset/\"), name=\"Github_DAU_valid\"\n",
")\n",
"test_dataset = Dataset.Tabular.register_pandas_dataframe(\n",
" test_df, target=(datastore, \"dataset/\"), name=\"Github_DAU_test\"\n",
")"
]
},
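{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, a plausible sketch of what a time-based split helper like `split_full_for_forecasting` might do; the actual helper ships alongside this notebook, and this sketch only assumes a simple fractional split along the sorted time axis:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def split_full_for_forecasting_sketch(data, split_fraction=0.75):\n",
" # Keep the earliest fraction of rows for training and the remainder for validation.\n",
" data_sorted = data.sort_index()\n",
" n_train = int(len(data_sorted) * split_fraction)\n",
" return data_sorted.iloc[:n_train], data_sorted.iloc[n_train:]\n",
"\n",
"train_sketch, valid_sketch = split_full_for_forecasting_sketch(df)\n",
"print(len(train_sketch), len(valid_sketch))"
]
},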
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"### Setting forecaster maximum horizon \n",
"\n",
"The forecast horizon is the number of periods into the future that the model should predict. Here, we set the horizon to 14 periods (i.e. 14 days). Notice that this is much shorter than the number of months in the test set; we will need to use a rolling test to evaluate the performance on the whole test set. For more discussion of forecast horizons and guiding principles for setting them, please see the [energy demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand). "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"outputs": [],
"source": [
"forecast_horizon = 14"
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"## Train\n",
"\n",
"Instantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**task**|forecasting|\n",
"|**primary_metric**|This is the metric that you want to optimize.<br> Forecasting supports the following primary metrics <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i>\n",
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
"|**training_data**|Input dataset, containing both features and label column.|\n",
"|**label_column_name**|The name of the label column.|\n",
"|**enable_dnn**|Enable Forecasting DNNs|\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"outputs": [],
"source": [
"from azureml.automl.core.forecasting_parameters import ForecastingParameters\n",
"\n",
"forecasting_parameters = ForecastingParameters(\n",
" time_column_name=time_column_name,\n",
" forecast_horizon=forecast_horizon,\n",
" freq=\"D\", # Set the forecast frequency to be daily\n",
")\n",
"\n",
"# To only allow the TCNForecaster we set the allowed_models parameter to reflect this.\n",
"automl_config = AutoMLConfig(\n",
" task=\"forecasting\",\n",
" primary_metric=\"normalized_root_mean_squared_error\",\n",
" experiment_timeout_hours=1.5,\n",
" training_data=train_dataset,\n",
" label_column_name=target_column_name,\n",
" validation_data=valid_dataset,\n",
" verbosity=logging.INFO,\n",
" compute_target=compute_target,\n",
" max_concurrent_iterations=4,\n",
" max_cores_per_iteration=-1,\n",
" enable_dnn=True,\n",
" allowed_models=[\"TCNForecaster\"],\n",
" forecasting_parameters=forecasting_parameters,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"We will now run the experiment, starting with 10 iterations of model search. The experiment can be continued for more iterations if more accurate results are required. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"outputs": [],
"source": [
"remote_run = experiment.submit(automl_config, show_output=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"outputs": [],
"source": [
"# If you need to retrieve a run that already started, use the following code\n",
"# from azureml.train.automl.run import AutoMLRun\n",
"# remote_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>')"
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"Displaying the run objects gives you links to the visual tools in the Azure Portal. Go try them!"
]
},
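{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"As a minimal illustration (assuming the `azureml-widgets` package is installed in your environment), the next cell renders the submitted run's dashboard inline; the portal links expose the same information."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"outputs": [],
"source": [
"# Minimal sketch: interactive dashboard for the submitted run (requires azureml-widgets).\n",
"from azureml.widgets import RunDetails\n",
"\n",
"RunDetails(remote_run).show()"
]
},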
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"### Retrieve the Best Model for Each Algorithm\n",
"Below we select the best pipeline from our iterations. The get_output method on automl_classifier returns the best run and the fitted model for the last fit invocation. There are overloads on get_output that allow you to retrieve the best run and fitted model for any logged metric or a particular iteration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"outputs": [],
"source": [
"from helper import get_result_df\n",
"\n",
"summary_df = get_result_df(remote_run)\n",
"summary_df"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"outputs": [],
"source": [
"from azureml.core.run import Run\n",
"from azureml.widgets import RunDetails\n",
"\n",
"forecast_model = \"TCNForecaster\"\n",
"if not forecast_model in summary_df[\"run_id\"]:\n",
" forecast_model = \"ForecastTCN\"\n",
"\n",
"best_dnn_run_id = summary_df[summary_df[\"Score\"] == summary_df[\"Score\"].min()][\n",
" \"run_id\"\n",
"][forecast_model]\n",
"best_dnn_run = Run(experiment, best_dnn_run_id)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"outputs": [],
"source": [
"best_dnn_run.parent\n",
"RunDetails(best_dnn_run.parent).show()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"outputs": [],
"source": [
"best_dnn_run\n",
"RunDetails(best_dnn_run).show()"
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"## Evaluate on Test Data"
]
},
{
"cell_type": "markdown",
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"source": [
"We now use the best fitted model from the AutoML Run to make forecasts for the test set. \n",
"\n",
"We always score on the original dataset whose schema matches the training set schema."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"outputs": [],
"source": [
"# preview the first 3 rows of the dataset\n",
"test_dataset.take(5).to_pandas_dataframe()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"compute_target = ws.compute_targets[\"github-cluster\"]\n",
"test_experiment = Experiment(ws, experiment_name + \"_test\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"outputs": [],
"source": [
"import os\n",
"import shutil\n",
"\n",
"script_folder = os.path.join(os.getcwd(), \"inference\")\n",
"os.makedirs(script_folder, exist_ok=True)\n",
"shutil.copy(\"infer.py\", script_folder)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from helper import run_inference\n",
"\n",
"test_run = run_inference(\n",
" test_experiment,\n",
" compute_target,\n",
" script_folder,\n",
" best_dnn_run,\n",
" test_dataset,\n",
" valid_dataset,\n",
" forecast_horizon,\n",
" target_column_name,\n",
" time_column_name,\n",
" freq,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"RunDetails(test_run).show()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from helper import run_multiple_inferences\n",
"\n",
"summary_df = run_multiple_inferences(\n",
" summary_df,\n",
" experiment,\n",
" test_experiment,\n",
" compute_target,\n",
" script_folder,\n",
" test_dataset,\n",
" valid_dataset,\n",
" forecast_horizon,\n",
" target_column_name,\n",
" time_column_name,\n",
" freq,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"outputs": [],
"source": [
"for run_name, run_summary in summary_df.iterrows():\n",
" print(run_name)\n",
" print(run_summary)\n",
" run_id = run_summary.run_id\n",
" test_run_id = run_summary.test_run_id\n",
" test_run = Run(test_experiment, test_run_id)\n",
" test_run.wait_for_completion()\n",
" test_score = test_run.get_metrics()[run_summary.primary_metric]\n",
" summary_df.loc[summary_df.run_id == run_id, \"Test Score\"] = test_score\n",
" print(\"Test Score: \", test_score)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"hideCode": false,
"hidePrompt": false
},
"outputs": [],
"source": [
"summary_df"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"authors": [
{
"name": "jialiu"
}
],
"hide_code_all_hidden": false,
"kernelspec": {
"display_name": "Python 3.8 - AzureML",
"language": "python",
"name": "python38-azureml"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
}
},
"nbformat": 4,
"nbformat_minor": 4
}


@@ -0,0 +1,455 @@
date,count,day_of_week,month_of_year,holiday
2017-06-04,104663,6.0,5.0,0.0
2017-06-05,155824,0.0,5.0,0.0
2017-06-06,164908,1.0,5.0,0.0
2017-06-07,170309,2.0,5.0,0.0
2017-06-08,164256,3.0,5.0,0.0
2017-06-09,153406,4.0,5.0,0.0
2017-06-10,97024,5.0,5.0,0.0
2017-06-11,103442,6.0,5.0,0.0
2017-06-12,160768,0.0,5.0,0.0
2017-06-13,166288,1.0,5.0,0.0
2017-06-14,163819,2.0,5.0,0.0
2017-06-15,157593,3.0,5.0,0.0
2017-06-16,149259,4.0,5.0,0.0
2017-06-17,95579,5.0,5.0,0.0
2017-06-18,98723,6.0,5.0,0.0
2017-06-19,159076,0.0,5.0,0.0
2017-06-20,163340,1.0,5.0,0.0
2017-06-21,163344,2.0,5.0,0.0
2017-06-22,159528,3.0,5.0,0.0
2017-06-23,146563,4.0,5.0,0.0
2017-06-24,92631,5.0,5.0,0.0
2017-06-25,96549,6.0,5.0,0.0
2017-06-26,153249,0.0,5.0,0.0
2017-06-27,160357,1.0,5.0,0.0
2017-06-28,159941,2.0,5.0,0.0
2017-06-29,156781,3.0,5.0,0.0
2017-06-30,144709,4.0,5.0,0.0
2017-07-01,89101,5.0,6.0,0.0
2017-07-02,93046,6.0,6.0,0.0
2017-07-03,144113,0.0,6.0,0.0
2017-07-04,143061,1.0,6.0,1.0
2017-07-05,154603,2.0,6.0,0.0
2017-07-06,157200,3.0,6.0,0.0
2017-07-07,147213,4.0,6.0,0.0
2017-07-08,92348,5.0,6.0,0.0
2017-07-09,97018,6.0,6.0,0.0
2017-07-10,157192,0.0,6.0,0.0
2017-07-11,161819,1.0,6.0,0.0
2017-07-12,161998,2.0,6.0,0.0
2017-07-13,160280,3.0,6.0,0.0
2017-07-14,146818,4.0,6.0,0.0
2017-07-15,93041,5.0,6.0,0.0
2017-07-16,97505,6.0,6.0,0.0
2017-07-17,156167,0.0,6.0,0.0
2017-07-18,162855,1.0,6.0,0.0
2017-07-19,162519,2.0,6.0,0.0
2017-07-20,159941,3.0,6.0,0.0
2017-07-21,148460,4.0,6.0,0.0
2017-07-22,93431,5.0,6.0,0.0
2017-07-23,98553,6.0,6.0,0.0
2017-07-24,156202,0.0,6.0,0.0
2017-07-25,162503,1.0,6.0,0.0
2017-07-26,158479,2.0,6.0,0.0
2017-07-27,158192,3.0,6.0,0.0
2017-07-28,147108,4.0,6.0,0.0
2017-07-29,93799,5.0,6.0,0.0
2017-07-30,97920,6.0,6.0,0.0
2017-07-31,152197,0.0,6.0,0.0
2017-08-01,158477,1.0,7.0,0.0
2017-08-02,159089,2.0,7.0,0.0
2017-08-03,157182,3.0,7.0,0.0
2017-08-04,146345,4.0,7.0,0.0
2017-08-05,92534,5.0,7.0,0.0
2017-08-06,97128,6.0,7.0,0.0
2017-08-07,151359,0.0,7.0,0.0
2017-08-08,159895,1.0,7.0,0.0
2017-08-09,158329,2.0,7.0,0.0
2017-08-10,155468,3.0,7.0,0.0
2017-08-11,144914,4.0,7.0,0.0
2017-08-12,92258,5.0,7.0,0.0
2017-08-13,95933,6.0,7.0,0.0
2017-08-14,147706,0.0,7.0,0.0
2017-08-15,151115,1.0,7.0,0.0
2017-08-16,157640,2.0,7.0,0.0
2017-08-17,156600,3.0,7.0,0.0
2017-08-18,146980,4.0,7.0,0.0
2017-08-19,94592,5.0,7.0,0.0
2017-08-20,99320,6.0,7.0,0.0
2017-08-21,145727,0.0,7.0,0.0
2017-08-22,160260,1.0,7.0,0.0
2017-08-23,160440,2.0,7.0,0.0
2017-08-24,157830,3.0,7.0,0.0
2017-08-25,145822,4.0,7.0,0.0
2017-08-26,94706,5.0,7.0,0.0
2017-08-27,99047,6.0,7.0,0.0
2017-08-28,152112,0.0,7.0,0.0
2017-08-29,162440,1.0,7.0,0.0
2017-08-30,162902,2.0,7.0,0.0
2017-08-31,159498,3.0,7.0,0.0
2017-09-01,145689,4.0,8.0,0.0
2017-09-02,93589,5.0,8.0,0.0
2017-09-03,100058,6.0,8.0,0.0
2017-09-04,140865,0.0,8.0,1.0
2017-09-05,165715,1.0,8.0,0.0
2017-09-06,167463,2.0,8.0,0.0
2017-09-07,164811,3.0,8.0,0.0
2017-09-08,156157,4.0,8.0,0.0
2017-09-09,101358,5.0,8.0,0.0
2017-09-10,107915,6.0,8.0,0.0
2017-09-11,167845,0.0,8.0,0.0
2017-09-12,172756,1.0,8.0,0.0
2017-09-13,172851,2.0,8.0,0.0
2017-09-14,171675,3.0,8.0,0.0
2017-09-15,159266,4.0,8.0,0.0
2017-09-16,103547,5.0,8.0,0.0
2017-09-17,110964,6.0,8.0,0.0
2017-09-18,170976,0.0,8.0,0.0
2017-09-19,177864,1.0,8.0,0.0
2017-09-20,173567,2.0,8.0,0.0
2017-09-21,172017,3.0,8.0,0.0
2017-09-22,161357,4.0,8.0,0.0
2017-09-23,104681,5.0,8.0,0.0
2017-09-24,111711,6.0,8.0,0.0
2017-09-25,173517,0.0,8.0,0.0
2017-09-26,180049,1.0,8.0,0.0
2017-09-27,178307,2.0,8.0,0.0
2017-09-28,174157,3.0,8.0,0.0
2017-09-29,161707,4.0,8.0,0.0
2017-09-30,110536,5.0,8.0,0.0
2017-10-01,106505,6.0,9.0,0.0
2017-10-02,157565,0.0,9.0,0.0
2017-10-03,164764,1.0,9.0,0.0
2017-10-04,163383,2.0,9.0,0.0
2017-10-05,162847,3.0,9.0,0.0
2017-10-06,153575,4.0,9.0,0.0
2017-10-07,107472,5.0,9.0,0.0
2017-10-08,116127,6.0,9.0,0.0
2017-10-09,174457,0.0,9.0,1.0
2017-10-10,185217,1.0,9.0,0.0
2017-10-11,185120,2.0,9.0,0.0
2017-10-12,180844,3.0,9.0,0.0
2017-10-13,170178,4.0,9.0,0.0
2017-10-14,112754,5.0,9.0,0.0
2017-10-15,121251,6.0,9.0,0.0
2017-10-16,183906,0.0,9.0,0.0
2017-10-17,188945,1.0,9.0,0.0
2017-10-18,187297,2.0,9.0,0.0
2017-10-19,183867,3.0,9.0,0.0
2017-10-20,173021,4.0,9.0,0.0
2017-10-21,115851,5.0,9.0,0.0
2017-10-22,126088,6.0,9.0,0.0
2017-10-23,189452,0.0,9.0,0.0
2017-10-24,194412,1.0,9.0,0.0
2017-10-25,192293,2.0,9.0,0.0
2017-10-26,190163,3.0,9.0,0.0
2017-10-27,177053,4.0,9.0,0.0
2017-10-28,114934,5.0,9.0,0.0
2017-10-29,125289,6.0,9.0,0.0
2017-10-30,189245,0.0,9.0,0.0
2017-10-31,191480,1.0,9.0,0.0
2017-11-01,182281,2.0,10.0,0.0
2017-11-02,186351,3.0,10.0,0.0
2017-11-03,175422,4.0,10.0,0.0
2017-11-04,118160,5.0,10.0,0.0
2017-11-05,127602,6.0,10.0,0.0
2017-11-06,191067,0.0,10.0,0.0
2017-11-07,197083,1.0,10.0,0.0
2017-11-08,194333,2.0,10.0,0.0
2017-11-09,193914,3.0,10.0,0.0
2017-11-10,179933,4.0,10.0,1.0
2017-11-11,121346,5.0,10.0,0.0
2017-11-12,131900,6.0,10.0,0.0
2017-11-13,196969,0.0,10.0,0.0
2017-11-14,201949,1.0,10.0,0.0
2017-11-15,198424,2.0,10.0,0.0
2017-11-16,196902,3.0,10.0,0.0
2017-11-17,183893,4.0,10.0,0.0
2017-11-18,122767,5.0,10.0,0.0
2017-11-19,130890,6.0,10.0,0.0
2017-11-20,194515,0.0,10.0,0.0
2017-11-21,198601,1.0,10.0,0.0
2017-11-22,191041,2.0,10.0,0.0
2017-11-23,170321,3.0,10.0,1.0
2017-11-24,155623,4.0,10.0,0.0
2017-11-25,115759,5.0,10.0,0.0
2017-11-26,128771,6.0,10.0,0.0
2017-11-27,199419,0.0,10.0,0.0
2017-11-28,207253,1.0,10.0,0.0
2017-11-29,205406,2.0,10.0,0.0
2017-11-30,200674,3.0,10.0,0.0
2017-12-01,187017,4.0,11.0,0.0
2017-12-02,129735,5.0,11.0,0.0
2017-12-03,139120,6.0,11.0,0.0
2017-12-04,205505,0.0,11.0,0.0
2017-12-05,208218,1.0,11.0,0.0
2017-12-06,202480,2.0,11.0,0.0
2017-12-07,197822,3.0,11.0,0.0
2017-12-08,180686,4.0,11.0,0.0
2017-12-09,123667,5.0,11.0,0.0
2017-12-10,130987,6.0,11.0,0.0
2017-12-11,193901,0.0,11.0,0.0
2017-12-12,194997,1.0,11.0,0.0
2017-12-13,192063,2.0,11.0,0.0
2017-12-14,186496,3.0,11.0,0.0
2017-12-15,170812,4.0,11.0,0.0
2017-12-16,110474,5.0,11.0,0.0
2017-12-17,118165,6.0,11.0,0.0
2017-12-18,176843,0.0,11.0,0.0
2017-12-19,179550,1.0,11.0,0.0
2017-12-20,173506,2.0,11.0,0.0
2017-12-21,165910,3.0,11.0,0.0
2017-12-22,145886,4.0,11.0,0.0
2017-12-23,95246,5.0,11.0,0.0
2017-12-24,88781,6.0,11.0,0.0
2017-12-25,98189,0.0,11.0,1.0
2017-12-26,121383,1.0,11.0,0.0
2017-12-27,135300,2.0,11.0,0.0
2017-12-28,136827,3.0,11.0,0.0
2017-12-29,127700,4.0,11.0,0.0
2017-12-30,93014,5.0,11.0,0.0
2017-12-31,82878,6.0,11.0,0.0
2018-01-01,86419,0.0,0.0,1.0
2018-01-02,147428,1.0,0.0,0.0
2018-01-03,162193,2.0,0.0,0.0
2018-01-04,163784,3.0,0.0,0.0
2018-01-05,158606,4.0,0.0,0.0
2018-01-06,113467,5.0,0.0,0.0
2018-01-07,118313,6.0,0.0,0.0
2018-01-08,175623,0.0,0.0,0.0
2018-01-09,183880,1.0,0.0,0.0
2018-01-10,183945,2.0,0.0,0.0
2018-01-11,181769,3.0,0.0,0.0
2018-01-12,170552,4.0,0.0,0.0
2018-01-13,115707,5.0,0.0,0.0
2018-01-14,121191,6.0,0.0,0.0
2018-01-15,176127,0.0,0.0,1.0
2018-01-16,188032,1.0,0.0,0.0
2018-01-17,189871,2.0,0.0,0.0
2018-01-18,189348,3.0,0.0,0.0
2018-01-19,177456,4.0,0.0,0.0
2018-01-20,123321,5.0,0.0,0.0
2018-01-21,128306,6.0,0.0,0.0
2018-01-22,186132,0.0,0.0,0.0
2018-01-23,197618,1.0,0.0,0.0
2018-01-24,196402,2.0,0.0,0.0
2018-01-25,192722,3.0,0.0,0.0
2018-01-26,179415,4.0,0.0,0.0
2018-01-27,125769,5.0,0.0,0.0
2018-01-28,133306,6.0,0.0,0.0
2018-01-29,194151,0.0,0.0,0.0
2018-01-30,198680,1.0,0.0,0.0
2018-01-31,198652,2.0,0.0,0.0
2018-02-01,195472,3.0,1.0,0.0
2018-02-02,183173,4.0,1.0,0.0
2018-02-03,124276,5.0,1.0,0.0
2018-02-04,129054,6.0,1.0,0.0
2018-02-05,190024,0.0,1.0,0.0
2018-02-06,198658,1.0,1.0,0.0
2018-02-07,198272,2.0,1.0,0.0
2018-02-08,195339,3.0,1.0,0.0
2018-02-09,183086,4.0,1.0,0.0
2018-02-10,122536,5.0,1.0,0.0
2018-02-11,133033,6.0,1.0,0.0
2018-02-12,185386,0.0,1.0,0.0
2018-02-13,184789,1.0,1.0,0.0
2018-02-14,176089,2.0,1.0,0.0
2018-02-15,171317,3.0,1.0,0.0
2018-02-16,162693,4.0,1.0,0.0
2018-02-17,116342,5.0,1.0,0.0
2018-02-18,122466,6.0,1.0,0.0
2018-02-19,172364,0.0,1.0,1.0
2018-02-20,185896,1.0,1.0,0.0
2018-02-21,188166,2.0,1.0,0.0
2018-02-22,189427,3.0,1.0,0.0
2018-02-23,178732,4.0,1.0,0.0
2018-02-24,132664,5.0,1.0,0.0
2018-02-25,134008,6.0,1.0,0.0
2018-02-26,200075,0.0,1.0,0.0
2018-02-27,207996,1.0,1.0,0.0
2018-02-28,204416,2.0,1.0,0.0
2018-03-01,201320,3.0,2.0,0.0
2018-03-02,188205,4.0,2.0,0.0
2018-03-03,131162,5.0,2.0,0.0
2018-03-04,138320,6.0,2.0,0.0
2018-03-05,207326,0.0,2.0,0.0
2018-03-06,212462,1.0,2.0,0.0
2018-03-07,209357,2.0,2.0,0.0
2018-03-08,194876,3.0,2.0,0.0
2018-03-09,193761,4.0,2.0,0.0
2018-03-10,133449,5.0,2.0,0.0
2018-03-11,142258,6.0,2.0,0.0
2018-03-12,208753,0.0,2.0,0.0
2018-03-13,210602,1.0,2.0,0.0
2018-03-14,214236,2.0,2.0,0.0
2018-03-15,210761,3.0,2.0,0.0
2018-03-16,196619,4.0,2.0,0.0
2018-03-17,133056,5.0,2.0,0.0
2018-03-18,141335,6.0,2.0,0.0
2018-03-19,211580,0.0,2.0,0.0
2018-03-20,219051,1.0,2.0,0.0
2018-03-21,215435,2.0,2.0,0.0
2018-03-22,211961,3.0,2.0,0.0
2018-03-23,196009,4.0,2.0,0.0
2018-03-24,132390,5.0,2.0,0.0
2018-03-25,140021,6.0,2.0,0.0
2018-03-26,205273,0.0,2.0,0.0
2018-03-27,212686,1.0,2.0,0.0
2018-03-28,210683,2.0,2.0,0.0
2018-03-29,189044,3.0,2.0,0.0
2018-03-30,170256,4.0,2.0,0.0
2018-03-31,125999,5.0,2.0,0.0
2018-04-01,126749,6.0,3.0,0.0
2018-04-02,186546,0.0,3.0,0.0
2018-04-03,207905,1.0,3.0,0.0
2018-04-04,201528,2.0,3.0,0.0
2018-04-05,188580,3.0,3.0,0.0
2018-04-06,173714,4.0,3.0,0.0
2018-04-07,125723,5.0,3.0,0.0
2018-04-08,142545,6.0,3.0,0.0
2018-04-09,204767,0.0,3.0,0.0
2018-04-10,212048,1.0,3.0,0.0
2018-04-11,210517,2.0,3.0,0.0
2018-04-12,206924,3.0,3.0,0.0
2018-04-13,191679,4.0,3.0,0.0
2018-04-14,126394,5.0,3.0,0.0
2018-04-15,137279,6.0,3.0,0.0
2018-04-16,208085,0.0,3.0,0.0
2018-04-17,213273,1.0,3.0,0.0
2018-04-18,211580,2.0,3.0,0.0
2018-04-19,206037,3.0,3.0,0.0
2018-04-20,191211,4.0,3.0,0.0
2018-04-21,125564,5.0,3.0,0.0
2018-04-22,136469,6.0,3.0,0.0
2018-04-23,206288,0.0,3.0,0.0
2018-04-24,212115,1.0,3.0,0.0
2018-04-25,207948,2.0,3.0,0.0
2018-04-26,205759,3.0,3.0,0.0
2018-04-27,181330,4.0,3.0,0.0
2018-04-28,130046,5.0,3.0,0.0
2018-04-29,120802,6.0,3.0,0.0
2018-04-30,170390,0.0,3.0,0.0
2018-05-01,169054,1.0,4.0,0.0
2018-05-02,197891,2.0,4.0,0.0
2018-05-03,199820,3.0,4.0,0.0
2018-05-04,186783,4.0,4.0,0.0
2018-05-05,124420,5.0,4.0,0.0
2018-05-06,130666,6.0,4.0,0.0
2018-05-07,196014,0.0,4.0,0.0
2018-05-08,203058,1.0,4.0,0.0
2018-05-09,198582,2.0,4.0,0.0
2018-05-10,191321,3.0,4.0,0.0
2018-05-11,183639,4.0,4.0,0.0
2018-05-12,122023,5.0,4.0,0.0
2018-05-13,128775,6.0,4.0,0.0
2018-05-14,199104,0.0,4.0,0.0
2018-05-15,200658,1.0,4.0,0.0
2018-05-16,201541,2.0,4.0,0.0
2018-05-17,196886,3.0,4.0,0.0
2018-05-18,188597,4.0,4.0,0.0
2018-05-19,121392,5.0,4.0,0.0
2018-05-20,126981,6.0,4.0,0.0
2018-05-21,189291,0.0,4.0,0.0
2018-05-22,203038,1.0,4.0,0.0
2018-05-23,205330,2.0,4.0,0.0
2018-05-24,199208,3.0,4.0,0.0
2018-05-25,187768,4.0,4.0,0.0
2018-05-26,117635,5.0,4.0,0.0
2018-05-27,124352,6.0,4.0,0.0
2018-05-28,180398,0.0,4.0,1.0
2018-05-29,194170,1.0,4.0,0.0
2018-05-30,200281,2.0,4.0,0.0
2018-05-31,197244,3.0,4.0,0.0
2018-06-01,184037,4.0,5.0,0.0
2018-06-02,121135,5.0,5.0,0.0
2018-06-03,129389,6.0,5.0,0.0
2018-06-04,200331,0.0,5.0,0.0
2018-06-05,207735,1.0,5.0,0.0
2018-06-06,203354,2.0,5.0,0.0
2018-06-07,200520,3.0,5.0,0.0
2018-06-08,182038,4.0,5.0,0.0
2018-06-09,120164,5.0,5.0,0.0
2018-06-10,125256,6.0,5.0,0.0
2018-06-11,194786,0.0,5.0,0.0
2018-06-12,200815,1.0,5.0,0.0
2018-06-13,197740,2.0,5.0,0.0
2018-06-14,192294,3.0,5.0,0.0
2018-06-15,173587,4.0,5.0,0.0
2018-06-16,105955,5.0,5.0,0.0
2018-06-17,110780,6.0,5.0,0.0
2018-06-18,174582,0.0,5.0,0.0
2018-06-19,193310,1.0,5.0,0.0
2018-06-20,193062,2.0,5.0,0.0
2018-06-21,187986,3.0,5.0,0.0
2018-06-22,173606,4.0,5.0,0.0
2018-06-23,111795,5.0,5.0,0.0
2018-06-24,116134,6.0,5.0,0.0
2018-06-25,185919,0.0,5.0,0.0
2018-06-26,193142,1.0,5.0,0.0
2018-06-27,188114,2.0,5.0,0.0
2018-06-28,183737,3.0,5.0,0.0
2018-06-29,171496,4.0,5.0,0.0
2018-06-30,107210,5.0,5.0,0.0
2018-07-01,111053,6.0,6.0,0.0
2018-07-02,176198,0.0,6.0,0.0
2018-07-03,184040,1.0,6.0,0.0
2018-07-04,169783,2.0,6.0,1.0
2018-07-05,177996,3.0,6.0,0.0
2018-07-06,167378,4.0,6.0,0.0
2018-07-07,106401,5.0,6.0,0.0
2018-07-08,112327,6.0,6.0,0.0
2018-07-09,182835,0.0,6.0,0.0
2018-07-10,187694,1.0,6.0,0.0
2018-07-11,185762,2.0,6.0,0.0
2018-07-12,184099,3.0,6.0,0.0
2018-07-13,170860,4.0,6.0,0.0
2018-07-14,106799,5.0,6.0,0.0
2018-07-15,108475,6.0,6.0,0.0
2018-07-16,175704,0.0,6.0,0.0
2018-07-17,183596,1.0,6.0,0.0
2018-07-18,179897,2.0,6.0,0.0
2018-07-19,183373,3.0,6.0,0.0
2018-07-20,169626,4.0,6.0,0.0
2018-07-21,106785,5.0,6.0,0.0
2018-07-22,112387,6.0,6.0,0.0
2018-07-23,180572,0.0,6.0,0.0
2018-07-24,186943,1.0,6.0,0.0
2018-07-25,185744,2.0,6.0,0.0
2018-07-26,183117,3.0,6.0,0.0
2018-07-27,168526,4.0,6.0,0.0
2018-07-28,105936,5.0,6.0,0.0
2018-07-29,111708,6.0,6.0,0.0
2018-07-30,179950,0.0,6.0,0.0
2018-07-31,185930,1.0,6.0,0.0
2018-08-01,183366,2.0,7.0,0.0
2018-08-02,182412,3.0,7.0,0.0
2018-08-03,173429,4.0,7.0,0.0
2018-08-04,106108,5.0,7.0,0.0
2018-08-05,110059,6.0,7.0,0.0
2018-08-06,178355,0.0,7.0,0.0
2018-08-07,185518,1.0,7.0,0.0
2018-08-08,183204,2.0,7.0,0.0
2018-08-09,181276,3.0,7.0,0.0
2018-08-10,168297,4.0,7.0,0.0
2018-08-11,106488,5.0,7.0,0.0
2018-08-12,111786,6.0,7.0,0.0
2018-08-13,178620,0.0,7.0,0.0
2018-08-14,181922,1.0,7.0,0.0
2018-08-15,172198,2.0,7.0,0.0
2018-08-16,177367,3.0,7.0,0.0
2018-08-17,166550,4.0,7.0,0.0
2018-08-18,107011,5.0,7.0,0.0
2018-08-19,112299,6.0,7.0,0.0
2018-08-20,176718,0.0,7.0,0.0
2018-08-21,182562,1.0,7.0,0.0
2018-08-22,181484,2.0,7.0,0.0
2018-08-23,180317,3.0,7.0,0.0
2018-08-24,170197,4.0,7.0,0.0
2018-08-25,109383,5.0,7.0,0.0
2018-08-26,113373,6.0,7.0,0.0
2018-08-27,180142,0.0,7.0,0.0
2018-08-28,191628,1.0,7.0,0.0
2018-08-29,191149,2.0,7.0,0.0
2018-08-30,187503,3.0,7.0,0.0
2018-08-31,172280,4.0,7.0,0.0


@@ -0,0 +1,176 @@
import pandas as pd
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.train.estimator import Estimator
from azureml.core.run import Run
from azureml.automl.core.shared import constants


def split_fraction_by_grain(df, fraction, time_column_name, grain_column_names=None):
    """Group df by grain and split on last n rows for each group."""
    if not grain_column_names:
        df["tmp_grain_column"] = "grain"
        grain_column_names = ["tmp_grain_column"]
df_grouped = df.sort_values(time_column_name).groupby(
grain_column_names, group_keys=False
)
df_head = df_grouped.apply(
lambda dfg: dfg.iloc[: -int(len(dfg) * fraction)] if fraction > 0 else dfg
)
df_tail = df_grouped.apply(
lambda dfg: dfg.iloc[-int(len(dfg) * fraction) :] if fraction > 0 else dfg[:0]
)
if "tmp_grain_column" in grain_column_names:
for df2 in (df, df_head, df_tail):
df2.drop("tmp_grain_column", axis=1, inplace=True)
grain_column_names.remove("tmp_grain_column")
    return df_head, df_tail


def split_full_for_forecasting(
df, time_column_name, grain_column_names=None, test_split=0.2
):
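    """Split df into train and test sets for forecasting, preserving the original index."""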
index_name = df.index.name
# Assumes that there isn't already a column called tmpindex
df["tmpindex"] = df.index
train_df, test_df = split_fraction_by_grain(
df, test_split, time_column_name, grain_column_names
)
train_df = train_df.set_index("tmpindex")
train_df.index.name = index_name
test_df = test_df.set_index("tmpindex")
test_df.index.name = index_name
df.drop("tmpindex", axis=1, inplace=True)
    return train_df, test_df


def get_result_df(remote_run):
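    """Summarize completed AutoML child runs in a DataFrame indexed by algorithm name."""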
children = list(remote_run.get_children(recursive=True))
summary_df = pd.DataFrame(
index=["run_id", "run_algorithm", "primary_metric", "Score"]
)
goal_minimize = False
for run in children:
if (
run.get_status().lower() == constants.RunState.COMPLETE_RUN
and "run_algorithm" in run.properties
and "score" in run.properties
):
# We only count in the completed child runs.
summary_df[run.id] = [
run.id,
run.properties["run_algorithm"],
run.properties["primary_metric"],
float(run.properties["score"]),
]
if "goal" in run.properties:
goal_minimize = run.properties["goal"].split("_")[-1] == "min"
summary_df = summary_df.T.sort_values("Score", ascending=goal_minimize)
summary_df = summary_df.set_index("run_algorithm")
    return summary_df


def run_inference(
test_experiment,
compute_target,
script_folder,
train_run,
test_dataset,
lookback_dataset,
max_horizon,
target_column_name,
time_column_name,
freq,
):
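    """Download the model trained by train_run and submit a remote inference run on the test set."""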
model_base_name = "model.pkl"
if "model_data_location" in train_run.properties:
model_location = train_run.properties["model_data_location"]
_, model_base_name = model_location.rsplit("/", 1)
train_run.download_file(
"outputs/{}".format(model_base_name), "inference/{}".format(model_base_name)
)
inference_env = train_run.get_environment()
est = Estimator(
source_directory=script_folder,
entry_script="infer.py",
script_params={
"--max_horizon": max_horizon,
"--target_column_name": target_column_name,
"--time_column_name": time_column_name,
"--frequency": freq,
"--model_path": model_base_name,
},
inputs=[
test_dataset.as_named_input("test_data"),
lookback_dataset.as_named_input("lookback_data"),
],
compute_target=compute_target,
environment_definition=inference_env,
)
run = test_experiment.submit(
est,
tags={
"training_run_id": train_run.id,
"run_algorithm": train_run.properties["run_algorithm"],
"valid_score": train_run.properties["score"],
"primary_metric": train_run.properties["primary_metric"],
},
)
run.log("run_algorithm", run.tags["run_algorithm"])
    return run


def run_multiple_inferences(
summary_df,
train_experiment,
test_experiment,
compute_target,
script_folder,
test_dataset,
lookback_dataset,
max_horizon,
target_column_name,
time_column_name,
freq,
):
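    """Submit one inference run per training run in summary_df and record each test_run_id."""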
for run_name, run_summary in summary_df.iterrows():
print(run_name)
print(run_summary)
run_id = run_summary.run_id
train_run = Run(train_experiment, run_id)
test_run = run_inference(
test_experiment,
compute_target,
script_folder,
train_run,
test_dataset,
lookback_dataset,
max_horizon,
target_column_name,
time_column_name,
freq,
)
print(test_run)
summary_df.loc[summary_df.run_id == run_id, "test_run_id"] = test_run.id
return summary_df


@@ -0,0 +1,145 @@
import argparse
import os
import numpy as np
import pandas as pd
import joblib
from sklearn.metrics import mean_absolute_error, mean_squared_error
from azureml.automl.runtime.shared.score import scoring, constants
from azureml.core import Run

try:
import torch
_torch_present = True
except ImportError:
    _torch_present = False


def map_location_cuda(storage, loc):
    return storage.cuda()


def APE(actual, pred):
"""
Calculate absolute percentage error.
Returns a vector of APE values with same length as actual/pred.
"""
    return 100 * np.abs((actual - pred) / actual)


def MAPE(actual, pred):
"""
Calculate mean absolute percentage error.
    Remove NA and values where actual is close to zero.
"""
not_na = ~(np.isnan(actual) | np.isnan(pred))
not_zero = ~np.isclose(actual, 0.0)
actual_safe = actual[not_na & not_zero]
pred_safe = pred[not_na & not_zero]
    return np.mean(APE(actual_safe, pred_safe))


parser = argparse.ArgumentParser()
parser.add_argument(
"--max_horizon",
type=int,
dest="max_horizon",
default=10,
help="Max Horizon for forecasting",
)
parser.add_argument(
"--target_column_name",
type=str,
dest="target_column_name",
help="Target Column Name",
)
parser.add_argument(
"--time_column_name", type=str, dest="time_column_name", help="Time Column Name"
)
parser.add_argument(
"--frequency", type=str, dest="freq", help="Frequency of prediction"
)
parser.add_argument(
"--model_path",
type=str,
dest="model_path",
default="model.pkl",
help="Filename of model to be loaded",
)
args = parser.parse_args()
max_horizon = args.max_horizon
target_column_name = args.target_column_name
time_column_name = args.time_column_name
freq = args.freq
model_path = args.model_path
print("args passed are: ")
print(max_horizon)
print(target_column_name)
print(time_column_name)
print(freq)
print(model_path)
run = Run.get_context()
# get input dataset by name
test_dataset = run.input_datasets["test_data"]
grain_column_names = []
df = test_dataset.to_pandas_dataframe()
print("Read df")
print(df)
X_test_df = df
y_test = df.pop(target_column_name).to_numpy()
_, ext = os.path.splitext(model_path)
if ext == ".pt":
# Load the fc-tcn torch model.
assert _torch_present
if torch.cuda.is_available():
map_location = map_location_cuda
else:
map_location = "cpu"
with open(model_path, "rb") as fh:
fitted_model = torch.load(fh, map_location=map_location)
else:
# Load the sklearn pipeline.
fitted_model = joblib.load(model_path)
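
# rolling_forecast produces rolling-origin forecasts: with step=1 the forecast origin
# advances one period at a time through the test set, so every test row gets a prediction.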
X_rf = fitted_model.rolling_forecast(X_test_df, y_test, step=1)
assign_dict = {
fitted_model.forecast_origin_column_name: "forecast_origin",
fitted_model.forecast_column_name: "predicted",
fitted_model.actual_column_name: target_column_name,
}
X_rf.rename(columns=assign_dict, inplace=True)
print(X_rf.head())
# Use the AutoML scoring module
regression_metrics = list(constants.REGRESSION_SCALAR_SET)
y_test = np.array(X_rf[target_column_name])
y_pred = np.array(X_rf["predicted"])
scores = scoring.score_regression(y_test, y_pred, regression_metrics)
print("scores:")
print(scores)
for key, value in scores.items():
run.log(key, value)
print("Simple forecasting model")
rmse = np.sqrt(mean_squared_error(X_rf[target_column_name], X_rf["predicted"]))
print("[Test Data] \nRoot Mean squared error: %.2f" % rmse)
mae = mean_absolute_error(X_rf[target_column_name], X_rf["predicted"])
print("mean_absolute_error score: %.2f" % mae)
print("MAPE: %.2f" % MAPE(X_rf[target_column_name], X_rf["predicted"]))
run.log("rmse", rmse)
run.log("mae", mae)


@@ -0,0 +1,681 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<font color=\"red\" size=\"5\"><strong>!Important!</strong> </br>This notebook is outdated and is not supported by the AutoML Team. Please use the supported version ([link](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/pipelines/1k_demand_forecasting_with_pipeline_components/automl-forecasting-demand-hierarchical-timeseries-in-pipeline)).</font>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Hierarchical Time Series - Automated ML\n",
"**_Generate hierarchical time series forecasts with Automated Machine Learning_**\n",
"\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For this notebook we are using a synthetic dataset portraying sales data to predict the the quantity of a vartiety of product skus across several states, stores, and product categories.\n",
"\n",
"**NOTE: There are limits on how many runs we can do in parallel per workspace, and we currently recommend to set the parallelism to maximum of 320 runs per experiment per workspace. If users want to have more parallelism and increase this limit they might encounter Too Many Requests errors (HTTP 429).**"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Prerequisites\n",
"You'll need to create a compute Instance by following [these](https://learn.microsoft.com/en-us/azure/machine-learning/v1/how-to-create-manage-compute-instance?tabs=python) instructions."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1.0 Set up workspace, datastore, experiment"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1613003526897
}
},
"outputs": [],
"source": [
"import azureml.core\n",
"from azureml.core import Workspace, Datastore\n",
"import pandas as pd\n",
"\n",
"# Set up your workspace\n",
"ws = Workspace.from_config()\n",
"ws.get_details()\n",
"\n",
"# Set up your datastores\n",
"dstore = ws.get_default_datastore()\n",
"\n",
"output = {}\n",
"output[\"SDK version\"] = azureml.core.VERSION\n",
"output[\"Subscription ID\"] = ws.subscription_id\n",
"output[\"Workspace\"] = ws.name\n",
"output[\"Resource Group\"] = ws.resource_group\n",
"output[\"Location\"] = ws.location\n",
"output[\"Default datastore name\"] = dstore.name\n",
"output[\"SDK Version\"] = azureml.core.VERSION\n",
"pd.set_option(\"display.max_colwidth\", None)\n",
"outputDf = pd.DataFrame(data=output, index=[\"\"])\n",
"outputDf.T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Choose an experiment"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1613003540729
}
},
"outputs": [],
"source": [
"from azureml.core import Experiment\n",
"\n",
"experiment = Experiment(ws, \"automl-hts\")\n",
"\n",
"print(\"Experiment name: \" + experiment.name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2.0 Data\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"nteract": {
"transient": {
"deleting": false
}
}
},
"source": [
"### Upload local csv files to datastore\n",
"You can upload your train and inference csv files to the default datastore in your workspace. \n",
"\n",
"A Datastore is a place where data can be stored that is then made accessible to a compute either by means of mounting or copying the data to the compute target.\n",
"Please refer to [Datastore](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.datastore.datastore?view=azure-ml-py) documentation on how to access data from Datastore."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"datastore_path = \"hts-sample\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"datastore = ws.get_default_datastore()\n",
"datastore"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create the TabularDatasets \n",
"\n",
"Datasets in Azure Machine Learning are references to specific data in a Datastore. The data can be retrieved as a [TabularDatasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py). We will read in the data as a pandas DataFrame, upload to the data store and register them to your Workspace using ```register_pandas_dataframe``` so they can be called as an input into the training pipeline. We will use the inference dataset as part of the forecasting pipeline. The step need only be completed once."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1613007017296
}
},
"outputs": [],
"source": [
"from azureml.data.dataset_factory import TabularDatasetFactory\n",
"\n",
"registered_train = TabularDatasetFactory.register_pandas_dataframe(\n",
" pd.read_csv(\"Data/hts-sample-train.csv\"),\n",
" target=(datastore, \"hts-sample\"),\n",
" name=\"hts-sales-train\",\n",
")\n",
"registered_inference = TabularDatasetFactory.register_pandas_dataframe(\n",
" pd.read_csv(\"Data/hts-sample-test.csv\"),\n",
" target=(datastore, \"hts-sample\"),\n",
" name=\"hts-sales-test\",\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 3.0 Build the training pipeline\n",
"Now that the dataset, WorkSpace, and datastore are set up, we can put together a pipeline for training.\n",
"\n",
"> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Choose a compute target\n",
"\n",
"You will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.\n",
"\n",
"\\*\\*Creation of AmlCompute takes approximately 5 minutes.**\n",
"\n",
"If the AmlCompute with that name is already in your workspace this code will skip the creation process. As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this [article](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-manage-quotas) on the default limits and how to request more quota."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1613007037308
}
},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"\n",
"# Name your cluster\n",
"compute_name = \"hts-compute\"\n",
"\n",
"\n",
"if compute_name in ws.compute_targets:\n",
" compute_target = ws.compute_targets[compute_name]\n",
" if compute_target and type(compute_target) is AmlCompute:\n",
" print(\"Found compute target: \" + compute_name)\n",
"else:\n",
" print(\"Creating a new compute target...\")\n",
" provisioning_config = AmlCompute.provisioning_configuration(\n",
" vm_size=\"STANDARD_D16S_V3\", max_nodes=20\n",
" )\n",
" # Create the compute target\n",
" compute_target = ComputeTarget.create(ws, compute_name, provisioning_config)\n",
"\n",
" # Can poll for a minimum number of nodes and for a specific timeout.\n",
" # If no min node count is provided it will use the scale settings for the cluster\n",
" compute_target.wait_for_completion(\n",
" show_output=True, min_node_count=None, timeout_in_minutes=20\n",
" )\n",
"\n",
" # For a more detailed view of current cluster status, use the 'status' property\n",
" print(compute_target.status.serialize())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Set up training parameters\n",
"\n",
"We need to provide ``ForecastingParameters``, ``AutoMLConfig`` and ``HTSTrainParameters`` objects. For the forecasting task we need to define several settings including the name of the time column, the maximum forecast horizon, the hierarchy definition, and the level of the hierarchy at which to train.\n",
"\n",
"#### ``ForecastingParameters`` arguments\n",
"| Property | Description|\n",
"| :--------------- | :------------------- |\n",
"| **forecast_horizon** | The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly). Periods are inferred from your data. |\n",
"| **time_column_name** | The name of your time column. |\n",
"| **time_series_id_column_names** | The column names used to uniquely identify timeseries in data that has multiple rows with the same timestamp. |\n",
"| **cv_step_size** | Number of periods between two consecutive cross-validation folds. The default value is \\\"auto\\\", in which case AutoMl determines the cross-validation step size automatically, if a validation set is not provided. Or users could specify an integer value. |\n",
"\n",
"#### ``AutoMLConfig`` arguments\n",
"| Property | Description|\n",
"| :--------------- | :------------------- |\n",
"| **task** | forecasting |\n",
"| **primary_metric** | This is the metric that you want to optimize.<br> Forecasting supports the following primary metrics <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i> |\n",
"| **blocked_models** | Blocked models won't be used by AutoML. |\n",
"| **iteration_timeout_minutes** | Maximum amount of time in minutes that the model can train. This is optional but provides customers with greater control on exit criteria. |\n",
"| **iterations** | Number of models to train. This is optional but provides customers with greater control on exit criteria. |\n",
"| **experiment_timeout_hours** | Maximum amount of time in hours that each experiment can take before it terminates. This is optional but provides customers with greater control on exit criteria. **It does not control the overall timeout for the pipeline run, instead controls the timeout for each training run per partitioned time series.** |\n",
"| **label_column_name** | The name of the label column. |\n",
"| **n_cross_validations** | Number of cross validation splits. The default value is \\\"auto\\\", in which case AutoMl determines the number of cross-validations automatically, if a validation set is not provided. Or users could specify an integer value. Rolling Origin Validation is used to split time-series in a temporally consistent way. |\n",
"| **enable_early_stopping** | Flag to enable early termination if the primary metric is no longer improving. |\n",
"| **enable_engineered_explanations** | Engineered feature explanations will be downloaded if enable_engineered_explanations flag is set to True. By default it is set to False to save storage space. |\n",
"| **track_child_runs** | Flag to disable tracking of child runs. Only best run is tracked if the flag is set to False (this includes the model and metrics of the run). |\n",
"| **pipeline_fetch_max_batch_size** | Determines how many pipelines (training algorithms) to fetch at a time for training, this helps reduce throttling when training at large scale. |\n",
"| **model_explainability** | Flag to disable explaining the best automated ML model at the end of all training iterations. The default is True and will block non-explainable models which may impact the forecast accuracy. For more information, see [Interpretability: model explanations in automated machine learning](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-machine-learning-interpretability-automl). |\n",
"\n",
"#### ``HTSTrainParameters`` arguments\n",
"| Property | Description|\n",
"| :--------------- | :------------------- |\n",
"| **automl_settings** | The ``AutoMLConfig`` object defined above. |\n",
"| **hierarchy_column_names** | The names of columns that define the hierarchical structure of the data from highest level to most granular. |\n",
"| **training_level** | The level of the hierarchy to be used for training models. |\n",
"| **enable_engineered_explanations** | The switch controls engineered explanations. |"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1613007061544
}
},
"outputs": [],
"source": [
"from azureml.train.automl.runtime._hts.hts_parameters import HTSTrainParameters\n",
"from azureml.automl.core.forecasting_parameters import ForecastingParameters\n",
"from azureml.train.automl.automlconfig import AutoMLConfig\n",
"\n",
"\n",
"model_explainability = True\n",
"\n",
"engineered_explanations = False\n",
"# Define your hierarchy. Adjust the settings below based on your dataset.\n",
"hierarchy = [\"state\", \"store_id\", \"product_category\", \"SKU\"]\n",
"training_level = \"SKU\"\n",
"\n",
"# Set your forecast parameters. Adjust the settings below based on your dataset.\n",
"time_column_name = \"date\"\n",
"label_column_name = \"quantity\"\n",
"forecast_horizon = 7\n",
"\n",
"forecasting_parameters = ForecastingParameters(\n",
" time_column_name=time_column_name,\n",
" forecast_horizon=forecast_horizon,\n",
")\n",
"\n",
"automl_settings = AutoMLConfig(\n",
" task=\"forecasting\",\n",
" primary_metric=\"normalized_root_mean_squared_error\",\n",
" experiment_timeout_hours=1,\n",
" label_column_name=label_column_name,\n",
" track_child_runs=False,\n",
" forecasting_parameters=forecasting_parameters,\n",
" pipeline_fetch_max_batch_size=15,\n",
" model_explainability=model_explainability,\n",
" n_cross_validations=\"auto\", # Feel free to set to a small integer (>=2) if runtime is an issue.\n",
" cv_step_size=\"auto\",\n",
" # The following settings are specific to this sample and should be adjusted according to your own needs.\n",
" iteration_timeout_minutes=10,\n",
" iterations=15,\n",
")\n",
"\n",
"hts_parameters = HTSTrainParameters(\n",
" automl_settings=automl_settings,\n",
" hierarchy_column_names=hierarchy,\n",
" training_level=training_level,\n",
" enable_engineered_explanations=engineered_explanations,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Set up hierarchy training pipeline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Parallel run step is leveraged to train multiple models at once. To configure the ParallelRunConfig you will need to determine the appropriate number of workers and nodes for your use case. The ``process_count_per_node`` is based off the number of cores of the compute VM. The node_count will determine the number of master nodes to use, increasing the node count will speed up the training process.\n",
"\n",
"| Property | Description|\n",
"| :--------------- | :------------------- |\n",
"| **experiment** | The experiment used for training. |\n",
"| **train_data** | The file dataset to be used as input to the training run. |\n",
"| **node_count** | The number of compute nodes to be used for running the user script. We recommend to start with 3 and increase the node_count if the training time is taking too long. |\n",
"| **process_count_per_node** | Process count per node, we recommend 2:1 ratio for number of cores: number of processes per node. eg. If node has 16 cores then configure 8 or less process count per node for optimal performance. |\n",
"| **train_pipeline_parameters** | The set of configuration parameters defined in the previous section. |\n",
"| **run_invocation_timeout** | Maximum amount of time in seconds that the ``ParallelRunStep`` class is allowed. This is optional but provides customers with greater control on exit criteria. This must be greater than ``experiment_timeout_hours`` by at least 300 seconds. |\n",
"\n",
"Calling this method will create a new aggregated dataset which is generated dynamically on pipeline execution.\n",
"\n",
"**Note**: Total time taken for the **training step** in the pipeline to complete = $ \\frac{t}{ p \\times n } \\times ts $\n",
"where,\n",
"- $ t $ is time taken for training one partition (can be viewed in the training logs)\n",
"- $ p $ is ``process_count_per_node``\n",
"- $ n $ is ``node_count``\n",
"- $ ts $ is total number of partitions in time series based on ``partition_column_names``"
]
},
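{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a purely illustrative example (the numbers below are assumptions, not measurements): if training one partition takes $ t = 10 $ minutes, ``process_count_per_node`` is $ p = 8 $, ``node_count`` is $ n = 2 $, and there are $ ts = 320 $ partitions, the training step would take roughly $ \\frac{10}{8 \\times 2} \\times 320 = 200 $ minutes."
]
},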
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.contrib.automl.pipeline.steps import AutoMLPipelineBuilder\n",
"\n",
"\n",
"training_pipeline_steps = AutoMLPipelineBuilder.get_many_models_train_steps(\n",
" experiment=experiment,\n",
" train_data=registered_train,\n",
" compute_target=compute_target,\n",
" node_count=2,\n",
" process_count_per_node=8,\n",
" train_pipeline_parameters=hts_parameters,\n",
" run_invocation_timeout=3900,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.pipeline.core import Pipeline\n",
"\n",
"training_pipeline = Pipeline(ws, steps=training_pipeline_steps)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Submit the pipeline to run\n",
"Next we submit our pipeline to run. The whole training pipeline takes about 1h using a Standard_D16_V3 VM with our current ParallelRunConfig setting."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"training_run = experiment.submit(training_pipeline)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"training_run.wait_for_completion(show_output=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Check the run status, if training_run is in completed state, continue to forecasting. If training_run is in another state, check the portal for failures."
]
},
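{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, the status could be checked programmatically; a minimal sketch using the ``Run`` API:\n",
"\n",
"```python\n",
"# Print the current status of the training pipeline run (e.g. 'Completed', 'Failed').\n",
"print(training_run.get_status())\n",
"```"
]
},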
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### [Optional] Get the explanations\n",
"First we need to download the explanations to the local disk."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if model_explainability:\n",
" expl_output = training_run.get_pipeline_output(\"explanations\")\n",
" expl_output.download(\"training_explanations\")\n",
"else:\n",
" print(\n",
" \"Model explanations are available only if model_explainability is set to True.\"\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The explanations are downloaded to the \"training_explanations/azureml\" directory."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"if model_explainability:\n",
" explanations_dirrectory = os.listdir(\n",
" os.path.join(\"training_explanations\", \"azureml\")\n",
" )\n",
" if len(explanations_dirrectory) > 1:\n",
" print(\n",
" \"Warning! The directory contains multiple explanations, only the first one will be displayed.\"\n",
" )\n",
" print(\"The explanations are located at {}.\".format(explanations_dirrectory[0]))\n",
" # Now we will list all the explanations.\n",
" explanation_path = os.path.join(\n",
" \"training_explanations\",\n",
" \"azureml\",\n",
" explanations_dirrectory[0],\n",
" \"training_explanations\",\n",
" )\n",
" print(\"Available explanations\")\n",
" print(\"==============================\")\n",
" print(\"\\n\".join(os.listdir(explanation_path)))\n",
"else:\n",
" print(\n",
" \"Model explanations are available only if model_explainability is set to True.\"\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"View the explanations on \"state\" level."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from IPython.display import display\n",
"\n",
"explanation_type = \"raw\"\n",
"level = \"state\"\n",
"\n",
"if model_explainability:\n",
" display(\n",
" pd.read_csv(\n",
" os.path.join(explanation_path, \"{}_explanations_{}.csv\").format(\n",
" explanation_type, level\n",
" )\n",
" )\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 5.0 Forecasting\n",
"For hierarchical forecasting we need to provide the HTSInferenceParameters object.\n",
"#### ``HTSInferenceParameters`` arguments\n",
"| Property | Description|\n",
"| :--------------- | :------------------- |\n",
"| **hierarchy_forecast_level:** | The default level of the hierarchy to produce prediction/forecast on. |\n",
"| **allocation_method:** | \\[Optional] The disaggregation method to use if the hierarchy forecast level specified is below the define hierarchy training level. <br><i>(average historical proportions) 'average_historical_proportions'</i><br><i>(proportions of the historical averages) 'proportions_of_historical_average'</i> |\n",
"\n",
"#### ``get_many_models_batch_inference_steps`` arguments\n",
"| Property | Description|\n",
"| :--------------- | :------------------- |\n",
"| **experiment** | The experiment used for inference run. |\n",
"| **inference_data** | The data to use for inferencing. It should be the same schema as used for training.\n",
"| **compute_target** | The compute target that runs the inference pipeline. |\n",
"| **node_count** | The number of compute nodes to be used for running the user script. We recommend to start with the number of cores per node (varies by compute sku). |\n",
"| **process_count_per_node** | \\[Optional] The number of processes per node. By default it's 2 (should be at most half of the number of cores in a single node of the compute cluster that will be used for the experiment).\n",
"| **inference_pipeline_parameters** | \\[Optional] The ``HTSInferenceParameters`` object defined above. |\n",
"| **train_run_id** | \\[Optional] The run id of the **training pipeline**. By default it is the latest successful training pipeline run in the experiment. |\n",
"| **train_experiment_name** | \\[Optional] The train experiment that contains the train pipeline. This one is only needed when the train pipeline is not in the same experiement as the inference pipeline. |\n",
"| **run_invocation_timeout** | \\[Optional] Maximum amount of time in seconds that the ``ParallelRunStep`` class is allowed. This is optional but provides customers with greater control on exit criteria. |"
]
},
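{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a sketch of what these two allocation methods compute (these are the standard top-down disaggregation definitions the names refer to; the exact implementation is internal to AutoML): with $ y_{i,t} $ the history of series $ i $ at time $ t $ over $ T $ periods, *average historical proportions* uses the proportion $ p_i = \\frac{1}{T} \\sum_{t=1}^{T} \\frac{y_{i,t}}{\\sum_{j} y_{j,t}} $, while *proportions of the historical averages* uses $ p_i = \\frac{\\sum_{t=1}^{T} y_{i,t}}{\\sum_{j} \\sum_{t=1}^{T} y_{j,t}} $. The forecast made at the training level is then multiplied by $ p_i $ to allocate it down to the lower-level series."
]
},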
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.train.automl.runtime._hts.hts_parameters import HTSInferenceParameters\n",
"\n",
"inference_parameters = HTSInferenceParameters(\n",
" hierarchy_forecast_level=\"store_id\", # The setting is specific to this dataset and should be changed based on your dataset.\n",
" allocation_method=\"proportions_of_historical_average\",\n",
")\n",
"\n",
"steps = AutoMLPipelineBuilder.get_many_models_batch_inference_steps(\n",
" experiment=experiment,\n",
" inference_data=registered_inference,\n",
" compute_target=compute_target,\n",
" inference_pipeline_parameters=inference_parameters,\n",
" node_count=2,\n",
" process_count_per_node=8,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.pipeline.core import Pipeline\n",
"\n",
"inference_pipeline = Pipeline(ws, steps=steps)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"inference_run = experiment.submit(inference_pipeline)\n",
"inference_run.wait_for_completion(show_output=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Retrieve results\n",
"\n",
"Forecast results can be retrieved through the following code. The prediction results summary and the actual predictions are downloaded in forecast_results folder"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"forecasts = inference_run.get_pipeline_output(\"forecasts\")\n",
"forecasts.download(\"forecast_results\")"
]
},
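{
"cell_type": "markdown",
"metadata": {},
"source": [
"The snippet below is a minimal sketch of how the downloaded results could be inspected locally. It assumes the downloaded folder contains delimited text files; the exact file names and layout may vary by SDK version.\n",
"\n",
"```python\n",
"import os\n",
"\n",
"import pandas as pd\n",
"\n",
"# Collect any delimited result files from the download folder (hypothetical layout).\n",
"result_files = []\n",
"for root, _, files in os.walk(\"forecast_results\"):\n",
"    for name in files:\n",
"        if name.endswith((\".csv\", \".txt\")):\n",
"            result_files.append(os.path.join(root, name))\n",
"\n",
"print(\"Found result files:\", result_files)\n",
"if result_files:\n",
"    # Preview the first file; adjust the separator if the output is space-delimited.\n",
"    print(pd.read_csv(result_files[0]).head(10))\n",
"```"
]
},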
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Resbumit the Pipeline\n",
"\n",
"The inference pipeline can be submitted with different configurations."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"inference_run = experiment.submit(\n",
" inference_pipeline, pipeline_parameters={\"hierarchy_forecast_level\": \"state\"}\n",
")\n",
"inference_run.wait_for_completion(show_output=False)"
]
}
],
"metadata": {
"authors": [
{
"name": "jialiu"
}
],
"categories": [
"how-to-use-azureml",
"automated-machine-learning"
],
"kernelspec": {
"display_name": "Python 3.8 - AzureML",
"language": "python",
"name": "python38-azureml"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
}
},
"nbformat": 4,
"nbformat_minor": 4
}


@@ -0,0 +1,122 @@
---
page_type: sample
languages:
- python
products:
- azure-machine-learning
description: Tutorial showing how to solve complex machine learning time series forecasting problems at scale by using Azure Automated ML and the Many Models solution accelerator.
---
![Many Models Solution Accelerator Banner](images/mmsa.png)
# Many Models Solution Accelerator
<!--
Guidelines on README format: https://review.docs.microsoft.com/help/onboard/admin/samples/concepts/readme-template?branch=master
Guidance on onboarding samples to docs.microsoft.com/samples: https://review.docs.microsoft.com/help/onboard/admin/samples/process/onboarding?branch=master
Taxonomies for products and languages: https://review.docs.microsoft.com/new-hope/information-architecture/metadata/taxonomies?branch=master
-->
In the real world, many problems can be too complex to be solved by a single machine learning model. Whether that means predicting sales for each individual store, building a predictive maintenance model for hundreds of oil wells, or tailoring an experience to individual users, building a model for each instance can lead to improved results on many machine learning problems.
This pattern is very common across a wide variety of industries and applicable to many real-world use cases. Below are some examples we have seen where this pattern is being used.
- Energy and utility companies building predictive maintenance models for thousands of oil wells, hundreds of wind turbines or hundreds of smart meters
- Retail organizations building workforce optimization models for thousands of stores, campaign promotion propensity models, and price optimization models for hundreds of thousands of products they sell
- Restaurant chains building demand forecasting models across thousands of restaurants
- Banks and financial institutes building cash replenishment models for ATMs, or building personalized models for individuals
- Enterprises building revenue forecasting models at each division level
- Document management companies building text analytics and legal document search models for each state
Azure Machine Learning (AML) makes it easy to train, operate, and manage hundreds or even thousands of models. This repo will walk you through the end-to-end process of creating a many models solution from training to scoring to monitoring.
## Prerequisites
To use this solution accelerator, all you need is access to an [Azure subscription](https://azure.microsoft.com/free/) and an [Azure Machine Learning Workspace](https://docs.microsoft.com/azure/machine-learning/how-to-manage-workspace) that you'll create below.
While it's not required, a basic understanding of Azure Machine Learning will be helpful for understanding the solution. The following resources can help introduce you to AML:
1. [Azure Machine Learning Overview](https://azure.microsoft.com/services/machine-learning/)
2. [Azure Machine Learning Tutorials](https://docs.microsoft.com/azure/machine-learning/tutorial-1st-experiment-sdk-setup)
3. [Azure Machine Learning Sample Notebooks on Github](https://github.com/Azure/azureml-examples)
## Getting started
### 1. Deploy Resources
Start by deploying the resources to Azure. The button below will deploy Azure Machine Learning and its related resources:
<a href="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fmicrosoft%2Fsolution-accelerator-many-models%2Fmaster%2Fazuredeploy.json" target="_blank">
<img src="http://azuredeploy.net/deploybutton.png"/>
</a>
### 2. Configure Development Environment
Next you'll need to configure your [development environment](https://docs.microsoft.com/azure/machine-learning/how-to-configure-environment) for Azure Machine Learning. We recommend using a [Compute Instance](https://docs.microsoft.com/azure/machine-learning/how-to-configure-environment#compute-instance) as it's the fastest way to get up and running.
### 3. Run Notebooks
Once your development environment is set up, run through the Jupyter Notebooks sequentially following the steps outlined. By the end, you'll know how to train, score, and make predictions using the many models pattern on Azure Machine Learning.
![Sequence of Notebooks](./images/mmsa-overview.png)
## Contents
In this repo, you'll train and score a forecasting model for each orange juice brand and for each store at a (simulated) grocery chain. By the end, you'll have forecasted sales by using up to 11,973 models to predict sales for the next few weeks.
The data used in this sample is simulated based on the [Dominick's Orange Juice Dataset](http://www.cs.unitn.it/~taufer/QMMA/L10-OJ-Data.html#(1)), sales data from a Chicago area grocery store.
<img src="images/Flow_map.png" width="1000">
### Using Automated ML to train the models:
The [`auto-ml-forecasting-many-models.ipynb`](./auto-ml-forecasting-many-models.ipynb) notebook is a guided solution accelerator that demonstrates steps from data preparation to model training, forecasting with trained models, and operationalizing the solution.
## How-to-videos
Watch these how-to videos for a step-by-step walk-through of the many models solution accelerator to learn how to set up your models using Automated ML.
### Automated ML
[![Watch the video](https://media.giphy.com/media/dWUKfameudyNGRnp1t/giphy.gif)](https://channel9.msdn.com/Shows/Docs-AI/Building-Large-Scale-Machine-Learning-Forecasting-Models-using-Azure-Machine-Learnings-Automated-ML)
## Key concepts
### ParallelRunStep
[ParallelRunStep](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-steps/azureml.pipeline.steps.parallel_run_step.parallelrunstep?view=azure-ml-py) enables the parallel training of models and is commonly used for batch inferencing. This [document](https://docs.microsoft.com/azure/machine-learning/how-to-use-parallel-run-step) walks through some of the key concepts around ParallelRunStep.
### Pipelines
[Pipelines](https://docs.microsoft.com/azure/machine-learning/concept-ml-pipelines) allow you to create workflows in your machine learning projects. These workflows have a number of benefits including speed, simplicity, repeatability, and modularity.
### Automated Machine Learning
[Automated Machine Learning](https://docs.microsoft.com/azure/machine-learning/concept-automated-ml), also referred to as automated ML or AutoML, is the process of automating the time-consuming, iterative tasks of machine learning model development. It allows data scientists, analysts, and developers to build ML models with high scale, efficiency, and productivity, all while sustaining model quality.
### Other Concepts
In addition to ParallelRunStep, Pipelines and Automated Machine Learning, you'll also be working with the following concepts: [workspace](https://docs.microsoft.com/azure/machine-learning/concept-workspace), [datasets](https://docs.microsoft.com/azure/machine-learning/concept-data#datasets), [compute targets](https://docs.microsoft.com/azure/machine-learning/concept-compute-target#train), [python script steps](https://docs.microsoft.com/python/api/azureml-pipeline-steps/azureml.pipeline.steps.python_script_step.pythonscriptstep?view=azure-ml-py), and [Azure Open Datasets](https://azure.microsoft.com/services/open-datasets/).
## Contributing
This project welcomes contributions and suggestions. To learn more visit the [contributing](../../../CONTRIBUTING.md) section.
Most contributions require you to agree to a Contributor License Agreement (CLA)
declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide
a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions
provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.


@@ -0,0 +1,898 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<font color=\"red\" size=\"5\"><strong>!Important!</strong> </br>This notebook is outdated and is not supported by the AutoML Team. Please use the supported version ([link](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/pipelines/1k_demand_forecasting_with_pipeline_components/automl-forecasting-demand-many-models-in-pipeline)).</font>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Many Models - Automated ML\n",
"**_Generate many models time series forecasts with Automated Machine Learning_**\n",
"\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For this notebook we are using a synthetic dataset portraying sales data to predict the the quantity of a vartiety of product skus across several states, stores, and product categories.\n",
"\n",
"**NOTE: There are limits on how many runs we can do in parallel per workspace, and we currently recommend to set the parallelism to maximum of 320 runs per experiment per workspace. If users want to have more parallelism and increase this limit they might encounter Too Many Requests errors (HTTP 429).**"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Prerequisites\n",
"You'll need to create a compute Instance by following [these](https://learn.microsoft.com/en-us/azure/machine-learning/v1/how-to-create-manage-compute-instance?tabs=python) instructions."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1.0 Set up workspace, datastore, experiment"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1613003526897
}
},
"outputs": [],
"source": [
"import azureml.core\n",
"from azureml.core import Workspace, Datastore\n",
"import pandas as pd\n",
"\n",
"# Set up your workspace\n",
"ws = Workspace.from_config()\n",
"ws.get_details()\n",
"\n",
"# Set up your datastores\n",
"dstore = ws.get_default_datastore()\n",
"\n",
"output = {}\n",
"output[\"SDK version\"] = azureml.core.VERSION\n",
"output[\"Subscription ID\"] = ws.subscription_id\n",
"output[\"Workspace\"] = ws.name\n",
"output[\"Resource Group\"] = ws.resource_group\n",
"output[\"Location\"] = ws.location\n",
"output[\"Default datastore name\"] = dstore.name\n",
"output[\"SDK Version\"] = azureml.core.VERSION\n",
"pd.set_option(\"display.max_colwidth\", None)\n",
"outputDf = pd.DataFrame(data=output, index=[\"\"])\n",
"outputDf.T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Choose an experiment"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1613003540729
}
},
"outputs": [],
"source": [
"from azureml.core import Experiment\n",
"\n",
"experiment = Experiment(ws, \"automl-many-models\")\n",
"\n",
"print(\"Experiment name: \" + experiment.name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2.0 Data\n",
"\n",
"This notebook uses simulated orange juice sales data to walk you through the process of training many models on Azure Machine Learning using Automated ML. \n",
"\n",
"The time series data used in this example was simulated based on the University of Chicago's Dominick's Finer Foods dataset which featured two years of sales of 3 different orange juice brands for individual stores. The full simulated dataset includes 3,991 stores with 3 orange juice brands each thus allowing 11,973 models to be trained to showcase the power of the many models pattern.\n",
"\n",
" \n",
"In this notebook, two datasets will be created: one with all 11,973 files and one with only 10 files that can be used to quickly test and debug. For each dataset, you'll be walked through the process of:\n",
"\n",
"1. Registering the blob container as a Datastore to the Workspace\n",
"2. Registering a tabular dataset to the Workspace"
]
},
{
"cell_type": "markdown",
"metadata": {
"nteract": {
"transient": {
"deleting": false
}
}
},
"source": [
"### 2.1 Data Preparation\n",
"The OJ data is available in the public blob container. The data is split to be used for training and for inferencing. For the current dataset, the data was split on time column ('WeekStarting') before and after '1992-5-28' .\n",
"\n",
"The container has\n",
"<ol>\n",
" <li><b>'oj-data-tabular'</b> and <b>'oj-inference-tabular'</b> folders that contains training and inference data respectively for the 11,973 models. </li>\n",
" <li>It also has <b>'oj-data-small-tabular'</b> and <b>'oj-inference-small-tabular'</b> folders that has training and inference data for 10 models.</li>\n",
"</ol>\n",
"\n",
"To create the [TabularDataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabular_dataset.tabulardataset?view=azure-ml-py) needed for the ParallelRunStep, you first need to register the blob container to the workspace."
]
},
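{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an illustration of the kind of split described above, the sketch below shows how a local pandas DataFrame could be split on the time column. It is hypothetical: ``df`` is assumed to hold the full history with a 'WeekStarting' column, and the published containers already contain pre-split data.\n",
"\n",
"```python\n",
"import pandas as pd\n",
"\n",
"# Hypothetical: df holds the full history with a 'WeekStarting' timestamp column.\n",
"df[\"WeekStarting\"] = pd.to_datetime(df[\"WeekStarting\"])\n",
"split_date = pd.Timestamp(\"1992-05-28\")\n",
"\n",
"train_df = df[df[\"WeekStarting\"] <= split_date]  # history used for training\n",
"inference_df = df[df[\"WeekStarting\"] > split_date]  # later rows used for inference\n",
"```"
]
},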
{
"cell_type": "markdown",
"metadata": {
"nteract": {
"transient": {
"deleting": false
}
}
},
"source": [
"<b> To use your own data, put your own data in a blobstore folder. As shown it can be one file or multiple files. We can then register datastore using that blob as shown below.\n",
" \n",
"<h3> How sample data in blob store looks like</h3>\n",
"\n",
"['oj-data-tabular'](https://ms.portal.azure.com/#blade/Microsoft_Azure_Storage/ContainerMenuBlade/overview/storageAccountId/%2Fsubscriptions%2F102a16c3-37d3-48a8-9237-4c9b1e8e80e0%2FresourceGroups%2FAutoMLSampleNotebooksData%2Fproviders%2FMicrosoft.Storage%2FstorageAccounts%2Fautomlsamplenotebookdata/path/automl-sample-notebook-data/etag/%220x8D84EAA65DE50B7%22/defaultEncryptionScope/%24account-encryption-key/denyEncryptionScopeOverride//defaultId//publicAccessVal/Container)</b>\n",
"![image-4.png](mm-1.png)\n",
"\n",
"['oj-inference-tabular'](https://ms.portal.azure.com/#blade/Microsoft_Azure_Storage/ContainerMenuBlade/overview/storageAccountId/%2Fsubscriptions%2F102a16c3-37d3-48a8-9237-4c9b1e8e80e0%2FresourceGroups%2FAutoMLSampleNotebooksData%2Fproviders%2FMicrosoft.Storage%2FstorageAccounts%2Fautomlsamplenotebookdata/path/automl-sample-notebook-data/etag/%220x8D84EAA65DE50B7%22/defaultEncryptionScope/%24account-encryption-key/denyEncryptionScopeOverride//defaultId//publicAccessVal/Container)\n",
"![image-3.png](mm-2.png)\n",
"\n",
"['oj-data-small-tabular'](https://ms.portal.azure.com/#blade/Microsoft_Azure_Storage/ContainerMenuBlade/overview/storageAccountId/%2Fsubscriptions%2F102a16c3-37d3-48a8-9237-4c9b1e8e80e0%2FresourceGroups%2FAutoMLSampleNotebooksData%2Fproviders%2FMicrosoft.Storage%2FstorageAccounts%2Fautomlsamplenotebookdata/path/automl-sample-notebook-data/etag/%220x8D84EAA65DE50B7%22/defaultEncryptionScope/%24account-encryption-key/denyEncryptionScopeOverride//defaultId//publicAccessVal/Container)\n",
"\n",
"![image-5.png](mm-3.png)\n",
"\n",
"['oj-inference-small-tabular'](https://ms.portal.azure.com/#blade/Microsoft_Azure_Storage/ContainerMenuBlade/overview/storageAccountId/%2Fsubscriptions%2F102a16c3-37d3-48a8-9237-4c9b1e8e80e0%2FresourceGroups%2FAutoMLSampleNotebooksData%2Fproviders%2FMicrosoft.Storage%2FstorageAccounts%2Fautomlsamplenotebookdata/path/automl-sample-notebook-data/etag/%220x8D84EAA65DE50B7%22/defaultEncryptionScope/%24account-encryption-key/denyEncryptionScopeOverride//defaultId//publicAccessVal/Container)\n",
"![image-6.png](mm-4.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 2.2 Register the blob container as DataStore\n",
"\n",
"A Datastore is a place where data can be stored that is then made accessible to a compute either by means of mounting or copying the data to the compute target.\n",
"\n",
"Please refer to [Datastore](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.datastore(class)?view=azure-ml-py) documentation on how to access data from Datastore.\n",
"\n",
"In this next step, we will be registering blob storage as datastore to the Workspace."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Datastore\n",
"\n",
"# Please change the following to point to your own blob container and pass in account_key\n",
"blob_datastore_name = \"automl_many_models\"\n",
"container_name = \"automl-sample-notebook-data\"\n",
"account_name = \"automlsamplenotebookdata\"\n",
"\n",
"oj_datastore = Datastore.register_azure_blob_container(\n",
" workspace=ws,\n",
" datastore_name=blob_datastore_name,\n",
" container_name=container_name,\n",
" account_name=account_name,\n",
" create_if_not_exists=True,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 2.3 Using tabular datasets \n",
"\n",
"Now that the datastore is available from the Workspace, [TabularDataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabular_dataset.tabulardataset?view=azure-ml-py) can be created. Datasets in Azure Machine Learning are references to specific data in a Datastore. We are using TabularDataset, so that users who have their data which can be in one or many files (*.parquet or *.csv) and have not split up data according to group columns needed for training, can do so using out of box support for 'partiion_by' feature of TabularDataset shown in section 5.0 below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1613007017296
}
},
"outputs": [],
"source": [
"from azureml.core import Dataset\n",
"\n",
"ds_name_small = \"oj-data-small-tabular\"\n",
"input_ds_small = Dataset.Tabular.from_delimited_files(\n",
" path=oj_datastore.path(ds_name_small + \"/\"), validate=False\n",
")\n",
"\n",
"inference_name_small = \"oj-inference-small-tabular\"\n",
"inference_ds_small = Dataset.Tabular.from_delimited_files(\n",
" path=oj_datastore.path(inference_name_small + \"/\"), validate=False\n",
")"
]
},
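{
"cell_type": "markdown",
"metadata": {},
"source": [
"A brief sketch of the ``partition_by`` feature mentioned above (illustrative only; the folders used in this notebook are already split by 'Store' and 'Brand', so this is not required here, and the target path name is an assumption):\n",
"\n",
"```python\n",
"from azureml.data.datapath import DataPath\n",
"\n",
"# Repartition a TabularDataset by the grouping columns needed for training and\n",
"# upload the partitioned copy to the default datastore.\n",
"partitioned_ds = input_ds_small.partition_by(\n",
"    partition_keys=[\"Store\", \"Brand\"],\n",
"    target=DataPath(dstore, \"oj_partitioned\"),\n",
")\n",
"```"
]
},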
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 2.4 Configure data with ``OutputFileDatasetConfig`` objects\n",
"This step shows how to configure output data from a pipeline step. One of the use cases for this step is when you want to do some preprocessing before feeding the data to training step. Intermediate data (or output of a step) is represented by an ``OutputFileDatasetConfig`` object. ``output_data`` is produced as the output of a step. Optionally, this data can be registered as a dataset by calling the ``register_on_complete`` method. If you create an ``OutputFileDatasetConfig`` in one step and use it as an input to another step, that data dependency between steps creates an implicit execution order in the pipeline.\n",
"\n",
"``OutputFileDatasetConfig`` objects return a directory, and by default write output to the default datastore of the workspace.\n",
"\n",
"Since instance creation for class ``OutputTabularDatasetConfig`` is not allowed, we first create an instance of this class. Then we use the ``read_parquet_files`` method to read the parquet file into ``OutputTabularDatasetConfig``."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.data.output_dataset_config import OutputFileDatasetConfig\n",
"\n",
"output_data = OutputFileDatasetConfig(\n",
" name=\"processed_data\", destination=(dstore, \"outputdataset/{run-id}/{output-name}\")\n",
").as_upload()\n",
"# output_data_dataset = output_data.register_on_complete(\n",
"# name='processed_data', description = 'files from prev step')\n",
"output_data = output_data.read_parquet_files()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 3.0 Build the training pipeline\n",
"Now that the dataset, WorkSpace, and datastore are set up, we can put together a pipeline for training.\n",
"\n",
"> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Choose a compute target\n",
"\n",
"You will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.\n",
"\n",
"\\*\\*Creation of AmlCompute takes approximately 5 minutes.**\n",
"\n",
"If the AmlCompute with that name is already in your workspace this code will skip the creation process. As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this [article](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-manage-quotas) on the default limits and how to request more quota."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1613007037308
}
},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"\n",
"# Name your cluster\n",
"compute_name = \"mm-compute-v1\"\n",
"\n",
"\n",
"if compute_name in ws.compute_targets:\n",
" compute_target = ws.compute_targets[compute_name]\n",
" if compute_target and type(compute_target) is AmlCompute:\n",
" print(\"Found compute target: \" + compute_name)\n",
"else:\n",
" print(\"Creating a new compute target...\")\n",
" provisioning_config = AmlCompute.provisioning_configuration(\n",
" vm_size=\"STANDARD_D14_V2\", max_nodes=20\n",
" )\n",
" # Create the compute target\n",
" compute_target = ComputeTarget.create(ws, compute_name, provisioning_config)\n",
"\n",
" # Can poll for a minimum number of nodes and for a specific timeout.\n",
" # If no min node count is provided it will use the scale settings for the cluster\n",
" compute_target.wait_for_completion(\n",
" show_output=True, min_node_count=None, timeout_in_minutes=20\n",
" )\n",
"\n",
" # For a more detailed view of current cluster status, use the 'status' property\n",
" print(compute_target.status.serialize())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Configure the training run's environment\n",
"The next step is making sure that the remote training run has all the dependencies needed by the training steps. Dependencies and the runtime context are set by creating and configuring a RunConfiguration object.\n",
"\n",
"The code below shows two options for handling dependencies. As presented, with ``USE_CURATED_ENV = True``, the configuration is based on a [curated environment](https://docs.microsoft.com/en-us/azure/machine-learning/resource-curated-environments). Curated environments have prebuilt Docker images in the [Microsoft Container Registry](https://hub.docker.com/publishers/microsoftowner). For more information, see [Azure Machine Learning curated environments](https://docs.microsoft.com/en-us/azure/machine-learning/resource-curated-environments).\n",
"\n",
"The path taken if you change ``USE_CURATED_ENV`` to False shows the pattern for explicitly setting your dependencies. In that scenario, a new custom Docker image will be created and registered in an Azure Container Registry within your resource group (see [Introduction to private Docker container registries in Azure](https://docs.microsoft.com/en-us/azure/container-registry/container-registry-intro)). Building and registering this image can take quite a few minutes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"from azureml.core import Environment\n",
"\n",
"aml_run_config = RunConfiguration()\n",
"aml_run_config.target = compute_target\n",
"\n",
"USE_CURATED_ENV = True\n",
"if USE_CURATED_ENV:\n",
" curated_environment = Environment.get(\n",
" workspace=ws, name=\"AzureML-sklearn-1.5\"\n",
" )\n",
" aml_run_config.environment = curated_environment\n",
"else:\n",
" aml_run_config.environment.python.user_managed_dependencies = False\n",
"\n",
" # Add some packages relied on by data prep step\n",
" aml_run_config.environment.python.conda_dependencies = CondaDependencies.create(\n",
" conda_packages=[\"pandas\", \"scikit-learn\"],\n",
" pip_packages=[\"azureml-sdk\", \"azureml-dataset-runtime[fuse,pandas]\"],\n",
" pin_sdk_version=False,\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Set up training parameters\n",
"\n",
"We need to provide ``ForecastingParameters``, ``AutoMLConfig`` and ``ManyModelsTrainParameters`` objects. For the forecasting task we also need to define several settings including the name of the time column, the maximum forecast horizon, and the partition column name(s) definition.\n",
"\n",
"#### ``ForecastingParameters`` arguments\n",
"| Property | Description|\n",
"| :--------------- | :------------------- |\n",
"| **forecast_horizon** | The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly). Periods are inferred from your data. |\n",
"| **time_column_name** | The name of your time column. |\n",
"| **time_series_id_column_names** | The column names used to uniquely identify timeseries in data that has multiple rows with the same timestamp. |\n",
"| **cv_step_size** | Number of periods between two consecutive cross-validation folds. The default value is \\\"auto\\\", in which case AutoMl determines the cross-validation step size automatically, if a validation set is not provided. Or users could specify an integer value. |\n",
"\n",
"#### ``AutoMLConfig`` arguments\n",
"| Property | Description|\n",
"| :--------------- | :------------------- |\n",
"| **task** | forecasting |\n",
"| **primary_metric** | This is the metric that you want to optimize.<br> Forecasting supports the following primary metrics <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i> |\n",
"| **blocked_models** | Blocked models won't be used by AutoML. |\n",
"| **iteration_timeout_minutes** | Maximum amount of time in minutes that the model can train. This is optional but provides customers with greater control on exit criteria. |\n",
"| **iterations** | Number of models to train. This is optional but provides customers with greater control on exit criteria. |\n",
"| **experiment_timeout_hours** | Maximum amount of time in hours that each experiment can take before it terminates. This is optional but provides customers with greater control on exit criteria. **It does not control the overall timeout for the pipeline run, instead controls the timeout for each training run per partitioned time series.** |\n",
"| **label_column_name** | The name of the label column. |\n",
"| **n_cross_validations** | Number of cross validation splits. The default value is \\\"auto\\\", in which case AutoMl determines the number of cross-validations automatically, if a validation set is not provided. Or users could specify an integer value. Rolling Origin Validation is used to split time-series in a temporally consistent way. |\n",
"| **enable_early_stopping** | Flag to enable early termination if the primary metric is no longer improving. |\n",
"| **enable_engineered_explanations** | Engineered feature explanations will be downloaded if enable_engineered_explanations flag is set to True. By default it is set to False to save storage space. |\n",
"| **track_child_runs** | Flag to disable tracking of child runs. Only best run is tracked if the flag is set to False (this includes the model and metrics of the run). |\n",
"| **pipeline_fetch_max_batch_size** | Determines how many pipelines (training algorithms) to fetch at a time for training, this helps reduce throttling when training at large scale. |\n",
"\n",
"\n",
"#### ``ManyModelsTrainParameters`` arguments\n",
"| Property | Description|\n",
"| :--------------- | :------------------- |\n",
"| **automl_settings** | The ``AutoMLConfig`` object defined above. |\n",
"| **partition_column_names** | The names of columns used to group your models. For timeseries, the groups must not split up individual time-series. That is, each group must contain one or more whole time-series. |"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1613007061544
}
},
"outputs": [],
"source": [
"from azureml.train.automl.runtime._many_models.many_models_parameters import (\n",
" ManyModelsTrainParameters,\n",
")\n",
"from azureml.automl.core.forecasting_parameters import ForecastingParameters\n",
"from azureml.train.automl.automlconfig import AutoMLConfig\n",
"\n",
"partition_column_names = [\"Store\", \"Brand\"]\n",
"\n",
"forecasting_parameters = ForecastingParameters(\n",
" time_column_name=\"WeekStarting\",\n",
" forecast_horizon=6,\n",
" time_series_id_column_names=partition_column_names,\n",
" cv_step_size=\"auto\",\n",
")\n",
"\n",
"automl_settings = AutoMLConfig(\n",
" task=\"forecasting\",\n",
" primary_metric=\"normalized_root_mean_squared_error\",\n",
" iteration_timeout_minutes=10,\n",
" iterations=15,\n",
" experiment_timeout_hours=0.25,\n",
" label_column_name=\"Quantity\",\n",
" n_cross_validations=\"auto\", # Feel free to set to a small integer (>=2) if runtime is an issue.\n",
" track_child_runs=False,\n",
" forecasting_parameters=forecasting_parameters,\n",
")\n",
"\n",
"mm_paramters = ManyModelsTrainParameters(\n",
" automl_settings=automl_settings, partition_column_names=partition_column_names\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Construct your pipeline steps\n",
"Once you have the compute resource and environment created, you're ready to define your pipeline's steps. There are many built-in steps available via the Azure Machine Learning SDK, as you can see on the [reference documentation for the azureml.pipeline.steps package](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-steps/azureml.pipeline.steps?view=azure-ml-py). The most flexible class is [PythonScriptStep](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-steps/azureml.pipeline.steps.python_script_step.pythonscriptstep?view=azure-ml-py), which runs a Python script.\n",
"\n",
"Your data preparation code is in a subdirectory (in this example, \"data_preprocessing_tabular.py\" in the directory \"./scripts\"). As part of the pipeline creation process, this directory is zipped and uploaded to the compute_target and the step runs the script specified as the value for ``script_name``.\n",
"\n",
"The ``arguments`` values specify the inputs and outputs of the step. In the example below, the baseline data is the ``input_ds_small`` dataset. The script data_preprocessing_tabular.py does whatever data-transformation tasks are appropriate to the task at hand and outputs the data to ``output_data``, of type ``OutputFileDatasetConfig``. For more information, see [Moving data into and between ML pipeline steps (Python)](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-move-data-in-out-of-pipelines). The step will run on the machine defined by ``compute_target``, using the configuration ``aml_run_config``.\n",
"\n",
"Reuse of previous results (``allow_reuse``) is key when using pipelines in a collaborative environment since eliminating unnecessary reruns offers agility. Reuse is the default behavior when the ``script_name``, ``inputs``, and the parameters of a step remain the same. When reuse is allowed, results from the previous run are immediately sent to the next step. If ``allow_reuse`` is set to False, a new run will always be generated for this step during pipeline execution.\n",
"\n",
"> Note that we only support partitioned FileDataset and TabularDataset without partition when using such output as input.\n",
"\n",
"> Note that we **drop column** \"Revenue\" from the dataset in this step to avoid information leak as \"Quantity\" = \"Revenue\" / \"Price\". **Please modify the logic based on your data**."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.pipeline.steps import PythonScriptStep\n",
"\n",
"dataprep_source_dir = \"./scripts\"\n",
"entry_point = \"data_preprocessing_tabular.py\"\n",
"ds_input = input_ds_small.as_named_input(\"train_10_models\")\n",
"\n",
"data_prep_step = PythonScriptStep(\n",
" script_name=entry_point,\n",
" source_directory=dataprep_source_dir,\n",
" arguments=[\"--input\", ds_input, \"--output\", output_data],\n",
" compute_target=compute_target,\n",
" runconfig=aml_run_config,\n",
" allow_reuse=False,\n",
")\n",
"\n",
"input_ds_small = output_data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Set up many models pipeline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Parallel run step is leveraged to train multiple models at once. To configure the ParallelRunConfig you will need to determine the appropriate number of workers and nodes for your use case. The ``process_count_per_node`` is based off the number of cores of the compute VM. The node_count will determine the number of master nodes to use, increasing the node count will speed up the training process.\n",
"\n",
"| Property | Description|\n",
"| :--------------- | :------------------- |\n",
"| **experiment** | The experiment used for training. |\n",
"| **train_data** | The file dataset to be used as input to the training run. |\n",
"| **node_count** | The number of compute nodes to be used for running the user script. We recommend to start with 3 and increase the node_count if the training time is taking too long. |\n",
"| **process_count_per_node** | Process count per node, we recommend 2:1 ratio for number of cores: number of processes per node. eg. If node has 16 cores then configure 8 or less process count per node for optimal performance. |\n",
"| **train_pipeline_parameters** | The set of configuration parameters defined in the previous section. |\n",
"| **run_invocation_timeout** | Maximum amount of time in seconds that the ``ParallelRunStep`` class is allowed. This is optional but provides customers with greater control on exit criteria. This must be greater than ``experiment_timeout_hours`` by at least 300 seconds. |\n",
"\n",
"Calling this method will create a new aggregated dataset which is generated dynamically on pipeline execution.\n",
"\n",
"**Note**: Total time taken for the **training step** in the pipeline to complete = $ \\frac{t}{ p \\times n } \\times ts $\n",
"where,\n",
"- $ t $ is time taken for training one partition (can be viewed in the training logs)\n",
"- $ p $ is ``process_count_per_node``\n",
"- $ n $ is ``node_count``\n",
"- $ ts $ is total number of partitions in time series based on ``partition_column_names``"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.contrib.automl.pipeline.steps import AutoMLPipelineBuilder\n",
"\n",
"\n",
"training_pipeline_steps = AutoMLPipelineBuilder.get_many_models_train_steps(\n",
" experiment=experiment,\n",
" train_data=input_ds_small,\n",
" compute_target=compute_target,\n",
" node_count=2,\n",
" process_count_per_node=8,\n",
" run_invocation_timeout=1200,\n",
" train_pipeline_parameters=mm_paramters,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.pipeline.core import Pipeline\n",
"\n",
"training_pipeline = Pipeline(ws, steps=training_pipeline_steps)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Submit the pipeline to run\n",
"Next we submit our pipeline to run. The whole training pipeline takes about 40m using a STANDARD_D16S_V3 VM with our current ParallelRunConfig setting."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"training_run = experiment.submit(training_pipeline)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"training_run.wait_for_completion(show_output=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Check the run status, if training_run is in completed state, continue to forecasting. If training_run is in another state, check the portal for failures."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 5.0 Publish and schedule the train pipeline (Optional)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 5.1 Publish the pipeline\n",
"\n",
"Once you have a pipeline you're happy with, you can publish a pipeline so you can call it programmatically later on. See this [tutorial](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-create-your-first-pipeline#publish-a-pipeline) for additional information on publishing and calling pipelines."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# published_pipeline = training_pipeline.publish(name = 'automl_train_many_models',\n",
"# description = 'train many models',\n",
"# version = '1',\n",
"# continue_on_step_failure = False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 5.2 Schedule the pipeline\n",
"You can also [schedule the pipeline](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-schedule-pipelines) to run on a time-based or change-based schedule. This could be used to automatically retrain models every month or based on another trigger such as data drift."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# from azureml.pipeline.core import Schedule, ScheduleRecurrence\n",
"\n",
"# training_pipeline_id = published_pipeline.id\n",
"\n",
"# recurrence = ScheduleRecurrence(frequency=\"Month\", interval=1, start_time=\"2020-01-01T09:00:00\")\n",
"# recurring_schedule = Schedule.create(ws, name=\"automl_training_recurring_schedule\",\n",
"# description=\"Schedule Training Pipeline to run on the first day of every month\",\n",
"# pipeline_id=training_pipeline_id,\n",
"# experiment_name=experiment.name,\n",
"# recurrence=recurrence)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 6.0 Forecasting"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Set up output dataset for inference data\n",
"Output of inference can be represented as [OutputFileDatasetConfig](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.output_dataset_config.outputdatasetconfig?view=azure-ml-py) object and OutputFileDatasetConfig can be registered as a dataset. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.data import OutputFileDatasetConfig\n",
"\n",
"output_inference_data_ds = OutputFileDatasetConfig(\n",
" name=\"many_models_inference_output\", destination=(dstore, \"oj/inference_data/\")\n",
").register_on_complete(name=\"oj_inference_data_ds\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For many models we need to provide the ManyModelsInferenceParameters object.\n",
"\n",
"#### ``ManyModelsInferenceParameters`` arguments\n",
"| Property | Description|\n",
"| :--------------- | :------------------- |\n",
"| **partition_column_names** | List of column names that identifies groups. |\n",
"| **target_column_name** | \\[Optional] Column name only if the inference dataset has the target. |\n",
"| **time_column_name** | \\[Optional] Time column name only if it is timeseries. |\n",
"| **inference_type** | \\[Optional] Which inference method to use on the model. Possible values are 'forecast', 'predict_proba', and 'predict'. |\n",
"| **forecast_mode** | \\[Optional] The type of forecast to be used, either 'rolling' or 'recursive'; defaults to 'recursive'. |\n",
"| **step** | \\[Optional] Number of periods to advance the forecasting window in each iteration **(for rolling forecast only)**; defaults to 1. |\n",
"\n",
"#### ``get_many_models_batch_inference_steps`` arguments\n",
"| Property | Description|\n",
"| :--------------- | :------------------- |\n",
"| **experiment** | The experiment used for inference run. |\n",
"| **inference_data** | The data to use for inferencing. It should be the same schema as used for training.\n",
"| **compute_target** | The compute target that runs the inference pipeline. |\n",
"| **node_count** | The number of compute nodes to be used for running the user script. We recommend to start with the number of cores per node (varies by compute sku). |\n",
"| **process_count_per_node** | \\[Optional] The number of processes per node. By default it's 2 (should be at most half of the number of cores in a single node of the compute cluster that will be used for the experiment).\n",
"| **inference_pipeline_parameters** | \\[Optional] The ``ManyModelsInferenceParameters`` object defined above. |\n",
"| **append_row_file_name** | \\[Optional] The name of the output file (optional, default value is 'parallel_run_step.txt'). Supports 'txt' and 'csv' file extension. A 'txt' file extension generates the output in 'txt' format with space as separator without column names. A 'csv' file extension generates the output in 'csv' format with comma as separator and with column names. |\n",
"| **train_run_id** | \\[Optional] The run id of the **training pipeline**. By default it is the latest successful training pipeline run in the experiment. |\n",
"| **train_experiment_name** | \\[Optional] The train experiment that contains the train pipeline. This one is only needed when the train pipeline is not in the same experiement as the inference pipeline. |\n",
"| **run_invocation_timeout** | \\[Optional] Maximum amount of time in seconds that the ``ParallelRunStep`` class is allowed. This is optional but provides customers with greater control on exit criteria. |\n",
"| **output_datastore** | \\[Optional] The ``Datastore`` or ``OutputDatasetConfig`` to be used for output. If specified any pipeline output will be written to that location. If unspecified the default datastore will be used. |\n",
"| **arguments** | \\[Optional] Arguments to be passed to inference script. Possible argument is '--forecast_quantiles' followed by quantile values. |"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.contrib.automl.pipeline.steps import AutoMLPipelineBuilder\n",
"from azureml.train.automl.runtime._many_models.many_models_parameters import (\n",
" ManyModelsInferenceParameters,\n",
")\n",
"\n",
"mm_parameters = ManyModelsInferenceParameters(\n",
" partition_column_names=[\"Store\", \"Brand\"],\n",
" time_column_name=\"WeekStarting\",\n",
" target_column_name=\"Quantity\",\n",
")\n",
"\n",
"output_file_name = \"parallel_run_step.csv\"\n",
"\n",
"inference_steps = AutoMLPipelineBuilder.get_many_models_batch_inference_steps(\n",
" experiment=experiment,\n",
" inference_data=inference_ds_small,\n",
" node_count=2,\n",
" process_count_per_node=8,\n",
" compute_target=compute_target,\n",
" run_invocation_timeout=300,\n",
" output_datastore=output_inference_data_ds,\n",
" train_run_id=training_run.id,\n",
" train_experiment_name=training_run.experiment.name,\n",
" inference_pipeline_parameters=mm_parameters,\n",
" append_row_file_name=output_file_name,\n",
" arguments=[\"--forecast_quantiles\", 0.1, 0.9],\n",
")"
]
},
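{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a hedged illustration of the ``forecast_mode`` and ``step`` options from the parameter table above, a rolling-evaluation variant of the inference parameters could look like the sketch below (not used in this notebook, which keeps the default recursive forecast):\n",
"\n",
"```python\n",
"# Illustrative only: evaluate with a rolling forecast instead of the default recursive one.\n",
"mm_parameters_rolling = ManyModelsInferenceParameters(\n",
"    partition_column_names=[\"Store\", \"Brand\"],\n",
"    time_column_name=\"WeekStarting\",\n",
"    target_column_name=\"Quantity\",\n",
"    forecast_mode=\"rolling\",  # 'rolling' or 'recursive' (default)\n",
"    step=1,  # advance the forecasting window one period per iteration\n",
")\n",
"```"
]
},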
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.pipeline.core import Pipeline\n",
"\n",
"inference_pipeline = Pipeline(ws, steps=inference_steps)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"inference_run = experiment.submit(inference_pipeline)\n",
"inference_run.wait_for_completion(show_output=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Retrieve results\n",
"\n",
"The forecasting pipeline forecasts the orange juice quantity for a Store by Brand. The pipeline returns one file with the predictions for each store and outputs the result to the forecasting_output Blob container. The details of the blob container is listed in 'forecasting_output.txt' under Outputs+logs. \n",
"\n",
"The following code snippet:\n",
"1. Downloads the contents of the output folder that is passed in the parallel run step \n",
"2. Reads the output file that has the predictions as pandas dataframe and \n",
"3. Displays the top 10 rows of the predictions"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.contrib.automl.pipeline.steps.utilities import get_output_from_mm_pipeline\n",
"\n",
"forecasting_results_name = \"forecasting_results\"\n",
"forecasting_output_name = \"many_models_inference_output\"\n",
"forecast_file = get_output_from_mm_pipeline(\n",
" inference_run, forecasting_results_name, forecasting_output_name, output_file_name\n",
")\n",
"df = pd.read_csv(forecast_file)\n",
"print(\n",
" \"Prediction has \", df.shape[0], \" rows. Here the first 10 rows are being displayed.\"\n",
")\n",
"df.head(10)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 7.0 Publish and schedule the inference pipeline (Optional)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 7.1 Publish the pipeline\n",
"\n",
"Once you have a pipeline you're happy with, you can publish a pipeline so you can call it programmatically later on. See this [tutorial](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-create-your-first-pipeline#publish-a-pipeline) for additional information on publishing and calling pipelines."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# published_pipeline_inf = inference_pipeline.publish(name = 'automl_forecast_many_models',\n",
"# description = 'forecast many models',\n",
"# version = '1',\n",
"# continue_on_step_failure = False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 7.2 Schedule the pipeline\n",
"You can also [schedule the pipeline](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-schedule-pipelines) to run on a time-based or change-based schedule. This could be used to automatically retrain or forecast models every month or based on another trigger such as data drift."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# from azureml.pipeline.core import Schedule, ScheduleRecurrence\n",
"\n",
"# forecasting_pipeline_id = published_pipeline.id\n",
"\n",
"# recurrence = ScheduleRecurrence(frequency=\"Month\", interval=1, start_time=\"2020-01-01T09:00:00\")\n",
"# recurring_schedule = Schedule.create(ws, name=\"automl_forecasting_recurring_schedule\",\n",
"# description=\"Schedule Forecasting Pipeline to run on the first day of every week\",\n",
"# pipeline_id=forecasting_pipeline_id,\n",
"# experiment_name=experiment.name,\n",
"# recurrence=recurrence)"
]
}
],
"metadata": {
"authors": [
{
"name": "jialiu"
}
],
"categories": [
"how-to-use-azureml",
"automated-machine-learning"
],
"kernelspec": {
"display_name": "Python 3.8 - AzureML",
"language": "python",
"name": "python38-azureml"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
},
"vscode": {
"interpreter": {
"hash": "6bd77c88278e012ef31757c15997a7bea8c943977c43d6909403c00ae11d43ca"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}

Twelve binary image files were also added in this diff; their contents are not shown.