`nuget restore` actually runs through MSBuild! However, #15855 added a
dependency from our project on a system-installed _or locally detected_
`vcpkg.targets` (or `.props`).
Our build runs `nuget restore` before finding or installing vcpkg, so
the rules in our project file would try to import vcpkg before it had
been found (or installed).
On build agents with vcpkg installed via the VS workload, this was fine:
we would import the one that came with VS and go on our merry way. On
build agents where it needs to be installed locally, it could not be
imported.
The fix in this PR is to install/bootstrap vcpkg before running nuget.
I tried to isolate the vcpkg rules to only run _in the absence of
nuget_, but that didn't work.
This pull request removes the following vendored open source, in favor
of getting it from vcpkg:
- CLI11 2.4
- jsoncpp 1.9
- fmt 7.1.3
- gsl 3.1 (not vendored, but submoduled--arguably worse!)
Now that Visual Studio 2022 includes a built-in workload for vcpkg, the
onboarding process is much smoother. Terminal should only require the
vcpkg workload.
I've added some build rules that detect vcpkg via VS and via the user's
environment before falling back to a location in the source tree. The CI
pipeline will fall back to installing and bootstrapping vcpkg in
dep/vcpkg if necessary.
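As a rough sketch of that detection order (the real rules live in
MSBuild; the VS workload path and repo layout below are assumptions):

```powershell
$candidates = @(
    $env:VCPKG_ROOT,                         # user's own installation
    "$env:VSINSTALLDIR\VC\vcpkg",            # VS 2022 workload copy (assumed path)
    (Join-Path $PSScriptRoot 'dep\vcpkg')    # in-tree fallback
)
$vcpkgRoot = $candidates |
    Where-Object { $_ -and (Test-Path (Join-Path $_ 'vcpkg.exe')) } |
    Select-Object -First 1

if (-not $vcpkgRoot) {
    # Bootstrap into the source tree so the later import can resolve.
    $vcpkgRoot = Join-Path $PSScriptRoot 'dep\vcpkg'
    if (-not (Test-Path "$vcpkgRoot\.git")) {
        git clone https://github.com/microsoft/vcpkg $vcpkgRoot
    }
    & (Join-Path $vcpkgRoot 'bootstrap-vcpkg.bat') -disableMetrics
}

# Only now is it safe to restore: the project files can import vcpkg's
# MSBuild props/targets without failing.
nuget restore OpenConsole.sln
```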
Some OSS has not been (and will not be) migrated:
- wyhash: ours is included directly in til/hash
- pcg_random: we have a stripped down copy compared to vcpkg
- stb_rect: vcpkg only ships *all of STB*; ours is a stripped down copy
- chromium numerics: vcpkg does not ship Chromium, especially not this
tiny fraction of Chromium
- dynamic_bitset and libpopcnt: removing in #17510
- interval_tree: no vcpkg equivalent
To support the needs of the inbox Windows build, I've split up our vcpkg
manifest into dependencies for all projects and dependencies just for
Terminal. To that end, we now offer a `terminal` feature. The vcpkg
rules in `common.build.pre.props` are set up to turn it on, whereas the
build rules we eventually write for the OS will not be.
Most of the work is concentrated in `common.build.pre.props`.
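For illustration, this is roughly the command-line equivalent of what
those rules do; the package split in the comment is invented, not a
transcription of our actual manifest:

```powershell
# vcpkg.json (abridged, hypothetical):
#   {
#     "dependencies": [ "jsoncpp", "fmt" ],
#     "features": {
#       "terminal": { "description": "...", "dependencies": [ "cli11", "ms-gsl" ] }
#     }
#   }
#
# Resolve the manifest with the `terminal` feature turned on; the inbox
# Windows build would simply omit --x-feature.
& "$vcpkgRoot\vcpkg.exe" install --triplet x64-windows --x-feature=terminal
```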
This fixes some more issues not properly covered by #17526:
* Fixed `_locComment_text` comments being effectively ignored.
* Fixed line splitting of comments (CRLF vs LF).
* Fixed BOM suppression.
* Fixed support for having multiple `{Locked=...}` comments.
* Modified `Generate-PseudoLocalizations.ps1` to find the .xml files
(as opposed to the .resw files used for the other translations).
* Added support for the new format by adding new XPath expressions,
and stripping comments/attributes as needed (see the sketch after this
list).
* Fixed `PreserveWhitespace` during XML loading.
* Fixed compliance with PowerShell's strict mode.
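Here's an illustrative fragment of the approach (not the real script;
`PseudoLocalize` is a toy stand-in for the actual transformation):

```powershell
function PseudoLocalize([string]$Text, [string[]]$Locked) {
    # Toy transformation: bracket the string. A real implementation
    # would mangle the text while leaving the $Locked spans intact.
    return "[!! $Text !!]"
}

$doc = [System.Xml.XmlDocument]::new()
$doc.PreserveWhitespace = $true      # keep untouched nodes byte-identical
$doc.Load("$PWD\Resources.resw")

foreach ($data in $doc.SelectNodes('//data')) {
    $commentNode = $data.SelectSingleNode('comment')
    $comment = if ($commentNode) { $commentNode.InnerText } else { '' }

    # A comment may carry several {Locked=...} directives; collect every
    # substring that must not be transformed.
    $locked = [regex]::Matches($comment, '\{Locked=([^}]*)\}') |
        ForEach-Object { $_.Groups[1].Value }

    $value = $data.SelectSingleNode('value')
    $value.InnerText = PseudoLocalize $value.InnerText $locked
}
```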
## Validation Steps Performed
Ran it locally and compared the results. ✅
We have to run in an older OneBranch Windows container image due to
compiler bugs.
This change prevents us from having to wait for the container image to
download for build legs that _aren't_ using the compiler.
This allows us to remove the dependency on the `Terminal.Internal`
repository.
I have also added some parameters to the build pipeline to ease testing.
This also updates the localization pipeline to check in translations for
the PDPs.
Right now, the primary source for PDPs is the Terminal.Internal
repository. They are submitted from there, and pulled back in as though
they were destined for the internal repo. We rename them on disk prior
to loc check-in to pretend they live in this repo.
Once I submit a change request to the Touchdown team to update the paths
in their backend, I will follow up with another pull request that
updates the remaining build steps to account for that.
This centralizes all our ESRP calls in one file, which will make it
easier in the future when we are invariably required to change how we
call it again.
This required me to push a bunch more parameters through the build
pipeline, but it gave me the opportunity to define them as variables
that can be set at queue time.
This is required for us to move off Entra ID Application identity.
(cherry picked from commit 2e7c3fa313)
This was approved in #16957, so I will merge with one signoff.
Work is ongoing to remove individually-authenticated service accounts
from some pipelines. This moves us closer to that goal.
Tested in Nightly 2403.28002.
Right now, the localization submission pipeline runs every night and
sends our localizable resources over to Touchdown. Later, release builds
pick up the localizations directly from Touchdown, move them into place,
and consume them.
This allowed us to avoid having localized content in the repository, but
it came with too many downsides:
- Users could not contribute additional localizations very easily
- We use the same release pipeline and Touchdown configuration for every
branch, so strings needed to either roughly match or _entirely match_
across an entire set of active release branches
- Building from day to day can pull in different strings, making the
product not reproducible
- Calling TDBuild during release builds requires network access from the
build machine (so does restoring NuGet packages, but that's neither
here nor there)
- Local developers and users could not test out other languages
This pull request moves all localization processing into the nightly
build phase and introduces support for checking loc in and submitting a
pull request. A new pull request will not be created if an unmerged
one already exists.
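A hedged sketch of that duplicate-PR guard, assuming the GitHub CLI and
a placeholder branch name (the real pipeline may do this differently):

```powershell
$branch = 'automated/loc-checkin'   # placeholder
$open = gh pr list --head $branch --state open --json number | ConvertFrom-Json
if ($open.Count -eq 0) {
    gh pr create --head $branch --title 'Localization updates' `
        --body 'Automated localization check-in'
} else {
    # An unmerged PR already exists; just update its branch.
    git push --force origin "HEAD:$branch"
}
```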
Anything we needed to do on release is now done overnight. This includes
moving loc files into the right places and merging the Cascadia
resources with the Context Menu resources (so that we can work around
the app receiving relatively fewer translations than the context menu;
see #12491 for more info.)
There are some smaller downsides to this approach and its
implementation:
- The first commit is going to be huge
- Right now, it only manages a single branch and uses a force push; if
a PR is not reviewed in a timely manner, it will be force-pushed and
you will not be able to see the day-to-day changes in the strings.
Hopefully there won't be any.
I've taken great care to ensure that repeated runs of this new pipeline
will not result in unnecessary whitespace changes. This required
changing how we merge ContextMenu.resw into CascadiaPackage to always
use the .NET XmlWriter with specific flags.
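A minimal sketch of that writer configuration; the exact flags we use
may differ, and the file name is a placeholder:

```powershell
# Placeholder load; in the real pipeline this is the merged document.
$merged = [System.Xml.XmlDocument]::new()
$merged.Load("$PWD\ContextMenu.resw")

# The stability comes from always writing with the same explicit
# settings, rather than whatever produced the file previously.
$settings = [System.Xml.XmlWriterSettings]@{
    Indent       = $true
    IndentChars  = '  '
    NewLineChars = "`r`n"                                   # .resw files are CRLF
    Encoding     = [System.Text.UTF8Encoding]::new($false)  # no BOM
}
$writer = [System.Xml.XmlWriter]::Create("$PWD\ContextMenu.resw", $settings)
try     { $merged.Save($writer) }
finally { $writer.Dispose() }
```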
NOTE that this does not allow users to _contribute_ translation fixes
for the 10 languages which we are importing. We will still need to pull
changes out of those files and submit them as bugs to the localization
team separately, and hope they come back around in another nightly
build. However, there is no reason users cannot contribute
_non-Touchdown_ languages.
Welp, would you look at that? We never actually supported "canary"
feature settings. Canary's been defaulting to the "Dev" config since
inception.
There are a couple of things that are supposed to be on only in Dev
and not in Canary. They clearly haven't mattered, but better safe than
sorry!
This pull request removes the need for the SettingsModel tests to run in
a UAP harness and puts them into the standard CI rotation.
This required some changes to `Run-Tests.ps1` to ensure that the right
`te.exe` is selected for each test harness. It's a bit annoying, but for
things that depend on a `resources.pri`, that file must be in the same
directory as the EXE that is hosting the test. Not the DLL, mind you,
the EXE. In our case, that's `TE.ProcessHost.exe`.
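Roughly the shape of that selection logic (a sketch, not the literal
`Run-Tests.ps1` diff; the parameter names are invented):

```powershell
param([string]$TaefPath, [string]$TestDll)

$teExe   = Join-Path $TaefPath 'te.exe'
$hostExe = Join-Path $TaefPath 'TE.ProcessHost.exe'

# resources.pri is resolved relative to the process hosting the test --
# TE.ProcessHost.exe -- so stage it next to that EXE, not next to the DLL.
$pri = Join-Path (Split-Path $TestDll) 'resources.pri'
if (Test-Path $pri) {
    Copy-Item $pri (Split-Path $hostExe)
}

& $teExe $TestDll
exit $LASTEXITCODE
```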
The bulk of the change is honestly namespace tidying.
Co-authored-by: Mike Griese <migrie@microsoft.com>
Co-authored-by: Leonard Hecker <lhecker@microsoft.com>
Due to things outside our control, the Package phase sometimes fails
when VPack publication is enabled, and when it does, symbols aren't
published. We still want these builds to be considered "golden" and we
are still shipping them, so we *must* publish symbols.
Upgrades check-spelling to [v0.0.22](https://github.com/check-spelling/check-spelling/releases/tag/v0.0.22)
* refreshes workflow
* enables dependabot PRs to trigger CI (so that in the future you'll be
able to see breaking changes to the dictionary paths)
* refreshes metadata
* built-in handling of `\n`/`\r`/`\t` is removed -- this means that the
`patterns/0_*.txt` files can be removed.
* this specific PR includes some shim content, in
`allow/check-spelling-0.0.21.txt` -- once this PR merges, it can be
removed on a branch, and the next CI run will clean out items from
`expect.txt` relating to the `\r` stuff and suggest replacement content.
* talking to the bot is enabled for forks (but not the master
repository)
* SARIF reporting is enabled for PRs w/in a single repository (not
across forks)
* In job reports, there's a summary table (space permitting) linking to
instances (this is a poor man's SARIF report)
* When a pattern splits a thing that results in check-spelling finding
an unrecognized token, that's reported with a distinct category
* When there are items in expect that no longer match anything but more
specific items do (e.g. `microsoft` vs. `Microsoft`), there's now a
specific category with help/advice
* Fancier excludes suggestions (excluding directories, file types, ...)
* Refreshed dictionaries
* The comment now links to the job summary (which includes SARIF link if
available, the details view, and a generated commit that people can use
if they're ok w/ the expect changes and don't want to run perl)
Validation
----------
1. the branch was developed in
https://github.com/check-spelling-sandbox/terminal/actions?query=branch%3Acheck-spelling-0.0.22
2. ensuring compatibility with 0.0.21 was done in
https://github.com/check-spelling-sandbox/terminal/pull/3
3. this version has been in development for a year and has quite a few
improvements; we've been actively dogfooding it throughout this period 😄
Additional Fixes
----------------
spelling: the
spelling: shouldn't
spelling: no
spelling: macos
spelling: github
spelling: fine-grained
spelling: coarse-grained
Signed-off-by: Josh Soref <2119212+jsoref@users.noreply.github.com>
This fixes a cosmetic issue with the version number in the unpackaged
builds and NuGet packages.
They were showing up as `-preview`, even when they were stable, because
the variable template didn't know about the branding.
This pull request also removes the original release and nightly
pipelines, but it does not remove the release pipeline _template_.
I had to demote the Azure job from being a _deployment_ to being a plain
old job, unfortunately. Alas! Review with whitespace disabled (or `git
diff -w`).
This pipeline does everything the existing release pipeline does, except
it does it using the OneBranch official templates.
Most of our existing build infrastructure has been reused, with the
following changes:
- We are no longer using `job-submit-windows-vpack`, as OneBranch does
this for us.
- `job-merge-msix-into-bundle` now supports `afterBuildSteps`, which we
use to stage the msixbundle into the right place for the vpack
- `job-build-project` supports deleting all non-signed files (which the
OneBranch post-build validation requires)
- `job-build-project` now deletes `console.dll`, which is unused in any
of our builds, because XFGCheck blows up on it for some reason on x86
- `job-publish-symbols` now supports two different types of PAT
ingestion
- I have pulled out the NuGet filename variables into a shared variables
template
I have also introduced a TSA config (which files bugs on us for binary
analysis failures as well as using the word 'sucks' and stuff.)
I have also baselined a number of control flow guard/binary analysis
failures.
Unfortunately, the appLicensing restricted capability we used to make
Canary installable without the store only works on Windows 11. Because
of that, we have to restrict the app package to Windows 11 and above.
I'd rather not leave Windows 10 users out in the cold, so this pull
request also publishes Canary builds to the public storage bucket with
the name `Microsoft.WindowsTerminalCanary_latest_x64.zip` (etc.)
The version number will be kept inside the archive. It remains to be
seen whether that is a good idea!
When combined with #16048, Canary builds from Azure will automatically
run in portable mode!
I also added support to the unpackaged distribution script to produce
portable mode packages. It is off by default for AppX->ZIP builds and
**on** by default for Layout->ZIP builds.
This constitutes a change in behavior.
After the nightly build completes, we'll automatically generate a
.appinstaller and publish it plus the msixbundle to an Azure Storage
account.
I had to add step/job customization to the publish step in the full
release pipeline template.
The .appinstaller hardcodes our XAML dependency, which makes it a bit of
a pain. We can revisit this later, and publish our dependencies
directly and automatically instead of hardcoding them.
I am considering moving the appinstaller generation step to the MSIX
bundling job, but this works right now and is not too terrible.
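For reference, a generated .appinstaller is just a small XML document;
the schema shape below is standard, but every name, version, and URL is
a placeholder:

```powershell
$version   = '1.0.0.0'
$bundleUri = 'https://example.blob.core.windows.net/canary/Microsoft.WindowsTerminalCanary.msixbundle'

$appInstaller = @"
<?xml version="1.0" encoding="utf-8"?>
<AppInstaller xmlns="http://schemas.microsoft.com/appx/appinstaller/2018"
              Version="$version"
              Uri="https://example.blob.core.windows.net/canary/WindowsTerminalCanary.appinstaller">
  <MainBundle Name="Microsoft.WindowsTerminalCanary"
              Version="$version"
              Publisher="CN=Contoso (placeholder)"
              Uri="$bundleUri" />
  <UpdateSettings>
    <OnLaunch HoursBetweenUpdateChecks="0" />
  </UpdateSettings>
</AppInstaller>
"@
Set-Content -Path 'WindowsTerminalCanary.appinstaller' -Value $appInstaller -Encoding utf8
```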
Closes #774
This pull request moves HwndTerminal into Microsoft.Terminal.Control.Lib
and removes PublicTerminalCore completely.
Microsoft.Terminal.Control.dll now exports the C API from HwndTerminal.
This adds ~100kb to Microsoft.Terminal.Control.dll and ~1400kb to the
WPF package (per architecture) but with the coming interactivity
platform merge it's going to benefit us big time.
To make this happen, I moved most of `release.yml` into a shared
_pipeline_ template (which is larger than a steps or jobs template).
Most of the diffs are due to that move.
If you compare main:build/pipelines/release.yml against
dev/duhowett/nightly-build:build/pipelines/templates-v2/pipeline-full-release-build.yml,
you will see that the changes are much more minimal than they look.
I also added a parameter to configure how long symbols will be kept. It
defaults to 36530 days (which is the default for the PublishSymbols
task! Yes, 100 years!) but nightly builds will get 15 days.
Obviously, icons are all wrong. Color is about right but they need CAN
icons.
I'll leave that as an exercise for @DHowett to generate the right ones
as a follow-up.
Related to #774
The OneBranch build system relies on the *build container host* being
able to publish all artifacts all at once. Therefore, our build steps
must not publish any artifacts.
I made it configurable so that the impact on existing pipelines was
minimal.
For every job that produces artifacts and is part of the release
pipeline, I am now exposing two variables that we can pass to OneBranch
so that it can locate and name artifacts:
- `JobOutputDirectory`, the output folder for the entire job
- `JobOutputArtifactName`, the name of the artifact produced by the job
I have also added a `variables` parameter to every job, so consuming
pipelines can override or insert their own variables.
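For example, a job's final step can surface those values with the
standard Azure DevOps logging command (the values here are
illustrative):

```powershell
# isOutput=true makes the variables readable from other jobs, which is
# how OneBranch can locate and name the artifacts after the fact.
Write-Host "##vso[task.setvariable variable=JobOutputDirectory;isOutput=true]$env:BUILD_SOURCESDIRECTORY\bin"
Write-Host "##vso[task.setvariable variable=JobOutputArtifactName;isOutput=true]build-x64-Release"
```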
This does three things:
1. Correctly marks our tests as failed in xUnit, so that AzDo will pick
up that the tests have failed.
2. Intentionally marks skipped tests as skipped in xUnit. We were only
doing this accidentally before.
3. Adds a CI step to log test failures in a way that they can show up
on GitHub.
Probably regressed around #6992 and #4490.
### details
#### Part the first
We were relying on the MUX build scripts to convert our WTT test logs to
xUnit format, which AzDo then ingests. The script we used relied on
some WinUI-specific logic for auto-retrying failed tests: it marks a
test as "skipped" if it passed fewer than some threshold of times.
Since we never set that threshold, we would mark a test as "skipped" if
it had _0_ passes. So, all failures showed up on AzDo as "skipped".
Why didn't we notice this? Well, the `Run-Tests.ps1` script will still
return `1` if _any_ tests failed. So the test job would fail if there
was a failure, AzDo just wouldn't know which test it was.
#### Part the second
Updates `ConvertWttLogToXUnitLog` in `HelixTestHelpers.cs` to understand
that a test can be skipped, in addition to passing or failing. Removes
all the logic for dealing with retries, since we don't need it.
#### Part the third
TAEF doesn't emit error messages in a way that lets AzDo immediately
pick up which tests failed. This means that GitHub gives us this
useless error message:

That's the only "error" that AzDo knows about.
This PR changes that by adding a build step to manually parse the xUnit
results and log the names of any tests that failed. Logging them with
a prefix of `##vso[task.logissue type=error]` makes AzDo surface that
text as an error message. GitHub can then grab that text and surface it
too.
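The step boils down to something like this (reconstructed for
illustration; the results path is a placeholder):

```powershell
[xml]$results = Get-Content "$PWD\testResults.xml"
foreach ($failure in $results.SelectNodes('//test[@result="Fail"]')) {
    # AzDo turns task.logissue lines into real error annotations, which
    # GitHub then displays on the PR.
    Write-Host "##vso[task.logissue type=error]Test failed: $($failure.name)"
}
```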
### Addenda: Why aren't we using the VsTest module
As noted in
https://github.com/microsoft/terminal/pull/4490#issuecomment-583104982,
the vstest module is literally 6x slower than just running TAEF
directly.
This pull request rewrites the entire Azure DevOps build system.
The guiding principles behind this rewrite are:
- No pipeline definitions should contain steps (or tasks) directly.
- All jobs should be in template files.
- Any set of steps that is reused across multiple jobs must be in
template files.
- All artifact names can be customized (via a property called
`artifactStem` on all templates that produce or consume artifacts).
- No compilation happens outside of the "Build" phase, to consolidate
the production and indexing of PDBs.
- **Building the project produces a `bin` directory.** That `bin`
directory is therefore the primary currency of the build. Jobs will
either produce or consume `bin` if they want to do anything with the
build outputs.
- All step and job templates are named with `step` or `job` _first_,
which disambiguates them in the templates directory.
- Most jobs can be run on different `pool`s, so that we can put
expensive jobs on expensive build agents and cheap jobs on cheap
build agents. Some jobs handle pool selection on their own, however.
Our original build pipelines used the `VSBuild` task _all over the
place._ This resulted in Terminal being built in myriad ways, different
for every pipeline. There was an attempt at standardization early on,
where `ci.yml` consumed jobs and steps templates... but when
`release.yml` was added, all of that went out the window.
The new pipelines are consistent and focus on a small, well-defined set
of jobs:
- `job-build-project`
- This is the big one!
- Takes a list of build configurations and platforms.
- Produces an artifact named `build-PLATFORM-CONFIG` for the entire
matrix of possibilities.
- Optionally signs the output and produces a bill of materials.
- Admittedly has a lot going on.
- `job-build-package-wpf`
- Takes a list of build configurations and platforms.
- Consumes the `build-` artifact for every config/platform
possibility, plus one for "Any CPU" (hardcoded; this is where the
.NET code builds)
- Produces one `wpf-nupkg-CONFIG` for each configuration, merging
all platforms.
- Optionally signs the output and produces a bill of materials.
- `job-merge-msix-into-bundle`
- Takes a list of build configurations and platforms.
- Consumes the `build-` artifact for every config/platform
- Produces one `appxbundle-CONFIG` for each configuration, merging
all platforms for that config into one `msixbundle`.
- Optionally signs the output and produces a bill of materials.
- `job-package-conpty`
- Takes a list of build configurations and platforms.
- Consumes the `build-` artifact for every config/platform
- Produces one `conpty-nupkg-CONFIG` for each configuration, merging
all platforms.
- Optionally signs the output and produces a bill of materials.
- `job-test-project`
- Takes **one** build config and **one** platform.
- Consumes `build-PLATFORM-CONFIG`
- Selects its own pools (hardcoded) because it knows about
architectures and must choose the right agent arch.
- Runs tests (directly on the build agent).
- `job-run-pgo-tests`
- Just like the above, but runs tests where `IsPgo` is `true`
- Collects all of the PGO counts and publishes a `pgc-intermediates`
artifact for that platform and configuration.
- `job-pgo-merge-pgd`
- Takes **one** build config and multiple platforms.
- Consumes `build-$platform-CONFIG` for each platform.
- Consumes `pgc-intermediates-$platform-CONFIG` for each platform.
- Merges the `pgc` files into `pgd` files
- Produces a new `pgd-` artifact.
- `job-pgo-build-nuget-and-publish`
- Consumes the `pgd-` artifact from above.
- Packs it into a `nupkg` and publishes it.
- `job-submit-windows-vpack`
- Only expected to run against `Release`.
- Consumes the `appxbundle-CONFIG` artifact.
- Publishes it to a vpack for Windows to consume.
- `job-check-code-format`
- Does not use artifacts. Runs `clang-format`.
- `job-index-github-codenav`
- Does not use artifacts.
Fuzz submission is broken due to changes in the `onefuzz` client.
I have removed the compliance and security build because it is no longer
supported.
Finally, this pull request has some additional benefits:
- I've expanded the PGO build phase to cover ARM64!
- We can remove everything Helix-related except the WTT parser
- We no longer depend on Helix submission or Helix pools
- The WPF control's inner DLLs are now codesigned (#15404)
- Symbols for the WPF control, both .NET and C++, are published
alongside all other symbols.
- The files we submit to ESRP for signing are batched up into a single
step[^1]
Closes #11874, closes #11974, closes #15404
[^1]: This will have to change if we want to sign the individual
per-architecture `.appx` files before bundling so that they can be
directly installed.
This pull request introduces the arm64 testing agents and a few build
phases to use them.
In addition to running the ARM64 tests in CI, it makes the following
changes:
- The x64 tests now run on equivalent x64 testing agents
- We now run ARM64 builds (and tests!) on all pull requests
- I've deduplicated a lot of the build and test stages
- New queue-time parameters have been added to control various phases,
for quick pipeline testing
- A bunch of conditions have been promoted to compile-time checks to
control the existence of stages and steps more tightly
Using our own pools like this gives us a lot of freedom in the tooling
that's installed, the OS versions it targets, and when we take on Visual
Studio updates.
As part of this effort, I've also stood up a "small" agent pool. At the
time of this PR, that pool is using D2ads-v5 SKU VMs (2 vcore 8 GiB)
versus the "large" agent pool's D8as-v5 (8 vcore 32 GiB). Smaller build
tasks have been moved over to the small pool. Compilation's the hard
part, so it gets to stay on the large pool.
benchcat, "bc" for short, is a tool that I've written over the last
two years to help me benchmark OpenConsole and Windows Terminal.
Initially it only measured the time it took to print a file as fast as
possible, but it's grown to support a number of arguments, including
chunk (`WriteFile` call) sizes, repeat counts and VT mode with italic
and colorized output. In the future I also wish to add a way to
generate the output data on the fly via command line arguments.
One unusual trait of benchcat is that it is compiled entirely without
CRT and vcruntime. I did this so that I could test it on Windows XP.
Also, it's kind of funny seeing how it's only about 11kB.
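It's no substitute for the real thing, but here's a rough PowerShell
approximation of the core measurement, with the chunk size standing in
for the `WriteFile` call size:

```powershell
param([string]$Path = 'big.txt', [int]$ChunkSize = 4KB, [int]$Repeat = 5)

$bytes  = [System.IO.File]::ReadAllBytes((Resolve-Path $Path))
$stdout = [System.Console]::OpenStandardOutput()

$elapsed = Measure-Command {
    for ($i = 0; $i -lt $Repeat; $i++) {
        for ($off = 0; $off -lt $bytes.Length; $off += $ChunkSize) {
            $count = [Math]::Min($ChunkSize, $bytes.Length - $off)
            $stdout.Write($bytes, $off, $count)
        }
    }
}

# Report on stderr so it doesn't mix with the measured output.
$mbps = ($bytes.Length * $Repeat) / 1MB / $elapsed.TotalSeconds
[Console]::Error.WriteLine(('{0:N1} MB/s' -f $mbps))
```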
This commit also fixes a couple of `$LASTEXITCODE` cases, because our
spellchecker was bothering me a lot in this PR, so I just fixed them.