Previously we were using a third-party library, but it has no support for
passing context.Context through its API and so isn't suitable for our goal
of adding OpenTelemetry tracing to all outgoing network requests.
We now have our own fork that is updated to use context.Context. It also
has a slightly reduced scope: it no longer includes various details that
are tightly coupled to our cliconfig mechanism and so are better placed in
the main OpenTofu codebase, where we can evolve them in future without
making lockstep library releases.
The "registry-address" library also uses svchost and uses some of its types
in its public API, so this also incorporates v2 of that library that is
updated to use our own svchost module.
Unfortunately this commit is a mix of mechanical updates to the new
libraries and some new code dealing with the functionality that is removed
in our fork of svchost. The new code is primarily in the "svcauthconfig"
package, which is similar in purpose to "ociauthconfig" but covers OpenTofu's
own auth mechanism instead of the OCI Distribution protocol's auth
mechanism.
This includes some additional plumbing of context.Context where it was
possible without broad changes to files that would not otherwise have been
included in this commit, but a few leftover spots still use context.TODO();
we'll address those separately in later commits.
This removes the temporary workaround from d079da6e9e, since we are now
able to plumb the OpenTelemetry span tree all the way to the service
discovery requests.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This adds a new context.Context argument to the Backend.StateMgr method,
updates all of the implementations to match, and then updates all of the
callers to pass in a context.
A small number of callers don't yet have a context plumbed to them, so those
use context.TODO() as a placeholder for now; that makes them easy to find
and fix in later commits once contexts are more thoroughly plumbed.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This adds a new context.Context argument to the Backend.Configure method,
updates all of the implementations to match, and then updates all of the
callers to pass in a context.
A small number of callers don't yet have a context plumbed to them, so those
use context.TODO() as a placeholder for now; that makes them easy to find
and fix in later commits once contexts are more thoroughly plumbed.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
* Rename module name from "github.com/hashicorp/terraform" to "github.com/placeholderplaceholderplaceholder/opentf".
Signed-off-by: Jakub Martin <kubam@spacelift.io>
* Gofmt.
Signed-off-by: Jakub Martin <kubam@spacelift.io>
* Regenerate protobuf.
Signed-off-by: Jakub Martin <kubam@spacelift.io>
* Fix comments.
Signed-off-by: Jakub Martin <kubam@spacelift.io>
* Undo issue and pull request link changes.
Signed-off-by: Jakub Martin <kubam@spacelift.io>
* Undo comment changes.
Signed-off-by: Jakub Martin <kubam@spacelift.io>
* Fix comment.
Signed-off-by: Jakub Martin <kubam@spacelift.io>
* Undo some link changes.
Signed-off-by: Jakub Martin <kubam@spacelift.io>
* make generate && make protobuf
Signed-off-by: Jakub Martin <kubam@spacelift.io>
---------
Signed-off-by: Jakub Martin <kubam@spacelift.io>
* Add ability to specify Terraform Cloud Project in cloud block
Adds project configuration to the workspaces section of the cloud block.
Also configurable via the `TF_CLOUD_PROJECT` environment variable; the precedence is sketched after the behavior notes below.
When a project is configured, the following behaviors will occur:
- `terraform init` with workspaces.name configured will create the workspace in the given project
- `terraform workspace new <name>` with workspaces.tags configured will create workspaces in the given project
- `terraform workspace list` will list workspaces only from the given project
The following behaviors are NOT affected by project configuration:
- `terraform workspace delete <name>` does not validate the workspace's inclusion in the given project
- When initializing a workspace that already exists in Terraform Cloud, the workspace's parent project is NOT validated against the given project
Adds tests for cloud block project configuration
Update changelog
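A rough sketch of the precedence this implies, using a hypothetical helper name rather than the real cloud backend configuration code:

```go
package example

import "os"

// resolveProject is a hypothetical helper illustrating the precedence
// described above: an explicit project in the cloud block's workspaces
// section wins, otherwise the TF_CLOUD_PROJECT environment variable is
// consulted, and an empty result means no project is configured.
func resolveProject(configuredProject string) string {
	if configuredProject != "" {
		return configuredProject
	}
	return os.Getenv("TF_CLOUD_PROJECT")
}
```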
* Update cloud block docs
* Fix typos and changelog entry
* Add a speculative project lookup early in the cloud initialization process, so that a configured project that cannot be found is reported as an error up front
* Add project config for alias test
- Add plausible unredacted plan json for `plan-json-{basic,full}` testdata --
Created by just running the relevant terraform commands locally.
- Add plan-json-no-changes testdata --
The unredacted json was organically grown, but I edited the log and redacted
json by hand to match what I observed from a real but unrelated
planned-and-finished run in TFC.
- Add plan-json-basic-no-unredacted testdata --
This mimics a lack of admin permissions, resulting in a 404.
- Hook up `MockPlans.ReadJSONOutput` to test fixtures, when present.
This method has been implemented for ages, and has had a backing store for
unredacted plan json, but has been effectively a no-op since nothing ever
fills that backing store. So, when creating a mock plan, make an attempt to
read unredacted json and stow it in the mocks on success.
- Make it possible to get the entire MockClient for a test backend
In order to test some things, I'm going to need to mess with the internal
state of runs and plans beyond what the go-tfe client API allows. I could add
magic special-casing to the mock API methods, or I could locate the
shenanigans next to the test that actually exploits it. The latter seems more
comprehensible, but I need access to the full mock client struct in order to
mess with its interior.
- Fill in some missing expectations around HasChanges when retrieving a run +
plan.
Since `terraform show -json` needs to get a raw hunk of json bytes and sling it
right back out again, it's going to be more convenient if plain `show` can ALSO
take in raw json. In order for that to happen, I need a function that basically
acts like `client.Plans.ReadJSONOutput()`, without eagerly unmarshalling that
`jsonformat.Plan` struct.
As a slight bonus, this also lets us make the tfe client mocks slightly
stupider.
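A hedged sketch of the kind of helper this describes; the function name and HTTP plumbing are illustrative stand-ins, not the actual go-tfe or cloud package code:

```go
package example

import (
	"context"
	"fmt"
	"io"
	"net/http"
)

// readRedactedPlanJSON is a hypothetical stand-in for a helper that behaves
// like client.Plans.ReadJSONOutput() but returns the raw JSON bytes without
// eagerly unmarshalling them into a jsonformat.Plan-style struct, so that
// `terraform show -json` can emit them verbatim.
func readRedactedPlanJSON(ctx context.Context, httpClient *http.Client, url string) ([]byte, error) {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	resp, err := httpClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("unexpected status %s fetching plan JSON", resp.Status)
	}
	// Return the body as-is; callers that need structured data can decode it
	// themselves, and callers that only re-emit it don't pay for a decode.
	return io.ReadAll(resp.Body)
}
```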
It displays a run header with a link to the web UI, like starting a new plan does, then confirms the run
and streams the apply logs. If you can't apply the run (it's from a different workspace, is in an
unconfirmable state, etc.), it displays an error instead.
Notable points along the way:
* Implement `WrappedPlanFile` sum type (sketched below), and update planfile consumers to use it instead of a plain `planfile.Reader`.
* Enable applying a saved cloud plan
* Update TFC mocks — add org name to workspace, and minimal support for includes on MockRuns.ReadWithOptions.
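For reference, a sum type like this can be sketched in Go roughly as follows; the variant types here are placeholders rather than the real planfile and cloud plan types:

```go
package example

// localPlan and cloudPlan stand in for the real planfile.Reader and saved
// cloud plan types; they are placeholders for this sketch only.
type localPlan struct{ path string }
type cloudPlan struct{ runURL string }

// WrappedPlanFile mimics a sum type: exactly one of the two fields is set,
// and consumers branch on which variant they received instead of assuming
// a plain local plan file.
type WrappedPlanFile struct {
	local *localPlan
	cloud *cloudPlan
}

func NewLocalPlan(p *localPlan) WrappedPlanFile { return WrappedPlanFile{local: p} }
func NewCloudPlan(p *cloudPlan) WrappedPlanFile { return WrappedPlanFile{cloud: p} }

// Local and Cloud return their variant plus an "ok" flag, so callers can
// branch cleanly without nil checks scattered around.
func (w WrappedPlanFile) Local() (*localPlan, bool) { return w.local, w.local != nil }
func (w WrappedPlanFile) Cloud() (*cloudPlan, bool) { return w.cloud, w.cloud != nil }
```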
Create a pending state version followed by a separate state upload.
When this version of the endpoint fails (it is not yet generally available, or is unavailable when using Terraform Enterprise), fall back to the original call with the state content included in the request.
This strategy will reduce the number of save failures due to network latency and gateway timeouts.
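A hedged sketch of the fallback strategy, with stub functions standing in for the real client calls:

```go
package example

import (
	"context"
	"errors"
)

// errEndpointUnsupported is a placeholder for however the client reports
// that the newer two-step endpoint is not available.
var errEndpointUnsupported = errors.New("state upload endpoint not supported")

// saveState illustrates the strategy described above: try to create a
// pending state version and upload the state body separately, and fall
// back to the original single request that inlines the state content.
func saveState(ctx context.Context, meta map[string]string, stateBody []byte) error {
	err := createPendingStateVersion(ctx, meta)
	if err == nil {
		return uploadStateContent(ctx, stateBody)
	}
	if errors.Is(err, errEndpointUnsupported) {
		// Older endpoint: send the state content inline in one request.
		return createStateVersionWithContent(ctx, meta, stateBody)
	}
	return err
}

// The functions below are stubs standing in for the real API calls.
func createPendingStateVersion(ctx context.Context, meta map[string]string) error { return nil }
func uploadStateContent(ctx context.Context, body []byte) error                   { return nil }
func createStateVersionWithContent(ctx context.Context, meta map[string]string, body []byte) error {
	return nil
}
```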
The cloud backend, which communicates with TFC-like APIs, can create
runs which may have one or more configuration parameters altered. These
alterations are emitted as run-events on the run so that API clients
can consume and display them to users. This commit adds a step in the
plan operation to query the run-events once a run is created and then
emit specific run-event descriptions to the console as warnings for
the user.
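Roughly, the extra step looks like the sketch below; the run-event type and listing function are placeholders for the real client API:

```go
package example

import (
	"context"
	"fmt"
)

// runEvent is a placeholder for the API's run-event objects; only the
// description field matters for this sketch.
type runEvent struct {
	Description string
}

// warnOnRunEvents illustrates the step described above: once a run has been
// created, list its run-events and surface their descriptions to the user
// as warnings, so configuration alterations made by the platform are
// visible from the CLI.
func warnOnRunEvents(ctx context.Context, listRunEvents func(ctx context.Context, runID string) ([]runEvent, error), runID string) error {
	events, err := listRunEvents(ctx, runID)
	if err != nil {
		return fmt.Errorf("failed to read run events for %s: %w", runID, err)
	}
	for _, ev := range events {
		if ev.Description != "" {
			fmt.Printf("Warning: %s\n", ev.Description)
		}
	}
	return nil
}
```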
This test case was making a real DNS call in a non-acceptance test, and
since it was intended to fail it would introduce a several-second delay.
This commit replaces the test with a similar one which uses the mocked
disco services for a non-TFE host.
Also restructure the test to use t.Run for clarity.
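The restructured test follows the usual t.Run pattern, roughly like this sketch (subtest names and bodies are illustrative):

```go
package example

import "testing"

func TestServiceDiscovery(t *testing.T) {
	// Each case gets its own subtest so failures are reported per host and
	// the mocked discovery services can be shared across cases.
	t.Run("TFE host", func(t *testing.T) {
		// exercise the mocked TFE host here
	})
	t.Run("non-TFE host", func(t *testing.T) {
		// uses the mocked disco services instead of real DNS, so the
		// expected failure is immediate rather than waiting on a lookup
	})
}
```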
* Implementation of structured logging.
These are the changes that enable the cloud backend to consume
structured logs and make use of the new plan renderer. This will enable
CLI-driven runs to view the structured output in the Terraform Cloud UI.
* Cloud structured logging unit tests
* Remove deferred logs logic, fix minor issues
Color formatting fixes, log type stop lists, and default behavior for logs
of unknown types
* Use service disco path in redacted plan url
Normally, `terraform output` refreshes and reads the entire state in the command package before pulling output values out of it. This doesn't give Terraform Cloud the opportunity to apply the "read state outputs" organization permission, and instead applies the "read state versions" permission.
I decided to expand the state manager interface to provide a separate GetRootOutputValues function in order to give the cloud backend a more nuanced opportunity to fetch just the outputs. This required moving state Refresh/Read code that was previously in the command package into the shared backend state package as well as the filesystem state package.
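A minimal sketch of the interface expansion, with the output value type reduced to a placeholder and the exact signature simplified:

```go
package example

// OutputValue is a placeholder for the real root output value representation.
type OutputValue struct {
	Value     interface{}
	Sensitive bool
}

// RootOutputReader sketches the new method added to the state manager
// interface: it returns just the root module output values, so the cloud
// backend can satisfy `terraform output` using the read-state-outputs
// permission rather than reading an entire state version.
type RootOutputReader interface {
	GetRootOutputValues() (map[string]OutputValue, error)
}
```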
For non-interactive contexts, Terraform is typically executed with the flag -input=false.
However, for runs that are not set to auto-approve, the cloud integration will prompt the user for
approval input even when input is set to false. This commit enables the cloud integration to know
the value of the input flag and use it to determine whether or not to ask the user for input.
If -input is set to false and the run cannot be auto-approved, the cloud integration returns an error
stating that run confirmation can no longer be handled in the CLI and must instead be done through the browser.
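A hedged sketch of the resulting check; the function, flag plumbing, and error wording are illustrative rather than the actual cloud integration code:

```go
package example

import "errors"

// checkRunConfirmation illustrates the behavior described above: when the
// run cannot be auto-approved and -input=false, the integration refuses to
// block on a CLI prompt and directs the user to the browser instead.
func checkRunConfirmation(inputEnabled, autoApprove bool) error {
	if autoApprove {
		return nil // no confirmation needed
	}
	if !inputEnabled {
		return errors.New("run confirmation can no longer be handled in the CLI; approve or discard the run in the web UI instead")
	}
	// Otherwise, prompt the user for approval as before.
	return nil
}
```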
These changes remove all of the preexisting version checking for
individual features, wiping the slate clean with an overall minimum
requirement of a future TFP-API-Version 2.5, which at the time of this
writing is expected to be TFE v202112-1.
It also provides that expected TFE version in an actionable error message,
rather than generically saying that the feature isn't supported or
surfacing the somewhat opaque API version header.
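For illustration, the consolidated check amounts to something like the sketch below; the version parsing and function names are assumptions, not the real code:

```go
package example

import "fmt"

// checkAPIVersion illustrates the consolidated check: instead of
// per-feature gates, compare the server's reported TFP-API-Version against
// a single minimum and name the corresponding TFE release in the error so
// the user knows what to upgrade to.
func checkAPIVersion(major, minor int) error {
	const (
		minMajor           = 2
		minMinor           = 5
		expectedTFERelease = "v202112-1"
	)
	if major > minMajor || (major == minMajor && minor >= minMinor) {
		return nil
	}
	return fmt.Errorf(
		"this feature requires Terraform Enterprise %s or later (TFP-API-Version %d.%d); the connected instance reports %d.%d",
		expectedTFERelease, minMajor, minMinor, major, minor,
	)
}
```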