This extends statemgr.Persistent, statemgr.Locker, and remote.Client so that
they all expect context.Context parameters, and then updates all of the existing
implementations of those interfaces to support them.
All of the calls to statemgr.Persistent and statemgr.Locker methods outside
of tests consistently pass context.TODO() for now, because the caller
landscape of these interfaces has some complications:
1. statemgr.Locker is also used by the clistate package for its state
implementation that was derived from statemgr.Filesystem's predecessor,
even though what clistate manages is not actually "state" in the sense
of package statemgr. The callers of that are not yet ready to provide
real contexts.
In a future commit we'll either need to plumb context through to all of
the clistate callers, or continue the effort to separate statemgr from
clistate by introducing a clistate-specific "locker" API for it
to use instead.
2. We call statemgr.Persistent and statemgr.Locker methods in situations
where the active context might have already been cancelled, and so we'll
need to make sure to ignore cancellation when calling those.
This is mainly limited to PersistState and Unlock, since both need to
be able to complete after a cancellation, but there are various
codepaths that perform a Lock, Refresh, Persist, Unlock sequence and so
it isn't yet clear where the best place is to enforce the invariant that
Persist and Unlock must not be called with a cancelable context. We'll
deal with that more in subsequent commits.
Within the various state manager and remote client implementations the
contexts _are_ wired together as well as possible given how these subsystems
are already laid out, and so once we deal with the problems above and make
callers provide suitable contexts they should be able to reach all of the
leaf API clients that might want to generate OpenTelemetry traces.
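For illustration, the shape of the interface change looks roughly like the
following; the method sets are abbreviated and the signatures simplified, so
this is a sketch rather than the exact interfaces:

    package statemgr

    import "context"

    // LockInfo is abbreviated here; the real type carries more metadata
    // about who holds the lock and why.
    type LockInfo struct {
        ID        string
        Operation string
    }

    // Locker's methods now take a context so that lock and unlock calls
    // against remote backends can participate in tracing and cancellation.
    type Locker interface {
        Lock(ctx context.Context, info *LockInfo) (string, error)
        Unlock(ctx context.Context, id string) error
    }

    // Persistent is likewise simplified; the point is only that methods
    // which can block on I/O gain a leading context parameter.
    type Persistent interface {
        RefreshState(ctx context.Context) error
        PersistState(ctx context.Context) error
    }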
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This adds a new context.Context argument to the Backend.StateMgr method,
updates all of the implementations to match, and then updates all of the
callers to pass in a context.
A small number of callers don't yet have a context plumbed to them, so those
use context.TODO() as a placeholder for now, so we can more easily find
and fix them in later commits once we have contexts more thoroughly
plumbed.
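Roughly, the interface change looks like this; the Backend interface is
abbreviated and the return type replaced with a stand-in to keep the sketch
self-contained:

    package backend

    import "context"

    // StateManager stands in for the real state manager interface type,
    // just to keep this sketch self-contained.
    type StateManager interface{}

    type Backend interface {
        // ...other methods unchanged...

        // StateMgr now receives the caller's context so that the state
        // manager it constructs can trace and cancel any remote calls it
        // makes while fetching the initial state snapshot.
        StateMgr(ctx context.Context, workspace string) (StateManager, error)
    }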
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This adds a new context.Context argument to the Backend.Workspaces method,
updates all of the implementations to match, and then updates all of the
callers to pass in a context.
A small number of callers don't yet have a context plumbed to them, so those
use context.TODO() as a placeholder for now, so we can more easily find
and fix them in later commits once we have contexts more thoroughly
plumbed.
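As a sketch of the placeholder pattern described above (the helper and the
narrowed interface here are hypothetical, for illustration only):

    package command

    import "context"

    // workspaceLister stands in for the relevant subset of the backend
    // interface, just to keep this sketch self-contained.
    type workspaceLister interface {
        Workspaces(ctx context.Context) ([]string, error)
    }

    // listWorkspaces is a hypothetical helper whose callers don't have a
    // context yet, so it passes context.TODO() as an easy-to-find marker
    // for a later commit to replace with a real context.
    func listWorkspaces(b workspaceLister) ([]string, error) {
        return b.Workspaces(context.TODO())
    }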
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
The "backendConfigNeedsMigration" helper evaluates the backend
configuration inline to compare it with the object previously saved in the
.terraform/terraform.tfstate file.
However, this wasn't updated to use the new "static eval" functionality
and so was treating any references to variables or function calls as
invalid, causing a spurious "backend configuration changed" error when
re-initializing the working directory with identical backend configuration
settings.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
* Rename module name from "github.com/hashicorp/terraform" to "github.com/placeholderplaceholderplaceholder/opentf".
Signed-off-by: Jakub Martin <kubam@spacelift.io>
* Gofmt.
Signed-off-by: Jakub Martin <kubam@spacelift.io>
* Regenerate protobuf.
Signed-off-by: Jakub Martin <kubam@spacelift.io>
* Fix comments.
Signed-off-by: Jakub Martin <kubam@spacelift.io>
* Undo issue and pull request link changes.
Signed-off-by: Jakub Martin <kubam@spacelift.io>
* Undo comment changes.
Signed-off-by: Jakub Martin <kubam@spacelift.io>
* Fix comment.
Signed-off-by: Jakub Martin <kubam@spacelift.io>
* Undo some link changes.
Signed-off-by: Jakub Martin <kubam@spacelift.io>
* make generate && make protobuf
Signed-off-by: Jakub Martin <kubam@spacelift.io>
---------
Signed-off-by: Jakub Martin <kubam@spacelift.io>
It displays a run header with a link to the web UI, like starting a new plan does, then confirms the
run and streams the apply logs. If you can't apply the run (it's from a different workspace, is in an
unconfirmable state, and so on), it displays an error instead.
Notable points along the way:
* Implement `WrappedPlanFile` sum type (sketched below), and update planfile consumers to use it instead of a plain `planfile.Reader`.
* Enable applying a saved cloud plan
* Update TFC mocks — add org name to workspace, and minimal support for includes on MockRuns.ReadWithOptions.
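For illustration, one common way to express a two-variant sum type in Go
looks roughly like this; the field names and the helper types here are
illustrative rather than the exact ones in the planfile package:

    package planfile

    // Reader is the existing local plan file reader (abbreviated).
    type Reader struct{}

    // CloudPlanRef is a hypothetical stand-in for whatever identifies a
    // saved cloud plan.
    type CloudPlanRef struct{}

    // WrappedPlanFile holds exactly one of the two variants; consumers ask
    // which one is present instead of assuming a local plan file.
    type WrappedPlanFile struct {
        local *Reader
        cloud *CloudPlanRef
    }

    func (w *WrappedPlanFile) Local() (*Reader, bool) {
        return w.local, w.local != nil
    }

    func (w *WrappedPlanFile) Cloud() (*CloudPlanRef, bool) {
        return w.cloud, w.cloud != nil
    }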
Several times over the years we've considered adding tracing
instrumentation to Terraform, since even when running in isolation as a
CLI program it has a "distributed system-like" structure, with lots of
concurrent internal work and also some work delegated to provider plugins
that are essentially temporarily-running microservices.
However, it's always felt a bit overwhelming to do it because much of
Terraform predates the Go context.Context idiom and so it's tough to get
a clean chain of context.Context values all the way down the stack without
disturbing a lot of existing APIs.
This commit aims to just get that process started by establishing how a
context can propagate from "package main" into the command package,
focusing initially on "terraform init" and some other commands that share
some underlying functions with that command.
OpenTelemetry has emerged as a de facto industry standard and so this uses
its API directly, without any attempt to hide it behind an abstraction.
The OpenTelemetry API is itself already an adapter layer, so we should be
able to swap in any backend that uses comparable concepts. For now we just
discard the tracing reports by default, and allow users to opt in to
delivering traces over OTLP by setting an environment variable when
running Terraform (the environment variable was established in an earlier
commit, so this commit builds on that).
When tracing collection is enabled, every Terraform CLI run will generate
at least one overall span representing the command that was run. Some
commands might also create child spans, but most currently do not.
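The general pattern inside a command looks roughly like this; the wrapper
function and tracer name are illustrative, while the calls themselves are the
standard OpenTelemetry Go API:

    package command

    import (
        "context"

        "go.opentelemetry.io/otel"
    )

    // runInit is a hypothetical wrapper showing how a command participates
    // in tracing: it starts a span from the incoming context and passes the
    // resulting context down, so that child operations can attach child
    // spans. With no exporter configured the spans are simply discarded.
    func runInit(ctx context.Context) error {
        tracer := otel.Tracer("terraform/command")
        ctx, span := tracer.Start(ctx, "terraform init")
        defer span.End()

        // ... the rest of the command's work runs with ctx ...
        _ = ctx
        return nil
    }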
This commit replaces `ioutil.TempDir` with `t.TempDir` in tests. The
directory created by `t.TempDir` is automatically removed when the test
and all its subtests complete.
Prior to this commit, temporary directories created using `ioutil.TempDir`
needed to be removed manually by calling `os.RemoveAll`, which was omitted
in some tests. The error handling boilerplate, e.g.
    defer func() {
        if err := os.RemoveAll(dir); err != nil {
            t.Fatal(err)
        }
    }()
is also tedious, but `t.TempDir` handles this for us nicely.
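For example, a test using `t.TempDir` needs no cleanup code at all:

    package command

    import (
        "os"
        "path/filepath"
        "testing"
    )

    func TestTempDir(t *testing.T) {
        dir := t.TempDir() // removed automatically when the test completes

        path := filepath.Join(dir, "example.txt")
        if err := os.WriteFile(path, []byte("hello"), 0o600); err != nil {
            t.Fatal(err)
        }
    }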
Reference: https://pkg.go.dev/testing#T.TempDir
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
When running `terraform init` against a backend with multiple
workspaces, none of which are the currently indicated local workspace,
Terraform prompts the user to choose a workspace from the list. In
automation, using the `-input=false` argument should disable asking for
input, but previously Terraform would hang instead.
When initializing a backend, if the currently selected workspace does
not exist, the user is prompted to select from the list of workspaces
the backend provides.
Instead, we should automatically select the only workspace available
_if_ that's all that's there.
Besides being a nice bit of polish, this enables future
improvements with Terraform Cloud by potentially removing the implicit
dependency on always using the 'default' workspace when the current
configuration is mapped to a single TFC workspace.
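The selection rule is roughly the following; this is a sketch with
hypothetical names, and the real logic lives in the backend initialization
path:

    package command

    import "errors"

    // selectWorkspace sketches the rule: with exactly one workspace there
    // is nothing to ask, with input disabled we must fail rather than
    // block, and otherwise we fall back to prompting.
    func selectWorkspace(workspaces []string, inputEnabled bool, prompt func([]string) (string, error)) (string, error) {
        switch {
        case len(workspaces) == 1:
            return workspaces[0], nil
        case !inputEnabled:
            return "", errors.New("currently selected workspace does not exist, and input is disabled")
        default:
            return prompt(workspaces)
        }
    }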
Thus far our various interactions with the bits of state we keep
associated with a working directory have all been implemented directly
inside the "command" package -- often in the huge command.Meta type -- and
not managed collectively via a single component.
There are too many little codepaths reading and writing from the working
directory and data directory to refactor it all in one step, but this is
an attempt at a first step towards a future where everything that reads
and writes from the current working directory would do so via an object
that encapsulates the implementation details and offers a high-level API
to read and write all of these session-persistent settings.
The design here continues our gradual path towards using a dependency
injection style where "package main" is solely responsible for directly
interacting with the OS command line, the OS environment, the OS working
directory, the stdio streams, and the CLI configuration, and then
communicating the resulting information to the rest of Terraform by wiring
together objects. It seems likely that eventually we'll have enough wiring
code in package main to justify a more explicit organization of that code,
but for this commit the new "workdir.Dir" object is just wired directly in
place of its predecessors, without any significant change of code
organization at that top layer.
This first commit focuses on the main files and directories we use to
find provider plugins, because a subsequent commit will lightly reorganize
the separation of concerns for plugin launching with a similar goal of
collecting all of the relevant logic together into one spot.
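As a sketch of the idea only; the method names here are illustrative rather
than the exact exported API of the new package:

    package workdir

    import "path/filepath"

    // Dir represents a Terraform working directory and answers questions
    // about where its associated files live, so callers don't construct
    // those paths ad hoc.
    type Dir struct {
        mainDir string // the working directory itself
        dataDir string // typically ".terraform" inside the working directory
    }

    func NewDir(path string) *Dir {
        return &Dir{
            mainDir: path,
            dataDir: filepath.Join(path, ".terraform"),
        }
    }

    // ProviderLocalCacheDir is an example accessor: the directory where
    // provider plugins are unpacked for this working directory.
    func (d *Dir) ProviderLocalCacheDir() string {
        return filepath.Join(d.dataDir, "providers")
    }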
This is part of a general effort to move all of Terraform's non-library
package surface under internal in order to reinforce that these are for
internal use within Terraform only.
If you were previously importing packages under this prefix into an
external codebase, you could pin to an earlier release tag as an interim
solution until you've made a plan to achieve the same functionality some
other way.
This is part of a general effort to move all of Terraform's non-library
package surface under internal in order to reinforce that these are for
internal use within Terraform only.
If you were previously importing packages under this prefix into an
external codebase, you could pin to an earlier release tag as an interim
solution until you've made a plan to achieve the same functionality some
other way.
This is part of a general effort to move all of Terraform's non-library
package surface under internal in order to reinforce that these are for
internal use within Terraform only.
If you were previously importing packages under this prefix into an
external codebase, you could pin to an earlier release tag as an interim
solution until you've made a plan to achieve the same functionality some
other way.
This is part of a general effort to move all of Terraform's non-library
package surface under internal in order to reinforce that these are for
internal use within Terraform only.
If you were previously importing packages under this prefix into an
external codebase, you could pin to an earlier release tag as an interim
solution until you've made a plan to achieve the same functionality some
other way.