OpenTelemetry has various Go packages split across several Go modules that
often need to be upgraded together carefully. In particular, we use the
"semconv" package in conjunction with the OpenTelemetry SDK's "resource"
package in a way that requires that they both agree on which version of
the OpenTelemetry Semantic Conventions is being followed.
To help avoid "dependency hell" situations when upgrading, this centralizes
all of our direct calls into the OpenTelemetry SDK and tracing API into
packages under internal/tracing, by exposing a few thin wrapper functions
that other packages can use to access the same functionality indirectly.
We only use a relatively small subset of the OpenTelemetry library surface
area, so we don't need too many of these reexports and they should not
represent a significant additional maintenance burden.
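For illustration, a wrapper in this style might look something like the
following; the exact function names and signatures under internal/tracing
may differ from this sketch:

    package tracing

    import (
        "go.opentelemetry.io/otel"
        "go.opentelemetry.io/otel/trace"
    )

    // Tracer returns a named tracer from the global tracer provider, so
    // that other packages never need to import the OpenTelemetry API
    // directly.
    func Tracer(name string, opts ...trace.TracerOption) trace.Tracer {
        return otel.Tracer(name, opts...)
    }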
The semconv and resource interaction in particular is also factored out
into a separate helper function with a unit test, so we should notice
quickly whenever the two become misaligned. This complements the
end-to-end test previously added in opentofu/opentofu#3447 to give us
faster feedback about this particular problem, while the end-to-end test
has the broader scope of making sure there aren't any errors at all when
initializing OpenTelemetry tracing.
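The helper is roughly of the following shape (the names and the exact
semconv version here are illustrative); the important property is that the
schema URL and the attributes both come from the same semconv package, and
resource.Merge reports a conflict if that schema URL ever disagrees with
the one carried by the SDK's default resource:

    package tracing

    import (
        "go.opentelemetry.io/otel/sdk/resource"
        semconv "go.opentelemetry.io/otel/semconv/v1.26.0"
    )

    // otelResource builds the tracing resource describing this process.
    // resource.Merge returns an error if the SDK's default resource and
    // our semconv-based resource carry conflicting schema URLs, which is
    // exactly the misalignment described above, so a unit test of this
    // function catches it early.
    func otelResource(serviceName, serviceVersion string) (*resource.Resource, error) {
        return resource.Merge(
            resource.Default(),
            resource.NewWithAttributes(
                semconv.SchemaURL,
                semconv.ServiceName(serviceName),
                semconv.ServiceVersion(serviceVersion),
            ),
        )
    }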
Finally, this also replaces the constants we previously had in package
traceaddrs with functions that return attribute.KeyValue values directly.
This matches the API style used by the OpenTelemetry semconv packages, and
makes the calls to these helpers from elsewhere in the system a little
more concise.
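For illustration, the new style is roughly the following; the function
name and attribute key here are hypothetical rather than the actual
traceaddrs API:

    package traceaddrs

    import "go.opentelemetry.io/otel/attribute"

    // ResourceInstanceAddr returns a trace attribute identifying the
    // resource instance an operation relates to, following the
    // constructor-function style of the OpenTelemetry semconv packages.
    func ResourceInstanceAddr(addr string) attribute.KeyValue {
        return attribute.String("opentofu.resource_instance.address", addr)
    }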
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
We previously added the -config mode for showing the entire assembled
configuration tree, including the content of any descendant modules, but
that mode requires first running "tofu init" to install all of the
provider and module dependencies of the configuration.
This new -module=DIR mode returns, for a single module only, a subset of
the same JSON representation, one that can be generated without first
installing any dependencies. That makes this mode more appropriate for
situations like generating documentation for a single module when importing
it into the OpenTofu Registry: the registry generation process does not want
to endure the overhead of installing other providers and modules when all it
actually needs is metadata about the top-level declarations in the module.
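For example, something like this (module path illustrative, and combined
with -json as for the existing -config JSON output):

    tofu show -json -module=./modules/network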
To minimize the risk to the already-working full-config JSON representation
while still reusing most of its code, the implementation details of package
jsonconfig are a little awkward here. Since this code changes relatively
infrequently and is implementing an external interface subject to
compatibility constraints, and since this new behavior is relatively
marginal and intended primarily for our own OpenTofu Registry purposes,
this is a pragmatic tradeoff that is hopefully compensated for well enough
by the code comments that aim to explain what's going on for the benefit
of future maintainers. If we _do_ find ourselves making substantial changes
to this code at a later date then we can consider a more significant
restructure of the code at that point; the weird stuff is intentionally
encapsulated inside package jsonconfig so it can change later without
changing any callers.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This extends statemgr.Persistent, statemgr.Locker and remote.Client to
all expect context.Context parameters, and then updates all of the existing
implementations of those interfaces to support them.
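As an illustration of the direction of that change, here is a sketch of
the Locker shape (types simplified and stubbed, not the real statemgr
declarations), with statemgr.Persistent and remote.Client following the
same pattern:

    package statemgrsketch

    import "context"

    // LockInfo stands in for statemgr.LockInfo for the purposes of this
    // sketch.
    type LockInfo struct {
        ID        string
        Operation string
        Who       string
    }

    // Locker mirrors the updated shape of statemgr.Locker: the same
    // methods as before, each now taking a context.Context as its first
    // parameter so that lock and unlock calls against remote APIs can
    // carry trace spans and cancellation.
    type Locker interface {
        Lock(ctx context.Context, info *LockInfo) (string, error)
        Unlock(ctx context.Context, id string) error
    }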
All of the calls to statemgr.Persistent and statemgr.Locker methods outside
of tests consistently pass context.TODO() for now, because the caller
landscape of these interfaces has some complications:
1. statemgr.Locker is also used by the clistate package for its state
implementation that was derived from statemgr.Filesystem's predecessor,
even though what clistate manages is not actually "state" in the sense
of package statemgr. The callers of that are not yet ready to provide
real contexts.
In a future commit we'll either need to plumb context through to all of
the clistate callers, or continue the effort to separate statemgr from
clistate by introducing a clistate-specific "locker" API for it
to use instead.
2. We call statemgr.Persistent and statemgr.Locker methods in situations
where the active context might have already been cancelled, and so we'll
need to make sure to ignore cancellation when calling those.
This is mainly limited to PersistState and Unlock, since both need to
be able to complete after a cancellation, but there are various
codepaths that perform a Lock, Refresh, Persist, Unlock sequence and so
it isn't yet clear where the best place is to enforce the invariant that
Persist and Unlock must not be called with a cancelable context (one
likely shape for that is sketched below). We'll deal with that more in
subsequent commits.
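The sketch mentioned in point 2 above: one likely shape is to derive a
context that keeps values (including any active trace span) but ignores
cancellation, and use it for the steps that must complete even after the
surrounding operation has been cancelled. This is only an illustration,
not code from this change:

    package statemgrsketch

    import "context"

    // runCleanup illustrates enforcing the invariant from point 2 above:
    // the persist and unlock steps run with a context that preserves
    // values from the caller's context but never gets cancelled with it.
    func runCleanup(ctx context.Context, persist, unlock func(context.Context) error) error {
        cleanupCtx := context.WithoutCancel(ctx) // requires Go 1.21 or later

        if err := persist(cleanupCtx); err != nil {
            return err
        }
        return unlock(cleanupCtx)
    }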
Within the various state manager and remote client implementations the
contexts _are_ wired through as well as possible given how these subsystems
are already laid out, and so once we deal with the problems above and make
callers provide suitable contexts they should be able to reach all of the
leaf API clients that might want to generate OpenTelemetry traces.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This adds a new context.Context argument to the Backend.StateMgr method,
updates all of the implementations to match, and then updates all of the
callers to pass in a context.
A small number of callers don't yet have a context plumbed to them, so
those use context.TODO() as a placeholder for now, making them easier to
find and fix in later commits once contexts are plumbed more thoroughly.
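For example, a not-yet-plumbed caller looks roughly like this (the helper
name and error wording here are illustrative):

    package backendsketch

    import (
        "context"
        "fmt"

        "github.com/opentofu/opentofu/internal/backend"
    )

    // loadStateManager illustrates a call site that doesn't yet have a
    // real context available: it passes context.TODO() so these sites
    // stay easy to find and replace once contexts reach them.
    func loadStateManager(b backend.Backend, workspace string) error {
        stateMgr, err := b.StateMgr(context.TODO(), workspace)
        if err != nil {
            return fmt.Errorf("failed to load state manager for workspace %q: %w", workspace, err)
        }
        _ = stateMgr // real callers go on to use the returned state manager
        return nil
    }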
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
The "tofu show" command has historically been difficult to extend to meet
new use-cases, such as showing the current configuration without creating
a plan, because it was designed to take zero or one arguments and then try
to guess what the one specified argument was intended to mean.
This commit introduces a new style where the type of object to inspect is
specified using command line option syntax, using one of two
mutually-exclusive options:
-state Show the latest state snapshot.
-plan=FILE Show the plan from the given saved plan file.
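For example (plan file name illustrative):

    tofu show -state
    tofu show -plan=tfplan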
We expect that a future commit will extend this with a new "-config" option
to inspect the configuration rooted in the current working directory, and
possibly with "-module=DIR" to shallowly inspect a single module without
necessarily having to fully initialize it with all of its dependencies
first. However, both of those use-cases (and any others) are not in scope
for this commit, which is focused only on refactoring to make those future
use-cases possible.
The old mode of specifying neither option and providing zero or one
positional arguments is still supported for backward compatibility.
Notably, the legacy style is the only way to access the old behavior of
inspecting a specific state snapshot file from the local filesystem, which
has rarely been used since Terraform v0.9 as we've moved away from manual
management of state files toward state backends.
Those who _do_ still need that old behavior can still access it in the
old way, but there will be no new-style equivalent of it unless we learn
of a compelling use case for it.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
* Rename module name from "github.com/hashicorp/terraform" to "github.com/placeholderplaceholderplaceholder/opentf".
Signed-off-by: Jakub Martin <kubam@spacelift.io>
* Gofmt.
Signed-off-by: Jakub Martin <kubam@spacelift.io>
* Regenerate protobuf.
Signed-off-by: Jakub Martin <kubam@spacelift.io>
* Fix comments.
Signed-off-by: Jakub Martin <kubam@spacelift.io>
* Undo issue and pull request link changes.
Signed-off-by: Jakub Martin <kubam@spacelift.io>
* Undo comment changes.
Signed-off-by: Jakub Martin <kubam@spacelift.io>
* Fix comment.
Signed-off-by: Jakub Martin <kubam@spacelift.io>
* Undo some link changes.
Signed-off-by: Jakub Martin <kubam@spacelift.io>
* make generate && make protobuf
Signed-off-by: Jakub Martin <kubam@spacelift.io>
---------
Signed-off-by: Jakub Martin <kubam@spacelift.io>
Since terraform show can accept three different kinds of file to act on, its
error messages were starting to become untidy and unhelpful. The main issue was
that if we successfully identified the file type but then ran into some problem
while reading or processing it, the "real" error would be obscured by some other
useless errors (since a file of one type is necessarily invalid as any of
the other types).
This commit tries to winnow it down to just one best error message, in the
"happy path" case where we know what we're dealing with but hit a snag. (If we
still have no idea, then we fall back to dumping everything.)
One funny bit: We need to know the ViewType at the point where we ask the Cloud
backend for the plan JSON, because we need to switch between two distinctly
different formats for human show vs. `show -json`. I chose to pass that by
stashing it on the command struct; passing it as an argument would also work,
but one, the argument lists in these nested method calls were getting a little
unwieldy, and two, many of these functions had to be receiver methods anyway in
order to call methods on Meta.
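A sketch of what that stashing looks like in shape (the field and type
names here are illustrative, not the actual command package code):

    package commandsketch

    // ViewType stands in for arguments.ViewType in this sketch.
    type ViewType int

    const (
        ViewHuman ViewType = iota
        ViewJSON
    )

    // ShowCommand sketches stashing the requested view type on the
    // command struct so that nested receiver methods can read it without
    // widening every argument list between them.
    type ShowCommand struct {
        viewType ViewType // set once during argument parsing
    }

    func (c *ShowCommand) wantsJSONPlan() bool {
        return c.viewType == ViewJSON
    }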
When showing a saved plan, we do not need to check the state lineage
against current state, because the plan cannot be applied. This is
relevant when plan and apply specify a `-state` argument to choose a
non-default state file. In this case, the stored prior state in the plan
will not match the default state file, so a lineage check will always
error.
This is a replacement declaration for using Terraform Cloud as a remote
backend, leaving the literal backend as an implementation detail rather
than a user-level concept.
Previously terraform.Context was built in an unfortunate way where all of
the data was provided up front in terraform.NewContext and then mutated
directly by subsequent operations. That made the data flow hard to follow,
commonly leading to bugs, and also meant that we were forced to take
various actions too early in terraform.NewContext, rather than waiting
until a more appropriate time during an operation.
This (enormous) commit changes terraform.Context so that its fields are
broadly just unchanging data about the execution context (current
workspace name, available plugins, etc) whereas the main data Terraform
works with arrives via individual method arguments and is returned in
return values.
Specifically, this means that terraform.Context no longer "has-a" config,
state, and "planned changes", instead holding on to those only temporarily
during an operation. The caller is responsible for propagating the outcome
of one step into the next step so that the data flow between operations is
actually visible.
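The resulting call pattern looks roughly like this (signatures simplified
and approximate, not the exact API):

    package tfsketch

    import (
        "github.com/hashicorp/terraform/internal/configs"
        "github.com/hashicorp/terraform/internal/states"
        "github.com/hashicorp/terraform/internal/terraform"
        "github.com/hashicorp/terraform/internal/tfdiags"
    )

    // planAndApply illustrates the caller-driven data flow: the Context
    // holds only unchanging execution-context data, while the config, the
    // previous run state, and the resulting plan travel explicitly through
    // arguments and return values, so each step's inputs and outputs are
    // visible at the caller.
    func planAndApply(
        tfCtx *terraform.Context,
        config *configs.Config,
        prevRunState *states.State,
        opts *terraform.PlanOpts,
    ) (*states.State, tfdiags.Diagnostics) {
        plan, diags := tfCtx.Plan(config, prevRunState, opts)
        if diags.HasErrors() {
            return nil, diags
        }

        newState, moreDiags := tfCtx.Apply(plan, config)
        diags = diags.Append(moreDiags)
        return newState, diags
    }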
However, since that's a change to the main entry points in the "terraform"
package, this commit also touches every file in the codebase which
interacted with those APIs. Most of the noise here is in updating tests
to take the same actions using the new API style, but this also affects
the main-code callers in the backends and in the command package.
My goal here was to refactor without changing observable behavior, but in
practice there are a couple of externally-visible behavior variations here
that seemed okay in service of the broader goal:
- The "terraform graph" command is no longer hooked directly into the
core graph builders, because that's no longer part of the public API.
However, I did include a couple new Context functions whose contract
is to produce a UI-oriented graph, and _for now_ those continue to
return the physical graph we use for those operations. There's no
exported API for generating the "validate" and "eval" graphs, because
neither is particularly interesting in its own right, and so
"terraform graph" no longer supports those graph types.
- terraform.NewContext no longer has the responsibility for collecting
all of the provider schemas up front. Instead, we wait until we need
them. However, that means that some of our error messages now have a
slightly different shape due to unwinding through a differently-shaped
call stack. As of this commit we also end up reloading the schemas
multiple times in some cases, which is functionally acceptable but
likely represents a performance regression. I intend to rework this to
use caching, but I'm saving that for a later commit because this one is
big enough already.
The proximal reason for this change is to resolve the chicken/egg problem
whereby there was previously no single point where we could apply "moved"
statements to the previous run state before creating a plan. With this
change in place, we can now do that as part of Context.Plan, prior to
forking the input state into the three separate state artifacts we use
during planning.
However, this is at least the third project in a row where the previous
API design led to piling more functionality into terraform.NewContext and
then working around the incorrect order of operations that produces, so
I intend that by paying the cost/risk of this large diff now we can in
turn reduce the cost/risk of future projects that relate to our main
workflow actions.
This is part of a general effort to move all of Terraform's non-library
package surface under internal in order to reinforce that these are for
internal use within Terraform only.
If you were previously importing packages under this prefix into an
external codebase, you could pin to an earlier release tag as an interim
solution until you've made a plan to achieve the same functionality some
other way.