Previously we interpreted a "required_version" argument in a "terraform"
block as if it were specifying an OpenTofu version constraint, when in
reality most modules use this to represent a version constraint for
OpenTofu's predecessor instead.
The primary effect of this commit is to introduce a new top-level block
type called "language" which describes language and implementation
compatibility metadata in a way that intentionally differs from what's used
by OpenTofu's predecessor.
This also causes OpenTofu to ignore the required_version argument unless
it appears in an OpenTofu-specific file with a ".tofu" suffix, and makes
OpenTofu completely ignore the language edition and experimental feature
opt-in options from OpenTofu's predecessor on the assumption that those
could continue to evolve independently of changes in OpenTofu.
We retain support for using required_version in .tofu files as a bridge
solution for modules that need to remain compatible with OpenTofu versions
prior to v1.12. Module authors should keep following the strategy of
having both a versions.tf and a versions.tofu file for now, and wait until
the OpenTofu v1.11 series reaches end of life before adopting the new
"language" block type.
I also took this opportunity to simplify how we handle these parts of the
configuration, since the OpenTofu project has no immediate plans to use
either multiple language editions or language experiments and so for now
we can reduce our handling of those language features to just enough that
we'd return reasonable error messages if today's OpenTofu is exposed to
a module that was written for a newer version of OpenTofu that extends
these language features. The cross-cutting plumbing for representing the
active experiments for a module is still present so that we can reactivate
it later if we need to, but for now that set will always be empty.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
We don't typically just broadly run automatic rewriting tools like "go fix"
across our codebase because that tends to cause annoying and unnecessary
merge conflicts when we're backporting to earlier release branches.
But all of the files in this commit were changed in some non-trivial way
already during the OpenTofu v1.11 development period anyway, and so the
likelihood that we'd be able to successfully backport from them is reduced,
and
therefore this seems like a good opportunity to do some focused
modernization using "go fix".
My rules for what to include or not are admittedly quite "vibes-based", but
the general idea was:
- Focusing on files under the "command" directory only, because that's
already been an area of intentional refactoring during this development
period.
- Including a file if the existing diff in it is already significantly
larger than the changes the fixer proposed to make, or if the fixer was
proposing to change a line that was already changed in this development
period.
- More willing to include "_test.go" files than non-test files, even if
they hadn't changed as much already, just because backports from test
files for bug fixes tend to be entirely new test cases more than they
are modifications to existing test cases, and so the risk of conflicts
is lower there.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
The "go fix" modernizer for using wg.Go instead of explicit wg.Add/wg.Done
only works when the goroutine function has no arguments, so it didn't match
here where this code was still using an old trick to ensure that each
goroutine would capture a different value of "i".
But that old trick isn't needed anymore because modern Go already ensures
that each iteration of the loop has an independent "i", so I made a small
change to remove the argument and just let the closure capture "i" from
the outer loop, and then "go fix" was able to complete the rewrite to
use wg.Go here.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is just the result of running the "go fix" modernizers against this
package.
It seems that there were some lines with trailing whitespace previously,
which also got removed here because "go fix" includes an implicit "go fmt".
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is the result of running the "go fix" modernizers on the subset of
files under this prefix that were already changed during the v1.12
development period.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is just a routine upgrade.
The latest version of go.opentelemetry.io/otel/sdk has adopted a newer
version of semconv and so this also updates our traceattrs package to
use the same version, as usual.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
As with the previous tests added for for_each, this is some basic coverage
under the assumption that the rules for count are not likely to change
so much that it would be arduous to update these tests.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
As with the previous tests added for for_each, this is some basic coverage
under the assumption that the rules for count are not likely to change
so much that it would be arduous to update these tests.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
The previous implementation was rejecting most out-of-range values, but
it was not rejecting negative numbers, and it was also generating
misleading error messages in other cases because cty's own error messages
for numeric range problems are pretty generic.
Now we'll do our own numeric range checks before we attempt to extract
the number into an int variable, so we can return more tailored error
messages and can reject negative numbers properly.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This case is not particularly interesting -- it just always produces a
hard-coded result -- but is tested anyway for completeness.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
So far we've been pretty light on testing in these new codepaths because
they've been churning quite a lot during our "walking skeleton" phase of
the new runtime development.
But this instance-selector-related code seems relatively self-contained and
settled, so this introduces an initial set of unit tests for the handling
of for_each expressions, primarily just to document what the intended
behavior of the current implementation was and to illustrate that it is
indeed unit-testable in (relative) isolation.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
The comment here was correct that we can accept unknown values of object
types because the attributes are known as part of the type, but we do still
need to handle that in a special way because we need to produce the result
based only on the type information in this case, without trying to access
the value.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
In earlier work we added some more detailed typechecking rules further
down that can reject incorrectly-typed values even when the value is
unknown, but forgot to remove this early return that made that other code
unreachable whenever the value is unknown.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
For unit tests of components that don't depend on any particular symbol
table structure, where we just want to test if they are using the provided
scope _at all_, it's helpful to be able to just quickly create a static
symbol table.
This is a reusable helper function for that.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
Our new language runtime uses a set of new methods on SyncState to work
with its preferred "full" representation of resource instance objects, but
those are implemented in terms of methods that already existed for the old
runtime's benefit and so we need to deal with some quirks of those existing
methods.
One such quirk is that the operations to write or remove objects also want
to update some resource-level and instance-level metadata as a side-effect,
and we need to carry through that metadata even when we're intending to
completely remove a resource instance object.
To preserve our goal of leaving the existing codepaths untouched for now,
this pushes a little complexity back up into the main caller in the apply
engine, forcing it to call a different method when it knows it has deleted
an object. That new method then only takes the metadata we need and not
an actual resource instance object, so it gels better with the underlying
ModuleState methods it's implemented in terms of.
Hopefully in the long run we'll rethink the state models to not rely on
these hidden side-effects, but that's beyond the scope of our current phase
of work on the new language runtime.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is the first half of solving the current problem that we're not
always properly respecting the reverse dependency edges while destroying
objects: we need to consider both the config-derived dependencies and the
dependencies recorded in the previous run state together when building
the resource instance graph.
The "orphan" case is the most obvious way this manifests right now, but
this needs handling in the desired case too because the desired case is
what handles "replace" actions, which include a delete component that also
needs to respect these prior dependencies.
The other half of this problem is that we're not actually currently
propagating the dependency information to the apply phase, and so it's not
saving that information in the new state objects it creates. That's not
solved in this commit, and instead I just tested this by manually modifying
the prior state to include what ought to have been recorded in it and then
verified that caused a correct execution graph to be constructed. I intend
to deal with the apply-time part of this problem in a separate commit
later.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
Unfortunately some existing providers spuriously report RequiresReplace
paths when they are planning create and/or delete changes, so we'll need to
just quietly ignore them rather than generating a user-facing error about
it.
In practice, ignoring these paths in such cases does no real harm, since
RequiresReplace is only relevant in causing us to reinterpret an "update"
into a "replace". But to keep the handling of this quirk centralized, we
explicitly throw away spurious paths so that the rest of the system can
safely assume that this field will be populated only when some action must
be taken based on it.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
For any resource instance object that doesn't need any changes of its own,
we initially skip adding it to the execution graph but then add a stub
"prior state" operation retroactively if we discover at least one other
resource instance object that depends on it.
However, the code for that also needs to record in the execution graph
which result provides the evaluation value for the upstream resource
instance, if the object we've just added is the "current" object for its
resource instance. Otherwise the generated execution graph is invalid,
causing it to fail to provide the result to the evaluator for downstream
evaluation.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
For "replace" actions our convention is that the old value is what is to
be deleted, the new value is what is to be created, and there's an implicit
third state in between that we don't represent explicitly. (Either neither
object exists or both exist, depending on the replace order.)
We were not previously handling that situation correctly, because we were
setting the new value to be the result of the "update" where the provider
reported that replacement is required, which would therefore still include
values from the old object that would not survive replacement.
Now we'll instead ask the provider to plan the delete and create legs
separately and record the separate results from each. This therefore
matches the planning questions we'll ask the provider during the apply
phase where the delete and create are already decomposed into separate
operations in the execution graph.
Having all of this logic inline in
planGlue.planDesiredManagedResourceInstance is getting quite unwieldy, but
I'm intentionally leaving it like this right now until we have a more
complete representation of all of the needed functionality, rather than
potentially having to rework it repeatedly as we fill in more of the
TODO comments elsewhere in this function.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
Previously we had this check in the apply engine because it was making
sure that the decomposed parts of a "replace" don't themselves call for
a replace action, but it's a general rule that RequiresReplace is not
meaningful unless there's both a current and a desired object to compare,
so this moves that check into the "resources" package as a general rule.
A future commit will rely on this once the planning engine correctly
handles "replace" actions by asking the provider to re-plan as if the
old object didn't exist, in order to predict the "create" part of the
replace action.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
The original motivation here was to implement the special case of making
a synthetic plan without calling into the provider when the desired state
is absent and the provider hasn't opted in to planning to destroy objects.
This case requires an explicit opt-in because older providers will crash if
asked
to plan with the config value or proposed value set to null.
However, there was already far too much code duplicated across both the
planning engine and the apply engine's "final plan" operation, and so this
seemed like a good prompt to finally resolve those TODO comments and factor
out the shared planning logic into a separate place that both can depend
on.
The new package "resources" is intended to grow into the home for all of
these resource-related operations that each wrap a lower-level provider
call but also perform preparation or followup actions such as making sure
the provider's response is valid according to the requirements of the
plugin protocol.
For now it only includes config validation and planning for managed
resources, but is designed to grow to include other similar operations in
future. The most likely next step is to add a method for applying a
previously-created plan, which would replace a bunch of the code in the
apply engine's implementation of the "ManagedApply" operation.
Currently the way this integrates with the rest of the provider-related
infrastructure is a little awkward. We can hopefully improve on that in
future, but for now the priority was to avoid this becoming yet another
sprawling architecture change that would be hard to review. Once we feel
like we've achieved the "walking skeleton" milestone for the new runtime
we can think about how we want to tidy up the rough edges between the
different components of this new design.
As has become tradition for these new codepaths, this is designed to be
independently testable but not yet actually tested because we want to wait
and see how all of this code settles before we start writing tests that
would then make it harder to refactor as we learn more.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>