The Go runtime provides a number of configuration knobs that subtly change
its behavior, and the set of knobs that is available changes over time as
we change which version of Go we're building with.
It's therefore possible that an OpenTofu user might be intentionally or
unintentionally relying on one of these settings for OpenTofu to work on
their system, in which case they would be broken if they upgraded to a
newer version of OpenTofu which uses a different Go version that no longer
supports that setting.
These log lines are intended to help us more quickly notice that
possibility if someone opens a bug report describing an unexpected behavior
change after upgrading to a new OpenTofu minor release series. We can ask
the reporter to share "TF_LOG=debug" output from both the previous and new
releases to compare, and then these log lines should appear in the older
version's output so we can search the Go codebase and issue tracker for
each of the mentioned names to learn if the handling of that setting has
changed between Go versions.
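As a rough illustration of the kind of logging being added here (the
setting names and helper below are assumptions for the sake of the
example, not the exact code):

    package logging // hypothetical home for this sketch

    import (
        "log"
        "os"
    )

    // logGoRuntimeSettings records any Go runtime knobs the user has set
    // in their environment, so that they appear in TF_LOG=debug output
    // and can be compared between releases.
    func logGoRuntimeSettings() {
        for _, name := range []string{"GODEBUG", "GOGC", "GOMEMLIMIT", "GOMAXPROCS"} {
            if v, ok := os.LookupEnv(name); ok {
                log.Printf("[DEBUG] Go runtime setting %s=%q", name, v)
            }
        }
    }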
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
We have some test code that includes *addrs.Target values in fmt calls with
the %s and %q verbs, but that type was not actually a fmt.Stringer before.
Go 1.26 is introducing some new checks under which those uses cause
failures when building those tests. We could potentially change the tests
to produce the string representation in a different way, but we typically
expect our address types to be "stringable" in this way, so instead we'll
just make Target be a fmt.Stringer, delegating to the String method of
the underlying addrs.Targetable implementation. The addrs.Targetable
interface requires a String method, so all implementations are guaranteed
to support this.
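The new method is essentially just the following delegation (sketched
here; "Subject" is assumed to be the field holding the wrapped
addrs.Targetable):

    // String implements fmt.Stringer for Target by delegating to the
    // underlying Targetable, whose interface already requires String.
    func (t Target) String() string {
        return t.Subject.String()
    }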
Note that we're not actually using Go 1.26 yet at the time of this commit,
but this is an early fix to make the upgrade easier later. I verified this
using go1.26rc1.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This version includes more complete support for the "!!merge" tag, which
now allows using a sequence of mappings instead of just a single mapping.
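For illustration (a made-up document), a mapping can now merge in several
anchored mappings at once rather than only a single one:

    base: &base
      region: us-east-1
    extra: &extra
      retries: 3
    merged:
      # "<<" is the merge key, resolved via the "!!merge" tag; its value
      # may now be a sequence of mappings, not just a single mapping.
      <<: [*base, *extra]
      name: example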
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
InterestingDependencies is used to include version information for a small
subset of our dependencies whose behavior is directly exposed in the
OpenTofu UX, such as those providing parsers for syntax that OpenTofu
relies on and that is covered by our compatibility promises.
Unfortunately we recently switched to using our own forks of certain
libraries and hadn't updated this list to match, so the corresponding logs
had become less complete. This updates the list to name the dependencies
we actually use, and also adds a few new ones that have similar
characteristics.
We'll also now skip calling InterestingDependencies altogether if the log
level isn't high enough for the results to be visible, since that avoids
us wasting time generating a data structure that will not be used.
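The new guard is roughly the following (the exact helper names are
assumptions rather than the real code):

    // Only build and log the dependency list when debug-level logs would
    // actually be emitted.
    if logging.IsDebugOrHigher() {
        for _, dep := range version.InterestingDependencies() {
            log.Printf("[DEBUG] using %s %s", dep.Path, dep.Version)
        }
    }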
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is just a routine upgrade. It should not significantly affect
OpenTofu's external behavior.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This upgrades both the OpenTelemetry SDK and all the base modules it
depends on, because things tend to work best when these are all upgraded
in lockstep to the same minor release.
Sometimes these upgrades also cause an indirect change to a newer version
of the OpenTelemetry "semconv" package which we then need to match in our
own traceattrs package, but no such change is required this time because
there has not been a new semconv version published in the meantime.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This updates only the base SDK module and the "smithy" helper library it
depends on. The service-specific functionality that OpenTofu uses is in
separate Go modules that are not upgraded yet here, because we'll want to
review their changes more closely in case they affect the behavior of our
S3 state storage or AWS KMS encryption features.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This removes most of the code previously added in 491969d29d, because we
have since learned that the hashicorp/helm provider signals deferral when
any unknown values are present in provider configuration, even though in
practice it can sometimes successfully plan changes in spite of those
unknown values.
That therefore made the hashicorp/helm provider behavior worse under this
change than it was before, returning an error when no error was actually
warranted.
The ephemeral resources implementation landed later and also interacted
with this change, so this isn't a line-for-line revert of the original
change. It does still remove everything that was added in support of
handling provider deferral signals, so that we'll be able to start fresh
with this later if we find a better way to handle it.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is a _very_ early, minimal plumbing of the execution graph builder
into the experimental planning engine.
The goal here is mainly just to prove the idea that the planning engine can
build execution graphs using the execgraph.Builder API we currently have.
This implementation is not complete yet, and we are expecting to rework
the structure of the planning engine later on anyway, so this initial work
focuses on just plumbing it into what we had as straightforwardly as
possible.
This is enough to get a serialized form of _an_ execution graph included
in the generated plan, though since we don't have the apply engine
implemented we don't actually use it for anything yet.
In subsequent commits we'll continue building out the graph-building logic
and then arrange for the apply phase to unmarshal the saved execution graph
and attempt to execute it, so we can hopefully see a minimal version of all
of this working end-to-end for the first time. But for now, this was mainly
just a proof-of-concept of building an execution graph and capturing it
into its temporary home in the plans.Plan model.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
During the "walking skeleton" phase of our work on the new planning engine
we're intentionally changing other subsystems like the CLI layer as little
as possible and just wiring things together as straightforwardly as
possible so we can find gaps in our new architectural model before we get
too invested in the current experimental shape of it.
With that in mind, this slightly extends the plans.Plan type and its
associated protobuf serialization so that we'll be able to get an opaque
representation of the new runtime's "execution graph" concept from plan
to apply while the CLI layer is still using the traditional plan models.
This is probably not how we'll deal with this in the long run, but it's
intended to be something we can tear back out relatively easily, whether
because we decide to abandon this "execution graph" idea altogether or
because we reach the point of implementing a new plan model that better
suits the new architecture.
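For illustration, the shape of the extension is roughly as follows, though
the real field names in plans.Plan and the planproto schema may differ:

    // Sketch only; not the exact field added.
    type Plan struct {
        // ...existing fields...

        // ExecutionGraph carries an opaque serialization of the
        // experimental runtime's execution graph from the plan phase to
        // the apply phase, without the CLI layer needing to understand
        // its contents.
        ExecutionGraph []byte
    }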
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This was previously dealing okay with the case where a resource doesn't
have the requested instance, but was not handling the situation where the
resource doesn't exist at all yet.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
Most of what we interact with in configgraph is other parts of the
evaluator that automatically get memoized by patterns like OnceValuer, but
the value for a resource instance is always provided by something outside
of the evaluator that won't typically be able to use those mechanisms, and
so the evaluator's ResourceInstance.Value implementation will now provide
memoization on behalf of that external component, to ensure that we end
up with only one value for each resource instance regardless of how that
external component behaves.
For the current planning phase in particular, this means that we'll now
only try to plan each resource instance once, whereas before we would ask
for a separate plan on each call to Value.
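The mechanism is essentially the following sketch, here using the standard
library's sync.OnceValues with made-up names for everything else:

    package evaluator // hypothetical

    import (
        "sync"

        "github.com/zclconf/go-cty/cty"
    )

    type ResourceInstance struct {
        valueOnce func() (cty.Value, error)
    }

    func newResourceInstance(getValue func() (cty.Value, error)) *ResourceInstance {
        return &ResourceInstance{
            // sync.OnceValues calls getValue at most once and then returns
            // the same results for every later call, regardless of how the
            // external component behaves.
            valueOnce: sync.OnceValues(getValue),
        }
    }

    func (ri *ResourceInstance) Value() (cty.Value, error) {
        return ri.valueOnce()
    }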
For now this is just retrofitted in a minimally-invasive way as part of
our "walking skeleton" phase where we're just trying to wire the existing
parts together end-to-end and then decide at the end whether we want to
refactor things more. If this need for general-purpose memoization ends
up appearing in other places too then maybe we'll choose to structure this
a little differently.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
ResultRef[*states.ResourceInstanceObjectFull] is our current canonical
representation of the "final result" of applying changes to a resource
instance, and so it is going to appear a bunch in the plan and apply
engines.
The generic type is clunky to read and write though, so for this one in
particular we'll offer a more concise type alias that we can use for parts
of the API that are representing results from applying.
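For example (the alias name here is illustrative; the real one may
differ):

    // AppliedResourceInstance is a shorthand for the verbose generic type
    // representing the final result of applying changes to a resource
    // instance.
    type AppliedResourceInstance = ResultRef[*states.ResourceInstanceObjectFull]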
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is the tool I regularly use when I have a small amount of time to
spare and want to take care of a few easy dependency upgrade tasks.
My original motivation in writing it was to untangle the huge mess of stale
dependencies we'd accumulated by just getting them into a _rough_ order
where I could work through them gradually without upgrading too many things
at once, and so I designed it to make a best-effort topological sort by
just deleting edges heuristically until the dependency graph became
acyclic and then sorting that slightly-pruned graph.
Now that we've got the dependency situation under better control, two other
questions have become more relevant to my regular use of this tool:
- What can be upgraded in isolation without affecting anything else?
- Which collections of modules need to be upgraded together because they
are all interdependent?
The previous version of this dealt with the first question indirectly, by
just inserting a dividing line before the first module that had
prerequisites, and it didn't address the second at all.
This new version is focused mainly on answering those two questions, and
so first it finds any strongly-connected components with more than one
member ("cycles") and reduces each of them to a single graph node, and
then does all of the remaining work based on those groups so that families
of interdependent modules now just get handled together.
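The core of that approach, in miniature (an illustrative sketch of SCC
condensation and ordering only; the real tool does more and its code is
different):

    package main

    import "fmt"

    // upgradeGroups condenses a dependency graph (module -> prerequisites)
    // into strongly-connected components and returns them with each group
    // appearing after all of the groups it depends on. Single-member
    // groups are modules that aren't part of any cycle; larger groups are
    // families of interdependent modules that must be upgraded together.
    func upgradeGroups(deps map[string][]string) [][]string {
        index := 0
        indices := map[string]int{}
        lowlink := map[string]int{}
        onStack := map[string]bool{}
        var stack []string
        var groups [][]string

        var visit func(v string)
        visit = func(v string) {
            indices[v], lowlink[v] = index, index
            index++
            stack = append(stack, v)
            onStack[v] = true
            for _, w := range deps[v] {
                if _, seen := indices[w]; !seen {
                    visit(w)
                    lowlink[v] = min(lowlink[v], lowlink[w])
                } else if onStack[w] {
                    lowlink[v] = min(lowlink[v], indices[w])
                }
            }
            // v is the root of a strongly-connected component: pop it and
            // everything above it off the stack as one group. Tarjan's
            // algorithm emits components in dependencies-first order.
            if lowlink[v] == indices[v] {
                var group []string
                for {
                    w := stack[len(stack)-1]
                    stack = stack[:len(stack)-1]
                    onStack[w] = false
                    group = append(group, w)
                    if w == v {
                        break
                    }
                }
                groups = append(groups, group)
            }
        }
        for v := range deps {
            if _, seen := indices[v]; !seen {
                visit(v)
            }
        }
        return groups
    }

    func main() {
        fmt.Println(upgradeGroups(map[string][]string{
            "a": {"b"},      // a depends on b
            "b": {"c", "a"}, // b and a are interdependent: one group
            "c": nil,        // c has no prerequisites
        }))
        // e.g. [[c] [b a]] or [[c] [a b]]: c first, then the {a, b} family
    }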
As before this is focused on being minimally functional and useful rather
than being efficient or well-designed, since this is just an optional
helper I use to keep on top of dependency upgrades on a best-effort basis.
I'm proposing to merge this into main just because I've been constantly
rebasing a local branch containing these updates and it's getting kinda
tedious!
I have no expectation that anyone else should be regularly running
this, though if anyone else wants to occasionally work on dependency
upgrades I hope it will be useful to them too.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is a routine upgrade. The only change is a resynchronization of the
fallback bundle of trusted TLS certificates in x509roots, but OpenTofu
does not make use of that bundle.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
These are just routine upgrades. None of the upstream changes here relate
to anything OpenTofu makes use of; the goal is just to get these
relatively-trivial upgrades out of the way. These x/* modules tend to all
ratchet forward together, so this clears the way for upgrading others that
might have more important changes later.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is just a routine dependency upgrade. Upstream there have been a few
small performance improvements that are unlikely to be particularly
beneficial to OpenTofu but shouldn't cause us any problems.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
We typically want our main branch to be on the latest Go release anyway,
but in this case we also intend to backport this to the v1.11 release to
patch GO-2025-4175, as discussed in opentofu/opentofu#3546.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This registry client code is very, very old and in particular predates the
creation of the separate "registry-address" library and OpenTofu's own
"package addrs", so it had its own type for representing module addresses.
It also had some historical confusion about the difference between a module
package and a module, which meant the code was pretty unclear about whether
it was dealing in module packages or module source addresses: in practice
we were always leaving the "Submodule" field empty because the registry
client isn't the component responsible for dealing with modules in package
subdirectories, but this made the code confusing to read and maintain.
This change therefore removes the legacy package regsrc altogether and
adopts regaddr.ModulePackage as the canonical representation of registry
module package addresses. That makes the module registry client agree with
the module installer about what vocabulary types we're using, and so we no
longer need shim code for converting between the two representations.
To make it abundantly clear that the registry client deals only in package
addresses, the client methods and arguments are renamed to reflect that,
and the callers updated to match.
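For illustration, the kind of renaming involved (the new names shown here
are guesses rather than the exact ones):

    // Before (roughly):
    //   func (c *Client) ModuleVersions(ctx context.Context, module *regsrc.Module) (*response.ModuleVersions, error)
    // After (roughly):
    //   func (c *Client) ModulePackageVersions(ctx context.Context, pkg regaddr.ModulePackage) (*response.ModuleVersions, error)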
While in the area anyway I also adjusted a few other things that were not
aligned with our modern conventions, but tried to keep those to relatively
localized changes for ease of review. There's probably room for more
improvement here in subsequent commits.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is just a minimal set of changes to introduce uses of the new
states.ResourceInstanceObjectFull to all of the leaf functions related to
planning managed and data resource instances.
The main goal here was just to prove that we'd reasonably be able to look
up objects with the new type in all of the places we'd need to. We're
planning some more substantial changes to the planning engine in future
commits (e.g. to generate execution graphs instead of traditional plans)
and so we'll plumb this in better as part of that work.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
Previously we had a gap here where we only knew the resource type to use
when there was a "desired state" object to get it from, which isn't true
when we're planning to destroy an undesired object.
The new extended model for resource instance object state gives us direct
access to the resource type name, so we can now handle that properly.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
Instead of states.ResourceInstanceObject, we'll now use the "full"
representation added in a recent commit. This new representation is a
better fit for execgraph because it tracks all of the relevant information
about a resource instance object inline without assuming that the object
is always used in conjunction with a pointer to the entire state.
We're not really making any special use of those new capabilities yet. This
is mostly just mechanical updates to refer to the new type, and to fill
in values for the extra fields as far as that's possible without doing
further refactoring.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
Our existing state model is designed with the assumption that anyone who
is accessing a ResourceInstanceObjectSrc will be doing so from an object
representing the entire state at once, and so they can separately fetch
information about the resource as a whole and the provider instance it
belongs to if needed.
In our new language runtime we want to be able to just pass around pointers
to individual resource instance object states rather than the entire state
data structure, and so these two new "full" variants extend the object
representation with a provider instance address and the name of the
resource type within that provider.
This also adopts a slightly different representation of the value itself,
making use of some newer Go language features, so that we don't need quite
so much duplication between the "source" and decoded versions of this
model. In practice it's only the Value field that needs to vary between
the two, because that's the part where we require provider schema
information to decode.
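A very rough sketch of the shape this describes, assuming the "newer Go
language features" here means generics (none of these names or fields are
necessarily the real ones):

    package states

    import "github.com/zclconf/go-cty/cty"

    // One underlying definition backs both variants; only the value's
    // representation differs between them.
    type resourceInstanceObjectFull[V any] struct {
        ResourceType     string // resource type name within the provider
        ProviderInstance string // really an addrs type in the real code
        Value            V
    }

    // The "source" form keeps the raw serialized value, since decoding it
    // requires provider schema information; the decoded form holds a
    // cty.Value ready for use by the evaluator.
    type ResourceInstanceObjectSrcFull = resourceInstanceObjectFull[[]byte]
    type ResourceInstanceObjectFull = resourceInstanceObjectFull[cty.Value]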
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is admittedly "scope creep" for a job that was originally intended
only to run the end-to-end tests. However, we have a separate e2etest job
specifically because some of those tests interact with the live OpenTofu
Registry and certain GitHub repositories, and it's nice to have those
external dependencies isolated into their own job so that an outage of any
of those external services should only affect this one test job.
The TF_ACC=1-constrained tests in packages initwd and registry are set up
that way because they too interact with the OpenTofu Registry and GitHub
repositories, so grouping them together retains the idea of limiting these
external dependencies to only one job while giving us a little more test
coverage for our PR checks.
The motivation for doing this now is that both of these packages had
acceptance tests that had been broken by changes in the past and we didn't
notice because nothing was routinely running these tests:
- package registry had been failing ever since OpenTofu existed because
one of its tests seems to have been depending on an undocumented registry
protocol feature that OpenTofu Registry has never implemented
- package initwd got broken more recently by a change to use a dependency
inversion style for the module installer's use of the module registry
client and package fetcher, but these particular tests were not passing in
working clients for the module installer to use.
Both of those problems were already fixed in earlier commits, and so this
is just an attempt to avoid similar problems happening again in future.
(This doesn't address the more common case of acceptance tests that require
live credentials to access a service that doesn't support anonymous access.
Those will still need to run manually in development environments because
we cannot pass live credentials to a job that runs in response to
third-party pull requests.)
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
TestAccLookupModuleVersions was previously testing for a property in the
registry's JSON response that is not part of the documented "modules.v1"
protocol.
Presumably this was covering some extension property in our predecessor's
public registry implementation, and so the test passed in its original
form when run against that specific registry implementation.
However, OpenTofu does not depend on this extension property in any way
outside of this test, so there's no benefit to testing for this behavior
in OpenTofu. Regardless, the OpenTofu Registry doesn't implement this
extension so this test cannot pass in its current form running against that
implementation of the registry protocol.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
Earlier work in opentofu/opentofu#2661 caused the module installer to
treat the registry client and the package fetcher as optional dependencies
so that local-only tests (by far our common case across the whole codebase)
could avoid instantiating the registry and go-getter client code, and thus
avoid the risk of accidentally depending on network services.
However, at the time I didn't notice that this package had TF_ACC=1
acceptance tests, so although I made systematic updates across all the
tests, which appeared to make them pass, several were actually being
skipped and so I didn't realize that I hadn't updated them well enough.
This gets them all passing again by giving the ones that intentionally
access the real OpenTofu Registry and some test-only GitHub repositories
the clients they need to do that.
In a future commit I intend to add this package to our end-to-end test
job that runs as part of our pull request checks so that we're less likely
to forget to update these tests under future maintenance.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>