This is a _very_ early, minimal plumbing of the execution graph builder
into the experimental planning engine.
The goal here is mainly just to prove the idea that the planning engine can
build execution graphs using the execgraph.Builder API we currently have.
This implementation is not complete yet, and we're expecting to rework
the structure of the planning engine later on anyway, so this initial
work focuses on just plumbing it into what we had as straightforwardly
as possible.
This is enough to get a serialized form of _an_ execution graph included
in the generated plan, though since we don't have the apply engine
implemented we don't actually use it for anything yet.
In subsequent commits we'll continue building out the graph-building logic
and then arrange for the apply phase to unmarshal the saved execution graph
and attempt to execute it, so we can hopefully see a minimal version of all
of this working end-to-end for the first time. But for now, this was mainly
just a proof-of-concept of building an execution graph and capturing it
into its temporary home in the plans.Plan model.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
During the "walking skeleton" phase of our work on the new planning engine
we're intentionally changing other subsystems like the CLI layer as little
as possible and just wiring things together as straightforwardly as
possible so we can find gaps in our new architectural model before we get
too invested in the current experimental shape of it.
With that in mind, this slightly extends the plans.Plan type and its
associated protobuf serialization so that we'll be able to get an opaque
representation of the new runtime's "execution graph" concept from plan
to apply while the CLI layer is still using the traditional plan models.
This is probably not how we'll deal with this in the long run, but it's
intended to be something we can tear back out relatively easily if we
decide either to abandon this "execution graph" idea altogether or once
we reach the point of implementing a new plan model that better suits the
new architecture.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This was previously dealing okay with the case where a resource doesn't
have the requested instance, but was not handling the situation where the
resource doesn't exist at all yet.
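The fix amounts to a second level of "not found" handling, which can be sketched roughly like this (types and names here are simplified stand-ins, not the real state model):

```go
package main

import "fmt"

type instance struct{ value string }

type resource struct {
	instances map[string]*instance
}

// instanceValue looks up an instance value, treating both "resource
// missing entirely" and "resource exists but lacks this instance" as
// the same absent result, instead of dereferencing a nil resource.
func instanceValue(resources map[string]*resource, res, key string) (string, bool) {
	r, ok := resources[res]
	if !ok || r == nil {
		// This is the previously-unhandled case: the resource
		// doesn't exist at all yet.
		return "", false
	}
	inst, ok := r.instances[key]
	if !ok {
		// This case was already handled: the resource exists but
		// doesn't have the requested instance.
		return "", false
	}
	return inst.value, true
}

func main() {
	resources := map[string]*resource{}
	_, ok := instanceValue(resources, "null_resource.a", "0")
	fmt.Println(ok)
}
```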
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
Most of what we interact with in configgraph is other parts of the
evaluator that automatically get memoized by patterns like OnceValuer, but
the value for a resource instance is always provided by something outside
of the evaluator that won't typically be able to use those mechanisms, and
so the evaluator's ResourceInstance.Value implementation will now provide
memoization on behalf of that external component, to ensure that we end
up with only one value for each resource instance regardless of how that
external component behaves.
For the current planning phase in particular, this means that we'll
now only try to plan each resource instance once, whereas before we
would ask it to make a separate plan for each call to Value.
For now this is just retrofitted in a minimally-invasive way as part of
our "walking skeleton" phase where we're just trying to wire the existing
parts together end-to-end and then decide at the end whether we want to
refactor things more. If this need for general-purpose memoization ends
up appearing in other places too then maybe we'll choose to structure this
a little differently.
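The memoization described above can be sketched roughly like this, using the standard library's sync.OnceValues so the external component is consulted at most once per resource instance (names and types here are illustrative, not the real evaluator API):

```go
package main

import (
	"fmt"
	"sync"
)

// planCallCount tracks how many times the (hypothetical) external
// planning component is actually consulted.
var planCallCount int

// expensivePlan stands in for the external component that produces a
// value for a resource instance; it is not memoized itself.
func expensivePlan(addr string) (string, error) {
	planCallCount++
	return "planned:" + addr, nil
}

// resourceInstance provides memoization on behalf of the external
// component: Value returns the same result no matter how many times
// it is called.
type resourceInstance struct {
	addr  string
	value func() (string, error)
}

func newResourceInstance(addr string) *resourceInstance {
	ri := &resourceInstance{addr: addr}
	// sync.OnceValues guarantees the wrapped function runs at most
	// once, even with concurrent callers.
	ri.value = sync.OnceValues(func() (string, error) {
		return expensivePlan(ri.addr)
	})
	return ri
}

func (ri *resourceInstance) Value() (string, error) {
	return ri.value()
}

func main() {
	ri := newResourceInstance("null_resource.example")
	v1, _ := ri.Value()
	v2, _ := ri.Value()
	fmt.Println(v1 == v2, planCallCount)
}
```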
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
ResultRef[*states.ResourceInstanceObjectFull] is our current canonical
representation of the "final result" of applying changes to a resource
instance, and so it is going to appear a bunch in the plan and apply
engines.
The generic type is clunky to read and write though, so for this one in
particular we'll offer a more concise type alias that we can use for parts
of the API that are representing results from applying.
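As a rough illustration of the shape of this change (the type and alias names here are simplified stand-ins, not the real ones), the alias just hides the generic instantiation:

```go
package main

import "fmt"

// ResultRef is a stand-in for the generic reference type; the real one
// lives in the execution-graph packages and carries more machinery.
type ResultRef[T any] struct {
	result T
}

// ResourceInstanceObjectFull stands in for the real states type.
type ResourceInstanceObjectFull struct {
	ResourceType string
}

// AppliedResourceInstanceRef is the kind of concise alias described
// above: the same type with a shorter spelling, so APIs that represent
// results from applying are easier to read and write.
type AppliedResourceInstanceRef = ResultRef[*ResourceInstanceObjectFull]

func main() {
	// An alias is fully interchangeable with the instantiated generic
	// type it names.
	var r AppliedResourceInstanceRef = ResultRef[*ResourceInstanceObjectFull]{
		result: &ResourceInstanceObjectFull{ResourceType: "null_resource"},
	}
	fmt.Println(r.result.ResourceType)
}
```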
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is the tool I regularly use when I have a small amount of time to
spare and want to take care of a few easy dependency upgrade tasks.
My original motivation in writing it was to untangle the huge mess of stale
dependencies we'd accumulated by just getting them into a _rough_ order
where I could work through them gradually without upgrading too many things
at once, and so I designed it to make a best-effort topological sort by
just deleting edges heuristically until the dependency graph became
acyclic and then sorting that slightly-pruned graph.
Now that we've got the dependency situation under better control, two other
questions have become more relevant to my regular use of this tool:
- What can be upgraded in isolation without affecting anything else?
- Which collections of modules need to be upgraded together because they
are all interdependent?
The previous version of this dealt with the first indirectly by just
inserting a dividing line before the first module that had prerequisites,
and it didn't deal with the second at all.
This new version is focused mainly on answering those two questions, and
so first it finds any strongly-connected components with more than one
member ("cycles") and reduces them to a single graph node, and then does
all of the remaining work based on those groups so that families of
interdependent modules now just get handled together.
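The grouping step described above can be sketched as a standard strongly-connected-components pass (this is a simplified illustration using Tarjan's algorithm, not the tool's actual code): find the SCCs, and every multi-member component is a "cycle" family whose modules must be upgraded together.

```go
package main

import (
	"fmt"
	"sort"
)

// sccs computes the strongly-connected components of a directed graph
// using Tarjan's algorithm. deps[m] lists the modules that m depends on.
func sccs(deps map[string][]string) [][]string {
	var (
		index   = map[string]int{}
		lowlink = map[string]int{}
		onStack = map[string]bool{}
		stack   []string
		counter int
		out     [][]string
	)
	var strongconnect func(v string)
	strongconnect = func(v string) {
		index[v], lowlink[v] = counter, counter
		counter++
		stack = append(stack, v)
		onStack[v] = true
		for _, w := range deps[v] {
			if _, seen := index[w]; !seen {
				strongconnect(w)
				if lowlink[w] < lowlink[v] {
					lowlink[v] = lowlink[w]
				}
			} else if onStack[w] && index[w] < lowlink[v] {
				lowlink[v] = index[w]
			}
		}
		// v is the root of a component: pop its members off the stack.
		if lowlink[v] == index[v] {
			var comp []string
			for {
				w := stack[len(stack)-1]
				stack = stack[:len(stack)-1]
				onStack[w] = false
				comp = append(comp, w)
				if w == v {
					break
				}
			}
			sort.Strings(comp)
			out = append(out, comp)
		}
	}
	var nodes []string
	for v := range deps {
		nodes = append(nodes, v)
	}
	sort.Strings(nodes)
	for _, v := range nodes {
		if _, seen := index[v]; !seen {
			strongconnect(v)
		}
	}
	return out
}

func main() {
	// a and b depend on each other (a "cycle" group that must be
	// upgraded together); c depends on a; d can be upgraded in isolation.
	deps := map[string][]string{
		"a": {"b"},
		"b": {"a"},
		"c": {"a"},
		"d": {},
	}
	for _, comp := range sccs(deps) {
		fmt.Println(comp)
	}
}
```

Reducing each multi-member component to a single node then makes the condensed graph acyclic by construction, so a plain topological sort over the groups is always possible without the old heuristic edge-deletion step.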
As before this is focused on being minimally functional and useful rather
than being efficient or well-designed, since this is just an optional
helper I use to keep on top of dependency upgrades on a best-effort basis.
I'm proposing to merge this into main just because I've been constantly
rebasing a local branch containing these updates and it's getting kinda
tedious!
I have no expectation that anyone else should be regularly running
this, though if anyone else wants to occasionally work on dependency
upgrades I hope it will be useful to them too.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is a routine upgrade. The only change is a resynchronization of the
fallback bundle of trusted TLS certificates in x509roots, but OpenTofu
does not make use of that bundle.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
These are just routine upgrades. None of the upstream changes here relate
to anything OpenTofu makes use of; the goal is just to get these
relatively-trivial upgrades out of the way because these x/* modules
tend to all ratchet forward together and so this clears the way for
upgrading others that might have more important changes later.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is just a routine dependency upgrade. Upstream there have been a few
small performance improvements that are unlikely to be particularly
beneficial to OpenTofu but shouldn't cause us any problems.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
We typically want our main branch to be on the latest Go release anyway,
but in this case we also intend to backport this to the v1.11 release to
patch GO-2025-4175, as discussed in opentofu/opentofu#3546.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This registry client code is very, very old and in particular predates the
creation of the separate "registry-address" library and OpenTofu's own
"package addrs", so it had its own type for representing module addresses.
It also had some historical confusion about the difference between a module
package and a module, which meant the code was pretty unclear about whether
it was dealing in module packages or module source addresses: in practice
we were always leaving the "Submodule" field empty because the registry
client isn't the component responsible for dealing with modules in package
subdirectories, but this made the code confusing to read and maintain.
This change therefore removes the legacy package regsrc altogether and
adopts regaddr.ModulePackage as the canonical representation of registry
module package addresses. This makes the module registry client agree
with the module installer about what vocabulary types we're using, and
so we no longer need shim code for converting between the two
representations.
To make it abundantly clear that the registry client deals only in package
addresses, the client methods and arguments are renamed to reflect that,
and the callers updated to match.
While in the area anyway I also adjusted a few other things that were not
aligned with our modern conventions, but tried to keep it limited to
relatively localized changes for ease of review. There's probably room for
more improvement here in subsequent commits.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is just a minimal set of changes to introduce uses of the new
states.ResourceInstanceObjectFull to all of the leaf functions related to
planning managed and data resource instances.
The main goal here was just to prove that we'd reasonably be able to look
up objects with the new type in all of the places we'd need to. We're
planning some more substantial changes to the planning engine in future
commits (e.g. to generate execution graphs instead of traditional plans)
and so we'll plumb this in better as part of that work.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
Previously we had a gap here where we only knew the resource type to use
when there was a "desired state" object to get it from, which isn't true
when we're planning to destroy an undesired object.
The new extended model for resource instance object state gives us direct
access to the resource type name, so we can now handle that properly.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
Instead of states.ResourceInstanceObject, we'll now use the "full"
representation added in a recent commit. This new representation is a
better fit for execgraph because it tracks all of the relevant information
about a resource instance object inline without assuming that the object
is always used in conjunction with a pointer to the entire state.
We're not really making any special use of those new capabilities yet. This
is mostly just mechanical updates to refer to the new type, and to fill
in values for the extra fields as far as that's possible without doing
further refactoring.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
Our existing state model is designed with the assumption that anyone who
is accessing a ResourceInstanceObjectSrc will be doing so from an object
representing the entire state at once, and so they can separately fetch
information about the resource as a whole and the provider instance it
belongs to if needed.
In our new language runtime we want to be able to just pass around pointers
to individual resource instance object states rather than the entire state
data structure, and so these two new "full" variants extend the object
representation with a provider instance address and the name of the
resource type within that provider.
This also adopts a slightly different representation of the value itself,
making use of some newer Go language features, so that we don't need quite
so much duplication between the "source" and decoded versions of this
model. In practice it's only the Value field that needs to vary between
the two, because that's the part where we require provider schema
information to decode.
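One way to read "newer Go language features" here is a shared generic base where only the value representation varies between the two variants. This is a hedged sketch: the real field and type names in package states may differ, and the decoded variant really holds a cty.Value (a string stands in here to keep the example self-contained).

```go
package main

import "fmt"

// resourceInstanceObject is a hypothetical shared base: everything
// except the value representation is common to both variants.
type resourceInstanceObject[V any] struct {
	// ProviderInstanceAddr and ResourceType are the "full" additions,
	// carried inline rather than looked up from the whole state.
	ProviderInstanceAddr string
	ResourceType         string
	Value                V
}

// The "source" variant keeps the value as raw bytes, because decoding
// requires provider schema information...
type ResourceInstanceObjectSrcFull = resourceInstanceObject[[]byte]

// ...while the decoded variant holds the decoded value. Only the Value
// field differs, so no other fields need duplicating.
type ResourceInstanceObjectFull = resourceInstanceObject[string]

func main() {
	src := ResourceInstanceObjectSrcFull{
		ProviderInstanceAddr: `provider["example.com/foo/null"]`,
		ResourceType:         "null_resource",
		Value:                []byte(`{"id":"x"}`),
	}
	// "Decoding" swaps only the value representation; every other
	// field carries over unchanged.
	full := ResourceInstanceObjectFull{
		ProviderInstanceAddr: src.ProviderInstanceAddr,
		ResourceType:         src.ResourceType,
		Value:                string(src.Value),
	}
	fmt.Println(full.ResourceType, full.Value)
}
```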
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is admittedly "scope creep" for a job that was originally intended
only to run the end-to-end tests. However, we have a separate e2etest job
specifically because some of these tests interact with the live OpenTofu
Registry and certain GitHub repositories, and isolating these external
dependencies into their own job means that outages of any of these
external services should affect only this one test job.
The TF_ACC=1-constrained tests in packages initwd and registry are set up
that way because they too interact with OpenTofu Registry and GitHub
repositories, so grouping these together retains the idea of limiting
these external dependencies to only one job while giving us a little more
test coverage for our PR checks.
The motivation for doing this now is that both of these packages had
acceptance tests that had been broken by changes in the past and we didn't
notice because nothing was routinely running these tests:
- package registry had been failing ever since OpenTofu existed because
one of its tests seems to have been depending on an undocumented registry
protocol feature that OpenTofu Registry has never implemented
- package initwd got broken more recently by a change to use a dependency
inversion style for the module installer's use of the module registry client
and package fetcher, but these particular tests were not passing in
working clients for the module installer to use.
Both of those problems were already fixed in earlier commits, and so this
is just an attempt to avoid similar problems happening again in future.
(This doesn't address the more common case of acceptance tests that require
live credentials to access a service that doesn't support anonymous access.
Those will still need to run manually in development environments because
we cannot pass live credentials to a job that runs in response to
third-party pull requests.)
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
TestAccLookupModuleVersions was previously testing for a property in the
registry's JSON response that is not part of the documented "modules.v1"
protocol.
Presumably this was covering some extension property in our predecessor's
public registry implementation, which therefore made this test pass in its
original form interacting with that specific registry implementation.
However, OpenTofu does not depend on this extension property in any way
outside of this test, so there's no benefit to testing for this behavior
in OpenTofu. Regardless, the OpenTofu Registry doesn't implement this
extension so this test cannot pass in its current form running against that
implementation of the registry protocol.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
Earlier work in opentofu/opentofu#2661 caused the module installer to
treat the registry client and the package fetcher as optional dependencies
so that local-only tests (by far our common case across the whole codebase)
could avoid instantiating the registry and go-getter client code and risk
accidentally depending on network services.
However, at the time I didn't notice that this package had TF_ACC=1
acceptance tests, and so although I made systematic updates across all
the tests which appeared to make them pass, several were actually being
skipped and so I didn't notice that I hadn't updated them well enough.
This gets them all passing again by giving the ones that intentionally
access the real OpenTofu Registry and some test-only GitHub repositories
the clients they need to do that.
In a future commit I intend to add this package to our end-to-end test
job that runs as part of our pull request checks so that we're less likely
to forget to update these tests under future maintenance.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
To support the workflow of saving a plan to disk and applying it on some
other machine we need to be able to represent the execution graph as a byte
stream and then reload it later to produce an equivalent execution graph.
This is an initial implementation of that, based on the way the execgraph
package currently represents execution graphs. We may change that
representation more in future as we get more experience working in the new
architecture, but this is intended as part of our "walking skeleton" phase
where we try to get the new architecture working end-to-end with simple
configurations as soon as possible to help verify that we're even on the
right track with this new approach, and try to find unknown unknowns that
we ought to deal with before we get too deep into this.
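A minimal round-trip of the kind described can be sketched like this, using encoding/gob over a drastically simplified graph shape (the real execgraph representation and wire format are different; this only illustrates the marshal-then-reload flow):

```go
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
)

// graph is a simplified stand-in for an execution graph: a set of named
// operations plus edges describing ordering dependencies.
type graph struct {
	Ops   []string
	Edges map[string][]string
}

// marshalGraph turns a graph into a byte stream suitable for embedding
// in a saved plan file.
func marshalGraph(g graph) ([]byte, error) {
	var buf bytes.Buffer
	if err := gob.NewEncoder(&buf).Encode(g); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

// unmarshalGraph reloads an equivalent graph, possibly on a different
// machine than the one that produced the plan.
func unmarshalGraph(raw []byte) (graph, error) {
	var g graph
	err := gob.NewDecoder(bytes.NewReader(raw)).Decode(&g)
	return g, err
}

func main() {
	g := graph{
		Ops:   []string{"plan", "apply"},
		Edges: map[string][]string{"apply": {"plan"}},
	}
	raw, err := marshalGraph(g)
	if err != nil {
		panic(err)
	}
	g2, err := unmarshalGraph(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(g2.Ops), g2.Edges["apply"][0])
}
```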
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is primarily to clear naive security scanner reports for GO-2025-4135,
which is a potential denial of service if attacker-controlled software can
send malformed packets back to OpenTofu through the SSH Agent proxy
channel.
We are not considering this a significant vulnerability for OpenTofu
because the SSH Agent forwarding pattern already assumes that software on
the remote system is trusted not to misuse the keys that are exposed through
the proxy channel.
Due to the Go team's policy of ratcheting upgrades between all of the
golang.org/x/* modules, this also requires upgrading three other modules.
I have reviewed the changes in those, and most appear to not affect
OpenTofu at all. There are some performance improvements to the HTTP2 and
QUIC implementations in x/net, but they don't seem to be a big concern for
us.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is some minimal glue to help the new runtime use the providers that
were gathered up by the existing logic in the "command" package.
This is cheating a little because it relies on "tofu init" still
using the old approach just enough to find out which providers are needed
and get them installed, but our current focus is on the main plan and
apply phases and so it's convenient to be able to leave that part untouched
for now and return to improve it later, once we have more of the
fundamentals in place.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is a relatively uninteresting milestone where it's possible to load
and plan a root module that contains nothing except local values and
output values.
The module loader currently supports only local sources and the plugin
APIs just immediately return errors, so configurations more complicated
than that are likely to just fail immediately with one or more errors.
We'll gradually improve on this in later commits.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
None of these actually work yet, but this satisfies the new-style config
loader enough for it to return a real error instead of immediately
panicking.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
We currently re-export evalglue.ExternalModules, but the method in that
interface requires returning evalglue.UncompiledModule and so we need to
export that too or else it's impossible to actually implement the interface
from outside this family of packages.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This doesn't actually work yet. It's just to sketch out a minimal overall
sequence of steps to make this behave somewhat like the main implementation
of "tofu plan", and then we'll make it work better in subsequent commits.
The main omission as of this commit is that we don't yet pass module,
provider, and provisioner dependency access objects in the EvalContext,
and so config loading immediately fails trying to request the root module
from a nil object.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
To facilitate early development and testing of the new language runtime
we're introducing a temporary mechanism to opt in to using the new codepaths
based on an environment variable. This environment variable is effective
only for experiment-enabled builds of OpenTofu, and so it will be
completely ignored by official releases of OpenTofu.
This commit just deals with the "wiring" of this new mechanism, without
actually connecting it with the new language runtime yet. The goal here
is to disturb existing codepaths as little as possible to minimize both
the risk of making this change and the burden this causes for ongoing
maintenance unrelated to work on the new language runtime.
This strategy of switching at the local backend layer means that we will
have some duplicated logic in the experimental functions compared to the
non-experimental functions, which is an intentional tradeoff to allow us
to isolate what we're doing so we don't churn existing code while we're
still in this early exploration phase. In a later phase of the language
runtime project we may pivot to a different approach which switches at
a deeper point in the call stack, but for now we're keeping this broad
to give us flexibility.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>