Before Go 1.21 we relied on third-party and custom tooling to manage which
version of Go we were using for development, testing, and release builds.
However, modern Go toolchains have built-in support for selecting an
appropriate toolchain based on metadata in the go.mod file, and so we had
previously removed most uses of the .go-version file and were maintaining
it only as a remnant of the old arrangement.
This replaces our last remaining use of the ".go-version" file with a small
tool that extracts the Go version from the go.mod file, and then removes
the ".go-version" file completely. The "go" and "toolchain" directives in
go.mod are now our single source of truth about which version of Go we're
currently using for OpenTofu.
(It may be possible to rework the Dockerfile for the consul backend to
handle Go version selection in a different way so that we'd no longer need
even this "selected-go-version" tool, but that's beyond the scope of this
commit, which leaves the Dockerfile unmodified to ensure that the effective
testing behavior for that backend is unchanged for now.)
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This minor revision includes a fix for a problem where a crafted zip file
can cause a denial of service by taking a very long time to open the first
file in the archive, which can potentially be triggered by a
maliciously-crafted provider or module distribution archive fetched during
"tofu init".
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is just enough to connect all of the apply engine parts together into
a functional whole which can at least attempt to apply a plan. There are
various gaps here that mean this doesn't fully work in practice, but the
goal of this commit is just to get all of the components connected together
in their current form; subsequent commits will gradually build on this to
make it work better.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is just some initial sketching of how the applying package might do
its work. It's not yet finished, and I'm expecting to change the execution
graph operations in future commits now that the main plan/apply flow is
plumbed in and it's easier to test things end-to-end.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is just enough to get the plan object and check that it has the
execution graph field populated, and to load the configuration. More to
come in later commits.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
We previously added a check like this to the Meta.Backend method, but we
use Meta.BackendForLocalPlan instead when we're applying a saved plan, so
we need to make sure the setting gets propagated here too or else the
experimental codepath cannot be entered by the "tofu apply" command.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This makes explicit the relationships between resource instances so that
they'll be honored automatically when we actually execute the graph.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
The dependencies of a resource instance are based primarily on the marks
in the configuration value, and so we'll decide them as part of the same
function that decides the configuration value.
We then need to propagate that result through all of the intermediate
layers to the code that actually constructs the DesiredResourceInstance
object, at which point it's then available to the planning engine. In a
future commit the planning engine will use it to record the resource
instance dependencies explicitly in the generated execution graph.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
The whole point of this helper is to encapsulate the concern of having
exactly one open/close pair for each distinct provider instance address, so
in order to be effective it must actually save its results in the same
map it was already checking!
This was just an oversight in the original implementation that became more
apparent once we had a visualization of execution graphs, since that made
it far more obvious that we were previously planning to open a separate
provider instance per resource instance.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This should be just enough saved plan file support to help us get an
execution graph saved so that it can be used in a subsequent apply step,
once we implement that.
As usual with the new runtime, the traditional plan file format is mainly
just acting as a vessel to transport the execution graph and so we need to
make it valid enough that the plan loader can load it but don't need to
make it fully consistent. Therefore for now the config snapshot is just
stubbed out as empty, with the assumption that our initial "walking
skeleton" implementation will just use the loose config files on disk for
now.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
In order to produce a valid saved plan file we need to save information
about which provider instance each resource instance belongs to. We're
probably not actually going to use this information in the new
implementation because the same information is captured as part of the
execution graph, but we do at least need to make this valid enough that the
plan file loading code will accept it, because right now (for experimental
purposes) we're smuggling the execution graph through our traditional plan
file format as a sidecar data structure.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
As with the traditional runtime, we need to tolerate invalid plans from
providers that announce that they are implemented with the legacy Terraform
Plugin SDK, because that SDK is incapable of producing valid plans.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
The RPC-based provider client panics if it isn't given a non-nil value for
ProviderMeta. We aren't yet ready to send a "real" ProviderMeta value, but
for now we'll just stub it out as a null value so that we have just enough
to get through the provider client code without crashing.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
The code which tries to ensure that provider clients get closed when they
are no longer needed was not taking into account that a provider client
might not have actually been started at all, due to an error.
It's debatable whether we ought to start this background goroutine at all
in that case, but this is an intentionally-minimal change for now until we
can take the time to think harder about how the call to
reportProviderInstanceClosed should work there (and whether it should
happen at all). This change ensures that it'll still happen in the same
way as it did before.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is just a routine upgrade that should not significantly change
OpenTofu's behavior because it only affects the websocket and http2 server
implementations. OpenTofu does not directly use websockets at all, and only
uses an http2 server as part of the temporary server used by "tofu login"'s
OAuth implementation; even that uses the copy of this library vendored
into the Go standard library rather than the module dependency.
Due to the Go team's policy of ratcheting dependencies between these
modules, this also requires an upgrade of golang.org/x/crypto, but the only
change in that module is relevant only in "fips140=only" mode, which
OpenTofu already does not support for various other reasons.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
In e879e9060f I introduced the manually-written memoization that was
previously here, as a quick fix for a problem I noticed while working on
something else.
However, using sync.Mutex directly here is not correct because it makes it
possible for this function to deadlock if used incorrectly elsewhere in the
system. The "grapheval" utilities are there to avoid exactly that by making
a call fail immediately with an error if a self-dependency problem arises.
Therefore this will now use grapheval.Once to better encapsulate the
memoization. This both ensures that we cooperate well with the rest of the
system using grapheval and requires less boilerplate inside the Value
method because grapheval.Once is already tailored for exactly this sort of
memoization.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
We previously already did some partial skipping of provider reference
validation when there are modules in the tree that failed to load, but we
were still running the main logic that checks whether the providers passed
between module boundaries are consistent.
Because that validation involves checking consistency between different
modules, it tends to generate spurious errors when run against a
partially-loaded module tree. Therefore we must also skip the other
validation steps whenever at least one module failed to load, so that the
reported diagnostics describe only the problem loading the module, without
additional misleading reports about downstream problems caused by the
module's content being incomplete due to those errors.
This one is kinda awkward to write a test for because it's a very specific
interaction that seems unlikely to arise again, but just to have _some_
representation of this historical problem I wrote a simple test that
intentionally makes one module fail to load and then tests that the result
only contains the diagnostic related to that intentional failure. Before
this fix that test would fail because of the spurious additional errors
produced by the functions we're now skipping.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This family of modules tends to ratchet forward its interdependencies even
when no upgrade is actually needed, so keeping the "easy" ones up to date
makes it more likely that we'd be able to upgrade the ones that require
more care and review if a security concern arises.
The four modules updated here only include interdependency ratchets and
changes to functionality that OpenTofu does not use, so this should not
affect OpenTofu's external behavior at all.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
We previously adopted the new "tools" mechanism for the third-party tools
we use for go:generate, etc. However, it's also acceptable to include tools
from our own module in this list, which then makes it possible to run them
using the same "go tool" shorthand.
For example, it's now possible to run "go tool protobuf-compile" to rebuild
the protocol buffers schemas, or "go tool tofu" as a shorthand way to
compile and run OpenTofu CLI itself.
It remains possible to run these tools in the same ways they used to
work. This is just a new way to get the same results through a more modern
Go toolchain feature.
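For illustration, a go.mod using Go 1.24's tool directive for both
in-module and third-party tools might look like this; the module paths
shown are examples, not necessarily the exact entries in this change:

```
module github.com/opentofu/opentofu

go 1.24.0

tool (
	github.com/opentofu/opentofu/cmd/tofu
	golang.org/x/tools/cmd/stringer
)
```

With entries like these, "go tool tofu" builds and runs the in-module
command just as "go run ./cmd/tofu" would.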
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>