This provides at least a partial implementation of every resource instance
action except the ones involving "forget" actions.
However, we don't quite have all of the building blocks needed to
properly model "delete" yet, because handling those actions properly means
we need to generate "backwards" dependency edges to preserve the guarantee
that destroying happens in reverse order to creating. Therefore the main
outcome of this commit is to add a bunch of FIXME and TODO comments
explaining where the known gaps are, with the intention of then filling
those gaps in later commits once we devise a suitable strategy to handle
them.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
These both effectively had the behavior of ResourceInstancePrior embedded
in them, reading something from the state and changing its address as a
single compound operation.
In the case of ManagedDepose we need to split these up for the
CreateThenDestroy variant of "replace", because we want to make sure the
final plans are valid before we depose anything and we need the prior state
to produce the final plan. (Actually using that will follow in a subsequent
commit.)
This isn't actually necessary for ManagedChangeAddr, but splitting it keeps
these two operations consistent in how they interact with the rest of the
operations.
Due to how the existing states.SyncState works we're not actually making
good use of the data flow of these objects right now, but in a future world
where we're no longer using the old state models hopefully the state API
will switch to an approach that's more aligned with how the execgraph
operations are modeled.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is just enough to make it possible to destroy previously-created
objects by removing them from the configuration. For now this intentionally
duplicates some of the logic from the desired instance planning function
because that more closely matches how this was dealt with in the
traditional runtime. Once all of the managed-resource-related planning
functions are in a more complete state we'll review them and try to extract
as much functionality as possible into a common location that can be shared
across all three situations. For now though it's more important that we be
able to quickly adjust the details of this code while we're still nailing
down exactly what the needed functionality is.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
We're currently being intentionally cautious about adding too many tests
while these new parts of the system are still evolving and changing a lot,
but the execGraphBuilder behavior is hopefully self-contained enough for
a small set of basic tests to be more helpful than hurtful.
We should extend this test, and add other test cases that involve more
complicated interactions between different resource instances of different
modes, once we feel that these new codepaths have reached a more mature
state where we're more focused on localized maintenance than on broad
system design exploration.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
A DeleteThenCreate action decomposes into two plan-then-apply sequences,
representing first the deletion and then the creation. However, we order
these in such a way that both plans need to succeed before we begin
applying the "delete" change, so that a failure to create the final plan
for the "create" change can prevent the object from being destroyed.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
Because this operation represents the main externally-visible side effects
in the apply phase, in some cases we'll need to force additional
constraints on what must complete before executing it. For example, in
a DestroyThenCreate operation we need to guarantee that the apply of the
"destroy" part is completed before we attempt the apply for the "create"
part.
(The actual use of this for DestroyThenCreate will follow in a later
commit.)
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
Previously the generated execgraph was naive and only really supported
"create" changes. This commit has some initial work on generalizing
that, though it's not yet complete and will continue in later commits.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
Whenever we plan for a resource instance object to switch from one instance
address to another, we'll use this new op instead of ResourceInstancePrior
to perform the state modification needed for the rename before returning
the updated object.
We combine the rename and the state read into a single operation to ensure
that they can appear to happen atomically as far as the rest of the system
is concerned, so that there's no interim state where either both or neither
of the address bindings are present in the state.
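The atomicity property can be sketched with a mutex-guarded map. The `object` and `stateMap` types here are hypothetical stand-ins for the real state model, invented only to show the shape of the operation.

```go
package main

import (
	"fmt"
	"sync"
)

// object and stateMap are hypothetical stand-ins for the real state model.
type object struct{ value string }

type stateMap struct {
	mu   sync.Mutex
	objs map[string]*object
}

// changeAddr performs the read and the rename in one critical section, so
// no other reader can observe an interim state where both or neither of
// the address bindings are present.
func (s *stateMap) changeAddr(oldAddr, newAddr string) (*object, bool) {
	s.mu.Lock()
	defer s.mu.Unlock()
	obj, ok := s.objs[oldAddr]
	if !ok {
		return nil, false
	}
	delete(s.objs, oldAddr) // remove the old binding...
	s.objs[newAddr] = obj   // ...and add the new one before unlocking
	return obj, true
}

func main() {
	s := &stateMap{objs: map[string]*object{"a.old": {value: "x"}}}
	obj, ok := s.changeAddr("a.old", "a.new")
	fmt.Println(ok, obj.value)
	_, stillOld := s.objs["a.old"]
	_, nowNew := s.objs["a.new"]
	fmt.Println(stillOld, nowNew)
}
```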
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
In order to describe operations against deposed objects for managed
resource instances we need to be able to store constant states.DeposedKey
values in the execution graph.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
The temporary placeholder code was relying only on the
DesiredResourceInstance object, which was good enough for proving that this
could work at all and had the advantage of being a new-style model with
new-style representations of the provider instance address and the other
resource instances that the desired object depends on.
But that isn't enough information to plan anything other than "create"
changes, so now we'll switch to using plans.ResourceInstanceChange as the
main input to the execgraph building logic, even though for now that means
we need to carry a few other values alongside it to compensate for the
shortcomings of that old model designed for the old language runtime.
So far this doesn't actually change what we do in response to the change
so it still only supports "create" changes. In future commits we'll make
the execGraphBuilder method construct different shapes of graph depending
on which change action was planned.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
The previous commit split the handling of provider instance items into
separate dependency-analysis and execgraph construction steps, with the
intention of avoiding the need for the execGraphBuilder to directly
interact with the planning oracle and thus indirectly with the evaluator.
Overall the hope is that execGraphBuilder will be a self-contained object
that doesn't depend on anything else in the planning engine so that it's
easier to write unit tests for it that don't require creating an entire
fake planning context.
However, on reflection that change introduced an extra handoff from the
execGraphBuilder to _another part of itself_, which is unnecessary
complexity because it doesn't serve to separate any concerns.
This is therefore a further simplification that returns to just doing the
entire handling of a provider instance's presence in the execution graph
only once we've decided that at least one resource instance will
definitely use the provider instance during the apply phase.
There is still a separation of concerns where the planGlue type is
responsible for calculating the provider dependencies and then the
execGraphBuilder is only responsible for adding items to the execution
graph based on that information. That separation makes sense because
planGlue's job is to bridge between the planning engine and the evaluator,
and it's the evaluator's job to calculate the dependencies for a provider
instance, whereas execGraphBuilder is the component responsible for
deciding exactly which low-level execgraph operations we'll use to describe
the situation to the apply engine.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
Previously we did _all_ of the work related to preparing execgraph items
for provider instances as a side-effect of the ProviderInstance method
of execGraphBuilder.
Now we'll split it into two parts: the first time we encounter each
provider instance during planning we'll proactively gather its dependencies
and stash them in the graph builder. However, we'll still wait until the
first request for the execution subgraph of a provider instance before
we actually insert that into the graph, because that then effectively
excludes from the apply phase any provider instances that aren't needed for
the actual planned side-effects. In particular, if all of the resource
instances belonging to a provider instance turn out to have "no-op" plans
then that provider instance won't appear in the execution graph at all.
An earlier draft of this change did the first dependency capturing step via
a new method of planGlue called by the evaluator, but after writing that
I found it unfortunate to introduce yet another inversion of control
situation where readers of the code just need to know and trust that the
evaluator will call things in a correct order -- there's already enough
of that for resource instances. I therefore settled on the compromise of
having the ensureProviderInstanceDependencies calls happen as part of the
linear code for handling each resource instance object, which makes it far
easier for a future maintainer to verify that we're upholding the contract
of calling ensureProviderInstanceDependencies before asking for an
execgraph result, while still allowing us to handle that step generically
instead of duplicating it into each resource-mode-specific handler.
While working on this I noticed a slight flaw in our initial approach to
handling ephemeral resource instances in the execution graph, which is
described inline as a comment in planDesiredEphemeralResourceInstance.
We'll need to think about that some more and deal with it in a future
commit.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
As we've continued to build out the execution graph behavior during the
plan and apply phases we've found that the execgraph package isn't really
the best place for higher-level ideas like singleton provider client
operations, because that package doesn't have enough awareness of the
planning process to manage those concerns well.
The mix of both high- and low-level concerns in execgraph.Builder was also
starting to make it have a pretty awkward shape where some of the low-level
operations needed to be split into two parts so that the higher-level parts
could call them while holding the builder's mutex.
In response to that here we split the execgraph.Builder functionality so
that the execgraph package part is concerned only with the lowest-level
concern of adding new stuff to the graph, without any attempt to dedupe
things and without care for concurrency. The higher-level parts then live
in a higher-level wrapper in the planning engine's own package, which
absorbs the responsibility for mutexing and managing singletons.
For now the new type in the planning package just has lightly-adapted
copies of existing code just to try to illustrate what concerns belong to
it and how the rest of the system ought to interact with it. There are
various FIXME/TODO comments describing how I expect this to evolve in
future commits as we continue to build out more complete planning
functionality.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
These were previously using "applying:" as the common prefix, but I found
that confusing in practice because only the "ManagedApply" operation is
_actually_ applying changes.
Instead then we'll identify these trace logs as belonging to the apply
phase as a whole, to try to be a little clearer about what's going on.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
Here we ask the provider to produce a plan based on the finalized
configuration taken from the "desired state", and return it as a final plan
object only if the provider request succeeds and returns something that is
valid under our usual plan consistency rules.
We then ask the provider to apply that plan, and deal with whatever it
returns. The apply part of this is not so complete yet, but there's enough
here to handle the happy path where the operation succeeds and the provider
returns something valid.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This'll make it clearer which provider we're using to ask each question,
without making a human reader try to work backwards through the execution
graph data flow.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
The statements were in the wrong order, so we were adding the
ProviderInstanceConfig operation before checking whether we'd already added
the operations for a particular provider instance, and thus the generated
execution graph contained redundant, unused ProviderInstanceConfig ops.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
These were useful during initial development, but the execgraph behavior
now seems to be doing well enough that these noisy logs have become more
annoying than helpful, since the useful context now comes from the logging
within engine/apply's implementation of the operations.
However, this does introduce some warning logs to draw attention to the
fact that resource instance postconditions are not implemented yet.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
There is currently a bug in the apply engine that's causing a
self-dependency error during promise resolution, which was a good reminder
that we'd not previously finished connecting all of the parts to be able
to unwind such problems into a useful error message.
Due to how the work is split between the apply engine, the evaluator, and
the execgraph package it takes some awkward back-and-forth to get all of
the needed information together into one place. This compromise aims to
do as little work as possible in the happy path and defer more expensive
analysis until we actually know we're going to report an error message.
In this case we can't really avoid proactively collecting the request IDs
because we don't know ahead of time what (if anything) will be involved in
a promise error, but only when actually generating an error message will
we dig into the original source execution graph to find out what each of
the affected requests was actually trying to do and construct a
human-friendly summary of each one.
This was a bit of a side-quest relative to the current goal of just getting
things basically working with a simple configuration, but it's useful
for figuring out what's going on with the current bug (which will be fixed
in a future commit) and will probably be useful when dealing with future
bugs too.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
During execution graph processing we need to carry addressing information
along with objects so that we can model changes of address as data flow
rather than as modifications to global mutable state.
In previous commits we arranged for other relevant types to track this
information, but the representation of a resource instance object was not
included because that was a messier change to wire in. This commit deals
with that mess: it introduces exec.ResourceInstanceObject to associate a
resource instance object with addressing information, and then adopts that
as the canonical representation of a "resource instance object result"
throughout all of the execution graph operations.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
The first pass of this was mainly just to illustrate how the overall model
of execution graphs might work, using just a few "obvious" opcodes as a
placeholder for the full set.
Now that we're making some progress towards getting this to actually work
as part of a real apply phase this reworks the set of opcodes in a few
different ways:
- Some concepts that were previously handled as special cases in the
execution graph executor are now just regular operations. The general
idea here is that anything that is fallible and/or has externally-visible
side-effects should be modelled as an operation.
- The set of supported operations for managed resources should now be
complete enough to describe all of the different series of steps that
result from different kinds of plan. In particular, this now has the
building blocks needed to implement a "create then destroy" replace
operation, which includes the step of "deposing" the original object
until we've actually destroyed it.
- The new package "exec" contains vocabulary types for data flowing between
operations, and an interface representing the operations themselves.
The types in this package allow us to carry addressing information along
with objects that would not normally carry that directly, which means
we don't need to propagate that address information around the execution
graph through side-channels.
(A notable gap as of this commit is that resource instance objects don't
have address information yet. That'll follow in a subsequent commit just
because it requires some further rejiggering.)
- The interface implemented by the applying engine and provided to the
execution graph now more directly relates to the operations described
in the execution graph, so that it's considerably easier to follow how
the graph built by the planning phase corresponds to calls into that
interface in the applying phase.
- We're starting to consider OpenTofu's own tracking identifiers for
resource instances as separate from how providers refer to their resource
types, so that the opinions about how those two relate can be more
centralized in future. In the traditional runtime the rules about that
relationship tended to be spread haphazardly throughout the codebase, and
in particular implementing migration between resource types was
challenging because it creates a situation where the desired resource
type and the current resource type _intentionally_ differ.
This is a large commit because the execution graph concepts are
cross-cutting between the plan and apply engines, but outside of execgraph
the changes here are mainly focused on just making things compile again
with as little change as possible, and then subsequent commits will get
the other packages into a better working shape again.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is just enough to connect all of the apply engine parts together into
a functional whole which can at least attempt to apply a plan. There are
various gaps here that make this not fully work in practice, but the goal
of this commit is to just get all of the components connected together in
their current form and then subsequent commits will gradually build on this
to make it work better.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is just some initial sketching of how the applying package might do
its work. Not yet finished, and I'm expecting to change the execution
graph operations in future commits now that the main plan/apply flow is
plumbed in and it's easier to test things end-to-end.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is just enough to get the plan object and check that it has the
execution graph field populated, and to load the configuration. More to
come in later commits.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This makes explicit the relationships between resource instances so that
they'll be honored automatically when we actually execute the graph.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
The whole point of this helper is to encapsulate the concern of having
exactly one open/close pair for each distinct provider instance address, so
in order to be effective it must actually save its results in the same
map it was already checking!
This was just an oversight in the original implementation that became more
apparent after having a visualization of execution graphs, since that made
it far more obvious that we were previously planning to open a separate
provider instance per resource instance.
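A minimal sketch of the fixed memoization pattern, using a hypothetical cut-down `builder` type (the real graph builder is more involved; `opens` here just counts how many open/close pairs got added):

```go
package main

import "fmt"

// builder is a hypothetical reduction of the real graph builder.
type builder struct {
	providerInstances map[string]int
	opens             int
}

// providerInstance returns the graph reference for a provider instance,
// building the open/configure/close operations only on first request.
// The oversight described above was the missing map write: without it,
// every caller got a fresh open/close pair.
func (b *builder) providerInstance(addr string) int {
	if ref, ok := b.providerInstances[addr]; ok {
		return ref // reuse the previously-built subgraph
	}
	b.opens++ // stands in for inserting the open/close ops into the graph
	ref := b.opens
	b.providerInstances[addr] = ref // the fix: actually record the result
	return ref
}

func main() {
	b := &builder{providerInstances: map[string]int{}}
	r1 := b.providerInstance(`provider["aws"]`)
	r2 := b.providerInstance(`provider["aws"]`)
	fmt.Println(r1 == r2, b.opens)
}
```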
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
In order to produce a valid saved plan file we need to save information
about which provider instance each resource instance belongs to. We're
probably not actually going to use this information in the new
implementation because the same information is captured as part of the
execution graph, but we do at least need to make this valid enough that the
plan file loading code will accept it, because right now (for experimental
purposes) we're smuggling the execution graph through our traditional plan
file format as a sidecar data structure.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
As with the traditional runtime, we need to tolerate invalid plans from
providers that announce that they are implemented with the legacy Terraform
Plugin SDK, because that SDK is incapable of producing valid plans.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
The RPC-based provider client panics if it isn't given a non-nil value for
ProviderMeta. We aren't yet ready to send a "real" ProviderMeta value, but
for now we'll just stub it out as a null value so that we have just enough
to get through the provider client code without crashing.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
The code which tries to ensure that provider clients get closed when they
are no longer needed was not taking into account that a provider client
might not have actually been started at all, due to an error.
It's debatable whether we ought to start this background goroutine in that
case at all, but this is an intentionally-minimal change for now until we
can take the time to think harder about how the call to
reportProviderInstanceClosed should work (and whether it should happen at
all) in that case. This change ensures that it'll still happen in the
same way as it did before.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is a _very_ early, minimal plumbing of the execution graph builder
into the experimental planning engine.
The goal here is mainly just to prove the idea that the planning engine can
build execution graphs using the execgraph.Builder API we currently have.
This implementation is not complete yet, and also we are expecting to
rework the structure of the planning engine later on anyway so this initial
work focuses on just plumbing it in to what we had as straightforwardly
as possible.
This is enough to get a serialized form of _an_ execution graph included
in the generated plan, though since we don't have the apply engine
implemented we don't actually use it for anything yet.
In subsequent commits we'll continue building out the graph-building logic
and then arrange for the apply phase to unmarshal the saved execution graph
and attempt to execute it, so we can hopefully see a minimal version of all
of this working end-to-end for the first time. But for now, this was mainly
just a proof-of-concept of building an execution graph and capturing it
into its temporary home in the plans.Plan model.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
ResultRef[*states.ResourceInstanceObjectFull] is our current canonical
representation of the "final result" of applying changes to a resource
instance, and so it is going to appear a bunch in the plan and apply
engines.
The generic type is clunky to read and write though, so for this one in
particular we'll offer a more concise type alias that we can use for parts
of the API that are representing results from applying.
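The shape of this is roughly as follows; all of the names here (`ResultRef`, `ResourceInstanceObjectFull`, and the alias name `ObjectResultRef`) are simplified stand-ins invented for illustration, not the actual identifiers:

```go
package main

import "fmt"

// ResultRef is a hypothetical stand-in for the generic reference to the
// eventual result of an operation.
type ResultRef[T any] struct{ Result T }

// ResourceInstanceObjectFull stands in for the full state representation.
type ResourceInstanceObjectFull struct{ Status string }

// ObjectResultRef (name invented here) is a concise alias for the clunky
// generic spelling that would otherwise appear throughout plan and apply.
type ObjectResultRef = ResultRef[*ResourceInstanceObjectFull]

func main() {
	ref := ObjectResultRef{
		Result: &ResourceInstanceObjectFull{Status: "ready"},
	}
	fmt.Println(ref.Result.Status)
}
```

Because it's a type alias rather than a defined type, values of the alias and of the full generic spelling are interchangeable with no conversion.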
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is just a minimal set of changes to introduce uses of the new
states.ResourceInstanceObjectFull to all of the leaf functions related to
planning managed and data resource instances.
The main goal here was just to prove that we'd reasonably be able to look
up objects with the new type in all of the places we'd need to. We're
planning some more substantial changes to the planning engine in future
commits (e.g. to generate execution graphs instead of traditional plans)
and so we'll plumb this in better as part of that work.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
Previously we had a gap here where we only knew the resource type to use
when there was a "desired state" object to get it from, which isn't true
when we're planning to destroy an undesired object.
The new extended model for resource instance object state gives us direct
access to the resource type name, so we can now handle that properly.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
Instead of states.ResourceInstanceObject, we'll now use the "full"
representation added in a recent commit. This new representation is a
better fit for execgraph because it tracks all of the relevant information
about a resource instance object inline without assuming that the object
is always used in conjunction with a pointer to the entire state.
We're not really making any special use of those new capabilities yet. This
is mostly just mechanical updates to refer to the new type, and to fill
in values for the extra fields as far as that's possible without doing
further refactoring.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
To support the workflow of saving a plan to disk and applying it on some
other machine we need to be able to represent the execution graph as a byte
stream and then reload it later to produce an equivalent execution graph.
This is an initial implementation of that, based on the way the execgraph
package currently represents execution graphs. We may change that
representation more in future as we get more experience working in the new
architecture, but this is intended as part of our "walking skeleton" phase
where we try to get the new architecture working end-to-end with simple
configurations as soon as possible to help verify that we're even on the
right track with this new approach, and try to find unknown unknowns that
we ought to deal with before we get too deep into this.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is a relatively uninteresting milestone where it's possible to load
and plan a root module that contains nothing except local values and
output values.
The module loader currently supports only local sources and the plugin
APIs just immediately return errors, so configurations more complicated
than that are likely to just fail immediately with one or more errors.
We'll gradually improve on this in later commits.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This now seems to more-or-less work, at least as far as the new
compile-and-execute flow is concerned.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This covers most of the logic required to turn a source graph into a
compiled graph ready for execution. There's currently only support for one
of the opcodes though, so subsequent commits will sketch those out more
and then add some tests and fix any problems that inevitably exist here but
aren't yet visible because there are no tests.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
Doesn't actually do anything yet. Just sketching how it might be
structured, with a temporary object giving us somewhere to keep track of
the relationships between nodes that we can discard once compilation is
complete, keeping only the information required to execute.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
So far this is mainly just the mechanism for building a graph piecemeal
from multiple callers working together as part of the planning engine.
The end goal is for it to be possible to "compile" an assembled graph into
something that can then be executed, and to be able to marshal/unmarshal an
uncompiled graph to save as part of a plan file, but those other
capabilities will follow in later commits.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This sketches the main happy path of planning for a desired managed
resource instance, though doesn't include all of the possible variations
that authors can cause with different configuration settings.
The main goal here is just to illustrate how the planning step might
interact with the work done in the eval system. If we decide to go in this
direction then we'll need to bring over all of the complexity from the
traditional runtime into this codepath, or possibly find some way to reuse
that behavior directly from "package tofu" in the short term.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
Previously the PlanGlue methods all took PlanningOracle pointers as one
of their arguments, which is annoying since all of them should end up with
pointers to the same object and it makes it hard for the PlanGlue
implementation to do any work outside of and between the PlanGlue method
calls.
Instead then we'll have DrivePlanning take a function for building a
PlanGlue implementation given a PlanningOracle pointer, and then the
planning engine returns an implementation that binds a planContext to a
PlanningOracle it can then use to do all of its work.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is a sketch of starting and configuring provider instances once they
are needed and then closing them once they are no longer needed. The
stubby implementation of data resource instance planning is there mainly
just to illustrate how this is intended to work, but it's not yet complete
enough to actually function.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is a similar idea to sync.WaitGroup, except that it tracks the
completion of individual items represented by comparable values rather
than only a count of expected items to complete.
My intention for this is for the planning engine to use it to track when
all of the expected work for a provider instance or ephemeral resource
instance has been completed, so that it can be closed shortly afterwards
instead of waiting for the entire planning operation to run to completion.
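A minimal sketch of the idea, assuming all expected items are known up front (the real type presumably also supports registering items incrementally and blocking until completion; `itemTracker` and its method names are invented here):

```go
package main

import (
	"fmt"
	"sync"
)

// itemTracker is a hypothetical sketch: like sync.WaitGroup, but
// completion is tracked per comparable item rather than as a bare
// counter, so completing the same item twice is harmless.
type itemTracker[K comparable] struct {
	mu      sync.Mutex
	pending map[K]struct{}
}

func newItemTracker[K comparable](items ...K) *itemTracker[K] {
	t := &itemTracker[K]{pending: make(map[K]struct{})}
	for _, k := range items {
		t.pending[k] = struct{}{}
	}
	return t
}

// complete marks one item done and reports whether all expected items
// have now completed, e.g. so a provider instance can be closed.
func (t *itemTracker[K]) complete(k K) bool {
	t.mu.Lock()
	defer t.mu.Unlock()
	delete(t.pending, k)
	return len(t.pending) == 0
}

func main() {
	t := newItemTracker("aws_instance.a", "aws_instance.b")
	fmt.Println(t.complete("aws_instance.a")) // one item still pending
	fmt.Println(t.complete("aws_instance.a")) // duplicate: still pending
	fmt.Println(t.complete("aws_instance.b")) // all items done
}
```

Keying on the item identity rather than a count is what makes duplicate completions idempotent, which a plain counter can't offer.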
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
The eval system gradually reports the "desired" declarations at various
different levels of granularity, and so the planning engine should compare
that with the instances in the previous run state to notice when any
existing resource instance is no longer in the desired state.
This doesn't yet include a real implementation of planning the deletion of
such resource instances. Real planning behaviors depend on us having some
sort of provider instance and ephemeral resource instance manager to be
able to make requests to configured providers, so that will follow in
subsequent commits before we can implement the actual planning behaviors.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
Previously this had the idea that the planning engine would work in two
subphases where first it would deal with the desired state (decided by
evaluating the configuration) and then it would do a second pass to plan
to delete any resource instances that didn't show up in the desired state.
Although that would technically work, we need to keep configured provider
instances open for the whole timespan between planning the first
desired resource instance and the final orphan instance using each
provider, and so splitting into two phases effectively means we'd have
provider instances sitting open but idle waiting for the desired state
subphase to complete to learn if they are needed for the orphan subphase.
This commit stubs out (but does not yet implement) a new approach where
the orphan planning can happen concurrently with the desired state
planning. This works by having the evaluator gradually describe to the
planning engine which module calls, module call instances, resources, and
resource instances _are_ present in the desired state -- as soon as that
information becomes available -- so that the planning engine can notice
by omission what subset of the prior state resource instances are no longer
desired and are therefore orphans.
This means that the planning engine should now have all of the information
it needs to recognize when a provider can be closed: the PlanningOracle
reports which _desired_ resource instances (or placeholders thereof) are
expected to need each provider instance, and the prior state tracks
exactly which provider instance each prior resource instance was associated
with, and so a provider's work is finished once all instances in the union
of those two sets have had their action planned.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is a problem that has been on my mind for the past little while as
I've been building this out, so I just wanted to write it down in a place
where others reading this code could find it, to acknowledge that there is
a gap in this new design idea that prevents it from having provider
open/close behavior similar to the current implementation's.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>