Our new language runtime uses a set of new methods on SyncState to work
with its preferred "full" representation of resource instance objects, but
those are implemented in terms of methods that already existed for the old
runtime's benefit and so we need to deal with some quirks of those existing
methods.
One such quirk is that the operations to write or remove objects also want
to update some resource-level and instance-level metadata as a side-effect,
and we need to carry through that metadata even when we're intending to
completely remove a resource instance object.
To preserve our goal of leaving the existing codepaths untouched for now,
this pushes a little complexity back up into the main caller in the apply
engine, forcing it to call a different method when it knows it has deleted
an object. That new method then only takes the metadata we need and not
an actual resource instance object, so it gels better with the underlying
ModuleState methods it's implemented in terms of.
Hopefully in the long run we'll rethink the state models to not rely on
these hidden side-effects, but that's beyond the scope of our current phase
of work on the new language runtime.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is the first half of solving the current problem that we're not
always properly respecting the reverse dependency edges while destroying
objects: we need to consider both the config-derived dependencies and the
dependencies recorded in the previous run state together when building
the resource instance graph.
The "orphan" case is the most obvious way this manifests right now, but
this needs handling in the desired case too because the desired case is
what handles "replace" actions, which include a delete component that also
needs to respect these prior dependencies.
The other half of this problem is that we're not actually currently
propagating the dependency information to the apply phase, and so it's not
saving that information in the new state objects it creates. That's not
solved in this commit, and instead I just tested this by manually modifying
the prior state to include what ought to have been recorded in it and then
verified that caused a correct execution graph to be constructed. I intend
to deal with the apply-time part of this problem in a separate commit
later.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
For any resource instance object that doesn't need any changes of its own,
we initially skip adding it to the execution graph but then add a stub
"prior state" operation retroactively if we discover at least one other
resource instance object that depends on it.
However, the code for that also needs to record in the execution graph
which result provides the evaluation value for the upstream resource
instance, if the object we've just added is the "current" object for its
resource instance. Otherwise the generated execution graph is invalid,
causing it to fail to provide the result to the evaluator for downstream
evaluation.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
For "replace" actions our convention is that the old value is what is to
be deleted, the new value is what is to be created, and there's an implicit
third state in between that we don't represent explicitly. (Either neither
exists or both exist, depending on the replace order.)
We were not previously handling that situation correctly, because we were
setting the new value to be the result of the "update" where the provider
reported that replacement is required, which would therefore still include
values from the old object that would not survive replacement.
Now we'll instead ask the provider to plan the delete and create legs
separately and record the separate results from each. This therefore
matches the planning questions we'll ask the provider during the apply
phase where the delete and create are already decomposed into separate
operations in the execution graph.
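The two-leg decomposition described above can be sketched roughly like this, where planFn stands in for the real provider planning call and all names are hypothetical rather than the actual OpenTofu API:

```go
package main

import "fmt"

// planReplaceLegs sketches how a "replace" decomposes into two separate
// provider planning questions instead of reusing the update-shaped plan.
func planReplaceLegs(prior, desired string, planFn func(prior, desired string) string) (deleteLeg, createLeg string) {
	// Delete leg: plan as if the desired state were absent.
	deleteLeg = planFn(prior, "")
	// Create leg: plan as if the old object never existed, so the result
	// can't carry forward values that won't survive replacement.
	createLeg = planFn("", desired)
	return deleteLeg, createLeg
}

func main() {
	planFn := func(prior, desired string) string {
		return prior + "->" + desired
	}
	del, cre := planReplaceLegs("old", "new", planFn)
	fmt.Println(del) // old->
	fmt.Println(cre) // ->new
}
```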
Having all of this logic inline in
planGlue.planDesiredManagedResourceInstance is getting quite unwieldy, but
I'm intentionally leaving it like this right now until we have a more
complete representation of all of the needed functionality, rather than
potentially having to rework it repeatedly as we fill in more of the
TODO comments elsewhere in this function.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
Previously we had this check in the apply engine because it was making
sure that the decomposed parts of a "replace" don't themselves call for
a replace action, but it's a general rule that RequiresReplace is not
meaningful unless there's both a current and a desired object to compare,
so this moves that check into the "resources" package as a general rule.
A future commit will rely on this once the planning engine correctly
handles "replace" actions by asking the provider to re-plan as if the
old object didn't exist, in order to predict the "create" part of the
replace action.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
The original motivation here was to implement the special case of making
a synthetic plan without calling into the provider when the desired state
is absent and the provider hasn't opted in to planning to destroy objects.
This case needs to be opt-in because older providers will crash if asked
to plan with the config value or proposed value set to null.
However, there was already far too much code duplicated across both the
planning engine and the apply engine's "final plan" operation, and so this
seemed like a good prompt to finally resolve those TODO comments and factor
out the shared planning logic into a separate place that both can depend
on.
The new package "resources" is intended to grow into the home for all of
these resource-related operations that each wrap a lower-level provider
call but also perform preparation or followup actions such as making sure
the provider's response is valid according to the requirements of the
plugin protocol.
For now it only includes config validation and planning for managed
resources, but is designed to grow to include other similar operations in
future. The most likely next step is to add a method for applying a
previously-created plan, which would replace a bunch of the code in the
apply engine's implementation of the "ManagedApply" operation.
Currently the way this integrates with the rest of the provider-related
infrastructure is a little awkward. We can hopefully improve on that in
future, but for now the priority was to avoid this becoming yet another
sprawling architecture change that would be hard to review. Once we feel
like we've achieved the "walking skeleton" milestone for the new runtime
we can think about how we want to tidy up the rough edges between the
different components of this new design.
As has become tradition for these new codepaths, this is designed to be
independently testable but not yet actually tested because we want to wait
and see how all of this code settles before we start writing tests that
would then make it harder to refactor as we learn more.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
We'll now remember what configuration dependencies we found when we
instantiated each provider instance during the planning phase and include
the same explicit dependencies in the execution graph.
We still have an open question of what to do with ephemeral dependencies
such as those between provider instances and ephemeral resources, since
those are allowed to vary between plan and apply. There are existing TODOs
about that in the ephemeral resource planning codepaths, but for now we're
focused mainly on managed resource instances for our "walking skeleton"
milestone and so we'll slightly-incorrectly assume that the dependencies
will be the same during the apply phase for now until we complete that
later planned design work.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
The previous commit arranged for each resource instance object with a
planned change to have an execution subgraph generated for it, but didn't
honor the dependencies between those objects.
There's now a followup loop that adds all of the needed "waiter" edges
after the fact, including both the "forward" dependencies between
create/update changes and the "reverse" dependencies between delete
changes.
The shape of the leaf code here got quite messy. In future commits I intend
to work on cleaning up the details more, but the main focus here was to
restore the execgraph building functionality just enough to prove that this
new two-pass planning approach gives us enough information to insert
all of the needed dependency edges.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is the first chunk of the work to construct an execution graph based
on the intermediate representation in resourceInstanceObjects.
For now this just constructs an independent subgraph for each resource
instance object, without honoring any of the inter-object dependencies. A
subsequent commit will add in all of those dependencies in an additional
loop afterwards.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This was temporarily broken as part of reworking the planning engine to
use two passes. This restores the previous behavior in terms of the new
intermediate representation generated by the first pass, while taking into
account the "effective replace order" information introduced in the parent
commit so that we report an accurate replace action for each resource
instance object.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
The first pass in the planning engine decides what we're now calling the
"initial replace order" for each object, which can either be
replaceAnyOrder or replaceCreateThenDestroy based purely on whether that
object was configured with "create_before_destroy = true" or not.
The second pass then analyses the materialized graph of resource instance
objects to translate any replaceAnyOrder objects into either
replaceDestroyThenCreate or replaceCreateThenDestroy depending on whether
they are in any dependency chain with something else that requires
replaceCreateThenDestroy.
For now we don't use the results of this for anything, but in a future
commit we'll use it to select the appropriate ordering for the apply-time
operations in the execution graph.
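The second-pass translation amounts to a reachability walk over the materialized graph. A minimal sketch with hypothetical names (not the real planning-engine types), assuming membership in a dependency chain means reachability along edges in either direction:

```go
package main

import "fmt"

type replaceOrder int

const (
	replaceAnyOrder replaceOrder = iota
	replaceDestroyThenCreate
	replaceCreateThenDestroy
)

// effectiveReplaceOrders propagates create_before_destroy to everything in
// a dependency chain with an object that requires it; any remaining
// replaceAnyOrder objects fall back to destroy-then-create.
func effectiveReplaceOrders(initial map[string]replaceOrder, deps map[string][]string) map[string]replaceOrder {
	// Undirected adjacency, so we can walk chains both forwards and backwards.
	adj := map[string][]string{}
	for obj, ds := range deps {
		for _, d := range ds {
			adj[obj] = append(adj[obj], d)
			adj[d] = append(adj[d], obj)
		}
	}
	final := map[string]replaceOrder{}
	var queue []string
	for obj, ord := range initial {
		final[obj] = ord
		if ord == replaceCreateThenDestroy {
			queue = append(queue, obj)
		}
	}
	seen := map[string]bool{}
	for len(queue) > 0 {
		obj := queue[0]
		queue = queue[1:]
		if seen[obj] {
			continue
		}
		seen[obj] = true
		final[obj] = replaceCreateThenDestroy
		queue = append(queue, adj[obj]...)
	}
	for obj, ord := range final {
		if ord == replaceAnyOrder {
			final[obj] = replaceDestroyThenCreate
		}
	}
	return final
}

func main() {
	initial := map[string]replaceOrder{
		"a": replaceCreateThenDestroy, // configured create_before_destroy = true
		"b": replaceAnyOrder,          // depends on "a", so it must follow suit
		"c": replaceAnyOrder,          // unrelated, so it gets the default order
	}
	deps := map[string][]string{"b": {"a"}}
	final := effectiveReplaceOrders(initial, deps)
	fmt.Println(final["b"] == replaceCreateThenDestroy, final["c"] == replaceDestroyThenCreate) // true true
}
```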
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
As I started writing analysis code based on this intermediate
representation it ended up having all of the same challenges that the
traditional runtime's "dag" model has where it was repeatedly materializing
copies of data structures just to iterate over them, because it isn't
reasonable to return an iter.Seq or iter.Seq2 over a map that's
guarded by a mutex.
Therefore this now adopts a similar pattern to what we've been following
elsewhere: a wrapper type that exists only during the construction phase,
using a mutex to allow concurrent calls, but then after construction the
object is "frozen" in an immutable form where we no longer need to hold
mutexes to interact with it, and so we can safely return iterable sequences
over the internal maps.
Future commits will use these new iter.Seq-based accessors to access the
relationships that a resourceInstanceObjects object encapsulates _without_
creating direct copies of that information and without holding any mutexes.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
The planContext object is now responsible only for producing an
intermediate representation that's private to the planning engine. This
new finalizePlan function should then transform the intermediate
representation into the final plan to return to the external caller.
This is only an API stub for now, returning an empty plan with an empty
execution graph. We'll build out the real functionality in future commits.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
Building on the work in the previous commit, this reworks the internals of
our PlanGlue implementation so that it just builds the intermediate
representation of a resourceInstanceObjects data structure that describes
the relationships between all of the resource instance objects.
For now this actually completely breaks the planning engine because it
doesn't generate a set of planned changes or an execution graph at all.
In subsequent commits we'll restore that logic as a separate pass that
works in terms of the intermediate representation, instead of trying to do
everything all at once in a single pass.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
Our current implementation of planning tries to do the whole process in a
single pass driven by the evaluator, but we've reached the limits of what
we can reasonably achieve in that way because in order to construct a
correct execution graph we need certain derived information we can only
finalize after we've already discovered the full graph of resource instance
object dependencies:
- For objects that have create_before_destroy set we need to propagate that
replace order to all dependencies _and_ all dependents in order for the
generated execution graph to be acyclic.
We therefore need to calculate the effective replace order for each
object only after we've materialized the full dependency graph.
- When we generate the execution subgraph for any action involving a
"delete" step, that step needs to depend on any delete steps for anything
that depends on it so that the deletions happen in _reverse_ dependency
order. For example, if an aws_subnet depends on an aws_vpc then we need
to delete the subnet before the VPC.
Because this relies on reverse dependency information, we need to
materialize the whole dependency graph first and then traverse it
"backwards".
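The reverse-edge rule in that second point can be illustrated with a tiny helper. The names are illustrative only; the real implementation works over the materialized object graph rather than a plain map:

```go
package main

import "fmt"

// deleteEdges inverts forward dependency edges so that deletions run in
// reverse dependency order: if "a" depends on "b", then the delete of "b"
// must wait for the delete of "a".
func deleteEdges(deps map[string][]string) map[string][]string {
	waits := map[string][]string{}
	for obj, ds := range deps {
		for _, d := range ds {
			// d's delete step waits on obj's delete step.
			waits[d] = append(waits[d], obj)
		}
	}
	return waits
}

func main() {
	deps := map[string][]string{"aws_subnet.a": {"aws_vpc.main"}}
	fmt.Println(deleteEdges(deps)) // the VPC's delete waits for the subnet's
}
```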
This new approach actually matches more closely the design of the
traditional language runtime, where the apply-time graph is not built until
after the planning phase is already complete. The most important difference
is that in this new implementation the dependencies are inferred by dynamic
analysis concurrently with making calls to providers, and so the
intermediate graph is more precise in its representation of dependencies.
A secondary difference is that the apply-time execution graph will be
finalized during the planning phase and then used verbatim during the apply
phase, whereas the traditional runtime throws away the apply graph it
builds during the planning phase and then rebuilds it again during the
apply phase, with the potential for getting different results if the graph
builder is buggy or nondeterministic.
This commit only adds some types to model the materialized dependency
graph, and does not yet make any use of it. Subsequent commits will change
the existing planning engine code to follow this new "two-pass" style.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This provides at least a partial implementation of every resource instance
action except the ones involving "forget" actions.
However, we don't really quite have all of the building blocks needed to
properly model "delete" yet, because properly handling those actions means
we need to generate "backwards" dependency edges to preserve the guarantee
that destroying happens in reverse order to creating. Therefore the main
outcome of this commit is to add a bunch of FIXME and TODO comments
explaining where the known gaps are, with the intention of then filling
those gaps in later commits once we devise a suitable strategy to handle
them.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
These both effectively had the behavior of ResourceInstancePrior embedded
in them, reading something from the state and changing its address as a
single compound operation.
In the case of ManagedDepose we need to split these up for the
CreateThenDestroy variant of "replace", because we want to make sure the
final plans are valid before we depose anything and we need the prior state
to produce the final plan. (Actually using that will follow in a subsequent
commit.)
This isn't actually necessary for ManagedChangeAddr, but splitting it keeps
these two operations consistent in how they interact with the rest of the
operations.
Due to how the existing states.SyncState works we're not actually making
good use of the data flow of these objects right now, but in a future world
where we're no longer using the old state models hopefully the state API
will switch to an approach that's more aligned with how the execgraph
operations are modeled.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is just enough to make it possible to destroy previously-created
objects by removing them from the configuration. For now this intentionally
duplicates some of the logic from the desired instance planning function
because that more closely matches how this was dealt with in the
traditional runtime. Once all of the managed-resource-related planning
functions are in a more complete state we'll review them and try to extract
as much functionality as possible into a common location that can be shared
across all three situations. For now though it's more important that we be
able to quickly adjust the details of this code while we're still nailing
down exactly what the needed functionality is.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
We're currently being intentionally cautious about adding too many tests
while these new parts of the system are still evolving and changing a lot,
but the execGraphBuilder behavior is hopefully self-contained enough for
a small set of basic tests to be more helpful than hurtful.
We should extend this test, and add other test cases that involve more
complicated interactions between different resource instances of different
modes, once we feel that these new codepaths have reached a more mature
state where we're more focused on localized maintenance than on broad
system design exploration.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
A DeleteThenCreate action decomposes into two plan-then-apply sequences,
representing first the deletion and then the creation. However, we order
these in such a way that both plans need to succeed before we begin
applying the "delete" change, so that a failure to create the final plan
for the "create" change can prevent the object from being destroyed.
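The ordering constraints amount to something like the following edge set, where the step names are illustrative only:

```go
package main

import "fmt"

// deleteThenCreateEdges returns "step waits for these steps" constraints
// for a DeleteThenCreate replace: both final plans must succeed before the
// delete is applied, so a failure to produce the create plan can still
// prevent the object from being destroyed.
func deleteThenCreateEdges() map[string][]string {
	return map[string][]string{
		"apply-delete": {"plan-delete", "plan-create"}, // both plans gate the destroy
		"apply-create": {"apply-delete"},               // create runs only after the delete
	}
}

func main() {
	edges := deleteThenCreateEdges()
	fmt.Println("apply-delete waits for", edges["apply-delete"])
	fmt.Println("apply-create waits for", edges["apply-create"])
}
```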
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
Because this operation represents the main externally-visible side effects
in the apply phase, in some cases we'll need to force additional
constraints on what must complete before executing it. For example, in
a DestroyThenCreate operation we need to guarantee that the apply of the
"destroy" part is completed before we attempt the apply for the "create"
part.
(The actual use of this for DestroyThenCreate will follow in a later
commit.)
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
Previously the generated execgraph was naive and only really supported
"create" changes. This commit has some initial work on generalizing
that, though it's not yet complete and will continue in later commits.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
Whenever we plan for a resource instance object to switch from one instance
address to another, we'll use this new op instead of ResourceInstancePrior
to perform the state modification needed for the rename before returning
the updated object.
We combine the rename and the state read into a single operation to ensure
that they can appear to happen atomically as far as the rest of the system
is concerned, so that there's no interim state where either both or neither
of the address bindings are present in the state.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
In order to describe operations against deposed objects for managed
resource instances we need to be able to store constant states.DeposedKey
values in the execution graph.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
The temporary placeholder code was relying only on the
DesiredResourceInstance object, which was good enough for proving that this
could work at all and had the advantage of being a new-style model with
new-style representations of the provider instance address and the other
resource instances that the desired object depends on.
But that isn't enough information to plan anything other than "create"
changes, so now we'll switch to using plans.ResourceInstanceChange as the
main input to the execgraph building logic, even though for now that means
we need to carry a few other values alongside it to compensate for the
shortcomings of that old model designed for the old language runtime.
So far this doesn't actually change what we do in response to the change
so it still only supports "create" changes. In future commits we'll make
the execGraphBuilder method construct different shapes of graph depending
on which change action was planned.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
The previous commit split the handling of provider instance items into
separate dependency-analysis and execgraph construction steps, with the
intention of avoiding the need for the execGraphBuilder to directly
interact with the planning oracle and thus indirectly with the evaluator.
Overall the hope is that execGraphBuilder will be a self-contained object
that doesn't depend on anything else in the planning engine so that it's
easier to write unit tests for it that don't require creating an entire
fake planning context.
However, on reflection that change introduced a completely unnecessary
extra handoff from the execGraphBuilder to _another part of itself_, which
is obviously unnecessary complexity because it doesn't serve to separate
any concerns.
This is therefore a further simplification that returns to just doing the
entire handling of a provider instance's presence in the execution graph
only once we've decided that at least one resource instance will
definitely use the provider instance during the apply phase.
There is still a separation of concerns where the planGlue type is
responsible for calculating the provider dependencies and then the
execGraphBuilder is only responsible for adding items to the execution
graph based on that information. That separation makes sense because
planGlue's job is to bridge between the planning engine and the evaluator,
and it's the evaluator's job to calculate the dependencies for a provider
instance, whereas execGraphBuilder is the component responsible for
deciding exactly which low-level execgraph operations we'll use to describe
the situation to the apply engine.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
Previously we did _all_ of the work related to preparing execgraph items
for provider instances as a side-effect of the ProviderInstance method
of execGraphBuilder.
Now we'll split it into two parts: the first time we encounter each
provider instance during planning we'll proactively gather its dependencies
and stash them in the graph builder. However, we'll still wait until the
first request for the execution subgraph of a provider instance before
we actually insert that into the graph, because that then effectively
excludes from the apply phase any provider instances that aren't needed for
the actual planned side-effects. In particular, if all of the resource
instances belonging to a provider instance turn out to have "no-op" plans
then that provider instance won't appear in the execution graph at all.
An earlier draft of this change did the first dependency capturing step via
a new method of planGlue called by the evaluator, but after writing that
I found it unfortunate to introduce yet another inversion of control
situation where readers of the code just need to know and trust that the
evaluator will call things in a correct order -- there's already enough
of that for resource instances -- and so I settled on the compromise of
having the ensureProviderInstanceDependencies calls happen as part of the
linear code for handling each resource instance object, which makes it far
easier for a future maintainer to verify that we're upholding the contract
of calling ensureProviderInstanceDependencies before asking for an
execgraph result, while still allowing us to handle that step generically
instead of duplicating it into each resource-mode-specific handler.
While working on this I noticed a slight flaw in our initial approach to
handling ephemeral resource instances in the execution graph, which is
described inline as a comment in planDesiredEphemeralResourceInstance.
We'll need to think about that some more and deal with it in a future
commit.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
As we've continued to build out the execution graph behavior during the plan
and apply phases we've found that the execgraph package isn't really the
best place for having the higher-level ideas like singleton provider
client operations because that package doesn't have enough awareness about
the planning process to manage those concerns well.
The mix of both high- and low-level concerns in execgraph.Builder was also
starting to make it have a pretty awkward shape where some of the low-level
operations needed to be split into two parts so that the higher-level parts
could call them while holding the builder's mutex.
In response to that here we split the execgraph.Builder functionality so
that the execgraph package part is concerned only with the lowest-level
concern of adding new stuff to the graph, without any attempt to dedupe
things and without care for concurrency. The higher-level parts then live
in a higher-level wrapper in the planning engine's own package, which
absorbs the responsibility for mutexing and managing singletons.
For now the new type in the planning package just has lightly-adapted
copies of existing code just to try to illustrate what concerns belong to
it and how the rest of the system ought to interact with it. There are
various FIXME/TODO comments describing how I expect this to evolve in
future commits as we continue to build out more complete planning
functionality.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
These were previously using "applying:" as the common prefix, but I found
that confusing in practice because only the "ManagedApply" operation is
_actually_ applying changes.
Instead then we'll identify these trace logs as belonging to the apply
phase as a whole, to try to be a little clearer about what's going on.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
Here we ask the provider to produce a plan based on the finalized
configuration taken from the "desired state", and return it as a final plan
object only if the provider request succeeds and returns something that is
valid under our usual plan consistency rules.
We then ask the provider to apply that plan, and deal with whatever it
returns. The apply part of this is not so complete yet, but there's enough
here to handle the happy path where the operation succeeds and the provider
returns something valid.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This'll make it clearer which provider we're using to ask each question,
without making a human reader try to work backwards through the execution
graph data flow.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
The statements were in the wrong order so we were adding the
ProviderInstanceConfig operation before checking whether we already added
the operations for a particular provider instance, and thus the generated
execution graph had redundant unused extra ProviderInstanceConfig ops.
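The corrected shape is the usual check-before-add dedupe. A minimal sketch, with names that don't match the real builder:

```go
package main

import "fmt"

// graphBuilder illustrates the fixed ordering: consult the "already added"
// set before emitting a ProviderInstanceConfig operation, so repeated
// requests for the same provider instance don't leave redundant ops behind.
type graphBuilder struct {
	added map[string]bool
	ops   []string
}

func (b *graphBuilder) ensureProviderInstance(addr string) {
	if b.added[addr] {
		return // already handled: don't add a second config op
	}
	b.added[addr] = true
	b.ops = append(b.ops, "ProviderInstanceConfig("+addr+")")
}

func main() {
	b := &graphBuilder{added: map[string]bool{}}
	b.ensureProviderInstance(`provider["aws"]`)
	b.ensureProviderInstance(`provider["aws"]`) // second call is a no-op
	fmt.Println(len(b.ops)) // 1
}
```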
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
These were useful during initial development, but the execgraph behavior
now seems to be doing well enough that these noisy logs have become more
annoying than helpful, since the useful context now comes from the logging
within engine/apply's implementation of the operations.
However, this does introduce some warning logs to draw attention to the
fact that resource instance postconditions are not implemented yet.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
There is currently a bug in the apply engine that's causing a
self-dependency error during promise resolution, which was a good reminder
that we'd not previously finished connecting all of the parts to be able
to unwind such problems into a useful error message.
Due to how the work is split between the apply engine, the evaluator, and
the execgraph package it takes some awkward back-and-forth to get all of
the needed information together into one place. This compromise aims to
do as little work as possible in the happy path and defer more expensive
analysis until we actually know we're going to report an error message.
In this case we can't really avoid proactively collecting the request IDs
because we don't know ahead of time what (if anything) will be involved in
a promise error, but only when actually generating an error message will
we dig into the original source execution graph to find out what each of
the affected requests was actually trying to do and construct a
human-friendly summary of each one.
This was a bit of a side-quest relative to the current goal of just getting
things basically working with a simple configuration, but this is useful
in figuring out what's going on with the current bug (which will be fixed
in a future commit) and will probably be useful when dealing with future
bugs too.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
During execution graph processing we need to carry addressing information
along with objects so that we can model changes of address as data flow
rather than as modifications to global mutable state.
In previous commits we arranged for other relevant types to track this
information, but the representation of a resource instance object was not
included because that was a messier change to wire in. This commit deals
with that mess: it introduces exec.ResourceInstanceObject to associate a
resource instance object with addressing information, and then adopts that
as the canonical representation of a "resource instance object result"
throughout all of the execution graph operations.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
The first pass of this was mainly just to illustrate how the overall model
of execution graphs might work, using just a few "obvious" opcodes as a
placeholder for the full set.
Now that we're making some progress towards getting this to actually work
as part of a real apply phase this reworks the set of opcodes in a few
different ways:
- Some concepts that were previously handled as special cases in the
execution graph executor are now just regular operations. The general
idea here is that anything that is fallible and/or has externally-visible
side-effects should be modelled as an operation.
- The set of supported operations for managed resources should now be
complete enough to describe all of the different series of steps that
result from different kinds of plan. In particular, this now has the
building blocks needed to implement a "create then destroy" replace
operation, which includes the step of "deposing" the original object
until we've actually destroyed it.
- The new package "exec" contains vocabulary types for data flowing between
operations, and an interface representing the operations themselves.
The types in this package allow us to carry addressing information along
with objects that would not normally carry that directly, which means
we don't need to propagate that address information around the execution
graph through side-channels.
(A notable gap as of this commit is that resource instance objects don't
have address information yet. That'll follow in a subsequent commit just
because it requires some further rejiggering.)
- The interface implemented by the applying engine and provided to the
execution graph now more directly relates to the operations described
in the execution graph, so that it's considerably easier to follow how
the graph built by the planning phase corresponds to calls into that
interface in the applying phase.
- We're starting to consider OpenTofu's own tracking identifiers for
resource instances as separate from how providers refer to their resource
types, just so that the opinions about how those two relate can be
more centralized in future, vs. the traditional runtime where the rules
about that relationship tend to be spread haphazardly throughout the
codebase, and in particular implementing migration between resource
types was traditionally challenging because it creates a situation where
the desired resource type and the current resource type _intentionally_
differ.
This is a large commit because the execution graph concepts are
cross-cutting between the plan and apply engines, but outside of execgraph
the changes here are mainly focused on just making things compile again
with as little change as possible, and then subsequent commits will get
the other packages into a better working shape again.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is just enough to connect all of the apply engine parts together into
a functional whole which can at least attempt to apply a plan. There are
various gaps here that make this not fully work in practice, but the goal
of this commit is to just get all of the components connected together in
their current form and then subsequent commits will gradually build on this
to make it work better.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is just some initial sketching of how the applying package might do
its work. Not yet finished, and I'm expecting to change the execution
graph operations in future commits now that the main plan/apply flow is
plumbed in and so it's easier to test things end-to-end.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is just enough to get the plan object and check that it has the
execution graph field populated, and to load the configuration. More to
come in later commits.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This makes explicit the relationships between resource instances so that
they'll be honored automatically when we actually execute the graph.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
The whole point of this helper is to encapsulate the concern of having
exactly one open/close pair for each distinct provider instance address, so
in order to be effective it must actually save its results in the same
map it was already checking!
This was just an oversight in the original implementation that became more
apparent after having a visualization of execution graphs, since that made
it far more obvious that we were previously planning to open a separate
provider instance per resource instance.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
In order to produce a valid saved plan file we need to save information
about which provider instance each resource instance belongs to. We're
probably not actually going to use this information in the new
implementation because the same information is captured as part of the
execution graph, but we do at least need to make this valid enough that the
plan file loading code will accept it, because right now (for experimental
purposes) we're smuggling the execution graph through our traditional plan
file format as a sidecar data structure.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
As with the traditional runtime, we need to tolerate invalid plans from
providers that announce that they are implemented with the legacy Terraform
Plugin SDK, because that SDK is incapable of producing valid plans.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
The RPC-based provider client panics if it isn't given a non-nil value for
ProviderMeta. We aren't yet ready to send a "real" ProviderMeta value, but
for now we'll just stub it out as a null value so that we have just enough
to get through the provider client code without crashing.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
The code which tries to ensure that provider clients get closed when they
are no longer needed was not taking into account that a provider client
might not have actually been started at all, due to an error.
It's debatable whether we ought to start this background goroutine in that
case at all, but this is an intentionally-minimal change for now until we
can take the time to think harder about how the call to
reportProviderInstanceClosed should work (and whether it should happen at
all) in that case. This change ensures that it'll still happen in the
same way as it did before.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is a _very_ early, minimal plumbing of the execution graph builder
into the experimental planning engine.
The goal here is mainly just to prove the idea that the planning engine can
build execution graphs using the execgraph.Builder API we currently have.
This implementation is not complete yet, and also we are expecting to
rework the structure of the planning engine later on anyway, so this
initial work focuses on just plumbing it into what we had as
straightforwardly as possible.
This is enough to get a serialized form of _an_ execution graph included
in the generated plan, though since we don't have the apply engine
implemented we don't actually use it for anything yet.
In subsequent commits we'll continue building out the graph-building logic
and then arrange for the apply phase to unmarshal the saved execution graph
and attempt to execute it, so we can hopefully see a minimal version of all
of this working end-to-end for the first time. But for now, this was mainly
just a proof-of-concept of building an execution graph and capturing it
into its temporary home in the plans.Plan model.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
ResultRef[*states.ResourceInstanceObjectFull] is our current canonical
representation of the "final result" of applying changes to a resource
instance, and so it is going to appear a bunch in the plan and apply
engines.
The generic type is clunky to read and write though, so for this one in
particular we'll offer a more concise type alias that we can use for parts
of the API that are representing results from applying.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>
This is just a minimal set of changes to introduce uses of the new
states.ResourceInstanceObjectFull to all of the leaf functions related to
planning managed and data resource instances.
The main goal here was just to prove that we'd reasonably be able to look
up objects with the new type in all of the places we'd need to. We're
planning some more substantial changes to the planning engine in future
commits (e.g. to generate execution graphs instead of traditional plans)
and so we'll plumb this in better as part of that work.
Signed-off-by: Martin Atkins <mart@degeneration.co.uk>