Renaming of Terraform to OpenTF

This commit is contained in:
Yaron Yarimi
2023-08-22 12:38:28 +03:00
parent d4bf44dfb6
commit da56f706e7
14 changed files with 227 additions and 227 deletions

View File

@@ -5,7 +5,7 @@ generate:
 	go generate ./...
 # We separate the protobuf generation because most development tasks on
-# Terraform do not involve changing protobuf files and protoc is not a
+# OpenTF do not involve changing protobuf files and protoc is not a
 # go-gettable dependency and so getting it installed can be inconvenient.
 #
 # If you are working on changes to protobuf interfaces, run this Makefile

View File

@@ -1,23 +1,23 @@
-# Terraform Core Codebase Documentation
+# OpenTF Core Codebase Documentation
-This directory contains some documentation about the Terraform Core codebase,
+This directory contains some documentation about the OpenTF Core codebase,
 aimed at readers who are interested in making code contributions.
-If you're looking for information on _using_ Terraform, please instead refer
-to [the main Terraform CLI documentation](https://www.terraform.io/docs/cli/index.html).
+If you're looking for information on _using_ OpenTF, please instead refer
+to [the main OpenTF CLI documentation](https://www.terraform.io/docs/cli/index.html).
-## Terraform Core Architecture Documents
+## OpenTF Core Architecture Documents
-* [Terraform Core Architecture Summary](./architecture.md): an overview of the
-  main components of Terraform Core and how they interact. This is the best
+* [OpenTF Core Architecture Summary](./architecture.md): an overview of the
+  main components of OpenTF Core and how they interact. This is the best
   starting point if you are diving in to this codebase for the first time.
 * [Resource Instance Change Lifecycle](./resource-instance-change-lifecycle.md):
   a description of the steps in validating, planning, and applying a change
   to a resource instance, from the perspective of the provider plugin RPC
   operations. This may be useful for understanding the various expectations
-  Terraform enforces about provider behavior, either if you intend to make
-  changes to those behaviors or if you are implementing a new Terraform plugin
+  OpenTF enforces about provider behavior, either if you intend to make
+  changes to those behaviors or if you are implementing a new OpenTF plugin
   SDK and so wish to conform to them.
 (If you are planning to write a new provider using the _official_ SDK then
@@ -31,10 +31,10 @@ to [the main Terraform CLI documentation](https://www.terraform.io/docs/cli/inde
 This documentation is for SDK developers, and is not necessary reading for
 those implementing a provider using the official SDK.
-* [How Terraform Uses Unicode](./unicode.md): an overview of the various
-  features of Terraform that rely on Unicode and how to change those features
+* [How OpenTF Uses Unicode](./unicode.md): an overview of the various
+  features of OpenTF that rely on Unicode and how to change those features
   to adopt new versions of Unicode.
 ## Contribution Guides
-* [Contributing to Terraform](../.github/CONTRIBUTING.md): a complete guideline for those who want to contribute to this project.
+* [Contributing to OpenTF](../.github/CONTRIBUTING.md): a complete guideline for those who want to contribute to this project.

View File

@@ -1,19 +1,19 @@
-# Terraform Core Architecture Summary
+# OpenTF Core Architecture Summary
-This document is a summary of the main components of Terraform Core and how
+This document is a summary of the main components of OpenTF Core and how
 data and requests flow between these components. It's intended as a primer
 to help navigate the codebase to dig into more details.
-We assume some familiarity with user-facing Terraform concepts like
-configuration, state, CLI workflow, etc. The Terraform website has
+We assume some familiarity with user-facing OpenTF concepts like
+configuration, state, CLI workflow, etc. The OpenTF website has
 documentation on these ideas.
-## Terraform Request Flow
+## OpenTF Request Flow
 The following diagram shows an approximation of how a user command is
-executed in Terraform:
+executed in OpenTF:
-![Terraform Architecture Diagram, described in text below](./images/architecture-overview.png)
+![OpenTF Architecture Diagram, described in text below](./images/architecture-overview.png)
 Each of the different subsystems (solid boxes) in this diagram is described
 in more detail in a corresponding section below.
@@ -29,7 +29,7 @@ their corresponding `command` package types can be found in the `commands.go`
 file in the root of the repository.
 The full flow illustrated above does not actually apply to _all_ commands,
-but it applies to the main Terraform workflow commands `terraform plan` and
+but it applies to the main OpenTF workflow commands `terraform plan` and
 `terraform apply`, along with a few others.
 For these commands, the role of the command implementation is to read and parse
@@ -62,8 +62,8 @@ the command-handling code calls `Operation` with the operation it has
 constructed, and then the backend is responsible for executing that action.
 Backends that execute operations, however, do so as an architectural implementation detail and not a
-general feature of backends. That is, the term 'backend' as a Terraform feature is used to refer to
-a plugin that determines where Terraform stores its state snapshots - only the default `local`
+general feature of backends. That is, the term 'backend' as an OpenTF feature is used to refer to
+a plugin that determines where OpenTF stores its state snapshots - only the default `local`
 backend and Terraform Cloud's backends (`remote`, `cloud`) perform operations.
 Thus, most backends do _not_ implement this interface, and so the `command` package wraps these
@@ -73,7 +73,7 @@ causing the operation to be executed locally within the `terraform` process itse
 ## Backends
-A _backend_ determines where Terraform should store its state snapshots.
+A _backend_ determines where OpenTF should store its state snapshots.
 As described above, the `local` backend also executes operations on behalf of most other
 backends. It uses a _state manager_
@@ -86,7 +86,7 @@ initial processing/validation of the configuration specified in the
 operation. It then uses these, along with the other settings given in the
 operation, to construct a
 [`terraform.Context`](https://pkg.go.dev/github.com/placeholderplaceholderplaceholder/opentf/internal/terraform#Context),
-which is the main object that actually performs Terraform operations.
+which is the main object that actually performs OpenTF operations.
 The `local` backend finally calls an appropriate method on that context to
 begin execution of the relevant command, such as
@@ -115,7 +115,7 @@ and recursively loads all of the child modules to produce a single
 [`configs.Config`](https://pkg.go.dev/github.com/placeholderplaceholderplaceholder/opentf/internal/configs#Config)
 representing the entire configuration.
-Terraform expects configuration files written in the Terraform language, which
+OpenTF expects configuration files written in the OpenTF language, which
 is a DSL built on top of
 [HCL](https://github.com/hashicorp/hcl). Some parts of the configuration
 cannot be interpreted until we build and walk the graph, since they depend
@@ -124,12 +124,12 @@ the configuration remain represented as the low-level HCL types
 [`hcl.Body`](https://pkg.go.dev/github.com/hashicorp/hcl/v2/#Body)
 and
 [`hcl.Expression`](https://pkg.go.dev/github.com/hashicorp/hcl/v2/#Expression),
-allowing Terraform to interpret them at a more appropriate time.
+allowing OpenTF to interpret them at a more appropriate time.
 ## State Manager
 A _state manager_ is responsible for storing and retrieving snapshots of the
-[Terraform state](https://www.terraform.io/docs/language/state/index.html)
+[OpenTF state](https://www.terraform.io/docs/language/state/index.html)
 for a particular workspace. Each manager is an implementation of
 some combination of interfaces in
 [the `statemgr` package](https://pkg.go.dev/github.com/placeholderplaceholderplaceholder/opentf/internal/states/statemgr),
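As a rough illustration of how such a composition of interfaces fits together, the sketch below models a state manager as separate read, write, and persist capabilities combined into one "full" interface. The names `Snapshot`, `Reader`, `Writer`, `Persister`, and `Full` are simplified stand-ins chosen for this example, not the actual `statemgr` API:

```go
package main

import "fmt"

// Snapshot stands in for OpenTF's state type; the real code uses a
// richer states.State. Everything here is an illustrative sketch.
type Snapshot struct {
	Serial    uint64
	Resources map[string]string
}

// Reader and Writer operate on an in-memory copy of the latest snapshot.
type Reader interface{ State() *Snapshot }
type Writer interface{ WriteState(*Snapshot) error }

// Persister flushes the in-memory snapshot to durable storage:
// a local file, or a remote service for remote state.
type Persister interface{ PersistState() error }

// Full is the combination a backend needs to run plan/apply operations.
type Full interface {
	Reader
	Writer
	Persister
}

// inMem is a trivial Full implementation, present only to show the
// write-then-persist call sequence a backend would use.
type inMem struct{ current, persisted *Snapshot }

func (m *inMem) State() *Snapshot             { return m.current }
func (m *inMem) WriteState(s *Snapshot) error { m.current = s; return nil }
func (m *inMem) PersistState() error          { m.persisted = m.current; return nil }

func main() {
	var mgr Full = &inMem{}
	mgr.WriteState(&Snapshot{Serial: 1, Resources: map[string]string{"null_resource.a": "created"}})
	mgr.PersistState()
	fmt.Println(mgr.State().Serial) // prints 1
}
```

The point of splitting the interfaces is the same as in the real package: many storage backends can read and write but handle persistence or locking differently, so consumers ask only for the capability combination they need.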
@@ -144,7 +144,7 @@ that does not implement all of `statemgr.Full`.
 The implementation
 [`statemgr.Filesystem`](https://pkg.go.dev/github.com/placeholderplaceholderplaceholder/opentf/internal/states/statemgr#Filesystem) is used
 by default (by the `local` backend) and is responsible for the familiar
-`terraform.tfstate` local file that most Terraform users start with, before
+`terraform.tfstate` local file that most OpenTF users start with, before
 they switch to [remote state](https://www.terraform.io/docs/language/state/remote.html).
 Other implementations of `statemgr.Full` are used to implement remote state.
 Each of these saves and retrieves state via a remote network service
@@ -166,12 +166,12 @@ to represent the necessary steps for that operation and the dependency
 relationships between them.
 In most cases, the
-[vertices](https://en.wikipedia.org/wiki/Vertex_(graph_theory)) of Terraform's
+[vertices](https://en.wikipedia.org/wiki/Vertex_(graph_theory)) of OpenTF's
 graphs each represent a specific object in the configuration, or something
 derived from those configuration objects. For example, each `resource` block
 in the configuration has one corresponding
 [`GraphNodeConfigResource`](https://pkg.go.dev/github.com/placeholderplaceholderplaceholder/opentf/internal/terraform#GraphNodeConfigResource)
-vertex representing it in the "plan" graph. (Terraform Core uses terminology
+vertex representing it in the "plan" graph. (OpenTF Core uses terminology
 inconsistently, describing graph _vertices_ also as graph _nodes_ in various
 places. These both describe the same concept.)
@@ -228,7 +228,7 @@ itself is implemented in
 [the low-level `dag` package](https://pkg.go.dev/github.com/placeholderplaceholderplaceholder/opentf/internal/dag#AcyclicGraph.Walk)
 (where "DAG" is short for [_Directed Acyclic Graph_](https://en.wikipedia.org/wiki/Directed_acyclic_graph)), in
 [`AcyclicGraph.Walk`](https://pkg.go.dev/github.com/placeholderplaceholderplaceholder/opentf/internal/dag#AcyclicGraph.Walk).
-However, the "interesting" Terraform walk functionality is implemented in
+However, the "interesting" OpenTF walk functionality is implemented in
 [`terraform.ContextGraphWalker`](https://pkg.go.dev/github.com/placeholderplaceholderplaceholder/opentf/internal/terraform#ContextGraphWalker),
 which implements a small set of higher-level operations that are performed
 during the graph walk:
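The core idea of such a graph walk can be sketched in a few lines: visit each vertex only after every vertex it depends on has completed. This is a minimal illustration of the concept behind `AcyclicGraph.Walk`, not the real `dag` package API (the real walker also runs independent vertices concurrently and reports per-vertex diagnostics); the resource addresses in `main` are made up for the example:

```go
package main

import "fmt"

// Walk visits every vertex of a directed acyclic graph in dependency
// order. deps maps each vertex to the vertices it depends on; a vertex
// is visited only after all of its dependencies have been visited.
func Walk(deps map[string][]string, visit func(string)) {
	done := make(map[string]bool)
	var run func(v string)
	run = func(v string) {
		if done[v] {
			return
		}
		done[v] = true
		for _, d := range deps[v] { // recurse into dependencies first
			run(d)
		}
		visit(v)
	}
	for v := range deps {
		run(v)
	}
}

func main() {
	// The instance depends on its VPC, so the VPC is always visited first,
	// matching how a plan graph orders resource operations.
	deps := map[string][]string{
		"aws_vpc.main":     nil,
		"aws_instance.app": {"aws_vpc.main"},
	}
	var order []string
	Walk(deps, func(v string) { order = append(order, v) })
	fmt.Println(order) // aws_vpc.main always precedes aws_instance.app
}
```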
@@ -346,7 +346,7 @@ or
 Expression evaluation produces a dynamic value represented as a
 [`cty.Value`](https://pkg.go.dev/github.com/zclconf/go-cty/cty#Value).
-This Go type represents values from the Terraform language and such values
+This Go type represents values from the OpenTF language and such values
 are eventually passed to provider plugins.
 ### Sub-graphs

View File

@@ -1,4 +1,4 @@
-# Terraform Core Resource Destruction Notes
+# OpenTF Core Resource Destruction Notes
 This document intends to describe some of the details and complications
 involved in the destruction of resources. It covers the ordering defined for
@@ -8,7 +8,7 @@ all possible combinations of dependency ordering, only to outline the basics
 and document some of the more complicated aspects of resource destruction.
 The graph diagrams here will continue to use the inverted graph structure used
-internally by Terraform, where edges represent dependencies rather than order
+internally by OpenTF, where edges represent dependencies rather than order
 of operations.
 ## Simple Resource Creation

View File

@@ -1,6 +1,6 @@
 # Planning Behaviors
-A key design tenet for Terraform is that any actions with externally-visible
+A key design tenet for OpenTF is that any actions with externally-visible
 side-effects should be carried out via the standard process of creating a
 plan and then applying it. Any new features should typically fit within this
 model.
@@ -8,25 +8,25 @@ model.
 There are also some historical exceptions to this rule, which we hope to
 supplement with plan-and-apply-based equivalents over time.
-This document describes the default planning behavior of Terraform in the
+This document describes the default planning behavior of OpenTF in the
 absence of any special instructions, and also describes the three main
 design approaches we can choose from when modelling non-default behaviors that
-require additional information from outside of Terraform Core.
+require additional information from outside of OpenTF Core.
 This document focuses primarily on actions relating to _resource instances_,
-because that is Terraform's main concern. However, these design principles can
+because that is OpenTF's main concern. However, these design principles can
 potentially generalize to other externally-visible objects, if we can describe
 their behaviors in a way comparable to the resource instance behaviors.
 This is developer-oriented documentation rather than user-oriented
 documentation. See
-[the main Terraform documentation](https://www.terraform.io/docs) for
+[the main OpenTF documentation](https://www.terraform.io/docs) for
 information on existing planning behaviors and other behaviors as viewed from
 an end-user perspective.
 ## Default Planning Behavior
-When given no explicit information to the contrary, Terraform Core will
+When given no explicit information to the contrary, OpenTF Core will
 automatically propose taking the following actions in the appropriate
 situations:
@@ -52,21 +52,21 @@ situations:
   the configuration (in a `resource` block) and recorded in the prior state
   _marked as "tainted"_. The special "tainted" status means that the process
   of creating the object failed partway through and so the existing object does
-  not necessarily match the configuration, so Terraform plans to replace it
+  not necessarily match the configuration, so OpenTF plans to replace it
   in order to ensure that the resulting object is complete.
 - **Read**, if there is a `data` block in the configuration.
-  - If possible, Terraform will eagerly perform this action during the planning
+  - If possible, OpenTF will eagerly perform this action during the planning
     phase, rather than waiting until the apply phase.
   - If the configuration contains at least one unknown value, or if the
     data resource directly depends on a managed resource that has any change
-    proposed elsewhere in the plan, Terraform will instead delay this action
+    proposed elsewhere in the plan, OpenTF will instead delay this action
     to the apply phase so that it can react to the completion of modification
     actions on other objects.
-- **No-op**, to explicitly represent that Terraform considered a particular
+- **No-op**, to explicitly represent that OpenTF considered a particular
   resource instance but concluded that no action was required.
 The **Replace** action described above is really a sort of "meta-action", which
-Terraform expands into separate **Create** and **Delete** operations. There are
+OpenTF expands into separate **Create** and **Delete** operations. There are
 two possible orderings, and the first one is the default planning behavior
 unless overridden by a special planning behavior as described later. The
 two possible lowerings of **Replace** are:
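The default action selection and the lowering of the **Replace** meta-action described in this hunk can be sketched as a small decision function. This is only an illustration of the rules as stated above, not code from OpenTF Core, and the boolean inputs (`inConfig`, `inState`, `tainted`, `differs`) are a simplification of the real planning inputs:

```go
package main

import "fmt"

// Action is one of the planned action types described in the document.
type Action string

const (
	Create  Action = "Create"
	Update  Action = "Update"
	Delete  Action = "Delete"
	Replace Action = "Replace"
	NoOp    Action = "No-op"
)

// defaultAction sketches the default planning rules: configured but not
// in state means Create; in state but not configured means Delete; a
// tainted prior object forces Replace; a config/state mismatch means
// Update; otherwise nothing needs to happen.
func defaultAction(inConfig, inState, tainted, differs bool) Action {
	switch {
	case inConfig && !inState:
		return Create
	case !inConfig && inState:
		return Delete
	case tainted:
		return Replace
	case differs:
		return Update
	default:
		return NoOp
	}
}

// lowerReplace expands the Replace "meta-action" into its two concrete
// orderings; create-before-destroy selects the inverted ordering.
func lowerReplace(createBeforeDestroy bool) []Action {
	if createBeforeDestroy {
		return []Action{Create, Delete}
	}
	return []Action{Delete, Create}
}

func main() {
	fmt.Println(defaultAction(true, false, false, false)) // Create
	fmt.Println(lowerReplace(false))                      // [Delete Create]
}
```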
@@ -81,7 +81,7 @@ two possible lowerings of **Replace** are:
 ## Special Planning Behaviors
 For the sake of this document, a "special" planning behavior is one where
-Terraform Core will select a different action than the defaults above,
+OpenTF Core will select a different action than the defaults above,
 based on explicit instructions given either by a module author, an operator,
 or a provider.
@@ -107,27 +107,27 @@ of the following depending on which stakeholder is activating the behavior:
   "automatic".
   Because these special behaviors are activated by values in the provider's
-  response to the planning request from Terraform Core, behaviors of this
+  response to the planning request from OpenTF Core, behaviors of this
   sort will typically represent "tweaks" to or variants of the default
   planning behaviors, rather than entirely different behaviors.
 - [Single-run Behaviors](#single-run-behaviors) are activated by explicitly
-  setting additional "plan options" when calling Terraform Core's plan
+  setting additional "plan options" when calling OpenTF Core's plan
   operation.
   This design pattern is good for situations where the direct operator of
-  Terraform needs to do something exceptional or one-off, such as when the
+  OpenTF needs to do something exceptional or one-off, such as when the
   configuration is correct but the real system has become degraded or damaged
-  in a way that Terraform cannot automatically understand.
+  in a way that OpenTF cannot automatically understand.
   However, this design pattern has the disadvantage that each new single-run
   behavior type requires custom work in every wrapping UI or automaton around
-  Terraform Core, in order provide the user of that wrapper some way
+  OpenTF Core, in order to provide the user of that wrapper some way
   to directly activate the special option, or to offer an "escape hatch" to
-  use Terraform CLI directly and bypass the wrapping automation for a
+  use OpenTF CLI directly and bypass the wrapping automation for a
   particular change.
 We've also encountered use-cases that seem to call for a hybrid between these
-different patterns. For example, a configuration construct might cause Terraform
+different patterns. For example, a configuration construct might cause OpenTF
 Core to _invite_ a provider to activate a special behavior, but let the
 provider make the final call about whether to do it. Or conversely, a provider
 might advertise the possibility of a special behavior but require the user to
@@ -153,36 +153,36 @@ configuration-driven behaviors, selected to illustrate some different variations
 that might be useful inspiration for new designs:
 - The `ignore_changes` argument inside `resource` block `lifecycle` blocks
-  tells Terraform that if there is an existing object bound to a particular
-  resource instance address then Terraform should ignore the configured value
+  tells OpenTF that if there is an existing object bound to a particular
+  resource instance address then OpenTF should ignore the configured value
   for a particular argument and use the corresponding value from the prior
   state instead.
   This can therefore potentially cause what would've been an **Update** to be
   a **No-op** instead.
 - The `replace_triggered_by` argument inside `resource` block `lifecycle`
-  blocks can use a proposed change elsewhere in a module to force Terraform
+  blocks can use a proposed change elsewhere in a module to force OpenTF
   to propose one of the two **Replace** variants for a particular resource.
 - The `create_before_destroy` argument inside `resource` block `lifecycle`
   blocks only takes effect if a particular resource instance has a proposed
-  **Replace** action. If not set or set to `false`, Terraform will decompose
-  it to **Destroy** then **Create**, but if set to `true` Terraform will use
+  **Replace** action. If not set or set to `false`, OpenTF will decompose
+  it to **Destroy** then **Create**, but if set to `true` OpenTF will use
   the inverted ordering.
-  Because Terraform Core will never select a **Replace** action automatically
+  Because OpenTF Core will never select a **Replace** action automatically
   by itself, this is an example of a hybrid design where the config-driven
   `create_before_destroy` combines with any other behavior (config-driven or
   otherwise) that might cause **Replace** to customize exactly what that
   **Replace** will mean.
 - Top-level `moved` blocks in a module activate a special behavior during the
-  planning phase, where Terraform will first try to change the bindings of
+  planning phase, where OpenTF will first try to change the bindings of
   existing objects in the prior state to attach to new addresses before running
   the normal planning process. This therefore allows a module author to
-  document certain kinds of refactoring so that Terraform can update the
+  document certain kinds of refactoring so that OpenTF can update the
   state automatically once users upgrade to a new version of the module.
   This special behavior is interesting because it doesn't _directly_ change
-  what actions Terraform will propose, but instead it adds an extra
+  what actions OpenTF will propose, but instead it adds an extra
   preparation step before the typical planning process which changes the
   addresses that the planning process will consider. It can therefore
   _indirectly_ cause different proposed actions for affected resource
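The `ignore_changes` effect described in the first bullet above can be sketched as a small transform on the proposed new value: ignored attributes are overwritten with their prior-state values before the action is chosen. This is illustrative only; the real implementation operates on `cty` values and attribute paths rather than the flat string maps used here:

```go
package main

import (
	"fmt"
	"reflect"
)

// applyIgnoreChanges sketches ignore_changes: for each ignored attribute,
// the configured value is discarded and the prior state's value is kept,
// so a diff on only that attribute disappears.
func applyIgnoreChanges(prior, config map[string]string, ignored []string) map[string]string {
	proposed := make(map[string]string, len(config))
	for k, v := range config {
		proposed[k] = v
	}
	for _, attr := range ignored {
		if pv, ok := prior[attr]; ok {
			proposed[attr] = pv // keep the prior value, ignore the config
		}
	}
	return proposed
}

func main() {
	prior := map[string]string{"ami": "ami-111", "tags": "team-a"}
	config := map[string]string{"ami": "ami-111", "tags": "team-b"}

	// Only "tags" changed; with "tags" ignored the proposal matches the
	// prior state exactly, so a would-be Update becomes a No-op.
	proposed := applyIgnoreChanges(prior, config, []string{"tags"})
	fmt.Println(reflect.DeepEqual(proposed, prior)) // true
}
```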
@@ -201,13 +201,13 @@ Providers get an opportunity to activate some special behaviors for a particular
 resource instance when they respond to the `PlanResourceChange` function of
 the provider plugin protocol.
-When Terraform Core executes this RPC, it has already selected between
+When OpenTF Core executes this RPC, it has already selected between
 **Create**, **Delete**, or **Update** actions for the particular resource
 instance, and so the special behaviors a provider may activate will typically
 serve as modifiers or tweaks to that base action, and will not allow
 the provider to select another base action altogether. The provider wire
 protocol does not talk about the action types explicitly, and instead only
-implies them via other content of the request and response, with Terraform Core
+implies them via other content of the request and response, with OpenTF Core
 making the final decision about how to react to that information.
 The following is a non-exhaustive list of existing examples of
@@ -218,7 +218,7 @@ that might be useful inspiration for new designs:

more paths to attributes which have changes that the provider cannot
implement as an in-place update due to limitations of the remote system.
In that case, OpenTF Core will replace the **Update** action with one of
the two **Replace** variants, which means that from the provider's
perspective the apply phase will really be two separate calls for the
decomposed **Create** and **Delete** actions (in either order), rather

remote system.

If all of those taken together causes the new object to match the prior
state, OpenTF Core will treat the update as a **No-op** instead.

Of the three genres of special behaviors, provider-driven behaviors is the one
we've made the least use of historically but one that seems to have a lot of
opportunities for future exploration. Provider-driven behaviors can often be
ideal because their effects appear as if they are built in to OpenTF so
that "it just works", with OpenTF automatically deciding and explaining what
needs to happen and why, without any special effort on the user's part.

### Single-run Behaviors

OpenTF Core's "plan" operation takes a set of arguments that we collectively
call "plan options", that can modify OpenTF's planning behavior on a per-run
basis without any configuration changes or special provider behaviors.

As noted above, this particular genre of designs is the most burdensome to
implement because any wrapping software that can ask OpenTF Core to create
a plan must ideally offer some way to set all of the available planning options,
or else some part of OpenTF's functionality won't be available to anyone
using that wrapper.

However, we've seen various situations where single-run behaviors really are the
most appropriate way to handle a particular use-case, because the need for the
behavior originates in some process happening outside of the scope of any
particular OpenTF module or provider.

The following is a non-exhaustive list of existing examples of
single-run behaviors, selected to illustrate some different variations
that might be useful inspiration for new designs:

- The "replace" planning option specifies zero or more resource instance
  addresses.

  For any resource instance specified, OpenTF Core will transform any
  **Update** or **No-op** action for that instance into one of the
  **Replace** actions, thereby allowing an operator to respond to something
  having become degraded in a way that OpenTF and providers cannot
  automatically detect and force OpenTF to replace that object with
  a new one that will hopefully function correctly.
- The "refresh only" planning mode ("planning mode" is a single planning option
  that selects between a few mutually-exclusive behaviors) forces OpenTF
  to treat every resource instance as **No-op**, regardless of what is bound
  to that address in state or present in the configuration.
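To make the two options above concrete, here is a minimal Python sketch of how single-run options can transform per-instance planned actions. This is illustrative only: OpenTF Core is written in Go, and the function name, option shapes, and action strings here are assumptions, not the real implementation.

```python
def apply_plan_options(planned_actions, replace_addrs=(), refresh_only=False):
    """Adjust per-instance planned actions according to single-run options.

    planned_actions: dict mapping resource instance address -> base action
    replace_addrs:   addresses forced to be replaced (the "replace" option)
    refresh_only:    if True, every action becomes "No-op"
    """
    result = {}
    for addr, action in planned_actions.items():
        if refresh_only:
            # Refresh-only mode: never propose changes to real infrastructure.
            result[addr] = "No-op"
        elif addr in replace_addrs and action in ("Update", "No-op"):
            # Forced replacement: upgrade Update/No-op to a Replace variant.
            result[addr] = "DeleteThenCreate"
        else:
            result[addr] = action
    return result
```

Note that both options only rewrite the action already selected for each instance; neither invents new instances, which matches the description above of plan options as per-run modifiers rather than configuration changes.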

## Legacy Operations

Some of the legacy operations OpenTF CLI offers that _aren't_ integrated
with the plan and apply flow could be thought of as various degenerate kinds
of single-run behaviors. Most don't offer any opportunity to preview an effect
before applying it, but do meet a similar set of use-cases where an operator
needs to take some action to respond to changes to the context OpenTF is
in rather than to the OpenTF configuration itself.

Most of these legacy operations could therefore most readily be translated to
single-run behaviors, but before doing so it's worth researching whether people



# OpenTF Plugin Protocol

This directory contains documentation about the physical wire protocol that
OpenTF Core uses to communicate with provider plugins.

Most providers are not written directly against this protocol. Instead, prefer
to use an SDK that implements this protocol and write the provider against
the SDK's API.

----

**If you want to write a plugin for OpenTF, please refer to
[Extending OpenTF](https://www.terraform.io/docs/extend/index.html) instead.**

This documentation is for those who are developing _OpenTF SDKs_, rather
than those implementing plugins.

----

From OpenTF v0.12.0 onwards, OpenTF's plugin protocol is built on
[gRPC](https://grpc.io/). This directory contains `.proto` definitions of
different versions of OpenTF's protocol.

Only `.proto` files published as part of OpenTF release tags are actually
official protocol versions. If you are reading this directory on the `main`
branch or any other development branch then it may contain protocol definitions
that are not yet finalized and that may change before final release.

## RPC Plugin Model

OpenTF plugins are normal executable programs that, when launched, expose
gRPC services on a server accessed via the loopback interface. OpenTF Core
discovers and launches plugins, waits for a handshake to be printed on the
plugin's `stdout`, and then connects to the indicated port number as a
gRPC client.

For this reason, we commonly refer to OpenTF Core itself as the plugin
"client" and the plugin program itself as the plugin "server". Both of these
processes run locally, with the server process appearing as a child process
of the client. OpenTF Core controls the lifecycle of these server processes
and will terminate them when they are no longer required.
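As a sketch of what the client side of that discovery might do, the parser below assumes a pipe-separated handshake line loosely modeled on the go-plugin convention. Since the handshake is not formally documented, the exact field layout here is an assumption for illustration only.

```python
def parse_handshake(line):
    """Parse a hypothetical plugin handshake line like '1|5|tcp|127.0.0.1:1234|grpc'.

    The field layout is an assumption based on the go-plugin convention,
    not a documented contract.
    """
    core_ver, proto_ver, network, address, rpc_type = line.strip().split("|")
    return {
        "core_protocol_version": int(core_ver),
        "plugin_protocol_version": int(proto_ver),
        "network": network,    # e.g. "tcp" or "unix"
        "address": address,    # where the gRPC client should connect
        "rpc_type": rpc_type,  # "grpc" for modern plugins
    }
```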

The startup and handshake protocol is not currently documented. We hope to

more significant breaking changes from time to time while allowing old and
new plugins to be used together for some period.

The versioning strategy described below was introduced with protocol version
5.0 in OpenTF v0.12. Prior versions of OpenTF and prior protocol versions
do not follow this strategy.

The authoritative definition for each protocol version is in this directory

The minor version increases for each change introducing optional new
functionality that can be ignored by implementations of prior versions. For
example, if a new field were added to a response message, it could be a minor
release as long as OpenTF Core can provide some default behavior when that
field is not populated.

The major version increases for any significant change to the protocol where
compatibility is broken. However, OpenTF Core and an SDK may both choose
to support multiple major versions at once: the plugin handshake includes a
negotiation step where client and server can work together to select a
mutually-supported major version.
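The negotiation step can be sketched as picking the newest major version both sides support. This is a simplified illustration; the real handshake carries more detail than a pair of version lists.

```python
def negotiate_major_version(client_majors, server_majors):
    """Select the newest protocol major version both sides support.

    Returns None when there is no overlap, in which case the client
    must refuse to use the plugin.
    """
    common = set(client_majors) & set(server_majors)
    return max(common) if common else None
```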

## Version compatibility for Core, SDK, and Providers

A particular version of OpenTF Core has both a minimum minor version it
requires and a maximum major version that it supports. A particular version of
OpenTF Core may also be able to optionally use a newer minor version when
available, but fall back on older behavior when that functionality is not
available.

The compatible versions for a provider are a list of major and minor version
pairs, such as "4.0", "5.2", which indicates that the provider supports the
baseline features of major version 4 and supports major version 5 including
the enhancements from both minor versions 1 and 2. This provider would
therefore be compatible with an OpenTF Core release that supports only
protocol version 5.0, since major version 5 is supported and the optional
5.1 and 5.2 enhancements will be ignored.
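The worked example above, a provider declaring "4.0" and "5.2" against a Core that speaks protocol 5.0, can be checked with a small sketch. The function name and shapes are illustrative assumptions; the real logic lives inside Core's plugin installer.

```python
def provider_compatible(provider_versions, core_major, core_minor=0):
    """Check whether a provider's protocol version pairs satisfy Core.

    provider_versions: strings like "4.0" or "5.2" (major.minor pairs)
    core_major:        the protocol major version Core wants to use
    core_minor:        the minimum minor version Core requires of that major
    """
    for pair in provider_versions:
        major, minor = (int(p) for p in pair.split("."))
        # Minor versions are additive: declaring 5.2 implies the 5.0
        # baseline, and Core simply ignores optional features it does
        # not need.
        if major == core_major and minor >= core_minor:
            return True
    return False
```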

If OpenTF Core and the plugin do not have at least one mutually-supported
major version, OpenTF Core will return an error from `terraform init`
during plugin installation:

```
Provider "aws" v1.0.0 is not compatible with OpenTF v0.12.0.
Provider version v2.0.0 is the earliest compatible version.
Select it with the following version constraint:
```

```
Provider "aws" v3.0.0 is not compatible with OpenTF v0.12.0.
Provider version v2.34.0 is the latest compatible version. Select
it with the following constraint:

version = "~> 2.34.0"

Alternatively, upgrade to the latest version of OpenTF for compatibility with newer provider releases.
```
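The `~>` ("pessimistic") constraint shown in these messages pins all but the final listed version component: `~> 2.34.0` allows versions from 2.34.0 up to but not including 2.35.0. A rough sketch of those semantics follows; it is an approximation for illustration, and the real constraint parser handles more operators and forms.

```python
def pessimistic_match(constraint, version):
    """Evaluate a '~>' (pessimistic) version constraint.

    '~> 2.34.0' means >= 2.34.0 and < 2.35.0: only the final listed
    component may float upward.
    """
    base = [int(p) for p in constraint.removeprefix("~>").strip().split(".")]
    ver = [int(p) for p in version.split(".")]
    if len(base) >= 2:
        upper = base[:-2] + [base[-2] + 1]  # bump the second-to-last part
    else:
        upper = [base[0] + 1]

    def pad(v, n):
        return v + [0] * (n - len(v))

    n = max(len(base), len(ver), len(upper))
    return pad(base, n) <= pad(ver, n) < pad(upper, n)
```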

The above messages are for plugins installed via `terraform init` from an
OpenTF registry, where the registry API allows OpenTF Core to recognize
the protocol compatibility for each provider release. For plugins that are
installed manually to a local plugin directory, OpenTF Core has no way to
suggest specific versions to upgrade or downgrade to, and so the error message
is more generic:

```
The installed version of provider "example" is not compatible with OpenTF v0.12.0.

This provider was loaded from:
/usr/local/bin/terraform-provider-example_v0.1.0
```

For this reason, SDK developers must be clear in their release notes about
the addition and removal of support for major versions.

OpenTF Core also makes an assumption about major version support when
it produces actionable error messages for users about incompatibilities:
a particular protocol major version is supported for a single consecutive
range of provider releases, with no "gaps".

## Using the protobuf specifications in an SDK

If you wish to build an SDK for OpenTF plugins, an early step will be to
copy one or more `.proto` files from this directory into your own repository
(depending on which protocol versions you intend to support) and use the
`protoc` protocol buffers compiler (with gRPC extensions) to generate suitable

You can find out more about the tool usage for each target language in
[the gRPC Quick Start guides](https://grpc.io/docs/quickstart/).

The protobuf specification for a version is immutable after it has been
included in at least one OpenTF release. Any changes will be documented in
a new `.proto` file establishing a new protocol version.

The protocol buffer compiler will produce some sort of library object appropriate

and copy the relevant `.proto` file into it, creating a separate set of stubs
that can in principle allow your SDK to support both major versions at the
same time. We recommend supporting both the previous and current major versions
together for a while across a major version upgrade so that users can avoid
having to upgrade both OpenTF Core and all of their providers at the same
time, but you can delete the previous major version stubs once you remove
support for that version.


# Wire Format for OpenTF Objects and Associated Values

The provider wire protocol (as of major version 5) includes a protobuf message
type `DynamicValue` which OpenTF uses to represent values from the OpenTF
Language type system, which result from evaluating the content of `resource`,
`data`, and `provider` blocks, based on a schema defined by the corresponding
provider.

Because the structure of these values is determined at runtime, `DynamicValue`
uses one of two possible dynamic serialization formats for the values
themselves: MessagePack or JSON. OpenTF most commonly uses MessagePack,
because it offers a compact binary representation of a value. However, a server
implementation of the provider protocol should fall back to JSON if the
MessagePack field is not populated, in order to support both formats.
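That fallback rule can be sketched as follows. Python is used for illustration; the `msgpack_decoder` parameter is a stand-in for a real MessagePack library's decode function, which a production server would supply.

```python
import json

def decode_dynamic_value(msgpack_bytes, json_bytes, msgpack_decoder=None):
    """Decode a DynamicValue, preferring MessagePack over JSON.

    msgpack_bytes / json_bytes: the two optional payload fields of the
    DynamicValue message (empty bytes when unset).
    """
    if msgpack_bytes:
        # Preferred path: the compact binary encoding.
        return msgpack_decoder(msgpack_bytes)
    if json_bytes:
        # Fallback path: clients that did not populate MessagePack use JSON.
        return json.loads(json_bytes)
    raise ValueError("DynamicValue has neither msgpack nor json payload")
```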

The remainder of this document describes how OpenTF translates from its own
type system into the type system of the two supported serialization formats.
A server implementation of the OpenTF provider protocol can use this
information to decode `DynamicValue` values from incoming messages into
whatever representation is convenient for the provider implementation.

A server implementation must also be able to _produce_ `DynamicValue` messages
as part of various response messages. When doing so, servers should always
use MessagePack encoding, because OpenTF does not consistently support
JSON responses across all request types and all OpenTF versions.

Both the MessagePack and JSON serializations are driven by information the
provider previously returned in a `Schema` message. OpenTF will encode each
value depending on the type constraint given for it in the corresponding schema,
using the closest possible MessagePack or JSON type to the OpenTF language
type. Therefore a server implementation can decode a serialized value using a
standard MessagePack or JSON library and assume it will conform to the
serialization rules described below.

The MessagePack types referenced in this section are those defined in
[The MessagePack type system specification](https://github.com/msgpack/msgpack/blob/master/spec.md#type-system).

Note that MessagePack defines several possible serialization formats for each
type, and OpenTF may choose any of the formats of a specified type.
The exact serialization chosen for a given value may vary between OpenTF
versions, but the types given here are contractual.

Conversely, server implementations that are _producing_ MessagePack-encoded
values may use any serialization format of the specified type that can represent
the value without a loss of range.

### `Schema.Block` Mapping Rules for MessagePack

To represent the content of a block as MessagePack, OpenTF constructs a
MessagePack map that contains one key-value pair per attribute and one
key-value pair per distinct nested block described in the `Schema.Block` message.

The MessagePack serialization of an attribute value depends on the value of the
`type` field of the corresponding `Schema.Attribute` message. The `type` field is
a compact JSON serialization of an
[OpenTF type constraint](https://www.terraform.io/docs/configuration/types.html),
which consists either of a single
string value (for primitive types) or a two-element array giving a type kind
and a type argument.
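For example, a small helper that walks this compact JSON encoding might look like the following. The constraint syntax is as described above; the helper itself and its output strings are illustrative.

```python
import json

def describe_type(type_json):
    """Decode the compact JSON type constraint from a Schema.Attribute."""
    t = json.loads(type_json)
    if isinstance(t, str):
        return t                        # primitive: "string", "number", "bool"
    kind, arg = t                       # compound: two-element array
    if kind in ("list", "set", "map"):
        # The argument is the element type, itself a type constraint.
        return f"{kind} of {describe_type(json.dumps(arg))}"
    if kind == "object":
        attrs = ", ".join(sorted(arg))  # arg maps attribute name -> type
        return f"object with attributes: {attrs}"
    if kind == "tuple":
        return f"tuple of {len(arg)} elements"
    raise ValueError(f"unknown type kind {kind!r}")
```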

The possible type constraints map to MessagePack representations as shown
in the table below:

| `type` Value | MessagePack Representation |
|---|---|
| `"number"` | Either MessagePack integer, MessagePack float, or MessagePack string representing the number. If a number is represented as a string then the string contains a decimal representation of the number which may have a larger mantissa than can be represented by a 64-bit float. |
| `"bool"` | A MessagePack boolean value corresponding to the value. |
| `["list",T]` | A MessagePack array with the same number of elements as the list value, each of which is represented by the result of applying these same mapping rules to the nested type `T`. |
| `["set",T]` | Identical in representation to `["list",T]`, but the order of elements is undefined because OpenTF sets are unordered. |
| `["map",T]` | A MessagePack map with one key-value pair per element of the map value, where the element key is serialized as the map key (always a MessagePack string) and the element value is represented by a value constructed by applying these same mapping rules to the nested type `T`. |
| `["object",ATTRS]` | A MessagePack map with one key-value pair per attribute defined in the `ATTRS` object. The attribute name is serialized as the map key (always a MessagePack string) and the attribute value is represented by a value constructed by applying these same mapping rules to each attribute's own type. |
| `["tuple",TYPES]` | A MessagePack array with one element per element described by the `TYPES` array. The element values are constructed by applying these same mapping rules to the corresponding element of `TYPES`. |
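A toy encoder for a few of the rows above shows the shape of the encoding. It covers only MessagePack's small "fix" formats (positive fixint, fixstr, fixarray, fixmap) plus nil and booleans; a real server would use a full MessagePack library rather than anything like this.

```python
def msgpack_encode(value):
    """Encode a small value using a subset of MessagePack formats."""
    if value is None:
        return b"\xc0"                      # nil: null attribute values
    if value is True:
        return b"\xc3"
    if value is False:
        return b"\xc2"
    if isinstance(value, int) and 0 <= value < 128:
        return bytes([value])               # positive fixint
    if isinstance(value, str):
        data = value.encode("utf-8")
        assert len(data) < 32               # fixstr only, for brevity
        return bytes([0xA0 | len(data)]) + data
    if isinstance(value, list):             # list/set/tuple types -> array
        assert len(value) < 16
        return bytes([0x90 | len(value)]) + b"".join(map(msgpack_encode, value))
    if isinstance(value, dict):             # map/object types and blocks -> map
        assert len(value) < 16
        out = bytes([0x80 | len(value)])
        for k, v in value.items():
            out += msgpack_encode(k) + msgpack_encode(v)
        return out
    raise TypeError(type(value))
```

For example, a block with a single boolean attribute `ok` encodes as a one-entry fixmap whose key is a fixstr, matching the `Schema.Block` mapping rule above.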

The older encoding is for unrefined unknown values and uses an extension
code of zero, with the extension value payload completely ignored.

Newer OpenTF versions can produce "refined" unknown values which carry some
additional information that constrains the possible range of the final value.
Refined unknown values have extension code 12 and then the extension object's
payload is a MessagePack-encoded map using integer keys to represent different

by applying the `Schema.Block` mapping rules described above
to the block's contents based on the `block` field, producing what we'll call
a _block value_ in the table below.

The `nesting` value then in turn defines how OpenTF will collect all of the
individual block values together to produce a single property value representing
the nested block type. For all `nesting` values other than `MAP`, blocks may
not have any labels. For the `nesting` value `MAP`, blocks must have exactly
one label, which is a string we'll call a _block label_ in the table below.

| `nesting` Value | MessagePack Representation |
|---|---|
| `LIST` | A MessagePack array of all of the block values, preserving the order of definition of the blocks in the configuration. |
| `SET` | A MessagePack array of all of the block values in no particular order. |
| `MAP` | A MessagePack map with one key-value pair per block value, where the key is the block label and the value is the block value. |
| `GROUP` | The same as with `SINGLE`, except that if there is no block of that type OpenTF will synthesize a block value by pretending that all of the declared attributes are null and that there are zero blocks of each declared block type. |
For the `LIST` and `SET` nesting modes, OpenTF guarantees that the
MessagePack array will have a number of elements between the `min_items` and
`max_items` values given in the schema, _unless_ any of the block values contain
nested unknown values. When unknown values are present, OpenTF considers
the value to be potentially incomplete and so OpenTF defers validation of
the number of blocks. For example, if the configuration includes a `dynamic`
block whose `for_each` argument is unknown then the final number of blocks is
not predictable until the apply phase.
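The deferral rule above can be sketched as a small decision function. This is an illustrative sketch only, not OpenTF Core's actual implementation; the function name and signature are invented for this example.

```go
package main

import "fmt"

// validateBlockCount reports whether the number of block values satisfies the
// schema's min_items/max_items bounds. When any block value contains a nested
// unknown value, the count is provisionally accepted and validation is
// deferred to the apply phase, as described above.
func validateBlockCount(count, minItems, maxItems int, containsUnknown bool) bool {
	if containsUnknown {
		// e.g. a "dynamic" block with an unknown for_each: the final number
		// of blocks is not yet predictable, so defer the check.
		return true
	}
	return count >= minItems && count <= maxItems
}

func main() {
	fmt.Println(validateBlockCount(0, 1, 3, false)) // false: below min_items
	fmt.Println(validateBlockCount(0, 1, 3, true))  // true: deferred due to unknowns
	fmt.Println(validateBlockCount(2, 1, 3, false)) // true: within bounds
}
```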
@@ -198,7 +198,7 @@ _current_ version of that provider.
### `Schema.Block` Mapping Rules for JSON

To represent the content of a block as JSON, OpenTF constructs a
JSON object that contains one property per attribute and one property per
distinct nested block described in the `Schema.Block` message.
@@ -212,7 +212,7 @@ The properties representing nested block types have property values based on
The JSON serialization of an attribute value depends on the value of the `type`
field of the corresponding `Schema.Attribute` message. The `type` field is
a compact JSON serialization of an
[OpenTF type constraint](https://www.terraform.io/docs/configuration/types.html),
which consists either of a single
string value (for primitive types) or a two-element array giving a type kind
and a type argument.
@@ -226,10 +226,10 @@ table regardless of type:
| `type` Pattern | JSON Representation |
|---|---|
| `"string"` | A JSON string containing the Unicode characters from the string value. |
| `"number"` | A JSON number representing the number value. OpenTF numbers are arbitrary-precision floating point, so the value may have a larger mantissa than can be represented by a 64-bit float. |
| `"bool"` | Either JSON `true` or JSON `false`, depending on the boolean value. |
| `["list",T]` | A JSON array with the same number of elements as the list value, each of which is represented by the result of applying these same mapping rules to the nested type `T`. |
| `["set",T]` | Identical in representation to `["list",T]`, but the order of elements is undefined because OpenTF sets are unordered. |
| `["map",T]` | A JSON object with one property per element of the map value, where the element key is serialized as the property name string and the element value is represented by a property value constructed by applying these same mapping rules to the nested type `T`. |
| `["object",ATTRS]` | A JSON object with one property per attribute defined in the `ATTRS` object. The attribute name is serialized as the property name string and the attribute value is represented by a property value constructed by applying these same mapping rules to each attribute's own type. |
| `["tuple",TYPES]` | A JSON array with one element per element described by the `TYPES` array. The element values are constructed by applying these same mapping rules to the corresponding element of `TYPES`. |
@@ -248,7 +248,7 @@ by applying
to the block's contents based on the `block` field, producing what we'll call
a _block value_ in the table below.

The `nesting` value then in turn defines how OpenTF will collect all of the
individual block values together to produce a single property value representing
the nested block type. For all `nesting` values other than `MAP`, blocks may
not have any labels. For the `nesting` value `MAP`, blocks must have exactly
@@ -260,8 +260,8 @@ one label, which is a string we'll call a _block label_ in the table below.
| `LIST` | A JSON array of all of the block values, preserving the order of definition of the blocks in the configuration. |
| `SET` | A JSON array of all of the block values in no particular order. |
| `MAP` | A JSON object with one property per block value, where the property name is the block label and the value is the block value. |
| `GROUP` | The same as with `SINGLE`, except that if there is no block of that type OpenTF will synthesize a block value by pretending that all of the declared attributes are null and that there are zero blocks of each declared block type. |

For the `LIST` and `SET` nesting modes, OpenTF guarantees that the JSON
array will have a number of elements between the `min_items` and `max_items`
values given in the schema.


@@ -1,7 +1,7 @@
# Releasing a New Version of the Protocol

OpenTF's plugin protocol is the contract between OpenTF's plugins and
OpenTF, and as such releasing a new version requires some coordination
between those pieces. This document is intended to be a checklist to consult
when adding a new major version of the protocol (X in X.Y) to ensure that
everything that needs to be aware of the new version is updated.
@@ -17,7 +17,7 @@ protocol file, and modify it accordingly.
The
[hashicorp/terraform-plugin-go](https://github.com/hashicorp/terraform-plugin-go)
repository serves as the foundation for OpenTF's plugin ecosystem. It needs
to know about the new major protocol version. Either open an issue in that repo
to have the Plugin SDK team add the new package, or if you would like to
contribute it yourself, open a PR. It is recommended that you copy the package
@@ -25,15 +25,15 @@ for the latest protocol version and modify it accordingly.
## Update the Registry's List of Allowed Versions

The OpenTF Registry validates the protocol versions a provider advertises
support for when ingesting providers. Providers will not be able to advertise
support for the new protocol version until it is added to that list.

## Update OpenTF's Version Constraints

OpenTF only downloads providers that speak protocol versions it is
compatible with from the Registry during `terraform init`. When adding support
for a new protocol, you need to tell OpenTF about that protocol version.
Modify the `SupportedPluginProtocols` variable in hashicorp/terraform's
`internal/getproviders/registry_client.go` file to include the new protocol.
@@ -42,7 +42,7 @@ Modify the `SupportedPluginProtocols` variable in hashicorp/terraform's
Use the provider test framework to test a provider written with the new
protocol. This end-to-end test ensures that providers written with the new
protocol work correctly with the test framework, especially in communicating
the protocol version between the test framework and OpenTF.

## Test Retrieving and Running a Provider From the Registry


@@ -1,7 +1,7 @@
# OpenTF Resource Instance Change Lifecycle

This document describes the relationships between the different operations
called on an OpenTF Provider to handle a change to a resource instance.

![](https://user-images.githubusercontent.com/20180/172506401-777597dc-3e6e-411d-9580-b192fd34adba.png)
@@ -28,18 +28,18 @@ The various object values used in different parts of this process are:
* **Prior State**: The provider's representation of the current state of the
  remote object at the time of the most recent read.

* **Proposed New State**: OpenTF Core uses some built-in logic to perform
  an initial basic merger of the **Configuration** and the **Prior State**
  which a provider may use as a starting point for its planning operation.

  The built-in logic primarily deals with the expected behavior for attributes
  marked in the schema as "computed". If an attribute is only "computed",
  OpenTF expects the value to only be chosen by the provider and it will
  preserve any Prior State. If an attribute is marked as "computed" and
  "optional", this means that the user may either set it or may leave it
  unset to allow the provider to choose a value.

  OpenTF Core therefore constructs the proposed new state by taking the
  attribute value from Configuration if it is non-null, and then using the
  Prior State as a fallback otherwise, thereby helping a provider to
  preserve its previously-chosen value for the attribute where appropriate.
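The merge rule above ("configuration if non-null, otherwise prior state") can be sketched in a few lines. This is an illustration of the rule only: real code operates on `cty` values with full type information, whereas here a nil pointer stands in for a null value, and the sketch assumes the config map lists every attribute.

```go
package main

import "fmt"

// proposedNewState sketches OpenTF Core's built-in merge for attributes that
// are both "computed" and "optional": take the attribute value from the
// configuration when it is non-null, otherwise fall back to the prior state,
// preserving the provider-chosen value.
func proposedNewState(config, prior map[string]*string) map[string]*string {
	proposed := make(map[string]*string)
	for name, configVal := range config {
		if configVal != nil {
			proposed[name] = configVal // user set it explicitly
		} else {
			proposed[name] = prior[name] // preserve the provider's prior choice
		}
	}
	return proposed
}

func main() {
	str := func(s string) *string { return &s }
	config := map[string]*string{"ami": str("ami-123"), "id": nil}
	prior := map[string]*string{"ami": str("ami-old"), "id": str("i-abc")}
	merged := proposedNewState(config, prior)
	fmt.Println(*merged["ami"], *merged["id"]) // config wins for ami, prior fills id
}
```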
@@ -55,7 +55,7 @@ The various object values used in different parts of this process are:
must mark these by including unknown values in the state objects.

The distinction between the _Initial_ and _Final_ planned states is that
the initial one is created during OpenTF Core's planning phase based
on a possibly-incomplete configuration, whereas the final one is created
during the apply step once all of the dependencies have already been
updated and so the configuration should then be wholly known.
@@ -67,9 +67,9 @@ The various object values used in different parts of this process are:
actual state of the system, rather than a hypothetical future state.

* **Previous Run State** is the same object as the **New State** from
  the previous run of OpenTF. This is exactly what the provider most
  recently returned, and so it will not take into account any changes that
  may have been made outside of OpenTF in the meantime, and it may conform
  to an earlier version of the resource type schema and therefore be
  incompatible with the _current_ schema.
@@ -77,22 +77,22 @@ The various object values used in different parts of this process are:
provider-specified logic to upgrade the existing data to the latest schema.
However, it still represents the remote system as it was at the end of the
last run, and so still doesn't take into account any changes that may have
been made outside of OpenTF.

* The **Import ID** and **Import Stub State** are both details of the special
  process of importing pre-existing objects into an OpenTF state, and so
  we'll wait to discuss those in a later section on importing.
## Provider Protocol API Functions

The following sections describe the three provider API functions that are
called to plan and apply a change, including the expectations OpenTF Core
enforces for each.
For historical reasons, the original OpenTF SDK is exempt from error
messages produced when certain assumptions are violated, but violating them
will often cause downstream errors nonetheless, because OpenTF's workflow
depends on these contracts being met.

The following section uses the word "attribute" to refer to the named
@@ -116,7 +116,7 @@ expressed via schema alone.
In principle a provider can make any rule it wants here, although in practice
providers should typically avoid reporting errors for values that are unknown.
OpenTF Core will call this function multiple times at different phases
of evaluation, and guarantees to _eventually_ call with a wholly-known
configuration so that the provider will have an opportunity to belatedly catch
problems related to values that are initially unknown during planning.
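The recommended posture can be sketched as follows. The `attrValue` type, the case-insensitive validation rule, and the function names are all invented for this example; real providers validate `cty` values supplied by the plugin framework.

```go
package main

import (
	"fmt"
	"strings"
)

// attrValue is a toy stand-in for a configuration value: Unknown marks a
// placeholder whose final value will only be resolved later in planning.
type attrValue struct {
	Unknown bool
	Str     string
}

// validateName returns no error for unknown values, because OpenTF Core
// guarantees to call validation again with a wholly-known configuration, at
// which point a bad final value can still be caught.
func validateName(v attrValue) error {
	if v.Unknown {
		return nil // can't judge yet; a later call will see the final value
	}
	if strings.ContainsAny(v.Str, " /") {
		return fmt.Errorf("name %q must not contain spaces or slashes", v.Str)
	}
	return nil
}

func main() {
	fmt.Println(validateName(attrValue{Unknown: true})) // <nil>: deferred
	fmt.Println(validateName(attrValue{Str: "bad name"}))
	fmt.Println(validateName(attrValue{Str: "ok-name"})) // <nil>: valid
}
```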
@@ -133,7 +133,7 @@ modify the user's supplied configuration.
### PlanResourceChange

The purpose of `PlanResourceChange` is to predict the approximate effect of
a subsequent apply operation, allowing OpenTF to render the plan for the
user and to propagate the predictable subset of results downstream through
expressions in the configuration.
@@ -159,20 +159,20 @@ following constraints:
`PlanResourceChange` is actually called twice per run for each resource type.

The first call is during the planning phase, before OpenTF prints out a
diff to the user for confirmation. Because no changes at all have been applied
at that point, the given **Configuration** may contain unknown values as
placeholders for the results of expressions that derive from unknown values
of other resource instances. The result of this initial call is the
**Initial Planned State**.

If the user accepts the plan, OpenTF will call `PlanResourceChange` a
second time during the apply step, and that call is guaranteed to have a
wholly-known **Configuration** with any values from upstream dependencies
taken into account already. The result of this second call is the
**Final Planned State**.

OpenTF Core compares the final with the initial planned state, enforcing
the following additional constraints along with those listed above:

* Any attribute that had a known value in the **Initial Planned State** must
@@ -213,49 +213,49 @@ constraints:
After calling `ApplyResourceChange` for each resource instance in the plan,
and dealing with any other bookkeeping to return the results to the user,
a single OpenTF run is complete. OpenTF Core saves the **New State**
in a state snapshot for the entire configuration, so it'll be preserved for
use on the next run.

When the user subsequently runs OpenTF again, the **New State** becomes
the **Previous Run State** verbatim, and passes into `UpgradeResourceState`.
### UpgradeResourceState

Because the state values for a particular resource instance persist in a
saved state snapshot from one run to the next, OpenTF Core must deal with
the possibility that the user has upgraded to a newer version of the provider
since the last run, and that the new provider version has an incompatible
schema for the relevant resource type.

OpenTF Core therefore begins by calling `UpgradeResourceState` and passing
the **Previous Run State** in a _raw_ form, which in current protocol versions
is the raw JSON data structure as was stored in the state snapshot. OpenTF
Core doesn't have access to the previous schema versions for a provider's
resource types, so the provider itself must handle the data decoding in this
upgrade function.

The provider can then use whatever logic is appropriate to update the shape
of the data to conform to the current schema for the resource type. Although
OpenTF Core has no way to enforce it, a provider should only change the
shape of the data structure and should _not_ change the meaning of the data.
In particular, it should not try to update the state data to capture any
changes made to the corresponding remote object outside of OpenTF.

This function then returns the **Upgraded State**, which captures the same
information as the **Previous Run State** but does so in a way that conforms
to the current version of the resource type schema, which therefore allows
OpenTF Core to interact with the data fully for subsequent steps.
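A shape-only upgrade of the raw JSON might look like the following. The resource, its field names, and the version-0-to-1 migration are all hypothetical; the point is that only the structure changes (a scalar becomes a one-element list) while the meaning, and the remote system, are left untouched.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// upgradeRawState sketches an UpgradeResourceState implementation for a
// hypothetical resource whose schema version 0 stored a single "subnet"
// string and whose version 1 stores a "subnets" list. It decodes the raw
// JSON itself, because core has no access to the old schema.
func upgradeRawState(raw []byte, version int) ([]byte, error) {
	var state map[string]interface{}
	if err := json.Unmarshal(raw, &state); err != nil {
		return nil, err
	}
	if version == 0 {
		if subnet, ok := state["subnet"]; ok {
			// Shape change only: wrap the old scalar in a list. No attempt
			// is made to refresh the value against the remote system.
			state["subnets"] = []interface{}{subnet}
			delete(state, "subnet")
		}
	}
	return json.Marshal(state)
}

func main() {
	upgraded, err := upgradeRawState([]byte(`{"id":"net-1","subnet":"10.0.0.0/24"}`), 0)
	fmt.Println(string(upgraded), err) // {"id":"net-1","subnets":["10.0.0.0/24"]} <nil>
}
```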
### ReadResource

Although OpenTF typically expects to have exclusive control over any remote
object that is bound to a resource instance, in practice users may make changes
to those objects outside of OpenTF, causing OpenTF's records of the
object to become stale.

The `ReadResource` function asks the provider to make a best effort to detect
any such external changes and describe them so that OpenTF Core can use
an up-to-date **Prior State** as the input to the next `PlanResourceChange`
call.
@@ -266,7 +266,7 @@ a provider might not be able to detect certain changes. For example:
* There may be new features of the underlying API which the current provider
  version doesn't know how to ask about.

OpenTF Core expects a provider to carefully distinguish between the
following two situations for each attribute:

* **Normalization**: the remote API has returned some data in a different form
  than was recorded in the **Previous Run State**, but the meaning is unchanged.
@@ -282,8 +282,8 @@ following two situations for each attribute:
In this case, the provider should return the value from the remote system,
thereby discarding the value from the **Previous Run State**. When a
provider does this, OpenTF _may_ report it to the user as a change
made outside of OpenTF, if OpenTF Core determined that the detected
change was a possible cause of another planned action for a downstream
resource instance.
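The normalization-versus-change distinction can be sketched per attribute as follows. The case-insensitive comparison stands in for whatever equivalence rule a real provider knows about its remote API; it is an invented example, not a rule from the protocol itself.

```go
package main

import (
	"fmt"
	"strings"
)

// readAttr sketches the ReadResource rule for one string attribute: if the
// remote value differs from the previous run state only in form (here, letter
// case, for a hypothetical case-insensitive API), keep the previously-recorded
// value so no spurious change is reported; otherwise return the remote value,
// discarding the stale one.
func readAttr(previous, remote string) string {
	if strings.EqualFold(previous, remote) {
		return previous // normalization only: the meaning is unchanged
	}
	return remote // a genuine change made outside of OpenTF
}

func main() {
	fmt.Println(readAttr("WebServer", "webserver")) // WebServer (normalized)
	fmt.Println(readAttr("t2.micro", "t3.micro"))   // t3.micro (real drift)
}
```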
@@ -296,7 +296,7 @@ over again.
Nested blocks are a configuration-only construct and so the number of blocks
cannot be changed on the fly during planning or during apply: each block
represented in the configuration must have a corresponding nested object in
the planned new state and new state, or OpenTF Core will raise an error.
If a provider wishes to report about new instances of the sub-object type
represented by nested blocks that are created implicitly during the apply
@@ -315,12 +315,12 @@ follow the same rules as for a nested block type of the same nesting mode.
## Import Behavior

The main resource instance change lifecycle is concerned with objects whose
entire lifecycle is driven through OpenTF, including the initial creation
of the object.

As an aid to those who are adopting OpenTF as a replacement for existing
processes or software, OpenTF also supports adopting pre-existing objects
to bring them under OpenTF's management without needing to recreate them
first.
When using this facility, the user provides the address of the resource When using this facility, the user provides the address of the resource
@@ -331,7 +331,7 @@ by the provider on a per-resource-type basis, which we'll call the
The import process trades the user's **Import ID** for a special
**Import Stub State**, which behaves as a placeholder for the
-**Previous Run State** pretending as if a previous Terraform run is what had
+**Previous Run State** pretending as if a previous OpenTF run is what had
created the object.
### ImportResourceState
@@ -340,7 +340,7 @@ The `ImportResourceState` operation takes the user's given **Import ID** and
uses it to verify that the given object exists and, if so, to retrieve enough
data about it to produce the **Import Stub State**.

-Terraform Core will always pass the returned **Import Stub State** to the
+OpenTF Core will always pass the returned **Import Stub State** to the
normal `ReadResource` operation after `ImportResourceState` returns it, so
in practice the provider may populate only the minimal subset of attributes
that `ReadResource` will need to do its work, letting the normal function
@@ -348,7 +348,7 @@ deal with populating the rest of the data to match what is currently set in
the remote system.

For the same reasons that `ReadResource` is only a _best effort_ at detecting
-changes outside of Terraform, a provider may not be able to fully support
+changes outside of OpenTF, a provider may not be able to fully support
importing for all resource types. In that case, the provider developer must
choose between the following options:
@@ -364,9 +364,9 @@ choose between the following options:
* Return an error explaining why importing isn't possible.

  This is a last resort because of course it will then leave the user unable
-  to bring the existing object under Terraform's management. However, if a
+  to bring the existing object under OpenTF's management. However, if a
  particular object's design doesn't suit importing then it can be a better
  user experience to be clear and honest that the user must replace the object
-  as part of adopting Terraform, rather than to perform an import that will
-  leave the object in a situation where Terraform cannot meaningfully manage
+  as part of adopting OpenTF, rather than to perform an import that will
+  leave the object in a situation where OpenTF cannot meaningfully manage
  it.
@@ -1,15 +1,15 @@
-# How Terraform Uses Unicode
+# How OpenTF Uses Unicode

-The Terraform language uses the Unicode standards as the basis of various
+The OpenTF language uses the Unicode standards as the basis of various
different features. The Unicode Consortium publishes new versions of those
standards periodically, and we aim to adopt those new versions in new
-minor releases of Terraform in order to support additional characters added
+minor releases of OpenTF in order to support additional characters added
in those new versions.

Unfortunately due to those features being implemented by relying on a number
of external libraries, adopting a new version of Unicode is not as simple as
just updating a version number somewhere. This document aims to describe the
-various steps required to adopt a new version of Unicode in Terraform.
+various steps required to adopt a new version of Unicode in OpenTF.

We typically aim to be consistent across all of these dependencies as to which
major version of Unicode we currently conform to. The usual initial driver
@@ -21,7 +21,7 @@ upgrading to a new Go version.
## Unicode tables in the Go standard library

-Several Terraform language features are implemented in terms of functions in
+Several OpenTF language features are implemented in terms of functions in
[the Go `strings` package](https://pkg.go.dev/strings),
[the Go `unicode` package](https://pkg.go.dev/unicode), and other supporting
packages in the Go standard library.
@@ -32,13 +32,13 @@ particular Go version is available in
[`unicode.Version`](https://pkg.go.dev/unicode#Version).
We adopt a new version of Go by editing the `.go-version` file in the root
-of this repository. Although it's typically possible to build Terraform with
+of this repository. Although it's typically possible to build OpenTF with
other versions of Go, that file documents the version we intend to use for
official releases and thus the primary version we use for development and
testing. Adopting a new Go version typically also implies other behavior
changes inherited from the Go standard library, so it's important to review the
relevant version changelog(s) to note any behavior changes we'll need to pass
-on to our own users via the Terraform changelog.
+on to our own users via the OpenTF changelog.

The other subsystems described below should always be set up to match
`unicode.Version`. In some cases those libraries automatically try to align
@@ -55,7 +55,7 @@ HCL uses a superset of that specification for its own identifier tokenization
rules, and so it includes some code derived from the TR31 data tables that
describe which characters belong to the "ID_Start" and "ID_Continue" classes.

-Since Terraform is the primary user of HCL, it's typically Terraform's adoption
+Since OpenTF is the primary user of HCL, it's typically OpenTF's adoption
of a new Unicode version which drives HCL to adopt one. To update the Unicode
tables to a new version:

* Edit `hclsyntax/generate.go`'s line which runs `unicode2ragel.rb` to specify
@@ -67,7 +67,7 @@ tables to a new version:
  order to complete this step.)
* Run all the tests to check for regressions: `go test ./...`
* If all looks good, commit all of the changes and open a PR to HCL.
-* Once that PR is merged and released, update Terraform to use the new version
+* Once that PR is merged and released, update OpenTF to use the new version
  of HCL.
## Unicode Text Segmentation
@@ -76,7 +76,7 @@ _Text Segmentation_ (TR29) is a Unicode standards annex which describes
algorithms for breaking strings into smaller units such as sentences, words,
and grapheme clusters.

-Several Terraform language features make use of the _grapheme cluster_
+Several OpenTF language features make use of the _grapheme cluster_
algorithm in particular, because it provides a practical definition of
individual visible characters, taking into account combining sequences such
as Latin letters with separate diacritics or Emoji characters with gender
@@ -108,27 +108,27 @@ are needed.
Once a new Unicode version is included, the maintainer of that library will
typically publish a new major version that we can depend on. Two different
-codebases included in Terraform all depend directly on the `go-textseg` module
+codebases included in OpenTF all depend directly on the `go-textseg` module
for parts of their functionality:

* [`hashicorp/hcl`](https://github.com/hashicorp/hcl) uses text
  segmentation as part of producing visual column offsets in source ranges
-  returned by the tokenizer and parser. Terraform in turn uses that library
-  for the underlying syntax of the Terraform language, and so it passes on
+  returned by the tokenizer and parser. OpenTF in turn uses that library
+  for the underlying syntax of the OpenTF language, and so it passes on
  those source ranges to the end-user as part of diagnostic messages.
* The third-party module [`github.com/zclconf/go-cty`](https://github.com/zclconf/go-cty)
-  provides several of the Terraform language built in functions, including
+  provides several of the OpenTF language built in functions, including
  functions like `substr` and `length` which need to count grapheme clusters
  as part of their implementation.
-As part of upgrading Terraform's Unicode support we therefore typically also
+As part of upgrading OpenTF's Unicode support we therefore typically also
open pull requests against these other codebases, and then adopt the new
-versions that produces. Terraform work often drives the adoption of new Unicode
+versions that produces. OpenTF work often drives the adoption of new Unicode
versions in those codebases, with other dependencies following along when they
next upgrade.

-At the time of writing Terraform itself doesn't _directly_ depend on
-`go-textseg`, and so there are no specific changes required in this Terraform
+At the time of writing OpenTF itself doesn't _directly_ depend on
+`go-textseg`, and so there are no specific changes required in this OpenTF
codebase aside from the `go.sum` file update that always follows from
changes to transitive dependencies.
@@ -5,14 +5,14 @@ package main
// experimentsAllowed can be set to any non-empty string using Go linker
// arguments in order to enable the use of experimental features for a
-// particular Terraform build:
+// particular OpenTF build:
//
//	go install -ldflags="-X 'main.experimentsAllowed=yes'"
//
// By default this variable is initialized as empty, in which case
// experimental features are not available.
//
-// The Terraform release process should arrange for this variable to be
+// The OpenTF release process should arrange for this variable to be
// set for alpha releases and development snapshots, but _not_ for
// betas, release candidates, or final releases.
//
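A minimal, self-contained sketch of the linker-set-variable pattern this comment describes (the `experimentsAreAllowed` gate here is illustrative, not the codebase's actual function):

```go
package main

import "fmt"

// experimentsAllowed is left empty by default and may be overridden at
// build time via the Go linker's -X flag, e.g.:
//
//	go build -ldflags="-X 'main.experimentsAllowed=yes'"
var experimentsAllowed string

// experimentsAreAllowed treats any non-empty linker-provided value as
// enabling experimental features.
func experimentsAreAllowed() bool {
	return experimentsAllowed != ""
}

func main() {
	fmt.Println(experimentsAreAllowed()) // false in a default build
}
```

Because the variable is only assigned by the linker, ordinary builds get the zero value and the gate stays closed without any runtime configuration.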
@@ -13,7 +13,7 @@ import (
	"github.com/mitchellh/cli"
)

-// helpFunc is a cli.HelpFunc that can be used to output the help CLI instructions for Terraform.
+// helpFunc is a cli.HelpFunc that can be used to output the help CLI instructions for OpenTF.
func helpFunc(commands map[string]cli.CommandFactory) string {
	// Determine the maximum key length, and classify based on type
	var otherCommands []string
@@ -1,6 +1,6 @@
#!/usr/bin/env bash
-# This is a helper script to launch Terraform inside the "dlv" debugger,
+# This is a helper script to launch OpenTF inside the "dlv" debugger,
# configured to await a remote debugging connection on port 2345. You can
# then connect to it using the following command, or its equivalent in your
# debugging frontend of choice:
@@ -1,24 +1,24 @@
# terraform-bundle

`terraform-bundle` was a solution intended to help with the problem
-of distributing Terraform providers to environments where direct registry
-access is impossible or undesirable, created in response to the Terraform v0.10
-change to distribute providers separately from Terraform CLI.
+of distributing OpenTF providers to environments where direct registry
+access is impossible or undesirable, created in response to the OpenTF v0.10
+change to distribute providers separately from OpenTF CLI.

-The Terraform v0.13 series introduced our intended longer-term solutions
+The OpenTF v0.13 series introduced our intended longer-term solutions
to this need:

* [Alternative provider installation methods](https://www.terraform.io/docs/cli/config/config-file.html#provider-installation),
  including the possibility of running a server containing a local mirror of
-  providers you intend to use which Terraform can then use instead of the
+  providers you intend to use which OpenTF can then use instead of the
  origin registry.
* [The `terraform providers mirror` command](https://www.terraform.io/docs/cli/commands/providers/mirror.html),
-  built in to Terraform v0.13.0 and later, can automatically construct a
+  built in to OpenTF v0.13.0 and later, can automatically construct a
  suitable directory structure to serve from a local mirror based on your
-  current Terraform configuration, serving a similar (though not identical)
+  current OpenTF configuration, serving a similar (though not identical)
  purpose to the one `terraform-bundle` had served.

-For those using Terraform CLI alone, without Terraform Cloud, we recommend
+For those using OpenTF CLI alone, without OpenTF Cloud, we recommend
planning to transition to the above features instead of using
`terraform-bundle`.
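For context (not part of this diff), the filesystem-mirror installation method mentioned above is configured in the CLI configuration file; the mirror path below is only an example and must point at a directory populated with `terraform providers mirror`:

```hcl
# CLI configuration sketch (e.g. in ~/.terraformrc); the path is illustrative.
provider_installation {
  filesystem_mirror {
    path    = "/usr/share/terraform/providers"
    include = ["registry.terraform.io/*/*"]
  }
  direct {
    exclude = ["registry.terraform.io/*/*"]
  }
}
```

With this in place, matching providers install from the local mirror while anything else still comes directly from its origin registry.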
@@ -26,8 +26,8 @@ planning to transition to the above features instead of using
However, if you need to continue using `terraform-bundle`
during a transitional period then you can use the version of the tool included
-in the Terraform v0.15 branch to build bundles compatible with
-Terraform v0.13.0 and later.
+in the OpenTF v0.15 branch to build bundles compatible with
+OpenTF v0.13.0 and later.

If you have a working toolchain for the Go programming language, you can
build a `terraform-bundle` executable as follows:
@@ -45,16 +45,16 @@ on how to use `terraform-bundle`, see
[the README from the v0.15 branch](https://github.com/hashicorp/terraform/blob/v0.15/tools/terraform-bundle/README.md).

You can follow a similar principle to build a `terraform-bundle` release
-compatible with Terraform v0.12 by using `--branch=v0.12` instead of
-`--branch=v0.15` in the command above. Terraform CLI versions prior to
+compatible with OpenTF v0.12 by using `--branch=v0.12` instead of
+`--branch=v0.15` in the command above. OpenTF CLI versions prior to
v0.13 have different expectations for plugin packaging due to them predating
-Terraform v0.13's introduction of automatic third-party provider installation.
+OpenTF v0.13's introduction of automatic third-party provider installation.
## Terraform Enterprise Users

-If you use Terraform Enterprise, the self-hosted distribution of
+If you use OpenTF Enterprise, the self-hosted distribution of
Terraform Cloud, you can use `terraform-bundle` as described above to build
-custom Terraform packages with bundled provider plugins.
+custom OpenTF packages with bundled provider plugins.

For more information, see
[Installing a Bundle in Terraform Enterprise](https://github.com/hashicorp/terraform/blob/v0.15/tools/terraform-bundle/README.md#installing-a-bundle-in-terraform-enterprise).