Initial RFC push for new providers + middleware

Signed-off-by: James Humphries <james@james-humphries.co.uk>
This commit is contained in:
James Humphries
2025-07-28 17:09:50 +01:00
parent 3c76c5f419
commit 295bf90283
9 changed files with 1680 additions and 0 deletions


@@ -0,0 +1,157 @@
# OpenTofu Providers
> [!TIP]
> **Short on time?** Skip to the [TL;DR example](#tldr) to see a quick demonstration of what this RFC enables.
## Summary
This RFC proposes **a new type of provider for OpenTofu** that dramatically lowers the barrier to entry for provider development. These "OpenTofu Providers" will coexist with traditional Terraform providers, using a simplified execution model and offering SDKs in multiple programming languages.
This proposal evolved from my [original middleware RFC concept](https://github.com/opentofu/opentofu/pull/3016). After community discussion and exploration of the broader challenges in provider development, it became clear that the solution warranted a completely new provider type rather than just middleware functionality.
> [!NOTE]
> This new provider type is intended to exist alongside existing Terraform providers. There is no plan to stop support for Terraform providers in any way.
> The OpenTofu project should do its best to avoid fragmentation of the ecosystem where possible.
## Composable Architecture
The components defined in this RFC collection are designed to be **composable and modular**. While they work together to create a comprehensive "big picture" solution, each component can theoretically be:
- **Developed independently**: Different teams can work on different components
- **Implemented separately**: Components can be rolled out in phases
- **Replaced with alternatives**: Other proposals or implementations could substitute individual components
**I actively encourage alternative proposals!** Community members are welcome to open RFCs proposing different approaches for any of these components:
- Alternative provider protocols (different from our MessagePack approach)
- Different SDK architectures or language bindings
- Alternative execution models (beyond cmd+args)
- Different registry integration strategies
- Novel approaches to provider extensions
The modular design means that swapping out individual components should have minimal impact on the rest of the system. For example, someone could propose a WebAssembly-based provider protocol while keeping the same SDK patterns, or suggest an entirely different SDK approach while using the same underlying protocol.
This RFC represents **one cohesive proposal** for how these components could work together, but I explicitly designed it to be flexible and evolutionary. The OpenTofu ecosystem benefits from multiple perspectives and approaches.
In order to make this RFC easier to read and implement, I have split it into several focused documents:
### Core Architecture
1. [Provider Protocol](./20250728-opentofu-providers/01-provider-protocol.md) - Defines the stdio-based communication protocol using MessagePack
2. [Provider Client Library](./20250728-opentofu-providers/02-provider-client-library.md) - Multiplexing library that abstracts protocol differences
### Developer Experience
3. [Provider SDK](./20250728-opentofu-providers/03-provider-sdk.md) - Language-specific SDKs with simple, idiomatic APIs
4. [Local Execution](./20250728-opentofu-providers/04-local-execution.md) - Configuration and execution model for cmd+args providers
### Distribution and Discovery
5. [Registry Integration](./20250728-opentofu-providers/05-registry-integration.md) - Registry distribution and discovery mechanisms
### Advanced Features
6. [Provider Extensions](./20250728-opentofu-providers/06-provider-extensions.md) - Extensibility framework and proposed extensions:
- [Middleware Integration](./20250728-opentofu-providers/06a-middleware.md) - Provider-served middleware for governance and compliance
- [State Management Enhancements](./20250728-opentofu-providers/06b-state-management.md) - Advanced state handling capabilities
## Background
Current provider development using terraform-plugin-framework presents significant barriers:
- **Language lock-in**: Providers must be written in Go, excluding developers from other ecosystems
- **Complex toolchain**: Requires GoReleaser, GPG signing, GitHub Actions workflows
- **Slow development cycles**: 15+ minute test feedback loops due to integration test requirements
- **Excessive ceremony**: Scaffolding, boilerplate, and complex local testing setup
These barriers prevent many engineers from creating custom providers for governance, internal tooling, or specialized use cases.
## Proposed Solution
OpenTofu Providers introduce:
1. **Multi-language support**: SDKs for TypeScript, Python, Go, and other languages
2. **Simple execution model**: Providers run as local processes and talk over simple protocols and transports
3. **Progressive distribution**: Simplify the path from local development → package managers → registry publication
4. **Unified interface**: Seamless integration with existing provider ecosystem
## Security Considerations
OpenTofu providers run with the same security model as existing Terraform providers - they execute with the permissions of the user running the `tofu` command and have access to the same system resources.
## TL;DR
This RFC enables you to write OpenTofu providers in any language and run them locally without complex build processes. Here's what it looks like:
**1. Write a simple provider in TypeScript:**
```typescript
// my-provider.ts
import { Provider, StdioTransport } from '@opentofu/provider-sdk';
import { z } from 'zod';

const provider = new Provider({
  name: "myapp",
  version: "1.0.0",
});

provider.resource("user", {
  schema: z.object({
    name: z.string(),
    email: z.string().email(),
    // Computed
    id: z.string().optional(),
  }),
  methods: {
    async create(config) {
      // Call your API to create user
      const user = await api.createUser(config.name, config.email);
      return {
        id: user.id,
        state: { ...config, id: user.id }
      };
    },
    async read(id, config) {
      const user = await api.getUser(id);
      return user ? { ...config, ...user } : null;
    },
    // ... update, delete methods
  }
});

new StdioTransport().connect(provider);
```
**2. Use it in your OpenTofu configuration:**
```hcl
terraform {
  required_providers {
    myapp = {
      cmd  = "npx"
      args = ["tsx", "./my-provider.ts"]
    }
  }
}

resource "myapp_user" "admin" {
  name  = "admin"
  email = "admin@company.com"
}
```
**3. Run OpenTofu normally:**
```bash
tofu plan # Works just like any other provider
tofu apply # Creates the user via your TypeScript code
```
**That's it!** No Go toolchain, no complex build processes, no registry submissions required for local development. When you're ready to share, publish to npm/PyPI/etc., or eventually to the OpenTofu registry.
This RFC also includes middleware capabilities, so providers can intercept operations for cost tracking, approval workflows, and governance policies.
## References
- Original Provider Client SDK Discussion: [Issue #3033](https://github.com/opentofu/opentofu/issues/3033)
- Original "Seasonings" Plugin Protocol: [PR #3051](https://github.com/opentofu/opentofu/pull/3051)
- **DRAFT** Local-exec providers: [PR #3027](https://github.com/opentofu/opentofu/pull/3027)
- **DRAFT** Registry in a file: [PR #2892](https://github.com/opentofu/opentofu/pull/2892)
- Original Middleware RFC: [20250711-Middleware-For-Enhanced-Operations.md](20250711-Middleware-For-Enhanced-Operations.md)


@@ -0,0 +1,164 @@
# Provider Protocol
## Summary
This document defines the stdio-based communication protocol for OpenTofu providers, using MessagePack-RPC for efficient, language-agnostic communication. The protocol enables the SDK to act as a translator between the methods in `providers.Interface` and the simplified methods that provider developers write.
> [!NOTE]
> This protocol specification is based on my original [Seasonings plugin protocol RFC](https://github.com/opentofu/opentofu/pull/3051), adapted for the broader OpenTofu provider ecosystem.
## Protocol Overview
The OpenTofu Provider Protocol uses MessagePack-RPC over standard input/output (stdio) streams to enable communication between OpenTofu Core (via the Provider Client Library) and provider implementations. This approach provides:
- **Language agnostic**: Any language with MessagePack support can implement providers
- **Type preservation**: Maintains OpenTofu's complex type system including unknown values
- **Efficient serialization**: Binary format with smaller message sizes than JSON
- **Extensible design**: Protocol can evolve with new message types and capabilities
## Wire Protocol
### RPC Framework Decision
> [!NOTE]
> I am currently unsure which of two approaches we should propose for the RPC framework:
> 1. **MessagePack-RPC**: Using the existing [msgpack-rpc specification](https://github.com/msgpack-rpc/msgpack-rpc), though this project appears unmaintained
> 2. **Custom JSON-RPC-style**: Implementing JSON-RPC semantics but using MessagePack serialization with our own library
>
> Both approaches use MessagePack for serialization to maintain type fidelity and performance, but differ in message structure and ecosystem support.
### Message Format
The protocol uses MessagePack serialization with RPC semantics. Depending on the final framework decision, messages will follow a pattern as close as possible to the [JSON-RPC 2.0 specification](https://www.jsonrpc.org/specification). This may mean JSON-RPC messages serialized as MessagePack, or a thin wrapper that layers MessagePack on top of JSON-RPC semantics; the exact shape is still undecided.
### Transport Mechanism
The OpenTofu Provider Protocol is designed with transport abstraction in mind, enabling future support for remote transports (HTTP, WebSocket, gRPC, etc.) while starting with stdio for simplicity and broad compatibility.
#### Standard I/O (stdio) Transport
**Initial Implementation Focus**: The stdio transport provides the foundation for provider communication:
**Stream Usage**:
- **stdin/stdout**: Protocol message communication using MessagePack-RPC
- **stderr**: Logging, debugging output, and diagnostic information
This separation follows the same pattern as [MCP servers](https://modelcontextprotocol.io/specification/2025-06-18/basic/transports), providing authors with a clean way to emit logs and debugging information without interfering with protocol communication.
**Benefits of stdio Transport**:
**Simple Execution**: Providers can be executed as standard processes with no special networking requirements:
```bash
./my-provider --config=provider.json
```
**Container Compatibility**: Works seamlessly with containerized providers through Docker's stdio handling:
```bash
docker run --rm -i my-provider:latest
```
**Language Agnostic**: Every programming language has built-in support for reading from stdin and writing to stdout, making provider development accessible across ecosystems.
**Development Friendly**: Easy to test and debug providers using standard shell pipelines and tools.
#### Message Delimitation
All communication over stdio should use self-delimiting messages to ensure reliable parsing. This approach ensures that providers can be implemented without complex message framing logic, while maintaining compatibility with streaming parsers and network transports.
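To make this concrete, here is one way self-delimiting framing could be implemented. This is an illustrative sketch using an explicit 4-byte big-endian length prefix; the function names are hypothetical, and a real implementation could instead rely on MessagePack's own self-delimiting encoding:

```typescript
// Hypothetical framing helpers, not part of any SDK: each message is prefixed
// with its byte length so a streaming reader always knows where it ends.
function encodeFrame(payload: Uint8Array): Uint8Array {
  const frame = new Uint8Array(4 + payload.length);
  new DataView(frame.buffer).setUint32(0, payload.length); // big-endian length
  frame.set(payload, 4);
  return frame;
}

// Consumes complete frames from an accumulating stdin buffer and returns any
// trailing partial bytes, so a streaming reader can resume when more arrive.
function decodeFrames(buffer: Uint8Array): { frames: Uint8Array[]; rest: Uint8Array } {
  const frames: Uint8Array[] = [];
  let offset = 0;
  while (offset + 4 <= buffer.length) {
    const len = new DataView(buffer.buffer, buffer.byteOffset + offset).getUint32(0);
    if (offset + 4 + len > buffer.length) break; // incomplete frame: wait for more bytes
    frames.push(buffer.slice(offset + 4, offset + 4 + len));
    offset += 4 + len;
  }
  return { frames, rest: buffer.slice(offset) };
}
```

Because the reader returns unconsumed bytes, the same logic works unchanged whether stdin delivers one message per chunk or many messages fused together.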
#### Future Transport Options
The protocol design enables future transport implementations:
**HTTP/HTTPS**: RESTful API endpoints for remote provider execution
**WebSocket**: Real-time bidirectional communication for streaming operations
**gRPC**: Integration with existing gRPC infrastructure
**Unix Domain Sockets**: Local high-performance communication
**TCP/TLS**: Direct network communication with encryption
**Transport Abstraction**: The RPC semantics remain identical across all transports - only the underlying message delivery mechanism changes. This allows providers written for stdio to be easily adapted for remote execution.
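A sketch of what such a transport abstraction could look like in the TypeScript SDK; the interface and class names here are hypothetical, not a committed API:

```typescript
// Hypothetical transport abstraction: RPC semantics stay identical across
// implementations, only the byte delivery mechanism differs (stdio, TCP, ...).
interface Transport {
  send(message: Uint8Array): void;
  onMessage(handler: (message: Uint8Array) => void): void;
}

// A trivial in-memory transport, useful for unit-testing a provider without
// spawning a real process or touching stdio at all.
class InMemoryTransport implements Transport {
  private handlers: Array<(m: Uint8Array) => void> = [];
  send(message: Uint8Array): void {
    for (const h of this.handlers) h(message);
  }
  onMessage(handler: (m: Uint8Array) => void): void {
    this.handlers.push(handler);
  }
}
```

A `StdioTransport` would implement the same interface over `process.stdin`/`process.stdout`, which is what lets providers written for stdio be adapted for remote execution without code changes.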
## Type Serialization
### Why MessagePack is Required
MessagePack is not just an optimization choice for OpenTofu providers - it is **technically required** to preserve the semantics of OpenTofu's type system, which is built on the [go-cty library](https://github.com/zclconf/go-cty).
**The Unknown Value Problem**: OpenTofu's two-phase execution model (plan-then-apply) requires "unknown values" (`cty.UnknownVal`) that represent placeholders for values not yet determinable during planning. These are essential when:
- Resource A depends on outputs from Resource B that doesn't exist yet
- Provider validation must work with incomplete information
- Change detection needs to distinguish known vs unknown future values
**JSON Cannot Represent OpenTofu's Type System**:
- **No unknown value support**: JSON has no way to represent `cty.UnknownVal`
- **Type information loss**: Lists/sets both become arrays, maps/objects both become objects
- **Precision loss**: Numbers lose integer vs float distinctions
- **Missing refined unknowns**: Cannot represent constraint metadata on unknown values
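The precision-loss point is easy to demonstrate: JSON numbers are parsed into IEEE-754 doubles, so integers beyond 2^53 silently change value on a round trip:

```typescript
// JSON.parse maps every number onto a 64-bit float, so integers above
// Number.MAX_SAFE_INTEGER (2^53 - 1) lose precision.
const original = "9007199254740993";            // 2^53 + 1
const roundTripped = JSON.parse(original) as number;

console.log(roundTripped);                      // 9007199254740992, off by one
console.log(roundTripped === 9007199254740992); // true
```

cty numbers are arbitrary precision, so any JSON-based wire format would corrupt values like this one.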
**Evidence from OpenTofu Core**: The existing provider protocol prefers MessagePack with JSON only as a compatibility fallback:
```go
// From internal/grpcwrap/provider6.go
switch {
case len(v.Msgpack) > 0:
	res, err = msgpack.Unmarshal(v.Msgpack, ty) // Preferred
case len(v.Json) > 0:
	res, err = ctyjson.Unmarshal(v.Json, ty) // Fallback only
}
```
### MessagePack Extensions for OpenTofu
OpenTofu uses specific MessagePack extension types to preserve type system fidelity, particularly for unknown values and their refinements. The most recent documentation of these extension codes and their payload formats can be found in the [Wire Format for OpenTofu Objects](../docs/plugin-protocol/object-wire-format.md) document.
### Performance Benefits
Beyond correctness, MessagePack provides performance advantages:
- **~30% smaller** than equivalent JSON representations
- **2-5x faster** serialization/deserialization
- **Streaming support**: Self-delimiting format enables efficient parsing
- **Binary efficiency**: Should handle large state files and complex configurations efficiently
## Core Message Types
Here are some examples of a possible set of control messages exchanged before and after the main OpenTofu flow occurs. Treat these as handshakes or ways to shut down gracefully.
### Initialization
**Init Request**: `["init", {"protocol_version": "1.0", "capabilities": [...]}]`
**Init Response**: `{"supported_capabilities": [...], "provider_info": {...}}`
Used for protocol negotiation and capability discovery.
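As a sketch, a provider-side init handler might look like this; the types and names are illustrative, not part of the protocol specification:

```typescript
// Hypothetical init handler: the provider echoes back the subset of requested
// capabilities it actually supports, plus basic provider metadata.
type InitRequest = { protocol_version: string; capabilities: string[] };
type InitResponse = {
  supported_capabilities: string[];
  provider_info: { name: string; version: string };
};

const SUPPORTED = new Set(["middleware_hooks", "state_storage"]);

function handleInit(req: InitRequest): InitResponse {
  return {
    supported_capabilities: req.capabilities.filter((c) => SUPPORTED.has(c)),
    provider_info: { name: "myapp", version: "1.0.0" },
  };
}
```

Intersecting the requested and supported capability sets means both sides end up with an identical view of which extended features are active for the session.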
### Provider Lifecycle
**Shutdown Notification**: `["shutdown", {}]`
Graceful termination request (no response expected).
**Ping Request**: `["ping", {}]`
**Ping Response**: `{"status": "ok"}`
Health check and connectivity verification.
### Provider Operations
All standard `providers.Interface` methods map to protocol messages:
**GetProviderSchema**: Returns provider, resource, and data source schemas
**ValidateProviderConfig**: Validates provider configuration
**ConfigureProvider**: Initializes provider with final configuration
**ReadResource**: Refreshes resource state
**PlanResourceChange**: Plans resource modifications
**ApplyResourceChange**: Applies planned changes
## Protocol Extensions
The protocol is designed to be extensible through additional message types and capabilities. Provider extension mechanisms will be detailed in a separate RFC document ([06-provider-extensions.md](./06-provider-extensions.md)).
**Capability Negotiation**: During initialization, providers return a list of supported capabilities in their init response. This allows OpenTofu Core to determine which extended features are available:
```
Init Response: {
  "supported_capabilities": ["middleware_hooks", "state_storage", "ai"],
  "provider_info": {...}
}
```
Future extensions may include middleware integration, provider state storage, cross-provider communication, and other advanced capabilities as the ecosystem evolves.


@@ -0,0 +1,11 @@
# Provider Client Library
## Summary
This document specifies the provider client library that acts as a multiplexer between different provider protocols, abstracting the differences between Terraform providers (gRPC) and OpenTofu providers (stdio). It also acts as a publicly consumable package that can be used outside of OpenTofu to communicate with providers in a standardized way.
> [!NOTE]
> This RFC builds upon the work being done in [Issue #3033](https://github.com/opentofu/opentofu/issues/3033).
_Coming soon._


@@ -0,0 +1,618 @@
# Provider SDK
## Summary
This document proposes a new set of SDKs that enable developers to create OpenTofu providers in multiple programming languages. The SDK abstracts the underlying protocol complexity, providing a CRUD-centric, schema-first development experience that dramatically lowers the barrier to entry for provider development.
> [!NOTE]
> The SDK design is heavily inspired by the [MCP Server SDKs](https://github.com/modelcontextprotocol/servers), which demonstrate how to create simple, language-idiomatic APIs for protocol-based integrations.
## Design Philosophy
### Core Principles
**1. CRUD-Centric Development**
Developers define simple create, read, update, and delete operations. The SDK should aim to handle the complex mapping to OpenTofu's provider protocol, state management, and resource lifecycle.
**2. Schema-First Approach**
Resource and data source schemas are defined using each language's idiomatic validation libraries (Zod for TypeScript, Pydantic for Python, etc.). The SDK uses these schemas for runtime validation, type safety, and automatic documentation generation. It may also be possible to allow a separate function that returns the schema.
**3. Language-Idiomatic Design**
Each SDK follows the conventions and patterns of its target language ecosystem, making provider development feel natural to developers already familiar with that language.
**4. Progressive Disclosure**
Simple use cases require minimal code, while complex scenarios remain possible through advanced SDK features and escape hatches.
**5. Automatic Features**
The SDK should automatically provide managed resources, data sources, documentation generation, capability negotiation, protocol handling, and more, without requiring developer intervention.
## SDK Architecture
### Transport Abstraction
All SDKs use a transport abstraction that handles the underlying protocol communication. The initial implementation focuses on stdio transport, with future support for other transports:
```typescript
// TypeScript example
import { Provider, StdioTransport } from '@opentofu/provider-sdk';
const provider = new Provider({ name: "custom", version: "1.0.0" });
new StdioTransport().connect(provider);
```
### Protocol Translation Layer
The SDK acts as a translation layer between the developer's simple methods and something that could fulfil the `providers.Interface` specification. See [02-provider-client-library.md](./02-provider-client-library.md) for more details.
For example, a developer should be able to write the following:
```typescript
provider.resource("s3_bucket", {
  // ...
  methods: {
    async create(config) {
      const result = await my_s3_client.createBucket(config);
      return { id: result.id, state: { ...config, id: result.id } };
    }
  }
});
```
The SDK then translates calls to and from such functions, handling:
- `GetProviderSchema` responses with proper schema definitions
- `PlanResourceChange` handling with unknown value management
- `ApplyResourceChange` execution with error handling
- State persistence and retrieval logic
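A sketch of what one piece of this translation could look like, assuming a hypothetical internal shape for the protocol-level response (none of these names are committed API):

```typescript
// Hypothetical sketch: adapting a developer-written create() into a handler
// for the protocol-level ApplyResourceChange message.
type CreateResult = { id: string; state: Record<string, unknown> };
type CreateFn = (config: Record<string, unknown>) => Promise<CreateResult>;

function toApplyHandler(create: CreateFn) {
  return async (plannedState: Record<string, unknown>) => {
    const result = await create(plannedState);
    // The SDK, not the developer, shapes the response into what the
    // protocol expects: the new state plus any diagnostics.
    return { new_state: { ...result.state, id: result.id }, diagnostics: [] };
  };
}
```

The developer only ever sees `create(config)`; everything on the protocol side of this function is the SDK's responsibility.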
## Multi-Language Implementation
### TypeScript SDK
> [!NOTE]
> The proposal for TypeScript may seem more complete, as it is the language I am most familiar with. Other language examples are shown to illustrate the pattern of implementation, and are not set in stone.
The TypeScript SDK could leverage Zod for schema validation, providing a simple way to define resources and data sources on the provider. For example:
```typescript
import { z } from 'zod';
import { Provider, StdioTransport } from '@opentofu/provider-sdk';

const provider = new Provider({
  name: "my-custom-aws-s3",
  version: "0.1.0",
});

// Schema-first resource definition
const s3BucketSchema = z.object({
  bucket: z.string(),
  region: z.string().default("us-east-1"),
  versioning: z.boolean().default(false),
  tags: z.record(z.string()).optional(),
  // Computed fields
  arn: z.string().optional(),
  id: z.string().optional(),
});

provider.resource("s3_bucket", {
  schema: s3BucketSchema,
  methods: {
    async read(id, config) {
      const bucket = await s3Client.getBucket(id);
      if (!bucket) return null;
      return {
        ...config,
        id,
        arn: `arn:aws:s3:::${id}`,
        versioning: bucket.versioningEnabled,
      };
    },
    async create(config) {
      const bucketResult = await s3Client.createBucket({
        Bucket: config.bucket,
        Region: config.region,
      });
      if (config.versioning) {
        await s3Client.putBucketVersioning({
          Bucket: config.bucket,
          VersioningConfiguration: { Status: 'Enabled' },
        });
      }
      return {
        id: config.bucket,
        state: {
          ...config,
          id: config.bucket,
          arn: bucketResult.arn,
        },
      };
    },
    async update(id, config) {
      if (config.tags) {
        await s3Client.putBucketTags({
          Bucket: id,
          Tags: config.tags,
        });
      }
      return {
        ...config,
        id,
        arn: `arn:aws:s3:::${id}`,
      };
    },
    async delete(id) {
      await s3Client.deleteBucket({ Bucket: id });
    },
  },
});

// Data source automatically derived from resource read method, or you can define one explicitly
provider.dataSource("s3_bucket", {
  schema: z.object({
    // ...
  }),
  resolve: async (query) => {
    const bucket = await s3Client.getBucket(query.bucket);
    if (!bucket) return null;
    return {
      ...query,
      id: bucket.id,
      arn: `arn:aws:s3:::${bucket.id}`,
      versioning: bucket.versioningEnabled,
    };
  }
});

new StdioTransport()
  .connect(provider)
  .then(() => {
    console.log("AWS S3 Governance Provider ready");
  })
  .catch((error) => {
    console.error(`Failed to start: ${error}`);
    process.exit(1);
  });
```
### Python SDK
The Python SDK could use Pydantic for schema validation and provide both decorator-based and class-based APIs:
```python
from pydantic import BaseModel, Field
from opentofu_provider_sdk import Provider, StdioTransport

provider = Provider(name="custom", version="1.0.0")


class S3BucketSchema(BaseModel):
    bucket: str
    region: str = "us-east-1"
    versioning: bool = False
    tags: dict[str, str] = Field(default_factory=dict)
    # Computed fields
    arn: str | None = None
    id: str | None = None


@provider.resource("s3_bucket", schema=S3BucketSchema)
class S3BucketResource:
    async def read(self, id: str, config: S3BucketSchema) -> S3BucketSchema | None:
        bucket = await s3_client.get_bucket(id)
        if not bucket:
            return None
        # model_copy avoids duplicate-keyword errors from re-passing computed fields
        return config.model_copy(update={
            "id": id,
            "arn": f"arn:aws:s3:::{id}",
            "versioning": bucket.versioning_enabled,
        })

    async def create(self, config: S3BucketSchema) -> dict:
        await s3_client.create_bucket(
            Bucket=config.bucket,
            Region=config.region,
        )
        if config.versioning:
            await s3_client.put_bucket_versioning(
                Bucket=config.bucket,
                VersioningConfiguration={"Status": "Enabled"},
            )
        return {
            "id": config.bucket,
            "state": config.model_copy(update={
                "id": config.bucket,
                "arn": f"arn:aws:s3:::{config.bucket}",
            }),
        }

    async def update(self, id: str, config: S3BucketSchema) -> S3BucketSchema:
        if config.tags:
            await s3_client.put_bucket_tags(
                Bucket=id,
                Tags=config.tags,
            )
        return config.model_copy(update={
            "id": id,
            "arn": f"arn:aws:s3:::{id}",
        })

    async def delete(self, id: str) -> None:
        await s3_client.delete_bucket(Bucket=id)


if __name__ == "__main__":
    transport = StdioTransport()
    transport.connect(provider)
```
### Go SDK
The Go SDK provides a familiar interface for Go developers while maintaining the simplified CRUD approach:
```go
package main

import (
	"context"
	"fmt"

	sdk "github.com/opentofu/provider-sdk-go"
)

type S3BucketConfig struct {
	Bucket     string            `json:"bucket" validate:"required"`
	Region     string            `json:"region" default:"us-east-1"`
	Versioning bool              `json:"versioning" default:"false"`
	Tags       map[string]string `json:"tags,omitempty"`
	// Computed
	ARN string `json:"arn,omitempty"`
	ID  string `json:"id,omitempty"`
}

func main() {
	provider := sdk.NewProvider("custom", "1.0.0")

	provider.Resource("s3_bucket", &sdk.ResourceDefinition{
		Schema: &S3BucketConfig{},
		Methods: &sdk.ResourceMethods{
			ReadFunc: func(ctx context.Context, id string, config interface{}) (interface{}, error) {
				cfg := config.(*S3BucketConfig)
				bucket, err := s3Client.GetBucket(ctx, id)
				if err != nil {
					return nil, err
				}
				if bucket == nil {
					return nil, nil
				}
				return &S3BucketConfig{
					Bucket:     cfg.Bucket,
					Region:     cfg.Region,
					Versioning: bucket.VersioningEnabled,
					Tags:       cfg.Tags,
					ARN:        fmt.Sprintf("arn:aws:s3:::%s", id),
					ID:         id,
				}, nil
			},
			CreateFunc: func(ctx context.Context, config interface{}) (*sdk.CreateResult, error) {
				cfg := config.(*S3BucketConfig)
				err := s3Client.CreateBucket(ctx, &s3.CreateBucketInput{
					Bucket: &cfg.Bucket,
					Region: &cfg.Region,
				})
				if err != nil {
					return nil, err
				}
				if cfg.Versioning {
					err = s3Client.PutBucketVersioning(ctx, &s3.PutBucketVersioningInput{
						Bucket: &cfg.Bucket,
						VersioningConfiguration: &s3.VersioningConfiguration{
							Status: "Enabled",
						},
					})
					if err != nil {
						return nil, err
					}
				}
				return &sdk.CreateResult{
					ID: cfg.Bucket,
					State: &S3BucketConfig{
						Bucket:     cfg.Bucket,
						Region:     cfg.Region,
						Versioning: cfg.Versioning,
						Tags:       cfg.Tags,
						ARN:        fmt.Sprintf("arn:aws:s3:::%s", cfg.Bucket),
						ID:         cfg.Bucket,
					},
				}, nil
			},
			UpdateFunc: func(ctx context.Context, id string, config interface{}) (interface{}, error) {
				cfg := config.(*S3BucketConfig)
				if len(cfg.Tags) > 0 {
					err := s3Client.PutBucketTags(ctx, &s3.PutBucketTagsInput{
						Bucket: &id,
						Tags:   cfg.Tags,
					})
					if err != nil {
						return nil, err
					}
				}
				return &S3BucketConfig{
					Bucket:     cfg.Bucket,
					Region:     cfg.Region,
					Versioning: cfg.Versioning,
					Tags:       cfg.Tags,
					ARN:        fmt.Sprintf("arn:aws:s3:::%s", id),
					ID:         id,
				}, nil
			},
			DeleteFunc: func(ctx context.Context, id string) error {
				return s3Client.DeleteBucket(ctx, &s3.DeleteBucketInput{
					Bucket: &id,
				})
			},
		},
	})

	transport := sdk.NewStdioTransport()
	if err := transport.Connect(provider); err != nil {
		panic(err)
	}
}
```
## Resource Patterns
### Basic Resource Lifecycle
The SDK handles the complete resource lifecycle through four core operations:
**Create**: Provisions a new resource and returns its initial state
```typescript
async create(config) {
  const result = await api.createResource(config);
  return {
    id: result.id,
    state: { ...config, id: result.id, computed_field: result.value }
  };
}
```
**Read**: Refreshes resource state from the remote system
```typescript
async read(id, config) {
  const resource = await api.getResource(id);
  return resource ? { ...config, ...resource } : null;
}
```
**Update**: Modifies an existing resource
```typescript
async update(id, config) {
  await api.updateResource(id, config);
  return { ...config, id, updated_at: new Date().toISOString() };
}
```
**Delete**: Removes the resource
```typescript
async delete(id) {
  await api.deleteResource(id);
  // No return value needed
}
```
### Advanced Resource Patterns
**Conditional Operations**: Handle resources that may not support all operations: either throw an explicit error or omit the method entirely.
```typescript
async update(id, config) {
  if (!api.supportsUpdate()) {
    throw new Error("This resource does not support updates");
  }
  // ... normal update logic
}
```
**Error Handling**: Provide meaningful error messages
```typescript
async create(config) {
  try {
    const result = await api.createResource(config);
    return { id: result.id, state: { ...config, id: result.id } };
  } catch (error) {
    if (error.code === 'ALREADY_EXISTS') {
      throw new Error(`Resource with name '${config.name}' already exists`);
    }
    throw error;
  }
}
```
## Data Sources
### Automatic Data Source Generation
The SDK should be able to automatically generate data sources from resource read methods. This is a change from existing functionality, and I am particularly curious what others think here. Authors should be able to define a specific data source method or have it inferred from the resource `read` method.
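One way this inference could work, sketched with hypothetical names; the assumption here is that the data source query carries the resource id:

```typescript
// Hypothetical sketch: deriving a data source resolver from a resource's
// read() method.
type ReadFn = (id: string, config: Record<string, unknown>) => Promise<Record<string, unknown> | null>;

function dataSourceFromRead(read: ReadFn) {
  return async (query: { id: string } & Record<string, unknown>) => {
    const { id, ...rest } = query;
    const state = await read(id, rest);
    if (state === null) {
      // Unlike a resource refresh, a missing object is an error for a data source.
      throw new Error(`data source lookup failed: no object with id ${id}`);
    }
    return state;
  };
}
```

The interesting design question is exactly this divergence: `read` returning `null` means "drift, remove from state" for a resource, but must surface as an error for a data source.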
### Custom Data Sources
For data sources that don't correspond to resources:
```typescript
provider.dataSource("s3_buckets", {
  schema: z.object({
    region: z.string().optional(),
    buckets: z.array(z.object({
      name: z.string(),
      region: z.string(),
      creation_date: z.string(),
    })).optional(),
  }),
  resolve: async (query) => {
    const buckets = await s3Client.listBuckets({
      Region: query.region,
    });
    return {
      region: query.region,
      buckets: buckets.map(bucket => ({
        name: bucket.Name,
        region: bucket.Region,
        creation_date: bucket.CreationDate.toISOString(),
      })),
    };
  }
});
```
## Provider-Defined Functions
The SDK supports provider-defined functions for custom computation:
```typescript
provider.function("base64_encode", {
  parameters: [
    { name: "input", type: "string", description: "String to encode" }
  ],
  returnType: "string",
  implementation: async (input: string) => {
    return Buffer.from(input, 'utf8').toString('base64');
  }
});

provider.function("generate_password", {
  parameters: [
    { name: "length", type: "number", description: "Password length" },
    { name: "special_chars", type: "bool", description: "Include special characters", default: true }
  ],
  returnType: "string",
  implementation: async (length: number, specialChars: boolean) => {
    const charset = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
    const special = "!@#$%^&*()_+-=[]{}|;:,.<>?";
    const chars = specialChars ? charset + special : charset;
    let password = "";
    for (let i = 0; i < length; i++) {
      password += chars.charAt(Math.floor(Math.random() * chars.length));
    }
    return password;
  }
});
```
## Automatic Features
### Documentation Generation
We should attempt to find a way to generate documentation based on what the author provides in the provider source, with the option of supplying manual documentation to override it. Either way, documentation should be a first-class citizen of this design.
One possible approach is to generate documentation based on the resource objects passed to the server object.
```typescript
// Schema definitions become documentation
const schema = z.object({
  bucket: z.string().describe("The name of the S3 bucket"),
  region: z.string().default("us-east-1").describe("AWS region for the bucket"),
  versioning: z.boolean().default(false).describe("Enable versioning on the bucket"),
});

// Method implementations include examples
provider.resource("s3_bucket", {
  schema,
  examples: [
    {
      title: "Basic S3 bucket",
      config: {
        bucket: "my-app-bucket",
        region: "us-west-2",
      }
    },
    {
      title: "S3 bucket with versioning",
      config: {
        bucket: "my-versioned-bucket",
        versioning: true,
      }
    }
  ],
  methods: { /* ... */ }
});
```
### Capability Negotiation
The SDK automatically handles protocol capability negotiation during initialization, enabling or disabling features based on what the OpenTofu Core version supports.
### Validation and Type Safety
Runtime validation occurs automatically using the defined schemas, providing clear error messages for configuration issues before resources are created or updated.
## Error Handling and Diagnostics
### Error Types
The SDK could provide structured error handling with different error types:
```typescript
import { ProviderError, ResourceError, ValidationError } from '@opentofu/provider-sdk';

async create(config) {
  try {
    // Validation happens automatically via schema
    const result = await api.createResource(config);
    return { id: result.id, state: { ...config, id: result.id } };
  } catch (error) {
    if (error.code === 'INSUFFICIENT_PERMISSIONS') {
      throw new ProviderError(
        'Insufficient permissions to create S3 bucket. Check AWS credentials.',
        { detail: error.message }
      );
    }
    if (error.code === 'BUCKET_ALREADY_EXISTS') {
      throw new ResourceError(
        'Bucket name already exists. S3 bucket names must be globally unique.',
        { attribute: 'bucket' }
      );
    }
    throw error;
  }
}
```
## Implementation Considerations
### SDK Distribution
Each language SDK is distributed through that language's standard package manager:
- **TypeScript/JavaScript**: npm package `@opentofu/provider-sdk`
- **Python**: PyPI package `opentofu-provider-sdk`
- **Go**: Go module `github.com/opentofu/provider-sdk-go`
### Versioning Strategy
SDKs follow semantic versioning with compatibility guarantees:
- Major versions may introduce breaking changes to the developer API
- Minor versions add new features while maintaining backward compatibility
- Patch versions contain bug fixes and performance improvements
The SDK version is independent of the protocol version, allowing SDK improvements without protocol changes.

# Local Execution
## Summary
This document defines the configuration and execution model for running OpenTofu providers by providing a command and a set of arguments, enabling local development, package-manager distribution, and flexible deployment patterns.
## Configuration Model
### Design Philosophy: Familiarity Over Innovation
**Key Principle**: Use existing, familiar configuration structures rather than introducing new syntax. Users already understand the `required_providers` block from years of Terraform/OpenTofu usage. This RFC extends that familiar pattern rather than creating entirely new configuration mechanisms.
### Relationship to Other Proposals
This RFC is one of several approaches being explored for flexible provider execution. The two proposals linked below explore a different approach in combination, and are recommended reading before this document:
- **[Local-exec providers (PR #3027)](https://github.com/opentofu/opentofu/pull/3027)**: Explores opt-in local execution with different configuration patterns
- **[Registry in a file (PR #2892)](https://github.com/opentofu/opentofu/pull/2892)**: Proposes `.opentofu.deps.hcl` for source mapping and local overrides
This RFC focuses on extending the existing `required_providers` syntax to be more flexible, while the other proposals introduce new configuration files or mechanisms. Each approach has trade-offs that should be discussed. In an ideal world we would find a way to introduce both approaches and this proposal does not block the concept of a Registry in a file.
### Basic Configuration
Providers can be configured using the familiar `required_providers` block, extended with `cmd` and `args` fields:
```hcl
terraform {
  required_providers {
    # Traditional registry-based provider (unchanged)
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }

    # Local script execution (new capability)
    governance = {
      cmd  = "python3"
      args = ["./governance-provider.py", "--stdio"]
    }

    # Package manager execution (new capability)
    custom = {
      cmd  = "npx"
      args = ["-y", "@mycompany/provider@1.2.3", "--stdio"]
    }

    # Docker container execution (new capability)
    scanner = {
      cmd  = "docker"
      args = ["run", "--rm", "-i", "security-scanner:latest"]
    }
  }
}
```
The same `required_providers` block supports both traditional (`source`+`version`) and local (`cmd`+`args`) providers, allowing mixed usage patterns and gradual adoption. Specifying both sets of fields for the same provider should result in an error.
### Advanced Configuration (Possible extension)
For cases requiring environment customization, the `env` field provides environment variable overrides:
```hcl
terraform {
  required_providers {
    governance = {
      cmd  = "python3"
      args = ["./governance-provider.py", "--stdio"]
      env = {
        GOVERNANCE_API_KEY   = var.api_key
        GOVERNANCE_LOG_LEVEL = "debug"
        PATH                 = "${env.PATH}:/opt/custom/bin"
      }
    }
  }
}
```
The provider process receives the parent process environment merged with the variables defined in the block above; where both define the same variable, the value from configuration wins.
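Concretely, this merge rule amounts to a shallow merge where configuration wins. A minimal sketch (the `mergeProviderEnv` name is illustrative, not OpenTofu internals):

```typescript
// Illustrative sketch: configuration-defined variables override the inherited
// parent environment; everything not overridden is passed through unchanged.
function mergeProviderEnv(
  parentEnv: Record<string, string>,
  configEnv: Record<string, string>,
): Record<string, string> {
  return { ...parentEnv, ...configEnv }; // later spread wins on conflicts
}
```

This is also why the example above interpolates `${env.PATH}`: setting `PATH` in configuration replaces the inherited value entirely, so extending it must be done explicitly.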
## Working Directory
### Default Behavior
**The provider process working directory is set to the directory containing the Terraform configuration file**, not the directory where `tofu` was invoked. This ensures:
1. **Predictable relative paths**: `./provider.py` is always resolved relative to the `.tf` file
2. **Consistency**: Same behavior regardless of where `tofu` is run
3. **Module support**: Providers in modules run in the module directory
### Example Layout
```bash
# Directory structure:
/project/
  main.tf            # Has provider config with cmd="./provider.py"
  provider.py        # The actual provider script
  modules/
    custom/
      main.tf        # Child module has provider config with cmd="./local-provider"
      local-provider
```
### Working Directory for Complex Configurations
For root module configurations, the working directory is the root module directory. For child modules with their own provider configurations, the working directory is the child module directory.
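A minimal sketch of this resolution rule, assuming relative commands are resolved against the declaring module's directory (`resolveCommand` is a hypothetical helper, not OpenTofu internals):

```typescript
import * as path from "node:path";

// Illustrative sketch: commands written as relative paths ("./", "../") are
// resolved against the directory of the module that declared the provider,
// while bare commands (python3, docker, npx) are left for $PATH lookup.
// path.posix is used here so the example behaves identically on any OS.
function resolveCommand(moduleDir: string, cmd: string): string {
  return cmd.startsWith("./") || cmd.startsWith("../")
    ? path.posix.resolve(moduleDir, cmd)
    : cmd;
}
```

With the example layout above, `./provider.py` resolves inside `/project`, and the child module's `./local-provider` resolves inside `/project/modules/custom`, regardless of where `tofu` was invoked.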
### Version Management
In the case of local providers, OpenTofu assumes that versioning is handled externally by another package manager or versioning system.
For example:
- NPX: `@company/provider@1.2.3` - exact version
- Docker: `image:tag` - image tags
- Local scripts: No versioning (use version control)

# Registry Integration
## Summary
This document defines a simplified registry integration model for the newly proposed OpenTofu providers, focused on **discovery rather than distribution**. The registry serves as a lightweight catalog that helps users find providers and learn how to use them, while leaving versioning, installation, and distribution to external systems (package managers, version control, etc.).
## Design Philosophy: Discovery Over Distribution
### Core Principle
The registry's primary purpose for these new providers is **provider discovery and documentation**, not complex version management or binary distribution. Users discover providers through the registry, then follow installation instructions from the provider's documentation.
### Why This Approach
1. **Simplicity**: Avoids complex versioning, security, and distribution infrastructure
2. **Flexibility**: Providers can use any distribution method (npm, PyPI, Docker, Git, etc.)
3. **Security**: Eliminates registry as a potential attack vector for malicious code execution
4. **Maintenance**: Minimal registry infrastructure and maintenance burden
5. **Innovation**: Allows experimentation with different distribution approaches
### Relationship to Existing Registry
This proposal coexists with the existing OpenTofu registry:
- **Existing registry**: Continues to serve traditional binary providers with full version management
- **New registry section**: Adds a lightweight discovery-only section for local execution providers
- **User choice**: Users can choose between traditional providers (complex, secure) and local providers (simple, flexible)
## Registry Metadata Format
### Minimal Provider Entry
```json
{
  "name": "myapp",
  "namespace": "myorg",
  "description": "Internal provider for myapp",
  "repository": {
    "type": "github",
    "url": "https://github.com/myorg/myapp-provider"
  },
  "documentation": {
    "readme": "https://github.com/myorg/myapp-provider/blob/main/README.md"
  },
  "tags": ["governance", "policy", "internal"],
  "created_at": "2025-01-15T10:30:00Z",
  "updated_at": "2025-01-20T14:22:00Z"
}
```
### Required Fields
- **`name`**: Provider name (used in `required_providers`)
- **`namespace`**: Organization or user namespace
- **`description`**: Brief description of provider functionality
- **`repository.url`**: Link to source repository
- **`documentation.readme`**: Link to installation and usage documentation
### Optional Fields
- **`repository.type`**: Repository type (`github`, `gitlab`, `bitbucket`, `git`)
- **`tags`**: Searchable keywords
- **`created_at`** / **`updated_at`**: Timestamps for registry management
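A submission form or CI check could validate the required fields mechanically. A hedged sketch (helper names are illustrative, not part of any existing registry tooling):

```typescript
// Illustrative sketch: checking a registry entry for the required fields
// listed above. Dotted paths address nested objects like repository.url.
const REQUIRED_FIELDS = [
  "name",
  "namespace",
  "description",
  "repository.url",
  "documentation.readme",
];

function getPath(obj: unknown, dotted: string): unknown {
  let cur: any = obj;
  for (const key of dotted.split(".")) {
    if (cur == null) return undefined;
    cur = cur[key];
  }
  return cur;
}

function missingFields(entry: Record<string, unknown>): string[] {
  return REQUIRED_FIELDS.filter((p) => {
    const v = getPath(entry, p);
    return v === undefined || v === null || v === "";
  });
}
```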
### What's NOT Included
- **No version information**: Registry is versionless
- **No download URLs**: Users get software from repository or package managers
- **No checksums**: Integrity handled by external systems
- **No dependency information**: Dependencies managed by package managers
- **No execution metadata**: cmd+args specified by users locally
## Versionless Provider Model
### How It Works
```hcl
terraform {
  required_providers {
    # User discovers provider in registry
    governance = {
      # Registry provides repository URL, user follows README for installation
      cmd  = "python3"
      args = ["./governance-provider.py", "--stdio"]
    }
  }
}
```
### Benefits of Versionless Registry
1. **Eliminates Version Conflicts**: No complex dependency resolution in registry
2. **Flexibility**: Providers can use any versioning scheme they choose
3. **Immediate Updates**: Provider updates don't require registry submissions
4. **Reduced Complexity**: Registry doesn't need to understand different versioning systems
5. **User Control**: Users explicitly choose which version/installation method to use
## Discovery Workflow
### Registry API
> [!NOTE]
> The registry API for this discovery workflow is still an open design question, possibly warranting a new v2 API. In particular, it is currently unclear how, or even whether, the registry should communicate information back to the OpenTofu binary. This section will be reworked as that discussion progresses.
## Provider Submission Process
The review process should be very similar to today's, but the submission form will require more fields. Due to the nature of the new providers, we cannot infer as much information automatically as we currently do. The flexibility given to provider developers slightly increases the registry's burden, but only by a small number of additional manual steps.
Note: there are existing tools in the MCP marketplace ecosystem that read a README.md with an LLM to infer specifics; this is an approach we could adopt in the long term.
### Namespace Management
By allowing provider authors to submit providers from non-GitHub sources, we break the assumption that a namespace maps one-to-one to a GitHub organization. We should discuss how to decouple the two and allow namespaces to be claimed on a first-come, first-served basis, similar to other ecosystems (Docker Hub, npm, etc.).
## Future Enhancements
### Potential Future Additions
**Enhanced Metadata** (if valuable):
- Provider category/type classification
- Minimum OpenTofu version requirements
- Provider protocol version
- Example configuration snippets
**Community Features** (if needed):
- Provider ratings/reviews
- Download/usage statistics
- Provider health monitoring
- Community discussion integration
**Registry Federation** (if ecosystem grows):
- Support for multiple registry sources
- Private/internal registry mirrors
- Registry synchronization protocols
### What We Shouldn't Add
**Complex Features to Avoid**:
- Version dependency resolution
- Binary/package distribution
- Security scanning or verification
- Complex approval workflows
- Usage analytics or tracking
## Versioning approaches
I propose starting versionless, and discussing with the community through the RFC process how versioning could later be introduced.
## Conclusion
This registry integration model prioritizes simplicity and discovery over complex distribution mechanics. By focusing on helping users find and learn about providers rather than managing their installation and versioning, we create a lightweight system that supports the diverse needs of the OpenTofu provider ecosystem while maintaining security and flexibility.
The versionless approach eliminates many common registry problems (dependency hell, version conflicts, complex resolution) while empowering providers to use whatever distribution and versioning approach works best for their use case.

# Provider Extensions
## Summary
This document outlines the extensibility philosophy for the OpenTofu provider protocol and introduces two major potential extensions: middleware integration and state management enhancements. The goal is to design an evolving protocol where new features can be added over time without breaking backward compatibility.
### HTTP-like Protocol Evolution
Similar to how HTTP has evolved over time (HTTP/1.0 → HTTP/1.1 → HTTP/2 → HTTP/3) while maintaining backward compatibility, the OpenTofu provider protocol should be designed to grow incrementally. Old versions of OpenTofu should be able to communicate with newer providers by simply using the subset of functionality they understand. This approach was inspired by @apparentlymart's insights on protocol evolution patterns.
### Core Compatibility Principle
The fundamental principle is that **any OpenTofu version should be able to talk to any provider version**. If OpenTofu doesn't understand middleware hooks, it simply doesn't use them. If a provider doesn't support batching, operations happen one at a time. The core CRUD operations remain universal and always work.
## Proposed Extensions
We are proposing two major extensions to demonstrate how the protocol can evolve:
### 1. Middleware Integration
Middleware allows interception and modification of provider operations, enabling powerful features like cost tracking, approval gates, and policy enforcement without requiring changes to individual providers.
**See**: [06a-middleware.md](./06a-middleware.md) for detailed specification.
### 2. State Management Enhancements
Enhanced state management capabilities including local caching, state transformation, and improved dependency tracking to optimize performance and provide new capabilities.
**See**: [06b-state-management.md](./06b-state-management.md) for detailed specification.
## Protocol Evolution Strategy
### Graceful Degradation
The protocol should be designed so that:
1. **Old OpenTofu + New Provider**: Works perfectly using basic functionality
2. **New OpenTofu + Old Provider**: Works perfectly, advanced features simply aren't available
3. **New OpenTofu + New Provider**: Can negotiate and use advanced features
### Feature Detection
Feature detection uses the capabilities system defined in the [provider protocol document](./01-provider-protocol.md). During the initialization handshake, providers declare their supported capabilities:
**Init Request** from OpenTofu:
```json
["init"]
```
**Init Response** from Provider:
```json
{
  "supported_capabilities": [
    "managed_resources",
    "functions",
    "middleware_hooks",
    "state_caching"
  ],
  "provider_info": {...}
}
```
This allows:
- **Providers** to declare which capabilities they implement
- **OpenTofu** to use only the capabilities it understands
- **Graceful degradation** when OpenTofu doesn't understand a capability
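The negotiation itself reduces to a set intersection. A minimal sketch (illustrative names, not actual core internals):

```typescript
// Illustrative sketch: the effective feature set is the intersection of what
// this OpenTofu build understands and what the provider declared at init time.
// Capabilities core doesn't recognize are silently dropped.
function effectiveCapabilities(coreUnderstands: string[], providerDeclared: string[]): string[] {
  const known = new Set(coreUnderstands);
  return providerDeclared.filter((cap) => known.has(cap));
}
```

For example, an older core that only understands `managed_resources` and `functions` would simply never send `middleware_hook` messages, which is the graceful-degradation behavior described above.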
## Implementation Considerations
### Incremental Development
Rather than building a complex extension system upfront, features could be added incrementally over time.
### Backward Compatibility Testing
Any protocol changes should be tested to ensure:
- Old OpenTofu versions can still use new providers
- New OpenTofu versions gracefully handle old providers
- Core functionality is never compromised
### Provider Choice
Providers should have complete freedom to choose which enhancements to implement. A simple provider might only implement basic CRUD, while an enterprise provider might implement the full feature set. It's also possible for a provider to just handle functions.
## Examples of Future Features
### Middleware Configuration
```hcl
terraform {
  required_providers {
    myapp = {
      cmd  = "python3"
      args = ["./myapp-provider.py"]

      # Only used if both OpenTofu and provider support middleware
      middleware = {
        cost_tracking = {
          budget_limit = 1000
        }
        approval_gate = {
          require_approval = true
        }
      }
    }
  }
}
```
### Batch Operation Support
```json
{
  "batch_request": {
    "operations": [
      {"type": "create", "resource": "user1", "config": {...}},
      {"type": "create", "resource": "user2", "config": {...}},
      {"type": "update", "resource": "user3", "config": {...}}
    ]
  }
}
```
### State Caching Headers
```json
{
  "read_response": {
    "state": {...},
    "cache_ttl": 300,
    "etag": "abc123"
  }
}
```
## Conclusion
The key is to design these features as optional enhancements that gracefully degrade when not supported, rather than as required capabilities that create compatibility matrices.

# Middleware Integration
## Summary
This RFC proposes adding middleware capabilities to OpenTofu that allow interception and modification of provider operations. Middleware would enable powerful features like cost tracking, approval gates, policy enforcement, and audit logging without requiring changes to individual providers.
_This work is based on the [original middleware RFC (#3016)](https://github.com/opentofu/opentofu/issues/3016). We recommend reading that RFC first for background context and motivation._
>[!NOTE]
> **Community feedback wanted!** Naming is hard, and I'm not 100% certain about the name "Middleware". Please help me name this by leaving feedback as a comment on the pull request.
## Middleware as Provider Extension
Middleware is implemented as an additional capability that regular providers can optionally support alongside their resource management functionality.
This means the same provider process that manages your AWS resources can also provide middleware hooks for cost tracking, policy enforcement, or audit logging.
### Example: CompanyCo Internal Provider with Middleware
```hcl
terraform {
  required_providers {
    companyco = {
      source  = "companyco/internal-platform"
      version = "~> 2.1"
    }
  }
}

# CompanyCo provider also serves middleware for governance
middleware "companyco" "cost_tracker" {
  budget_limit   = 5000
  cost_center    = "engineering"
  alert_channels = ["#platform-alerts"]
}

middleware "companyco" "security_policy" {
  require_team_ownership    = true
  enforce_naming_convention = true
  required_environments     = ["staging", "production"]
}

middleware "companyco" "change_approval" {
  production_requires_approval = true
  approvers                    = ["platform-team@companyco.com"]
  auto_approve_dev             = true
}

provider "companyco" {
  api_endpoint = "https://platform.companyco.internal"

  # Use middleware from the same provider
  middleware = [
    provider.companyco.cost_tracker,
    provider.companyco.security_policy,
    provider.companyco.change_approval
  ]
}

# CompanyCo manages internal services and infrastructure
resource "companyco_application" "user_service" {
  name        = "user-service"
  team        = "platform"
  environment = "production"
  replicas    = 3
}

resource "companyco_database" "user_db" {
  name        = "user-service-db"
  application = companyco_application.user_service.name
  size        = "medium"
}
```
### Technical Implementation
Middleware functionality extends the existing provider protocol with additional message types:
- **Middleware Hook Messages**: `["middleware_hook", {...}]` for processing middleware events
- **Metadata Response**: Return middleware metadata alongside standard responses
- **Hook Registration**: Providers declare which hooks they want to receive during initialization
The provider process handles both regular resource operations AND middleware hook processing, using the same MessagePack-RPC protocol defined in the [provider protocol document](./01-provider-protocol.md).
## What is Middleware?
Middleware in OpenTofu is the idea of reacting to events and extending the functionality of OpenTofu Core: for example, responding to a resource being created, a plan completing, or an apply failing, with code written as part of a provider.
## Use Cases
### Cost Tracking and Budget Enforcement
Middleware could track the cost of infrastructure changes and enforce budget limits:
- Monitor estimated costs as each resource is planned
- Fail early if you know the plan is too expensive! No need to wait for the plan to complete.
- Prevent applies that exceed budget thresholds
- Generate cost reports and alerts after applying
- Track spending by team, project, or environment
### Policy Enforcement
Middleware could enforce organizational policies:
- Security policies (no public S3 buckets, encrypted storage required)
- Compliance requirements (tagging standards, naming conventions)
- Resource limits (maximum instance sizes, region restrictions)
- Integration with policy engines such as OPA, Sentinel, or jsPolicy
### Audit and Compliance
Middleware could provide enhanced logging and audit trails:
- Detailed operation logs with metadata
- Change attribution and approval tracking
- Compliance reporting
- Integration with SIEM systems
### Integration with ITSM
Organizations could build custom middleware for specific needs:
- Store information about the generated resources in ServiceNow
- Trigger a Backstage notification every time an apply fails
- Only allow applying if a change control ticket is open in Jira
## How Middleware Works
### Middleware Hook Points
Middleware operates at two distinct levels with different hook points:
#### Resource-Level Hooks
These hooks fire for each individual resource or data source:
- **`pre-plan`**: Before planning a specific resource
- **`post-plan`**: After planning a specific resource
- **`pre-apply`**: Before applying changes to a specific resource
- **`post-apply`**: After applying changes to a specific resource
- **`pre-refresh`**: Before refreshing state for a specific resource
- **`post-refresh`**: After refreshing state for a specific resource
#### Operation-Level Hooks
These hooks fire for entire OpenTofu operations:
- **`init-stage-start`**: Before the init stage begins
- **`init-stage-complete`**: After the init stage completes successfully
- **`init-stage-fail`**: After the init stage fails
- **`plan-stage-start`**: Before the plan stage begins
- **`plan-stage-complete`**: After the plan stage completes successfully
- **`plan-stage-fail`**: After the plan stage fails
- **`apply-stage-start`**: Before the apply stage begins
- **`apply-stage-complete`**: After the apply stage completes successfully
- **`apply-stage-fail`**: After the apply stage fails
### Middleware Chain and Execution Order
Multiple middleware components can be chained together, executing in a defined order:
**Execution Order**:
1. **Global middleware** (from `terraform.middleware`) - runs for all operations
2. **Provider-specific middleware** (from `provider.middleware`) - runs only for that provider's resources
Middleware executes in the order in which it appears in the middleware array. If the same middleware is configured both globally and per provider, it executes multiple times, with each run overwriting the metadata written by the previous one.
**Example Execution Flow**:
```hcl
terraform {
  middleware = [provider.cost.global_budget, provider.approval.gate]
}

provider "aws" {
  middleware = [provider.cost.aws_optimizer, provider.policy.aws_checker]
}
```
For an AWS resource operation:
1. `provider.cost.global_budget` (global)
2. `provider.approval.gate` (global)
3. `provider.cost.aws_optimizer` (AWS-specific)
4. `provider.policy.aws_checker` (AWS-specific)
Each middleware component has access to:
- The original operation data
- Metadata returned by previous middleware in the chain
- Current resource/operation context
### Middleware Metadata System
Middleware can attach metadata to resources, and that metadata is persisted in both the plan and state files:
#### Metadata Structure
```json
{
  "__middleware_metadata__": {
    "cost_estimator": {
      "hourly": 0.102,
      "monthly": 73.58,
      "currency": "USD"
    },
    "approval_tracker": {
      "approved_by": "team-lead@company.com",
      "approval_timestamp": "2025-01-15T10:30:00Z"
    }
  }
}
```
#### Metadata Requirements
- Middleware returns an object that gets stored as `__middleware_metadata__.<MIDDLEWARENAME>`
- Metadata is read-only and cannot modify resource configurations
- Metadata persists across plan/apply cycles
- Metadata is accessible to subsequent middleware in the chain
- Metadata should be stored in both the plan file and the state file so external tooling can run against it
### Middleware Response Format
Middleware responds with the following data:
```json
{
  "status": "success",  // "success", "fail", or "warning"
  "message": "Cost estimate: $73.58/month",
  "metadata": {
    "estimated_cost": {
      "hourly": 0.102,
      "monthly": 73.58,
      "currency": "USD"
    }
  }
}
```
**Response Options**:
- **`success`**: Allow operation to continue
- **`fail`**: Block operation and return error
- **`warning`**: Allow operation but display warning
### Error Handling and Failures
When middleware returns a "fail" status:
- The operation is immediately halted
- The error message is displayed to the user
- No subsequent middleware in the chain executes
- The resource operation is not performed
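The chain-plus-response semantics described above can be sketched end to end. All names here are illustrative; this is a sketch of the intended behavior, not the actual core implementation:

```typescript
type MiddlewareStatus = "success" | "fail" | "warning";

interface MiddlewareResponse {
  status: MiddlewareStatus;
  message?: string;
  metadata?: unknown;
}

interface Middleware {
  name: string;
  hook: (op: unknown, metadataSoFar: Record<string, unknown>) => MiddlewareResponse;
}

interface ChainResult {
  ok: boolean;
  error?: string;
  metadata: Record<string, unknown>;
  warnings: string[];
}

// Illustrative sketch of the chain semantics: middleware runs in declaration
// order, each hook sees the metadata of its predecessors, "fail" halts the
// chain immediately, and "warning" is surfaced without blocking the operation.
function runChain(op: unknown, chain: Middleware[]): ChainResult {
  const metadata: Record<string, unknown> = {};
  const warnings: string[] = [];
  for (const mw of chain) {
    const res = mw.hook(op, metadata);
    if (res.metadata !== undefined) metadata[mw.name] = res.metadata; // later runs overwrite
    if (res.status === "fail") return { ok: false, error: res.message, metadata, warnings };
    if (res.status === "warning" && res.message) warnings.push(res.message);
  }
  return { ok: true, metadata, warnings };
}
```

On success, the accumulated `metadata` object is what would be persisted under `__middleware_metadata__` in the plan and state files.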
### Configuration Model
Middleware is provided by providers and configured using a provider-based model similar to regular providers.
#### Configuration Syntax
**Middleware Declaration**: `middleware "providername" "middlewarename" { }`
- `providername` must be declared in `required_providers`
- `middlewarename` is a local name for this middleware instance
- Configuration block contains middleware-specific settings
**Global Middleware**: `terraform { middleware = [...] }`
- Runs for all providers and resources
- Configured in the `terraform` block alongside `required_providers`
**Provider-Specific Middleware**: `provider "name" { middleware = [...] }`
- Runs only for resources from that specific provider
- Configured in the provider block
## Implementation Considerations
### Sensitive Values and Security
#### Sensitive Data Handling
Middleware interaction with sensitive values could be controlled through configuration:
```hcl
middleware "audit" "logger" {
  log_destination = "splunk"

  # Sensitive value handling
  send_sensitive = false  # Don't send sensitive values to middleware
  sanitize_logs  = true   # Remove sensitive data from logs
}
```
**Sensitive Value Modes**:
- **`send_sensitive = true`**: Middleware receives all values including sensitive ones
- **`send_sensitive = false`**: Sensitive values are redacted before sending to middleware
- **Default**: `false` for security by default
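A hedged sketch of the `send_sensitive = false` path (the helper name and redaction placeholder are illustrative, not a defined part of the protocol):

```typescript
// Illustrative sketch: with send_sensitive = false, attributes marked
// sensitive in the schema are replaced before the payload reaches middleware.
const REDACTED = "(sensitive value)";

function redactForMiddleware(
  config: Record<string, unknown>,
  sensitiveAttrs: string[],
  sendSensitive: boolean,
): Record<string, unknown> {
  if (sendSensitive) return config; // middleware explicitly trusted with secrets
  const out: Record<string, unknown> = { ...config };
  for (const attr of sensitiveAttrs) {
    if (attr in out) out[attr] = REDACTED;
  }
  return out;
}
```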
#### Security Considerations
**Read-Only Architecture**:
- Middleware cannot directly modify resource configurations
- Middleware cannot directly manipulate state files, only attach metadata
- All changes go through OpenTofu's standard validation and storage
### Performance Impact
Middleware adds overhead to operations:
- Each hook adds latency
- Complex middleware could slow operations significantly
- Need mechanisms to bypass middleware for emergency situations
- Consider async middleware for non-blocking operations
## Future Possibilities
### Advanced Features
- **Conditional Middleware**: Apply middleware based on environment, user, or change type
- **Middleware Dependencies**: Middleware that depends on other middleware
- **Dynamic Configuration**: Middleware configuration that changes based on context
- **Remote Middleware**: Middleware that runs as external services
### Ecosystem Development
- **Middleware Marketplace**: Registry of available middleware components
- **Standard Library**: Common middleware patterns and implementations
- **Integration Frameworks**: Easy integration with external systems
- **Testing Tools**: Tools for testing and debugging middleware
## Conclusion
Middleware integration would provide OpenTofu with powerful governance, compliance, and workflow capabilities while maintaining the simplicity and flexibility that makes OpenTofu valuable. By allowing interception and modification of provider operations, middleware enables organizations to implement custom business logic and controls without requiring changes to core OpenTofu or individual providers.

# State Management Enhancements
< TODO >