Add AI development documentation and guidelines (#4777)

This commit is contained in:
Nathan Wallace
2025-11-11 19:03:27 +08:00
committed by GitHub
parent a7cb5caa02
commit 688e711419
9 changed files with 1000 additions and 0 deletions

11
.ai/.gitignore vendored Normal file

@@ -0,0 +1,11 @@
# AI Working Directory
# Temporary files created by AI agents during development
wip/
*.tmp
*.swp
*.bak
*~
# Keep directory structure
!wip/.gitkeep

35
.ai/README.md Normal file

@@ -0,0 +1,35 @@
# AI Development Guide for Steampipe
This directory contains documentation, templates, and conventions for AI-assisted development on the Steampipe project.
## Guides
- **[Bug Fix PRs](docs/bug-fix-prs.md)** - Two-commit pattern, branch naming, PR format for bug fixes
- **[GitHub Issues](docs/bug-workflow.md)** - Reporting bugs and issues
- **[Test Generation](docs/test-generation-guide.md)** - Writing effective tests
- **[Parallel Coordination](docs/parallel-coordination.md)** - Working with multiple agents in parallel
## Directory Structure
```
.ai/
├── docs/ # Permanent documentation and guides
├── templates/ # Issue and PR templates
└── wip/ # Temporary workspace (gitignored)
```
## Key Conventions
- **Base branch**: `develop` for all work
- **Bug fixes**: 2-commit pattern (demonstrate → fix)
- **Small PRs**: One logical change per PR
- **Issue linking**: PR title ends with `closes #XXXX`
## For AI Agents
- Reference the relevant guide in `docs/` for your task
- Use templates in `templates/` for PR descriptions
- Use `wip/<topic>/` for coordinated parallel work (gitignored)
- Follow project conventions for branches, commits, and PRs
**Parallel work pattern**: Create `.ai/wip/<topic>/` with task files, then agents can work independently. See [parallel-coordination.md](docs/parallel-coordination.md).

409
.ai/docs/bug-fix-prs.md Normal file

@@ -0,0 +1,409 @@
# Bug Fix PR Guide
## Two-Commit Pattern
Every bug fix PR must have **exactly 2 commits**:
1. **Commit 1**: Demonstrate the bug (test fails)
2. **Commit 2**: Fix the bug (test passes)
This pattern provides:
- Clear demonstration that the bug exists
- Proof that the fix resolves the issue
- Easy code review (reviewers can see the test fail, then pass)
- Test-driven development (TDD) workflow
## Commit 1: Unskip/Add Test
### Purpose
Demonstrate that the bug exists by having a failing test.
### Changes
- If test exists in test suite: Remove `t.Skip()` line
- If test doesn't exist: Add the test
- **NO OTHER CHANGES**
### Commit Message Format
```
Unskip test demonstrating bug #<issue>: <brief description>
```
or
```
Add test for #<issue>: <brief description>
```
### Examples
```
Unskip test demonstrating bug #4767: GetDbClient error handling
```
```
Add test for #4717: Target.Export() should handle nil exporter gracefully
```
### Verification
```bash
# Test should FAIL
go test -v -run TestName ./pkg/path
# Exit code: 1
```
## Commit 2: Implement Fix
### Purpose
Fix the bug with minimal changes.
### Changes
- Implement the fix in production code
- **NO changes to test code**
- Keep changes minimal and focused
### Commit Message Format
```
Fix #<issue>: <brief description of fix>
```
### Examples
```
Fix #4767: GetDbClient returns (nil, error) on failure
```
```
Fix #4717: Add nil check to Target.Export()
```
### Verification
```bash
# Test should PASS
go test -v -run TestName ./pkg/path
# Exit code: 0
```
## Creating the Two Commits
### Method 1: Interactive Rebase (Recommended)
If you have more than 2 commits, squash them first:
```bash
# View commit history
git log --oneline -5
# Interactive rebase to squash
git rebase -i HEAD~4
# Mark commits:
# pick <hash> Unskip test...
# squash <hash> Additional test changes
# pick <hash> Fix bug
# squash <hash> Address review comments
```
### Method 2: Cherry-Pick
If rebasing from another branch:
```bash
# In your fix branch based on develop
git cherry-pick <test-commit-hash>
git cherry-pick <fix-commit-hash>
```
### Method 3: Build Commits Correctly
```bash
# Start from develop
git checkout -b fix/1234-description develop
# Commit 1: Unskip test
# Edit test file to remove t.Skip()
git add pkg/path/file_test.go
git commit -m "Unskip test demonstrating bug #1234: Description"
# Verify it fails
go test -v -run TestName ./pkg/path
# Commit 2: Fix bug
# Edit production code
git add pkg/path/file.go
git commit -m "Fix #1234: Description of fix"
# Verify it passes
go test -v -run TestName ./pkg/path
```
## Pushing to GitHub: Two-Phase Push
**IMPORTANT**: Push commits separately to trigger CI runs for each commit. This provides clear visual evidence in the PR that the test fails before the fix and passes after.
### Phase 1: Push Test Commit (Should Fail CI)
```bash
# Create and switch to your branch
git checkout -b fix/1234-description develop
# Make commit 1 (unskip test)
git add pkg/path/file_test.go
git commit -m "Unskip test demonstrating bug #1234: Description"
# Verify test fails locally
go test -v -run TestName ./pkg/path
# Push ONLY the first commit
git push -u origin fix/1234-description
```
At this point:
- GitHub Actions will run tests
- CI should **FAIL** on the test you unskipped
- This proves the test catches the bug
### Phase 2: Push Fix Commit (Should Pass CI)
```bash
# Make commit 2 (fix bug)
git add pkg/path/file.go
git commit -m "Fix #1234: Description of fix"
# Verify test passes locally
go test -v -run TestName ./pkg/path
# Push the second commit
git push
```
At this point:
- GitHub Actions will run tests again
- CI should **PASS** with the fix
- This proves the fix works
### Creating the PR
Create the PR after the first push (before the fix):
```bash
# After phase 1 push
gh pr create --base develop \
--title "Brief description closes #1234" \
--body "## Summary
[Description]
## Changes
- Commit 1: Unskipped test demonstrating the bug
- Commit 2: Implemented fix (coming in next push)
## Test Results
Will be visible in CI runs:
- First CI run should FAIL (demonstrating bug)
- Second CI run should PASS (proving fix works)
"
```
Or create it after both commits are pushed - either way works.
### Why This Matters for Reviewers
This two-phase push gives reviewers:
1. **Visual proof** the test fails without the fix (failed CI run)
2. **Visual proof** the test passes with the fix (passed CI run)
3. **No manual verification needed** - just look at the CI history in the PR
4. **Clear diff** between what fails and what fixes it
### Example PR Timeline
```
✅ PR opened
❌ CI run #1: Test failure (commit 1)
"FAIL: TestName - expected nil, got non-nil client"
⏱️ Commit 2 pushed
✅ CI run #2: All tests pass (commit 2)
"PASS: TestName"
```
Reviewers can click through the CI runs to see the exact failure and success.
## PR Structure
### Branch Naming
```
fix/<issue-number>-brief-kebab-case-description
```
Examples:
- `fix/4767-getdbclient-error-handling`
- `fix/4743-status-spinner-visible-race`
- `fix/4717-nil-exporter-check`
### PR Title
```
Brief description closes #<issue>
```
Examples:
- `GetDbClient error handling closes #4767`
- `Race condition on StatusSpinner.visible field closes #4743`
### PR Description
```markdown
## Summary
[Brief description of the bug and fix]
## Changes
- Commit 1: Unskipped test demonstrating the bug
- Commit 2: Implemented fix by [description]
## Test Results
- Before fix: [Describe failure - panic, wrong result, etc.]
- After fix: Test passes
## Verification
\`\`\`bash
# Commit 1 (test only)
go test -v -run TestName ./pkg/path
# FAIL: [error message]
# Commit 2 (with fix)
go test -v -run TestName ./pkg/path
# PASS
\`\`\`
```
### Labels
Add appropriate labels:
- `bug`
- Severity: `critical`, `high-priority` (if available)
- Type: `security`, `race-condition`, `nil-pointer`, etc.
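If the `gh` CLI is available, labels can also be applied from the command line; `gh pr edit` accepts repeated `--add-label` flags. A dry-run sketch that only prints the commands rather than calling GitHub (the PR number and label names are placeholders, and assume the labels exist in the repo):

```shell
# Print (rather than run) the gh commands that would label the PR.
pr=1234
for label in bug high-priority nil-pointer; do
  printf 'gh pr edit %s --add-label %s\n' "$pr" "$label"
done
```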
## What NOT to Include
### ❌ Don't Add to Commits
- Unrelated formatting changes
- Refactoring not directly related to the bug
- go.mod changes (unless required by new imports)
- Documentation updates (separate PR)
- Multiple bug fixes in one PR
### ❌ Don't Combine Commits
- Keep test and fix as separate commits
- Don't squash them together
- Don't add "fix review comments" commits (amend instead)
## Handling Review Feedback
### If Test Needs Changes
```bash
# Amend commit 1 via interactive rebase
git rebase -i HEAD~2
# Mark the test commit as "edit", then make the test changes
git add file_test.go
git commit --amend
git rebase --continue
```
### If Fix Needs Changes
```bash
# Amend commit 2
# Make fix changes
git add file.go
git commit --amend
```
### Force Push After Amendments
```bash
git push --force-with-lease
```
## Multiple Related Bugs
If fixing multiple related bugs:
- Create separate issues for each
- Create separate PRs for each
- Don't combine into one PR
- Each PR: 2 commits
## Test Suite PRs (Different Pattern)
Test suite PRs follow a different pattern:
- **Single commit** with all tests
- Branch: `feature/tests-for-<packages>`
- Base: `develop`
- Include bug-demonstrating tests (marked as skipped)
See [templates/test-pr-template.md](../templates/test-pr-template.md)
## Verifying Commit Structure
Before pushing:
```bash
# Check commit count
git log --oneline origin/develop..HEAD
# Should show exactly 2 commits
# Check first commit (test only)
git show HEAD~1 --stat
# Should only modify test file(s)
# Check second commit (fix only)
git show HEAD --stat
# Should only modify production code file(s)
# Verify test behavior
git checkout HEAD~1 && go test -v -run TestName ./pkg/path # Should FAIL
git checkout - && go test -v -run TestName ./pkg/path # Should PASS
```
## Common Mistakes
### ❌ Mistake 1: Combined Commit
```
Fix #1234: Add test and fix bug
```
**Problem**: Can't verify test catches the bug
**Solution**: Split into 2 commits
### ❌ Mistake 2: Modified Test in Fix Commit
```
Commit 1: Add test
Commit 2: Fix bug and adjust test
```
**Problem**: Test changes hide whether original test would pass
**Solution**: Only modify test in commit 1
### ❌ Mistake 3: Multiple Bugs in One PR
```
Fix #1234 and #1235: Multiple fixes
```
**Problem**: Hard to review, test, and merge independently
**Solution**: Create separate PRs
### ❌ Mistake 4: Extra Commits
```
Commit 1: Add test
Commit 2: Fix bug
Commit 3: Address review
Commit 4: Fix typo
```
**Problem**: Cluttered history
**Solution**: Squash into 2 commits
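One way to get back to exactly 2 commits without hand-editing a rebase todo is `git commit --fixup` plus `git rebase --autosquash` (both standard git flags). A self-contained sketch in a throwaway repository - paths, identities, and messages are placeholders:

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q -b develop
git config user.email agent@example.com
git config user.name agent
echo test > file_test.go; git add .; git commit -qm "Unskip test demonstrating bug #1234: description"
echo fix > file.go; git add .; git commit -qm "Fix #1234: description of fix"
# A review-feedback change, recorded as a fixup of the fix commit:
echo better-fix > file.go; git add .; git commit -q --fixup=HEAD
# Non-interactively fold the fixup back into the fix commit:
GIT_SEQUENCE_EDITOR=: git rebase -i --autosquash --root >/dev/null 2>&1
git log --format='%s'   # back to the 2-commit pattern
```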
## Examples
Real examples from our codebase:
- PR #4769: [Fix #4750: Nil pointer panic in RegisterExporters](https://github.com/turbot/steampipe/pull/4769)
- PR #4773: [Fix #4748: SQL injection vulnerability](https://github.com/turbot/steampipe/pull/4773)
## Next Steps
- [GitHub Issues](bug-workflow.md) - Creating bug reports
- [Parallel Coordination](parallel-coordination.md) - Working on multiple bugs in parallel
- [Templates](../templates/) - PR templates

99
.ai/docs/bug-workflow.md Normal file

@@ -0,0 +1,99 @@
# GitHub Issue Guidelines
Guidelines for creating bug reports and issues.
## Bug Issue Format
**Title:**
```
BUG: Brief description of the problem
```
For security issues, use `[SECURITY]` prefix.
**Labels:** Add `bug` label
**Body Template:**
```markdown
## Description
[Clear description of the bug]
## Severity
**[HIGH/MEDIUM/LOW]** - [Impact statement]
## Reproduction
1. [Step 1]
2. [Step 2]
3. [Observed result]
## Expected Behavior
[What should happen]
## Current Behavior
[What actually happens]
## Test Reference
See `TestName` in `path/file_test.go:line` (currently skipped)
## Suggested Fix
[Optional: proposed solution]
## Related Code
- `path/file.go:line` - [description]
```
## Example
```markdown
## Description
The `GetDbClient` function returns a non-nil client even when an error
occurs during connection, causing nil pointer panics when callers
attempt to call `Close()` on the returned client.
## Severity
**HIGH** - Nil pointer panic crashes the application
## Reproduction
1. Call `GetDbClient()` with an invalid connection string
2. Function returns both an error AND a non-nil client
3. Caller attempts to defer `client.Close()` which panics
## Expected Behavior
When an error occurs, `GetDbClient` should return `(nil, error)`
following Go conventions.
## Current Behavior
Returns `(non-nil-but-invalid-client, error)` leading to panics.
## Test Reference
See `TestGetDbClient_WithConnectionString` in
`pkg/initialisation/init_data_test.go:322` (currently skipped)
## Suggested Fix
Ensure all error paths return `nil` for the client value.
## Related Code
- `pkg/initialisation/init_data.go:45-60` - GetDbClient function
```
## When You Find a Bug
1. **Create the GitHub issue** using the template above
2. **Skip the test** with reference to the issue:
```go
t.Skip("Demonstrates bug #XXXX - description. Remove skip in bug fix PR.")
```
3. **Continue your work** - don't stop to fix immediately
## Bug Fix Workflow
See [bug-fix-prs.md](bug-fix-prs.md) for the bug fix PR workflow (2-commit pattern).
## Best Practices
- Include specific reproduction steps
- Reference exact code locations with line numbers
- Explain the impact clearly
- Link to the test that demonstrates the bug
- For security issues: assess severity carefully and consider private disclosure

117
.ai/docs/parallel-coordination.md Normal file

@@ -0,0 +1,117 @@
# Parallel Agent Coordination
Simple patterns for coordinating multiple AI agents working in parallel.
## Basic Pattern
When working on multiple related tasks in parallel:
1. **Create a work directory** in `wip/`:
```bash
mkdir -p .ai/wip/<topic-name>
```
Example: `.ai/wip/bug-fixes-wave-1/` or `.ai/wip/test-snapshot-pkg/`
2. **Coordinator creates task files**:
```bash
# In .ai/wip/<topic>/
task-1-fix-bug-4767.md
task-2-fix-bug-4768.md
task-3-fix-bug-4769.md
plan.md # Overall coordination plan
```
3. **Parallel agents read and execute**:
```
Agent 1: "See plan in .ai/wip/bug-fixes-wave-1/ and run task-1"
Agent 2: "See plan in .ai/wip/bug-fixes-wave-1/ and run task-2"
Agent 3: "See plan in .ai/wip/bug-fixes-wave-1/ and run task-3"
```
## Task File Format
Keep task files simple:
```markdown
# Task: Fix bug #4767
## Goal
Fix GetDbClient error handling bug
## Steps
1. Create worktree: /tmp/fix-4767
2. Branch: fix/4767-getdbclient
3. Unskip test in pkg/initialisation/init_data_test.go
4. Verify test fails
5. Implement fix
6. Verify test passes
7. Push (two-phase)
8. Create PR with title: "GetDbClient error handling closes #4767"
## Context
See issue #4767 for details
Test is already written and skipped
```
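Step 1's worktree can be created in a single command with `git worktree add` (a standard git command). A self-contained sketch against a throwaway repository - all paths and branch names are examples:

```shell
set -e
repo=$(mktemp -d)
git -C "$repo" init -q -b develop
git -C "$repo" config user.email agent@example.com
git -C "$repo" config user.name agent
git -C "$repo" commit -q --allow-empty -m "baseline on develop"
# One worktree per agent: a separate checkout sharing the same repository,
# so parallel tasks never touch each other's working files.
git -C "$repo" worktree add "$repo-fix-4767" -b fix/4767-getdbclient develop
git -C "$repo-fix-4767" status --short --branch
```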
## Work Directory Structure
Example for a bug fixing session:
```
.ai/wip/bug-fixes-wave-1/
├── plan.md # Coordinator's overall plan
├── task-1-fix-4767.md # Task for agent 1
├── task-2-fix-4768.md # Task for agent 2
├── task-3-fix-4769.md # Task for agent 3
└── status.md # Optional: track completion
```
Example for test generation:
```
.ai/wip/test-snapshot-pkg/
├── plan.md # What to test, approach
├── findings.md # Bugs found during testing
└── test-checklist.md # Coverage checklist
```
## Benefits
- **Isolated**: Each focus area has its own directory
- **Clean**: Old work directories can be deleted when done
- **Reusable**: Pattern works for any parallel work
- **Simple**: Just files and directories, no complex coordination
## Cleanup
When work is complete:
```bash
# Archive or delete the work directory
rm -rf .ai/wip/<topic-name>/
```
The `.ai/wip/` directory is gitignored, so these temporary files won't clutter the repo.
## Examples
**Parallel bug fixes:**
```
Coordinator: Creates .ai/wip/bug-fixes-wave-1/ with 10 task files
Agents 1-10: Each picks a task file and works independently
```
**Test generation with bug discovery:**
```
Coordinator: Creates .ai/wip/test-generation-phase-2/plan.md
Agent: Writes tests, documents bugs in findings.md
```
**Feature development:**
```
Coordinator: Creates .ai/wip/feature-auth/
- task-1-backend.md
- task-2-frontend.md
- task-3-tests.md
Agents: Work in parallel on each component
```

230
.ai/docs/test-generation-guide.md Normal file

@@ -0,0 +1,230 @@
# Test Generation Guide
Guidelines for writing effective tests.
## Focus on Value
Prioritize tests that:
- Catch real bugs
- Verify complex logic and edge cases
- Test error handling and concurrency
- Cover critical functionality
Avoid simple tests of getters, setters, or trivial constructors.
## Test Generation Process
### 1. Understand the Code
Before writing tests:
- Read the source code thoroughly
- Identify complex logic paths
- Look for error handling code
- Check for concurrency patterns
- Review TODOs and FIXMEs
### 2. Focus Areas
Look for:
- **Nil pointer dereferences** - Missing nil checks
- **Race conditions** - Concurrent access to shared state
- **Resource leaks** - Goroutines, connections, files not cleaned up
- **Edge cases** - Empty strings, zero values, boundary conditions
- **Error handling** - Incorrect error propagation
- **Concurrency issues** - Deadlocks, goroutine leaks
- **Complex logic paths** - Multiple branches, state machines
### 3. Test Structure
```go
func TestFunctionName_Scenario(t *testing.T) {
// ARRANGE: Set up test conditions
// ACT: Execute the code under test
// ASSERT: Verify results
// CLEANUP: Defer cleanup if needed
}
```
### 4. When You Find a Bug
1. Mark the test with `t.Skip()`
2. Add skip message: `"Demonstrates bug #XXXX - description. Remove skip in bug fix PR."`
3. Create a GitHub issue (see [bug-workflow.md](bug-workflow.md))
4. Continue testing
Example:
```go
func TestResetPools_NilPools(t *testing.T) {
t.Skip("Demonstrates bug #4698 - ResetPools panics with nil pools. Remove skip in bug fix PR.")
client := &DbClient{}
client.ResetPools(context.Background()) // Should not panic
}
```
### 5. Test Organization
#### File Naming
- `*_test.go` in same package as code under test
- Use `<package>_test` for black-box testing
#### Test Naming
- `Test<FunctionName>_<Scenario>`
- Examples:
- `TestValidateSnapshotTags_EdgeCases`
- `TestSpinner_ConcurrentShowHide`
- `TestGetDbClient_WithConnectionString`
#### Subtests
Use `t.Run()` for multiple related scenarios:
```go
func TestValidation_EdgeCases(t *testing.T) {
tests := []struct {
name string
input string
shouldErr bool
}{
{"empty_string", "", true},
{"valid_input", "test", false},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := Validate(tt.input)
if (err != nil) != tt.shouldErr {
t.Errorf("Validate() error = %v, shouldErr %v", err, tt.shouldErr)
}
})
}
}
```
### 6. Testing Best Practices
#### Concurrency Testing
```go
func TestConcurrent_Operation(t *testing.T) {
var wg sync.WaitGroup
errors := make(chan error, 100)
for i := 0; i < 10; i++ {
wg.Add(1)
go func() {
defer wg.Done()
if err := Operation(); err != nil {
errors <- err
}
}()
}
wg.Wait()
close(errors)
for err := range errors {
t.Error(err)
}
}
```
**IMPORTANT**: Never call `t.Fatal()` or `t.FailNow()` from a goroutine other than the one running the test function - they stop only the calling goroutine. Collecting errors through a channel, as above, keeps all failure reporting in the test goroutine.
#### Resource Cleanup
```go
func TestWithResources(t *testing.T) {
resource := setupResource(t)
defer resource.Cleanup()
// ... test code ...
}
```
#### Table-Driven Tests
For multiple similar scenarios:
```go
tests := []struct {
name string
input string
expected string
wantErr bool
}{
{"scenario1", "input1", "output1", false},
{"scenario2", "input2", "output2", false},
{"error_case", "bad", "", true},
}
```
### 7. What NOT to Test
Avoid LOW-value tests:
- ❌ Simple getters/setters
- ❌ Trivial constructors
- ❌ Tests that just call the function
- ❌ Tests of external libraries
- ❌ Tests that duplicate each other
### 8. Test Output Quality
Tests should provide clear diagnostics on failure:
```go
// Good
t.Errorf("Expected tag validation to fail for %q, but got nil error", invalidTag)
// Bad
t.Error("validation failed")
```
### 9. Performance Considerations
- Use `testing.Short()` for slow tests
- Skip expensive tests in short mode
- Document expected execution time
```go
func TestLargeDataset(t *testing.T) {
if testing.Short() {
t.Skip("Skipping large dataset test in short mode")
}
// ... test code ...
}
```
### 10. Bug Documentation
When a test demonstrates a bug:
- Add clear comments explaining the bug
- Reference the GitHub issue number
- Show expected vs actual behavior
- Include reproduction steps
```go
// BUG: GetDbClient returns non-nil client even when error occurs
// This violates Go conventions and causes nil pointer panics
func TestGetDbClient_ErrorHandling(t *testing.T) {
t.Skip("Demonstrates bug #4767. Remove skip in fix PR.")
client, err := GetDbClient("invalid://connection")
if err != nil {
// BUG: Client should be nil when error occurs
if client != nil {
t.Error("Client should be nil when error is returned")
}
}
}
```
## Tools
- `go test -race` - Always run concurrency tests with race detector
- `go test -v` - Verbose output for debugging
- `go test -short` - Skip slow tests
- `go test -run TestName` - Run specific test
## Next Steps
When tests are complete:
1. Create GitHub issues for bugs found
2. Follow [bug-workflow.md](bug-workflow.md) for PR workflow


@@ -0,0 +1,53 @@
# Bug Fix PR Template
## PR Title
```
Brief description closes #<issue>
```
## PR Description
```markdown
## Summary
[1-2 sentences: what was wrong and how it's fixed]
## Changes
### Commit 1: Demonstrate Bug
- Unskipped test `TestName` in `pkg/path/file_test.go`
- Test **FAILS** with [error/panic/wrong result]
### Commit 2: Fix Bug
- Modified `pkg/path/file.go` to [change description]
- Test now **PASSES**
## Verification
CI history shows: ❌ (commit 1) → ✅ (commit 2)
```
## Branch and Commit Messages
**Branch:**
```
fix/<issue>-brief-description
```
**Commit 1:**
```
Unskip test demonstrating bug #<issue>: description
```
**Commit 2:**
```
Fix #<issue>: description of fix
```
## Checklist
- [ ] Exactly 2 commits in PR
- [ ] Test fails on commit 1
- [ ] Test passes on commit 2
- [ ] Pushed commits separately (two CI runs visible)
- [ ] PR title ends with "closes #XXXX"
- [ ] No unrelated changes

46
.ai/templates/test-pr-template.md Normal file

@@ -0,0 +1,46 @@
# Test Suite PR Template
## PR Title
```
Add tests for pkg/{package1,package2}
```
## PR Description
```markdown
## Summary
Added tests for [packages], focusing on [areas: edge cases, concurrency, error handling, etc.].
## Tests Added
- **pkg/package1** - [brief description of what's tested]
- **pkg/package2** - [brief description of what's tested]
## Bugs Found
[If bugs were discovered:]
- #<issue>: [brief description]
- #<issue>: [brief description]
[Tests demonstrating bugs are marked with `t.Skip()` and issue references]
## Execution
\`\`\`bash
go test ./pkg/package1 ./pkg/package2
go test -race ./pkg/package1 # if concurrency tests included
\`\`\`
```
## Branch
```
feature/tests-<packages>
```
Example: `feature/tests-snapshot-task`
## Notes
- Base branch: `develop`
- Single commit with all tests
- Bug-demonstrating tests should be skipped with issue references
- Bugs will be fixed in separate PRs
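The conventions above, end to end, in a throwaway repository standing in for steampipe (package and file names are placeholders):

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q -b develop
git config user.email agent@example.com
git config user.name agent
git commit -q --allow-empty -m "baseline"
# Branch off develop using the feature/tests-<packages> convention:
git checkout -qb feature/tests-snapshot-task develop
echo 'package snapshot' > snapshot_test.go
git add snapshot_test.go
# Single commit containing all the new tests:
git commit -qm "Add tests for pkg/snapshot and pkg/task"
git log --format='%s' develop..HEAD   # exactly one commit on the branch
```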

0
.ai/wip/.gitkeep Normal file