Airbyte Terraform

Connector Development Infrastructure

We use Terraform to manage any persistent infrastructure used for developing or testing connectors.

Directory structure is roughly as follows:

├── aws
│   ├── demo
│   │   ├── core
│   │   └── lb
│   ├── shared
│   └── ssh_tunnel
│       ├── module
│       │   ├── secrets
│       │   └── sql
│       └── user_ssh_public_keys
└── gcp

The top level indicates which cloud provider the Terraform is for. The next level is a directory named for the project, or shared for infrastructure that crosses projects (like the backend for Terraform itself).

Within each project directory, the top-level main.tf describes that project's infrastructure at a high level. The module directory within it contains the fine-grained details.
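As a sketch of this layout (variable names and values here are hypothetical, not taken from the actual code), a project-level main.tf for the aws/ssh_tunnel project might do little more than wire inputs into its module:

```hcl
# aws/ssh_tunnel/main.tf -- high-level wiring only; the fine-grained
# resources live in module/. Inputs shown here are illustrative.
module "ssh_tunnel" {
  source = "./module"

  vpc_id      = var.vpc_id          # hypothetical coarse-grained input
  environment = "connector-testing" # hypothetical
}
```

This keeps each project's entry point readable while the module files hold the resource-by-resource detail.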

Do not place Terraform in the top-level per-provider directory, as that results in a monorepo where terraform destroy has too wide a blast radius. Instead, create a separate small Terraform instance for each project, so that plan and destroy affect only that project and not other, unrelated infrastructure.
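One way to get that per-project isolation, sketched here with hypothetical bucket and key names, is to give each project its own backend state path, so each project's state file (and therefore its destroy blast radius) is separate:

```hcl
# aws/demo/backend.tf -- each project gets its own state file, so a
# `terraform destroy` run here can only touch the demo project.
# Bucket, key, and region values are hypothetical.
terraform {
  backend "s3" {
    bucket = "airbyte-terraform-state"    # shared backend bucket
    key    = "aws/demo/terraform.tfstate" # per-project state path
    region = "us-east-1"
  }
}
```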

Workflow

Setup Credentials

GCP

Copy the contents of the LastPass credential Connector GCP Terraform Key into gcp/connectors/secrets/svc_account_creds.json.
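The google provider can then read the key straight from that gitignored file. A minimal sketch (project and region values are hypothetical):

```hcl
# Point the google provider at the gitignored service-account key.
# Project and region are placeholders, not the real values.
provider "google" {
  credentials = file("${path.module}/secrets/svc_account_creds.json")
  project     = "my-connector-project"
  region      = "us-central1"
}
```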

Any secrets directory anywhere in the repo is gitignored by default, so there is no danger of checking credentials into git.

AWS

You'll find it useful to create an IAM user for yourself and add it to the terraform role, so that you can run terraform apply directly against the correct subaccount. To do this, log in to the AWS console using the LastPass credentials, go to IAM, and create a user through the GUI, then download your CSV credentials from there. You can run aws sts get-caller-identity to confirm that your new user is recognized.
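With those CSV credentials saved as a named AWS CLI profile, the provider can target the correct subaccount explicitly. A sketch (the profile name and region are hypothetical):

```hcl
# Use your IAM user's credentials via a named profile so that
# `terraform apply` runs against the correct subaccount.
# Profile name and region are placeholders.
provider "aws" {
  region  = "us-east-1"
  profile = "airbyte-connectors"
}
```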

Azure

Coming soon.

Iteration Cycle

To run Terraform commands, use the tfenv wrapper, available through brew or as a direct download:

brew install tfenv

Once you have tfenv installed and are in a directory with a .terraform-version file, just use the normal Terraform commands:

terraform init
terraform plan
terraform apply

If this is your first time running Terraform in a given directory, run terraform init before plan or apply.

To achieve isolation and minimize risk, infrastructure should be isolated per connector where feasible (but use your judgment with respect to the cost of duplicated infrastructure).
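As a sketch of what per-connector isolation can look like (resource names, sizes, and variables here are hypothetical), each connector gets its own small test resource rather than sharing one large instance:

```hcl
# Hypothetical per-connector test database: dedicated to one connector,
# sized small to keep the cost of duplicated infrastructure low.
resource "aws_db_instance" "source_postgres_test" {
  identifier        = "connector-source-postgres-test"
  engine            = "postgres"
  instance_class    = "db.t3.micro" # small instance to limit duplicate-infra cost
  allocated_storage = 20
  username          = "test"
  password          = var.test_db_password # supplied out of band, never committed

  tags = {
    connector = "source-postgres" # makes ownership and cleanup obvious
  }
}
```

Tagging each resource with its connector makes it easy to see what a given terraform destroy will (and will not) affect.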

To create connector-related resources in any of the clouds:

  1. Repeatedly modify the relevant terraform and apply as you work.

  2. Once satisfied, create a PR with your changes. Please post the output of terraform plan in the PR to show the infrastructure diff between the master branch and your branch. This may require deleting all the infrastructure you just created and running terraform apply one last time.