* add base files
* upd base
* save
* save sample files
* save & todo resolve state
* save the stage
* save the stage
* pre-gradle save
* fix catalogs
* merge && format
* normal stream slices
* apply requested changes
* requested_changes
* postfix
* update comment
* expand question stream add page_id
* upd typing once + rm missed outdated todo
* upd: caching with temp file
* upd requirements (requested)
* latest requested fixes
* posttext fix: return tempfile
* apply changes && comment
* newly requested changes
* return spec back -> changes to be addressed in a new issue
* merge && usage comment
* add unit_test for get_updated_state function
* add simple date test
* parametrized (?) unittest
* upd comment on record_mode usage
* replace config with custom var
* pytest mark parametrized use
* rm unneeded var
* upd tests (requested)
* merge && upd texts
* add env airbyte_entrypoint
We were previously running the Kube acceptance tests on images pulled from Docker Hub rather than on the images built as part of the acceptance test build.
This PR fixes that by explicitly loading the images into KIND. This happened because KIND does not have access to the local Docker agent.
We stopped publishing dev images in PR:4425, so there are also updates for tests that were previously using these images.
Since we are no longer publishing dev images, I also went through and removed all the images on Docker Hub, so I'm pretty confident this is now, finally, using locally built Docker images.
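For reference, the loading step boils down to running `kind load docker-image` for each locally built image. A minimal sketch of that step (the image list and cluster name are illustrative, not the exact CI workflow):

```python
import subprocess

# Illustrative list of locally built images the acceptance tests need.
IMAGES = [
    "airbyte/server:dev",
    "airbyte/scheduler:dev",
    "airbyte/webapp:dev",
]


def load_images_into_kind(cluster_name: str = "kind") -> None:
    """Load locally built Docker images into the KIND cluster.

    KIND nodes run their own container runtime, so images built against the
    local Docker daemon are not visible until they are loaded explicitly.
    """
    for image in IMAGES:
        subprocess.run(
            ["kind", "load", "docker-image", image, "--name", cluster_name],
            check=True,
        )


if __name__ == "__main__":
    load_images_into_kind()
```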
Implement logging to and reading from Minio. Use the same S3 client for this.
Configure Airbyte Kube Prod and Staging to use Minio by default, so Airbyte Kube is a standalone deployment.
Also update documentation to reflect this.
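Minio speaks the S3 API, so the same client can be pointed at it simply by overriding the endpoint. A minimal Python/boto3 sketch of that idea (the platform code itself is Java; the endpoint, bucket, and credentials below are placeholders):

```python
import boto3

# Point a standard S3 client at Minio by overriding the endpoint; the rest of
# the S3 API usage is unchanged.
s3 = boto3.client(
    "s3",
    endpoint_url="http://airbyte-minio-svc:9000",  # placeholder in-cluster Minio endpoint
    aws_access_key_id="minio",                     # placeholder credentials
    aws_secret_access_key="minio123",
)

BUCKET = "airbyte-dev-logs"  # placeholder log bucket


def upload_job_log(job_id: str, local_path: str) -> None:
    """Ship a job log file to Minio under a per-job prefix."""
    s3.upload_file(local_path, BUCKET, f"job-logging/{job_id}/logs.log")


def read_job_log(job_id: str) -> str:
    """Read the same log back through the identical S3 API."""
    obj = s3.get_object(Bucket=BUCKET, Key=f"job-logging/{job_id}/logs.log")
    return obj["Body"].read().decode("utf-8")
```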
* introduce automatic migration at server startup
* handle versions with non-zero patch
* it works!!!
* add dummy data
* cleanup orphan configs
* add more assertions
* format + add comments
* move migration acceptance test to acceptance test directory
* add automatic migration test to the build
* address review comments
* missed out on these
* format
* add more assertions
* format
* fix test
* format
* use default port for temporal
* move seed to server + introduce atomic replacement for config (see the sketch after this commit list)
* make tests better
* remove unwanted changes
* move atomic replacement logic behind persistence + pass path to latest seeds
* format
* update seeds
* review comments
* update seeds
* merge latest seeds with configs
* fix bug around latest seed
* update seed
* update seed
* seeds should be populated by separate container
* address review comment + change latest definition url
* update seeds
* format
* update seed references
* update seed
* update seed
* update seed
* update seed references
* update seed references + add Migration Acceptance Test
* update seed container in kube + disable automatic migration for kube + update docs
* update docs
* address review comments from Michel
* update doc
* temporary commit to see if build becomes green
* delete seeds from airbyte config + undo temp commit
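The atomic replacement mentioned above can be summarized with a small sketch (the real logic lives in the Java config persistence layer; the file layout and names here are illustrative): write the new config to a temp file in the same directory, then swap it in with a single rename so readers never observe a half-written file.

```python
import json
import os
import tempfile
from pathlib import Path


def replace_config_atomically(config_path: Path, new_config: dict) -> None:
    """Replace a config file without exposing a partially written file.

    The new content goes to a temp file in the same directory, then
    os.replace() swaps it in; the rename is atomic on POSIX filesystems.
    """
    fd, tmp_path = tempfile.mkstemp(dir=config_path.parent, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as tmp:
            json.dump(new_config, tmp)
            tmp.flush()
            os.fsync(tmp.fileno())
        os.replace(tmp_path, config_path)  # atomic swap
    except BaseException:
        os.unlink(tmp_path)
        raise
```

The same-directory temp file is the important detail: a rename is only atomic when source and destination live on the same filesystem.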
Only run Discover and two Syncs for Kube, since these are already run as part of the Docker acceptance tests. There is little value in rerunning them, and they take a long time because Kube pods can be slow to spin up.
Use a Log4j2 appender to support routing logs to S3.
Create a LogClient to support reading from S3.
Some cleanup of the Log4j2 XML variables.
Several dependency changes to be more explicit when configuring Jackson.
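As a rough illustration of what the reading side does (the actual LogClient is Java; the bucket layout, prefix naming, and the assumption that a job's log may span several objects are mine, not taken from the code):

```python
import boto3

s3 = boto3.client("s3")  # credentials and region come from the environment


def tail_job_log(bucket: str, prefix: str, num_lines: int = 1000) -> list:
    """Return the last `num_lines` log lines stored under a job's S3 prefix.

    Assumes the appender may roll a job's log into several objects, so all
    objects under the prefix are read in key order and concatenated.
    """
    keys = []
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        keys.extend(obj["Key"] for obj in page.get("Contents", []))

    lines = []
    for key in sorted(keys):
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
        lines.extend(body.splitlines())
    return lines[-num_lines:]
```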
* use ec2 runner for kube acceptance tests
* add missing newline
* log outputs
* add user and home env vars
* log more
* use root user
* fix ec2 runner
* remaining debugging
* try overwrite forcing
* fail fast
* get kubectl location
* even more
* just look for it
* see if it's a symlink
* symlink
* make sure it's started
* try with overrides
* Revert "try with overrides"
This reverts commit 123e3c033e.
* clean up
* describe pods
* display exception when getting address in use error
* retry installing socat
* try inet4 address specifically
* switch order of install
* use unique ports for each
* try to detect locations with home and user set
* STOPTTTTTTTTTTT
* fix typo
* move socat back up one more
* add update
* working except for too much logging and bad success case
* succeeds on passing case
* completes successfully
* just doesn't kill the main
* working zombie killing
* cleanup
* more cleanup
* use correct path
* fmt
* cleanups, bugfixes, integration tests
* run worker integration tests as part of ci
* delete tester class
* fix hanging checkpoint container problem
* fix name of command
* replace todo with clarifying comment
* tool for creating schemas from configured_catalog with genson, stripping `required` (sketched below)
* gen from all output (add_schema for all)
* all except extra_strategies
* apply extra strategies
* merge && small upd
* upd docstring
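A minimal sketch of the idea behind this tool (function names and structure are illustrative, not the tool's actual interface): feed every record observed for a stream into genson's SchemaBuilder, then strip the `required` keys it infers so the resulting JSON schema stays permissive.

```python
from genson import SchemaBuilder


def build_stream_schema(records):
    """Infer a JSON schema from sample records and drop `required` constraints."""
    builder = SchemaBuilder()
    for record in records:
        builder.add_object(record)
    return strip_required(builder.to_schema())


def strip_required(schema):
    """Recursively remove `required` so optional fields don't fail validation."""
    if not isinstance(schema, dict):
        return schema
    schema.pop("required", None)
    for prop in schema.get("properties", {}).values():
        strip_required(prop)
    strip_required(schema.get("items"))
    return schema
```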
* Use the CDK to generate a source that can be configured to emit a certain number of records and always works (sketched below).
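Roughly what such a test source looks like with the Python CDK (a hedged sketch, not the checked-in `source-always-works` connector; the stream and field names are made up):

```python
from typing import Any, Iterable, List, Mapping, Tuple

from airbyte_cdk.sources import AbstractSource
from airbyte_cdk.sources.streams import Stream


class AlwaysWorksStream(Stream):
    """Emits a configurable number of identical records and never fails."""

    primary_key = None

    def __init__(self, count: int):
        self.count = count

    def get_json_schema(self) -> Mapping[str, Any]:
        return {"type": "object", "properties": {"column1": {"type": "number"}}}

    def read_records(self, sync_mode, **kwargs) -> Iterable[Mapping[str, Any]]:
        for i in range(self.count):
            yield {"column1": i}


class SourceAlwaysWorks(AbstractSource):
    def check_connection(self, logger, config) -> Tuple[bool, Any]:
        return True, None  # the source "always works"

    def streams(self, config: Mapping[str, Any]) -> List[Stream]:
        return [AlwaysWorksStream(count=config.get("count", 10))]
```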
* Checkpoint: socat works from inside the docker container.
* Override the entry point.
* Clean up and add ReadMe.
* Clean up socat.
* Checkpoint: connect to Kube cluster and list all the pods.
* Checkpoint: Sync worker pod is able to send output to the destination pod.
* Checkpoint: Sync worker creates Dest pod if none existed previously. It also waits for the pod to be ready before doing anything else. Sync worker will also remove the pod on termination.
* update readme
* Checkpoint: Dest pod does not restart after finishing. Comment out delete command in Sync worker.
* working towards named pipes
* named pipes working
* update readme
* WIP named pipe / socat sidecar kube port forwarding (#3518)
* nearly working sources
* update
* stdin example
* move all kube testing yamls into the airbyte-workers directories. sort the airbyte-workers resource folder; place all the poc yamls together.
* Format.
* Put back the original KubeProcessBuilderFactory.
* Fix slight errors.
* Checkpoint: Worker pod knows its own IP. Successfully starts and writes to Dest pod after refactor.
* remove unused file and update readme
* Dest pod loops back into worker pod. However, the right messages do not seem to be passing in.
* Switch back to worker ip.
* SWEET VICTORY!.
* wrap kube pod in process (#3540)
also clean up kubernetes deploys.
* More clean up. (#3586)
The first 6 points of #3464.
The only interesting thing about this PR is the kube pod shutdown. For whatever reason, the OkHttpPool isn't respecting the evictAll call and one idle thread remains. So instead of shutting down immediately, the worker pod shuts down after 5 mins when the idle thread is reaped. There isn't an easy way to modify the pool's idle reap configuration right now. I do not think this issue is blocking since it's relatively benign, so I vote we create a ticket and come back to this once we do an e2e test.
* Implements redirecting standard error as well. (#3623)
* Clean up before next implementation.
* kube process launching (#3790)
* processes must handle file mounting
* remove comment
* default to base entrypoint
* use process builder factory / select stdin / use a pool of ports
* fix up
* add super hacky copying example
* Checkpoint: Works end to end!
* Checkpoint: Use API to make sure init container is ready instead of blind sleep. Propagate exception in DefaultCheckConnectionWorker.
* Refactor KubePodProcess. Checked to make sure everything still works.
* Format.
* Clean up code. Begin putting this into variables and breaking up long constructor function.
* Add comments to explain what is happening.
* fix normalization test
* increase timeout for initcontainer
Co-authored-by: Davin Chia <davinchia@gmail.com>
* facepalm moment
* clean up kube poc pr (#3834)
* clean up
* remove source-always-works
* create separate commons-docker
* fix test
* enable kube e2e tests (#3866)
* enable kube e2e tests
* use more generally accepted env definition
* use new runners
* use its own runner and install minikube differently
* update name
* use kubectl alias
* use link instead of alias that doesn't propagate
* start minikube
* use driver=none
* go back to using action
* mess with versions
* revert runner
* install socat
* print logs after run
* also try re-running tasks
* always wait for file transfer
* use ports
* increase wait timeout for kube
* use different localhost ips and bump normalization to include an entrypoint
* proposed fix
* all working locally
* revert temporary changes
* revert normalization image change that's happening in a separate pr
* readability
* final comment
* Working Kube Cancel. (#3983)
* Port over the basic changes.
* Add logic to return proper exit code in the event of termination. Add comments to explain why.
* revert envs change and merge master to fix kube acceptance tests (#4012)
* use older env format
* fix build
Co-authored-by: jrhizor <me@jaredrhizor.com>
Co-authored-by: Jared Rhizor <jared@dataline.io>
* add AIRBYTE_ENTRYPOINT for kubernetes support
* bump versions
* bump version in seed
* Update generic template
* keep scaffold sources at 0.1.0
* add missing newline
* handle python base versions correctly
* re-bump mysql and postgres sources
* re-bump snowflake destination
* add skip tests option
* switch to running tests
* reverse conditional to make it safer
* fix publish to include the test running
* fix iterable version
* fix file generation
Co-authored-by: Sherif A. Nada <snadalive@gmail.com>
* Update README icon links
* Update airbyte-specification doc
* Extend base connector
* Remove redundant region
* Separate warning from info
* Implement s3 destination
* Run format
* Clarify logging message
* Rename variables and functions
* Update documentation
* Rename and annotate interface
* Inject formatter factory
* Remove part size
* Fix spec field names and add unit tests
* Add unit tests for csv output formatter
* Format code
* Complete acceptance test and fix bugs
* Fix uuid
* Remove generator template files
They belong to another PR.
* Add unhappy test case
* Checkin airbyte state message
* Adjust stream transfer manager parameters
* Use underscore in filename
* Create csv sheet generator to handle data processing
* Format code
* Add partition id to filename
* Rename date format variable
* Asana source
* Fix creds for CI.
* Update connection status in acceptance test config
Change status from `exception` to `failed`.
* Implement change request.
Remove a few files from the /integration_tests folder.
Use the `stream_slices` and/or `request_params` functions instead of the `read_records` function (see the sketch below).
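A hedged sketch of that pattern with the Python CDK (the endpoint, stream, and parameter names are illustrative, not the actual source-asana code): the parent stream drives `stream_slices`, each slice parameterizes the request via `request_params`, and `read_records` is left to the CDK.

```python
from typing import Any, Iterable, Mapping, Optional

import requests
from airbyte_cdk.models import SyncMode
from airbyte_cdk.sources.streams.http import HttpStream


class ProjectTasks(HttpStream):
    """Tasks per project, expressed via stream_slices + request_params."""

    url_base = "https://app.asana.com/api/1.0/"
    primary_key = "gid"

    def __init__(self, parent: HttpStream, **kwargs):
        super().__init__(**kwargs)
        self.parent = parent

    def path(self, **kwargs) -> str:
        return "tasks"

    def stream_slices(self, **kwargs) -> Iterable[Optional[Mapping[str, Any]]]:
        # One slice per parent project; the CDK runs the request cycle once per slice.
        for project in self.parent.read_records(sync_mode=SyncMode.full_refresh):
            yield {"project_gid": project["gid"]}

    def request_params(self, stream_slice: Mapping[str, Any] = None, **kwargs) -> Mapping[str, Any]:
        # Scope each request to the project from the current slice.
        return {"project": stream_slice["project_gid"]}

    def parse_response(self, response: requests.Response, **kwargs) -> Iterable[Mapping]:
        yield from response.json().get("data", [])

    def next_page_token(self, response: requests.Response) -> Optional[Mapping[str, Any]]:
        return None  # pagination omitted in this sketch
```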
* Update sample_config.json file.
* Update airbyte-integrations/connectors/source-asana/CHANGELOG.md
Co-authored-by: Sherif A. Nada <snadalive@gmail.com>
* Update `stream_slices` usage.
Create a generic `read_stream` function in the AsanaStream class and move the logic from `stream_slices` there.
* Rename functions.
rename `read_stream` to `read_slices_from_records`.
* Publishing changes.
Add asana source to `source_definitions.yaml`.
Add `asana.svg`.
Create the connector-related file in the `STANDARD_SOURCE_DEFINITION` folder.
Co-authored-by: Sherif A. Nada <snadalive@gmail.com>
* Create new test_ephemeral and refactor with test_normalization
* Add notes in docs
* Refactor common normalization tests into DbtIntegrationTest
* Bumpversion of normalization image