Part 1 of #13122.
Rename airbyte-db:lib to airbyte-db:db-lib.
Rename airbyte-metrics:lib to airbyte-metrics:metrics-lib.
Rename airbyte-protocol:models to airbyte-protocol:protocol-models.
Explanation for what is happening:
Identically named subprojects have the following issues:
- publishing as-is leads to classpath confusion when jars with the same name are placed in the Java distribution, which causes NoClassDefFoundError failures at runtime.
- deconflicting the jar names without changing directory names leads to dependency errors: the OSS jar POM files are generated from project dependencies (i.e. a dependency on a sibling subproject in the same repo), which reference each subproject's group and name. The generated POMs therefore point to jars that no longer exist (their names have been changed), so consumers cannot compile.
- the workaround for changing a subproject's name involves resetting the subproject's name in settings.gradle and depending on the new name in each build.gradle (see the sketch after this list). This increases configuration burden and hurts readability, since one has to check settings.gradle to know what the right subproject name is. See gradle/gradle#847 (Projects with same name lead to unintended conflict resolution) for more info.
- given that Gradle itself does not support identically named subprojects (see the linked issue), the simplest solution is to disallow duplicated directory names. I've only renamed the conflicting directories here to keep things simple. I will create a follow-up issue to enforce non-identical subproject names in our builds.
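For illustration, a minimal sketch of that settings.gradle workaround (hypothetical; it is exactly what this change avoids by renaming the directories instead):

```groovy
// settings.gradle — hypothetical sketch of the workaround, NOT what this PR does
include ':airbyte-db:lib'
// Rename the subproject without renaming its directory. Every build.gradle
// must now depend on ':airbyte-db:db-lib' even though the directory is 'lib'.
project(':airbyte-db:lib').name = 'db-lib'
```

Renaming the directory itself keeps the project path, the directory, and the published artifact name in agreement.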
* Default scaffold to use adaptive streaming config
* Switch more connectors to use adaptive streaming config
* Bump version for cockroach db
* Bump version for db2
* Bump mssql version
* Bump mysql version
* Bump oracle version
* Bump postgres version
* Bump redshift version
* Bump snowflake version
* Bump tidb version
* auto-bump connector version
* Fix db2 findbugs issue
* auto-bump connector version
* auto-bump connector version
* auto-bump connector version
* auto-bump connector version
* Fix more findbugs issues
* auto-bump connector version
* auto-bump connector version
* auto-bump connector version
* Fix findbugs issue for mysql-strict-encrypt
* Fix findbugs issue for oracle source
* auto-bump connector version
* Remove suppress warnings annotation
* Fix oracle encrypt tests
* Fix oracle encrypt acceptance test
Co-authored-by: Octavia Squidington III <octavia-squidington-iii@users.noreply.github.com>
* Merge all streaming configs to one
* Implement new streaming query config
* Format code
* Fix comparison
* Use double for mean byte size
* Update fetch size only when changed
* Calculate mean size by sampling n rows (sketched below)
* Add javadoc
* Change min fetch size to 1
* Add comment by buffer size
* Update java connector template
* Perform division first
* Add unit test for fetching large rows
* Format code
* Fix connector compilation error
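Taken together, the streaming-config commits above amount to: sample the byte size of the first n rows, track the mean as a double, and derive a bounded fetch size from a fixed memory budget, pushing the new fetch size to the JDBC driver only when it changes. A rough sketch of that idea, with illustrative constants and names (not Airbyte's actual implementation):

```java
// Hypothetical sketch of an adaptive fetch-size estimator.
public final class AdaptiveFetchSizeEstimator {

  private static final long BUFFER_BYTES = 200L * 1024 * 1024; // illustrative memory budget
  private static final int MIN_FETCH_SIZE = 1;                 // the commits lower the floor to 1
  private static final int MAX_FETCH_SIZE = 100_000;
  private static final int SAMPLE_ROWS = 10;                   // sample only the first n rows

  private double meanRowBytes = 0.0; // double, to avoid integer truncation
  private int sampledRows = 0;

  /** Feed the serialized size of a row while still inside the sampling window. */
  public void accept(final long rowBytes) {
    if (sampledRows < SAMPLE_ROWS) {
      meanRowBytes += (rowBytes - meanRowBytes) / ++sampledRows; // incremental mean
    }
  }

  /** Fetch size that keeps roughly BUFFER_BYTES of rows in flight. */
  public int currentFetchSize() {
    if (meanRowBytes <= 0) {
      return MIN_FETCH_SIZE;
    }
    final long byBudget = (long) (BUFFER_BYTES / meanRowBytes); // divide first to stay in range
    return (int) Math.min(MAX_FETCH_SIZE, Math.max(MIN_FETCH_SIZE, byBudget));
  }
}
```

A caller would compare currentFetchSize() against the statement's current value and call Statement.setFetchSize only on a change, which is what "Update fetch size only when changed" refers to.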
* parse jdbc parameters
* Also fix redshift
* other oracle source acceptance test
* This is & now
* also update nne
* increase sleep to 11 seconds
* Bump to 15 seconds
* gradlew format
* try to reformat
* gradlew format
* Run ./gradlew :airbyte-integrations:connector-templates:generator:testScaffoldTemplates --scan
* reset to master
* Revert "reset to master"
This reverts commit d6141ed933.
* Add jdbc compatible layer
* Support routine mysql types
* Format code
* Fix build
* Refactor abstract jdbc source and operation classes
* Update mysql source operations
* Test discover command for mysql
* Remove abstract jdbc compatible source layer
* Format code
* Update template
* Fix more types
* Bump version
* Log original field type
* Update comments
* Bump version in seed
* Redshift Source and Destination set SSL as the default option
* add changelog
* remove SSL test; add more documentation
* bump new version
* bump new version
Co-authored-by: vmaltsev <vitalii.maltsev@globallogic.com>
# Summary
- A follow-up PR for #5543.
- This PR separates the `airbyte-db` project into two modules:
- `lib` is the original `airbyte-db`.
- `jooq` is for jOOQ code generation.
- This is necessary because the jOOQ generator requires a custom database implementation that can run Flyway migrations, so the code-generation logic needs to depend on the compiled output of the original `airbyte-db` project. A sketch of this generation flow is below.
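A hedged sketch of that flow (the URL, credentials, and paths are placeholders, and the real module wires this through Gradle rather than a main method):

```java
import org.flywaydb.core.Flyway;
import org.jooq.codegen.GenerationTool;

// Stand up a database at the latest schema via Flyway, then point jOOQ's
// code generator at it. This is why the jooq module must depend on the
// compiled db lib: the migrations live there.
public final class GenerateJooqCode {

  public static void main(final String[] args) throws Exception {
    Flyway.configure()
        .dataSource("jdbc:postgresql://localhost:5432/airbyte", "docker", "docker") // placeholder
        .locations("classpath:io/airbyte/db/instance/configs/migrations")           // illustrative
        .load()
        .migrate();

    // jOOQ reads generator settings (target package, schema, etc.) from XML.
    GenerationTool.main(new String[] {"jooq-codegen.xml"}); // placeholder config file
  }
}
```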
# Commits
* Separate db to lib and jooq modules
* Update dependencies
* Add jobs db migrator test
* Fix compose build
* Add migration dev center
* Add schema dump task
* Update airbyte-db/lib/README.md
Co-authored-by: Davin Chia <davinchia@gmail.com>
* Update readme
* Remove bom dependency
* Update readme
* Use jooq code in db config persistence
* Remove AirbyteConfigsTable
Co-authored-by: Davin Chia <davinchia@gmail.com>
* Use the CDK to generate a source that can be configured to emit a certain number of records and always works.
* Checkpoint: socat works from inside the docker container.
* Override the entry point.
* Clean up and add ReadMe.
* Clean up socat.
* Checkpoint: connect to Kube cluster and list all the pods.
* Checkpoint: Sync worker pod is able to send output to the destination pod.
* Checkpoint: Sync worker creates the dest pod if none existed previously, waits for the pod to be ready before doing anything else, and removes the pod on termination (see the sketch below).
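A minimal sketch of that pod lifecycle, assuming the fabric8 Kubernetes client (the client whose OkHttp pool comes up later in this log); the pod name, namespace, and image are placeholders:

```java
import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.api.model.PodBuilder;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;
import java.util.concurrent.TimeUnit;

public final class DestPodLifecycle {

  public static void main(final String[] args) throws InterruptedException {
    try (KubernetesClient client = new DefaultKubernetesClient()) {
      final Pod dest = new PodBuilder()
          .withNewMetadata().withName("dest-pod").endMetadata() // placeholder name
          .withNewSpec()
          .addNewContainer().withName("main").withImage("airbyte/destination-example:dev").endContainer()
          .endSpec()
          .build();

      client.pods().inNamespace("default").createOrReplace(dest);
      // Wait for readiness through the API instead of a blind sleep.
      client.pods().inNamespace("default").withName("dest-pod").waitUntilReady(5, TimeUnit.MINUTES);

      // ... the sync would run here ...

      // Remove the pod on termination.
      client.pods().inNamespace("default").withName("dest-pod").delete();
    }
  }
}
```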
* update readme
* Checkpoint: Dest pod does not restart after finishing. Comment out delete command in Sync worker.
* working towards named pipes
* named pipes working
* update readme
* WIP named pipe / socat sidecar kube port forwarding (#3518)
* nearly working sources
* update
* stdin example
* move all kube testing yamls into the airbyte-workers directories. sort the airbyte-workers resource folder; place all the poc yamls together.
* Format.
* Put back the original KubeProcessBuilderFactory.
* Fix slight errors.
* Checkpoint: Worker pod knows its own IP. Successfully starts and writes to Dest pod after refactor.
* remove unused file and update readme
* Dest pod loops back into worker pod. However, the right messages do not seem to be passing in.
* Switch back to worker ip.
* SWEET VICTORY!
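The working setup the checkpoints above converge on relies on named pipes: each container writes to or reads from a fifo as if it were a regular file, while a socat sidecar bridges the fifo to a TCP connection to the peer pod. A rough sketch of the writing side, with placeholder paths and payload:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public final class NamedPipeWriter {

  public static void main(final String[] args) throws IOException, InterruptedException {
    final Path pipe = Paths.get("/pipes/stdout"); // placeholder fifo path
    if (!Files.exists(pipe)) {
      // The JVM cannot create a fifo directly, so shell out to mkfifo.
      new ProcessBuilder("mkfifo", pipe.toString()).start().waitFor();
    }
    // Opening the fifo blocks until a reader (e.g. the socat sidecar) attaches.
    try (OutputStream out = Files.newOutputStream(pipe)) {
      out.write("{\"type\":\"RECORD\"}\n".getBytes(StandardCharsets.UTF_8));
    }
  }
}
```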
* wrap kube pod in process (#3540)
also clean up kubernetes deploys.
* More clean up. (#3586)
The first 6 points of #3464.
The only interesting thing about this PR is the kube pod shutdown. For whatever reason, the OkHttp pool isn't respecting the evictAll call, and 1 idle thread remains. So instead of shutting down immediately, the worker pod shuts down after 5 minutes, when the idle thread is reaped. There isn't an easy way to modify the pool's idle-reap configuration right now. I don't think this issue is blocking since it's relatively benign, so I vote we create a ticket and come back to this once we do an e2e test.
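For reference, the teardown sequence in question looks roughly like the sketch below; this is the general OkHttp pattern, not the exact code in this PR, and as noted above evictAll() did not reclaim the last idle thread here:

```java
import okhttp3.OkHttpClient;

public final class HttpClientShutdown {

  // Attempt to release OkHttp's resources so the JVM can exit without waiting
  // for the pool's idle reaper (which is what kept the worker pod alive ~5 min).
  public static void shutdown(final OkHttpClient client) {
    client.dispatcher().executorService().shutdown(); // stop dispatching new calls
    client.connectionPool().evictAll();               // close idle connections
  }
}
```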
* Implements redirecting standard error as well. (#3623)
* Clean up before next implementation.
* kube process launching (#3790)
* processes must handle file mounting
* remove comment
* default to base entrypoint
* use process builder factory / select stdin / use a pool of ports
* fix up
* add super hacky copying example
* Checkpoint: Works end to end!
* Checkpoint: Use API to make sure init container is ready instead of blind sleep. Propagate exception in DefaultCheckConnectionWorker.
* Refactor KubePodProcess. Checked to make sure everything still works.
* Format.
* Clean up code. Begin putting this into variables and breaking up long constructor function.
* Add comments to explain what is happening.
* fix normalization test
* increase timeout for initcontainer
Co-authored-by: Davin Chia <davinchia@gmail.com>
* facepalm moment
* clean up kube poc pr (#3834)
* clean up
* remove source-always-works
* create separate commons-docker
* fix test
* enable kube e2e tests (#3866)
* enable kube e2e tests
* use more generally accepted env definition
* use new runners
* use its own runner and install minikube differently
* update name
* use kubectl alias
* use link instead of alias that doesn't propagate
* start minikube
* use driver=none
* go back to using action
* mess with versions
* revert runner
* install socat
* print logs after run
* also try re-running tasks
* always wait for file transfer
* use ports
* increase wait timeout for kube
* use different localhost ips and bump normalization to include an entrypoint
* proposed fix
* all working locally
* revert temporary changes
* revert normalization image change that's happening in a separate pr
* readability
* final comment
* Working Kube Cancel. (#3983)
* Port over the basic changes.
* Add logic to return proper exit code in the event of termination. Add comments to explain why.
* revert envs change and merge master to fix kube acceptance tests (#4012)
* use older env format
* fix build
Co-authored-by: jrhizor <me@jaredrhizor.com>
Co-authored-by: Jared Rhizor <jared@dataline.io>
* add AIRBYTE_ENTRYPOINT for kubernetes support
* bump versions
* bump version in seed
* Update generic template
* keep scaffold sources at 0.1.0
* add missing newline
* handle python base versions correctly
* re-bump mysql and postgres sources
* re-bump snowflake destination
* add skip tests option
* switch to running tests
* reverse conditional to make it safer
* fix publish to include the test running
* fix iterable version
* fix file generation
Co-authored-by: Sherif A. Nada <snadalive@gmail.com>
Release all connectors affected by the namespace change. Includes all JDBC sources and destinations.
Also add documentation for normalisation. Prerequisite to actually releasing 0.21.0-alpha.
This PR introduces the following behavior for JDBC sources:
Instead of streamName = schema.tableName, this is now streamName = tableName and namespace = schema. This means that, when replicating from these sources, data will be replicated into a form matching the source, e.g. public.users (postgres source) -> public.users (postgres destination), instead of the current behaviour of public.public_users. Since MySQL does not have schemas, the MySQL source uses the database as its namespace.
To do so:
- Make namespace a first-class concept in the Airbyte Protocol. This allows sources to propagate a namespace and destinations to write to a source-defined namespace. It also sets us up for future namespace-related configurability.
- Add an optional namespace field to the AirbyteRecordMessage. This field will be set by sources that support namespace.
- Introduce AirbyteStreamNameNamespacePair as a type-safe way to identify streams throughout our code base (sketched below).
- Modify base_normalisation to better support source-defined namespaces, specifically allowing normalisation of tables with the same name into different schemas.
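A hedged reconstruction of what such a pair looks like; the point is simply that stream identity becomes the (namespace, name) tuple, and this sketch is illustrative rather than the exact class in the code base:

```java
import java.util.Objects;

public final class AirbyteStreamNameNamespacePair {

  private final String name;      // table name, e.g. "users"
  private final String namespace; // schema for postgres, database for MySQL; may be null

  public AirbyteStreamNameNamespacePair(final String name, final String namespace) {
    this.name = name;
    this.namespace = namespace;
  }

  @Override
  public boolean equals(final Object o) {
    if (this == o) {
      return true;
    }
    if (!(o instanceof AirbyteStreamNameNamespacePair)) {
      return false;
    }
    final AirbyteStreamNameNamespacePair that = (AirbyteStreamNameNamespacePair) o;
    return Objects.equals(name, that.name) && Objects.equals(namespace, that.namespace);
  }

  @Override
  public int hashCode() {
    return Objects.hash(name, namespace);
  }
}
```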
Last step (besides documentation) of the namespace changes. This is a follow-up to #2767.
After this change, the following JDBC sources will change their behaviour to the behaviour described in the above document.
Namely, instead of streamName = schema.tableName, this will become streamName = tableName and namespace = schema. This means that, when replicating from these sources, data will be replicated into a form matching the source, e.g. public.users (postgres source) -> public.users (postgres destination), instead of the current behaviour of public.public_users. Since MySQL does not have schemas, the MySQL source uses the database as its namespace.
I cleaned up some bits of the CatalogHelpers. This affected the destinations, so I'm also running the destination tests.
* Handle destination sync mode in destinations
* Source & Destination sync modes are required (#2500)
* Provide Migration script making sure it is always defined for previous sync configs
* Fix JdbcSource handling of tables with same names in different schemas
* Previously the JdbcSource was combining the columns of any tables with the same name across different schemas into a single stream in the catalog.
* This was caught because those tables contained columns with the same name but different types, which triggered a precondition check.
* The fix makes sure we group by both schema name and table name (see the sketch after this list).
* Adds test to the standard jdbc tests to catch this case.
* This test does NOT run for MySQL, as MySQL has no concept of schemas.
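An illustrative sketch of the fix, with hypothetical discovery types standing in for the real ones:

```java
import java.util.AbstractMap.SimpleImmutableEntry;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Key discovered columns by the (schema, table) pair instead of table name alone.
record ColumnInfo(String schemaName, String tableName, String columnName, String columnType) {}

final class DiscoverGrouping {

  static Map<Map.Entry<String, String>, List<ColumnInfo>> groupColumns(final List<ColumnInfo> columns) {
    return columns.stream().collect(Collectors.groupingBy(
        c -> new SimpleImmutableEntry<>(c.schemaName(), c.tableName())));
  }
}
```

Grouping by the pair means public.users and other_schema.users land in separate streams even when their column sets collide.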
* Add standard tests for sources that use the JdbcSource to guarantee that changes do not break any sources that rely on JdbcSource.
* Add JdbcStressTest to verify that we stream / chunk data properly, i.e. any JdbcSource can handle more data than fits in memory (sketched below)
* Migrate MSSQL and Redshift to use the new base source
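For context, the streaming behaviour the stress test exercises looks roughly like the sketch below: with autocommit off and a bounded fetch size, the driver pages rows through the JVM instead of materialising the whole result set. URL, credentials, and query are placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public final class StreamingRead {

  public static void main(final String[] args) throws SQLException {
    try (Connection conn = DriverManager.getConnection(
        "jdbc:postgresql://localhost:5432/db", "user", "pass")) {
      conn.setAutoCommit(false); // the postgres driver requires this for cursor-based fetch
      try (PreparedStatement stmt = conn.prepareStatement("SELECT * FROM big_table")) {
        stmt.setFetchSize(1_000); // stream roughly 1000 rows at a time
        try (ResultSet rs = stmt.executeQuery()) {
          long count = 0;
          while (rs.next()) {
            count++; // a real source would map each row to an AirbyteRecordMessage
          }
          System.out.println("rows read: " + count);
        }
      }
    }
  }
}
```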