Implement new destination connector for Databricks Delta Lake.
Resolves #2075.
Co-authored-by: George Claireaux <george@claireaux.co.uk>
Co-authored-by: Sherif A. Nada <snadalive@gmail.com>
* fixed s3 destination field naming for Parquet and Avro formats
* pull request number update
* updated resources for parquet
* fixed snowflake s3 destination COPY writing records from different tables into the same raw table (see the sketch below)
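A minimal sketch of the idea behind this fix, assuming the staged file names gain a stream-name prefix so each stream's COPY only matches its own files (class and method names here are illustrative, not the connector's actual code):

```java
import java.util.UUID;

// Hypothetical helper: prefix every staged object key with the stream name so a
// per-stream COPY (restricted to that prefix) cannot ingest another stream's files.
public class StagingFileNaming {

  static String stagingObjectKey(String stagingFolder, String streamName) {
    return String.format("%s/%s_%s.csv", stagingFolder, streamName, UUID.randomUUID());
  }

  public static void main(String[] args) {
    // e.g. staging/users_7f1c....csv vs. staging/orders_....csv
    System.out.println(stagingObjectKey("staging", "users"));
    System.out.println(stagingObjectKey("staging", "orders"));
  }
}
```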
* fixed s3 destination name transformer
* updated s3 destination name transformer
* updated snowflake s3 file name
* updated snowflake documentation
* updated snowflake documentation
* updated snowflake documentation
* updated code style
* updated code style
* updated code style
* updated redshift destination
* added test data for a test field with a bad first char
* updated s3 documentation
* fixed remarks
* fixed code style
* fixed s3 tests
* oracle normalization
* correct dbt_project function for oracle
* unit tests
* run format
* correct ephemeral tests
* add gradle dependency for oracle destination
* run int tests
* add oracle in settings.gradle for normalization run
* use default airbyte columns
* format
* test ephemeral for all destinations
* correct unit test
* correct unit test
* destination docs update
* correct mypy
* integration test all dest
* refactor oracle function
* merge master
* run all destinations
* flake8 escape regex
* surrogate key function
* correct a few minor comments
* refactor scd sql function
* refactor scd function
* revert test
* refactor minor details
* revert tests
* revert ephemeral test
* revert unit test table_registry
* revert airbyte_protocol format
* format
* bump normalization version in worker
* minor changes
* minor changes
* correct json_column for other destinations
* gradlew format
* revert tests
* remove comments
* add Oracle destination explicit in safe_cast_str
* add quote_in_parenthesis inside if clause
* gradlew format
* Fixed a StackOverflowError in the bigquery denormalized destination when the received schema doesn't contain a data type for an Array (see the sketch below)
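A hedged sketch of the guard this implies, assuming the failure was unbounded recursion on an array whose items carry no declared type (names and the exact string fallback are assumptions, not the connector's actual code):

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.google.cloud.bigquery.Field;
import com.google.cloud.bigquery.StandardSQLTypeName;

public class ArraySchemaGuard {

  // If the array's items don't declare a type, stop recursing and fall back to a
  // repeated string column instead of re-visiting the node until the stack overflows.
  static Field arrayField(String name, JsonNode arraySchema) {
    JsonNode items = arraySchema.get("items");
    if (items == null || items.get("type") == null) {
      return Field.newBuilder(name, StandardSQLTypeName.STRING)
          .setMode(Field.Mode.REPEATED)
          .build();
    }
    // ... typed items would be handled by the normal recursive conversion ...
    throw new UnsupportedOperationException("typed items handled elsewhere");
  }
}
```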
* new mongodb destination
* fix remarks
* updated documentation and added loggers
* updated documentation
* added hashCode field to mongodb document and fixed minor remarks
* fix code style
* updated mongodb data hash from integer to UUID string
* Added the DynamoDB destination connector.
Implemented getConsumer and check methods.
Signed-off-by: Jinni Gu <jinnigu@uw.edu>
* Added auto-generated project files.
Signed-off-by: Yiqing Wang <yiqing@wangemail.com>
* Added config related files and output table helper.
Signed-off-by: Yiqing Wang <yiqing@wangemail.com>
* Added document for DynamoDB destination.
Signed-off-by: Jinni Gu <jinnigu@uw.edu>
* Implemented DynamodbWriter.
Added integration tests and unit tests.
Signed-off-by: qtz123 <qiutingzhi1995@gmail.com>
* Added DynamoDB in the SUMMARY.md.
Signed-off-by: qtz123 <qiutingzhi1995@gmail.com>
* Formatted code using ./gradlew format.
Signed-off-by: Jinni Gu <jinnigu@uw.edu>
* Added changelog to the doc.
Signed-off-by: qtz123 <qiutingzhi1995@gmail.com>
* Used PAY_PER_REQUEST instead of provisioned capacity for DynamoDB (sketched below).
Gave the value a name: batchSize.
Removed unnecessary logs.
Signed-off-by: Yiqing Wang <yiqing@wangemail.com>
Co-authored-by: Yiqing Wang <yiqing@wangemail.com>
Co-authored-by: qtz123 <qiutingzhi1995@gmail.com>
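For reference, a minimal sketch of creating an on-demand table, assuming the AWS SDK for Java v2 (table and key names are illustrative). PAY_PER_REQUEST removes the need to plan read/write capacity up front:

```java
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeDefinition;
import software.amazon.awssdk.services.dynamodb.model.BillingMode;
import software.amazon.awssdk.services.dynamodb.model.CreateTableRequest;
import software.amazon.awssdk.services.dynamodb.model.KeySchemaElement;
import software.amazon.awssdk.services.dynamodb.model.KeyType;
import software.amazon.awssdk.services.dynamodb.model.ScalarAttributeType;

public class CreateOnDemandTable {
  public static void main(String[] args) {
    try (DynamoDbClient dynamo = DynamoDbClient.create()) {
      dynamo.createTable(CreateTableRequest.builder()
          .tableName("airbyte_raw_example")         // illustrative name
          .billingMode(BillingMode.PAY_PER_REQUEST) // on-demand billing
          .attributeDefinitions(AttributeDefinition.builder()
              .attributeName("partition_key")
              .attributeType(ScalarAttributeType.S)
              .build())
          .keySchema(KeySchemaElement.builder()
              .attributeName("partition_key")
              .keyType(KeyType.HASH)
              .build())
          .build());
    }
  }
}
```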
* fix \u0000 (NUL) value processing for Postgres + move the Postgres impl of SqlOperations to PostgresSqlOperations (see the sketch below)
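Postgres TEXT/JSONB columns reject the NUL code point, so writes containing \u0000 fail unless it is stripped first. A minimal sketch of that idea (the helper name is mine; the real change lives in PostgresSqlOperations):

```java
public class NulSanitizer {

  // Drop both the raw code point and its JSON-escaped form before insert.
  static String sanitize(String serializedJson) {
    return serializedJson.replace("\u0000", "").replace("\\u0000", "");
  }

  public static void main(String[] args) {
    System.out.println(sanitize("{\"name\":\"foo\\u0000bar\"}")); // {"name":"foobar"}
  }
}
```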
* changelog + format
* incr release version
* Add a generic solution to adapt messages for a destination + remove unnecessary serialization
* revert version for build
* minor review fixes
* format
* add comments
* format
* incr version
* add batch size of 700 records x 3 columns = 2100 params (worked through below)
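The arithmetic, spelled out: SQL Server caps a prepared statement at 2100 bind parameters, and each raw record binds three columns, so the largest safe batch is 2100 / 3 = 700 records. A sketch of deriving the constant (constant names and the exact column trio are assumptions):

```java
public class MssqlBatchSize {

  static final int MAX_PARAMS_PER_STATEMENT = 2100; // SQL Server's documented limit
  static final int COLUMNS_PER_RECORD = 3;          // assumed: id, JSON data, emitted_at
  static final int BATCH_SIZE = MAX_PARAMS_PER_STATEMENT / COLUMNS_PER_RECORD; // 700

  public static void main(String[] args) {
    System.out.println(BATCH_SIZE); // 700 records per batch
  }
}
```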
* remove import
* add comment
* add value as variable
* docs and bump version
* add tests for mssql failure
* remove validation of msgs for batch test
* Adding Google Cloud Storage as a destination
* Removed a few comments and amended the version
* Added documentation in docs/integrations/destinations/gcs.md
* Amended gcs.md with the right pull id
* Implemented all the fixes requested by tuliren as per https://github.com/airbytehq/airbyte/pull/4329
* Renaming all the files
* Branch aligned to S3 0.1.7 (with Avro and Jsonl). Removed a redundant file by making S3 a dependency for GCS
* Removed some additional duplicates between GCS and S3
* Revert changes in the root files
* Revert jdbc files
* Fix package names
* Refactor gcs config
* Format code
* Fix gcs connection
* Format code
* Add acceptance tests
* Fix parquet acceptance test
* Add ci credentials
* Register the connector and update documentation
* Fix typo
* Format code
* Add unit test
* Add comments
* Update readme
Co-authored-by: Sherif A. Nada <snadalive@gmail.com>
Co-authored-by: Marco Fontana <marco.fontana@sohohouse.com>
Co-authored-by: marcofontana.ing@gmail.com <marcofontana.ing@gmail.com>
Co-authored-by: Marco Fontana <MaxwellJK@users.noreply.github.com>
* Support combined restrictions in JSON schema (illustrated below)
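"Combined restrictions" here means JSON schema fields constrained by more than one type at once, e.g. {"type": ["null", "string", "integer"]} or keyword combinations like anyOf/oneOf. In Avro such a field maps naturally onto a union; an illustrative construction (not the connector's actual conversion code):

```java
import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;

public class CombinedTypeUnion {
  public static void main(String[] args) {
    // {"type": ["null", "string", "integer"]} as an Avro union of matching types.
    Schema union = SchemaBuilder.unionOf()
        .nullType().and()
        .stringType().and()
        .longType()
        .endUnion();
    System.out.println(union); // ["null","string","long"]
  }
}
```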
* Bump s3 version
* Add more test cases
* Update changelog
* Add more test cases
* Update documentation
* Format code
* Bump mysql destination version to pick up normalization
* Also change the publish command to run on ec2-runners to try to avoid build errors where gradle is unable to find the right volume.
Co-authored-by: Davin Chia <davinchia@gmail.com>