* Adds logic to fail on a non-deterministic custom S3 endpoint, plus documentation for insecure settings
* Consolidated config factory settings into a single static variable
* Updated error message and example in the spec.json to match expectation of secured endpoint
* Added a validation check within the base S3 destination
* Integrated AdaptiveDestinationRunner with S3Destination
* Reduced visibility for testing and fixed AdaptiveDestinationRunner issue
* Adds a specific secure-protocol check for S3 and an empty-endpoint check (sketched below)
* Bumps the Docker version, adds comments, and introduces clearer string methods
* auto-bump connector version [ci skip]
Co-authored-by: Octavia Squidington III <octavia-squidington-iii@users.noreply.github.com>
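A minimal sketch of the secure-endpoint validation these commits describe, assuming an empty endpoint means the default AWS endpoint; the class and method names are hypothetical, not the connector's actual code:

```java
import java.net.URI;
import java.net.URISyntaxException;

public class S3EndpointValidator {

  // Hypothetical helper illustrating the checks described above: an empty
  // endpoint falls through to the default AWS endpoint, while a custom
  // endpoint must use a secure (https) scheme or we fail fast.
  public static void validateEndpoint(final String endpoint) {
    if (endpoint == null || endpoint.isEmpty()) {
      return; // default AWS endpoint, nothing to validate
    }
    try {
      final URI uri = new URI(endpoint);
      if (!"https".equalsIgnoreCase(uri.getScheme())) {
        throw new IllegalArgumentException(
            "Custom S3 endpoints must be secured with HTTPS: " + endpoint);
      }
    } catch (final URISyntaxException e) {
      throw new IllegalArgumentException("Invalid custom S3 endpoint: " + endpoint, e);
    }
  }
}
```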
* added destination-r2 spec
added base-java-s3 module
updated common s3 lib
changed the s3 dependency in all related modules
minor style fixes
added common module for s3 integration tests
* fix configuration for r2 integration tests
* added docs for destination-r2
* minor import fixes
* mark test as disabled
* fixed imports in destination-snowflake
* restored styling task
* added tests for S3DestinationConfig
* added upload thread count configuration for r2 (due to limitations)
* deleted parquet format type
* Fix import in gcs
* Fix import in redshift
* Fix import in snowflake
* Fix one more import in gcs
* Fix one more import in redshift
* Fix import in databricks
Co-authored-by: Liren Tu <tuliren.git@outlook.com>
* Add test case for allOf and oneOf
* Bump version
* Add pr id
* Bump gcs version
* Bump dest jdbc
* Bump redshift
* Bump snowflake
* Bump databricks
* Bump bigquery
* Revert "Bump dest jdbc"
This reverts commit f10497e96a.
* Use a new pat to avoid api rate limit
* auto-bump connector version [ci skip]
* auto-bump connector version [ci skip]
* auto-bump connector version [ci skip]
* auto-bump connector version [ci skip]
* auto-bump connector version [ci skip]
* Revert databricks bump
* auto-bump connector version [ci skip]
Co-authored-by: Octavia Squidington III <octavia-squidington-iii@users.noreply.github.com>
* Fixed bucket naming for S3
* Destination S3: add LZO compression support for parquet files
* implemented logic for aarch64 (see the sketch after this group)
* removed redundant logging
* updated changelog
* moved install of the native-lzo lib to the Dockerfile
* removed redundant logging
* add unit test for aarch64
* bump version
* auto-bump connector version [ci skip]
Co-authored-by: Octavia Squidington III <octavia-squidington-iii@users.noreply.github.com>
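A sketch of the aarch64 branching these commits mention; the commits describe selecting the correct native LZO build per CPU architecture, and this only shows the detection step (class and method names are assumptions):

```java
public class ArchitectureDetector {

  // Hypothetical helper: os.arch reports "aarch64" on ARM64 Linux JVMs,
  // which is what the aarch64 logic above keys off to pick the right
  // native-lzo library.
  public static boolean isAarch64() {
    final String arch = System.getProperty("os.arch");
    return arch != null && arch.toLowerCase().contains("aarch64");
  }

  public static void main(String[] args) {
    System.out.println(isAarch64()
        ? "aarch64 detected: expecting the ARM64 native-lzo build"
        : "non-ARM architecture: using the default native-lzo build");
  }
}
```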
* Remove "additionalProperties": false from spec for connectors with staging
* Remove "additionalProperties": false from spec for Redshift destination
* bump versions
* auto-bump connector version
* auto-bump connector version
* auto-bump connector version
* auto-bump connector version
* auto-bump connector version
* auto-bump connector version
Co-authored-by: Octavia Squidington III <octavia-squidington-iii@users.noreply.github.com>
* Removed part_size from connectors that use StreamTransferManager
* fixed S3DestinationConfigTest
* fixed S3JsonlFormatConfigTest
* update changelog and bump version
* auto-bump connector version
* auto-bump connector version
* auto-bump connector version
* auto-bump connector version
* update changelog and bump version for Redshift and Snowflake destinations
* auto-bump connector version
* fix GCS staging test
* auto-bump connector version
Co-authored-by: Octavia Squidington III <octavia-squidington-iii@users.noreply.github.com>
* S3 destination: Updating processing data types for Avro/Parquet formats
* S3 destination: handle comparing data types
* S3 destination: clean code
* S3 destination: clean code
* S3 destination: handle the case of an unexpected JSON schema type (sketched below)
* S3 destination: clean code
* S3 destination: Extract the same logic for Avro/Parquet formats to separate parent class
* S3 destination: clean code
* S3 destination: clean code
* GCS destination: Update data types processing for Avro/Parquet formats
* GCS destination: clean redundant code
* S3 destination: handle case with numbers inside array
* S3 destination: clean code
* S3 destination: add unit test
* S3 destination: update unit test cases with number types.
* S3 destination: update unit tests.
* S3 destination: bump version for s3 and gcs
* auto-bump connector version
* auto-bump connector version
Co-authored-by: Octavia Squidington III <octavia-squidington-iii@users.noreply.github.com>
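A minimal sketch of the data-type handling these commits describe: known JSON schema types map to Avro counterparts, and an unexpected type falls back to string instead of failing the sync. The class and exact mapping are assumptions based on the commit notes:

```java
import org.apache.avro.Schema;

public class JsonToAvroTypeMapper {

  // Hypothetical mapping illustrating the behavior described above.
  public static Schema toAvroType(final String jsonSchemaType) {
    switch (jsonSchemaType) {
      case "boolean":
        return Schema.create(Schema.Type.BOOLEAN);
      case "integer":
        return Schema.create(Schema.Type.LONG);
      case "number":
        return Schema.create(Schema.Type.DOUBLE);
      case "string":
        return Schema.create(Schema.Type.STRING);
      default:
        // unexpected JSON schema type: default to string
        return Schema.create(Schema.Type.STRING);
    }
  }
}
```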
* Publish new connectors to log debugging info in JSON-to-Avro conversion
* Add pull request id
* auto-bump connector version
* auto-bump connector version
* auto-bump connector version
* auto-bump connector version
Co-authored-by: Octavia Squidington III <octavia-squidington-iii@users.noreply.github.com>
* S3 destination: updating docs regarding certification
* S3 destination: updating docs by new template
* Apply suggestions from code review
Co-authored-by: Andy <andy@airbyte.io>
* S3 destination: updating docs by new template
Co-authored-by: Andy <andy@airbyte.io>
* Add gzip compression option
* Add file extension method to s3 format config
* Pass gzip compression to serialized buffer
* Add unit test
* Format code
* Update integration test
* Bump version and update doc
* Fix unit test
* Add extra gzip tests for csv and jsonl
* Make compression a oneOf param (sketched below)
* Migrate csv config to new compression spec
* Migrate jsonl config to new compression spec
* Update docs
* Fix unit test
* Fix integration tests
* Format code
* Bump version
* auto-bump connector version
* Bump gcs version in seed
Co-authored-by: Octavia Squidington III <octavia-squidington-iii@users.noreply.github.com>
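A sketch of the "file extension method" and gzip option these commits describe, assuming the compression choice drives the output file suffix; the enum and method names are hypothetical:

```java
public enum CompressionType {

  NO_COMPRESSION(""),
  GZIP(".gz");

  private final String fileExtension;

  CompressionType(final String fileExtension) {
    this.fileExtension = fileExtension;
  }

  // The selected compression appends its suffix to the format's own
  // extension, e.g. data.csv vs data.csv.gz, data.jsonl vs data.jsonl.gz.
  public String getFileExtension() {
    return fileExtension;
  }
}
```

Modeling compression as a oneOf param in the spec (rather than a boolean flag) leaves room for additional codecs later without breaking existing configs.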
* Add a test for listObjects permission to destination-s3 connector
* add testIAMUserHasListObjectPermission method to S3Destination
and call it from S3Destination::check; the method throws
an exception if the IAM user does not have the listObjects permission
on the destination bucket (see the sketch after this group)
* add a unit test to S3DestinationTest to verify that S3Destination::check
fails if listObjects throws an exception
* add a unit test to S3DestinationTest to verify that S3Destination::check
succeeds if listObjects succeeds
* Add S3DestinationConfigFactory in order to be able to mock S3 client
used in S3Destination::check
* Addressing review comments:
- separate positive and negative unit tests
- fix formatting
- reuse s3 client for both positive and negative tests
* Add information about PR #10856 to the changelog
* Prepare for publishing new version:
* Bump version to 0.2.10 in Dockerfile
* Bump version to 0.2.10 in changelog
* Update destination-s3 version in connector index
* Update seed spec for destination-s3 connector
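A minimal sketch of the permission probe described above: attempt a listObjects call against the destination bucket and let any access-denied error propagate, so S3Destination::check fails before a sync starts. The method signature here is an assumption reconstructed from the commit notes:

```java
import com.amazonaws.services.s3.AmazonS3;

public class S3PermissionCheck {

  // Hypothetical probe: listObjects throws if the IAM user lacks the
  // s3:ListBucket permission on the destination bucket, which surfaces
  // the misconfiguration at check time rather than mid-sync.
  public static void testIAMUserHasListObjectPermission(final AmazonS3 s3Client,
                                                        final String bucketName) {
    s3Client.listObjects(bucketName, "testing"); // throws on missing permission
  }
}
```

Injecting the client via an S3DestinationConfigFactory, as the commits describe, is what makes this call mockable in the positive and negative unit tests.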
* use an instance profile to authenticate if an access key id is not provided (sketched below)
* restore support for using endpoint
* update readme
* update changelog
* update documentation, add setup guide
* Update docs/integrations/destinations/s3.md
Co-authored-by: Edward Gao <edward.gao@airbyte.io>
* minor fixes
* add error message
* now using RuntimeException
* Update airbyte-integrations/connectors/destination-s3/src/main/java/io/airbyte/integrations/destination/s3/S3DestinationConfig.java
Co-authored-by: Edward Gao <edward.gao@airbyte.io>
* bump connector version
* update seed file
Co-authored-by: Edward Gao <edward.gao@airbyte.io>
Co-authored-by: Marcos Marx <marcosmarxm@gmail.com>
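A sketch of the auth fallback described above, using the AWS SDK v1 credential providers; the class and method names are assumptions, not the connector's actual code:

```java
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.auth.InstanceProfileCredentialsProvider;

public class S3CredentialsFactory {

  // Hypothetical factory: when no access key id is configured, defer to
  // the EC2 instance profile instead of failing; otherwise use the
  // provided static key pair.
  public static AWSCredentialsProvider credentialsProvider(final String accessKeyId,
                                                           final String secretAccessKey) {
    if (accessKeyId == null || accessKeyId.isEmpty()) {
      return InstanceProfileCredentialsProvider.getInstance();
    }
    return new AWSStaticCredentialsProvider(
        new BasicAWSCredentials(accessKeyId, secretAccessKey));
  }
}
```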
* Fix GCS Avro file processing with invalid "-" character
* Extend test data to cover the case
* increment version
* update s3 version
* add dependency
* Support array fields with an empty items specification (sketched below)
* Remove all exceptions
* Format code
* Bump connector versions
* Bump bigquery versions
* Update docs
* Remove unused code
* Update doc for PR #9363
* Update doc about defaulting all improperly typed fields to string
* Ignore bigquery
* Update version and doc
* Update doc
* Bump version in seed
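A minimal sketch of the defaulting behavior these commits describe: when an array field's "items" specification is missing, empty, or has no usable type, treat the element type as string rather than rejecting the schema. The helper name and use of Jackson are assumptions:

```java
import com.fasterxml.jackson.databind.JsonNode;

public class ArrayItemsFallback {

  // Hypothetical helper: improperly typed or empty items specs default
  // to "string" so the sync proceeds instead of throwing.
  public static String arrayElementType(final JsonNode itemsSpec) {
    if (itemsSpec == null || itemsSpec.isEmpty() || !itemsSpec.hasNonNull("type")) {
      return "string";
    }
    return itemsSpec.get("type").asText();
  }
}
```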