* Additional details about Windows-specific long filename error during cloning
* Corrected the path
* Revert "Corrected the path"
This reverts commit bbd3b78fcb.
* Revert "Additinal details about Windows specific long filename error during cloning"
This reverts commit 0b695eea1a.
* Email is mandatory in 'Specify your preferences'; link to the Docker installation guide
* Use relative paths while linking
* Fix typo in Destination section
* Fix long filename error during cloning on Windows
* For Windows with WSL2 and Docker, clarify steps to locate the local destination folder
* Link to locating local files on Windows
* Update docs/contributing-to-airbyte/updating-documentation.md
Co-authored-by: Abhi Vaidyanatha <abhi@airbyte.io>
* Update docs/quickstart/set-up-a-connection.md
Co-authored-by: Abhi Vaidyanatha <abhi@airbyte.io>
* Update docs/examples/postgres-replication.md
Co-authored-by: Abhi Vaidyanatha <abhi@airbyte.io>
* Update docs/examples/postgres-replication.md
Co-authored-by: Abhi Vaidyanatha <abhi@airbyte.io>
* Update docs/examples/postgres-replication.md
Co-authored-by: Abhi Vaidyanatha <abhi@airbyte.io>
* Update docs/deploying-airbyte/local-deployment.md
Co-authored-by: Abhi Vaidyanatha <abhi@airbyte.io>
* Update docs/deploying-airbyte/local-deployment.md
Co-authored-by: Abhi Vaidyanatha <abhi@airbyte.io>
Co-authored-by: Patali, Prashanth <ppatali@hidglobal.com>
Co-authored-by: Abhi Vaidyanatha <abhi@airbyte.io>
* Add skeleton code for parquet writer
* Refactor s3 destination code
* Add parquet to spec
* Complete parquet writer
* Change test data from int to double
* Add acceptance test for parquet writer
* Handle special schema field names
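For context on "special" field names: Avro, which the Parquet writer builds on, only permits names matching `[A-Za-z_][A-Za-z0-9_]*`, so incoming JSON field names need sanitizing. A minimal sketch of that idea (the class and method names here are illustrative, not the connector's actual API):

```java
import java.util.regex.Pattern;

public final class FieldNameSanitizer {

  private static final Pattern INVALID_CHARS = Pattern.compile("[^A-Za-z0-9_]");

  /**
   * Maps an arbitrary JSON field name onto an Avro-legal identifier:
   * illegal characters become underscores, and a leading digit gets
   * an underscore prefix.
   */
  static String sanitize(final String jsonFieldName) {
    String name = INVALID_CHARS.matcher(jsonFieldName).replaceAll("_");
    if (!name.isEmpty() && Character.isDigit(name.charAt(0))) {
      name = "_" + name;
    }
    return name;
  }

  public static void main(final String[] args) {
    System.out.println(sanitize("user-name"));  // user_name
    System.out.println(sanitize("1st_field"));  // _1st_field
  }
}
```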
* Format code
* Add parquet config
* Add documentation
* Add unit tests
* Fix typo
* Update document
* Bump version
* Fix date format
* Fix credential filename
* Update doc
* Update test and publish commands
* Refactor s3 format config
* Append compression codec file extension
* Update doc
* Remove compression codec file extension
* Add comments
* Add README, CHANGELOG, and sample configs
* Move changelog
* Use switch statement
* Move filename helper method to base writer
* Rename converter
* Separate test cases
* Drop union type length restriction
* Support array with multiple types
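For illustration, a rough sketch of how a multi-type JSON schema field can map onto an Avro union using the standard org.apache.avro API (whether the converter does exactly this is an assumption):

```java
import java.util.List;
import org.apache.avro.Schema;

public final class UnionSchemaExample {

  public static void main(final String[] args) {
    // A JSON schema field typed ["null", "string", "number"] has no single
    // Avro equivalent, so it becomes a union of the member types.
    final Schema union = Schema.createUnion(List.of(
        Schema.create(Schema.Type.NULL),
        Schema.create(Schema.Type.STRING),
        Schema.create(Schema.Type.DOUBLE)));

    // An array whose items may be string or double is an array of that union.
    final Schema multiTypeArray = Schema.createArray(union);
    System.out.println(multiTypeArray.toString(true));
  }
}
```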
* Move comments to connector doc
* Share config between acceptance tests
* Add doc about additional properties
* Move shared code out of if branch
* Add doc about adding a new format
* Format code
* Bump version to 0.1.4
* Fix default max padding size
Authored by @panhavad
* Base on previous PR
* Add S3 alternative destination connector feature
* Fix testGetOutputFilename
* Default to using AWS
* Update airbyte-integrations/connectors/destination-jdbc/src/main/java/io/airbyte/integrations/destination/jdbc/copy/s3/S3Config.java
* Update README icon links
* Update airbyte-specification doc
* Extend base connector
* Remove redundant region
* Separate warning from info
* Implement s3 destination
* Run format
* Clarify logging message
* Rename variables and functions
* Update documentation
* Rename and annotate interface
* Inject formatter factory
* Remove part size
* Fix spec field names and add unit tests
* Add unit tests for csv output formatter
* Format code
* Complete acceptance test and fix bugs
* Fix UUID
* Remove generator template files
They belong to another PR.
* Add unhappy test case
* Check in Airbyte state message
* Adjust stream transfer manager parameters
* Use underscore in filename
* Create csv sheet generator to handle data processing
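A sketch of what the sheet-generator abstraction might look like, assuming the split is header shaping vs. row shaping (interface and method names are guesses, not the connector's actual API):

```java
import java.util.List;
import java.util.UUID;

/**
 * Illustrative interface: isolates CSV shaping -- which columns exist and how
 * a record becomes a row -- from the writer that streams rows to S3.
 */
interface CsvSheetGenerator {

  /** Column names for the header row. */
  List<String> getHeaderRow();

  /** Flattens one emitted record into the cell values of a single CSV row. */
  List<Object> getDataRow(UUID id, String serializedJson, long emittedAtMillis);
}

/** Minimal example: a raw layout with id, timestamp, and JSON blob columns. */
final class RawCsvSheetGenerator implements CsvSheetGenerator {

  @Override
  public List<String> getHeaderRow() {
    return List.of("_airbyte_ab_id", "_airbyte_emitted_at", "_airbyte_data");
  }

  @Override
  public List<Object> getDataRow(final UUID id, final String serializedJson, final long emittedAtMillis) {
    return List.of(id.toString(), emittedAtMillis, serializedJson);
  }
}
```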
* Format code
* Add partition id to filename
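For illustration only, a hypothetical filename scheme with a partition id baked in (the real format may differ):

```java
import java.util.UUID;

public final class OutputFilename {

  /**
   * Illustrative scheme: underscore-separated upload time, partition id, and
   * a random suffix, so concurrent writers of the same stream never collide.
   */
  static String forPart(final String streamName, final long uploadTimeMillis, final int partId) {
    return String.format("%s_%d_%d_%s.csv", streamName, uploadTimeMillis, partId, UUID.randomUUID());
  }

  public static void main(final String[] args) {
    System.out.println(forPart("users", System.currentTimeMillis(), 3));
  }
}
```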
* Rename date format variable
* Abort sync if one of the parts fails to copy to the temp table
* Check record size when copying data from S3 to Redshift
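The size check presumably guards Redshift's 65535-byte VARCHAR ceiling; a minimal sketch of such a check (constant and method names are illustrative):

```java
import java.nio.charset.StandardCharsets;

public final class RedshiftRecordSizeCheck {

  // Redshift VARCHAR columns hold at most 65535 bytes.
  private static final int REDSHIFT_VARCHAR_MAX_BYTES = 65535;

  /** True if the serialized record fits into a Redshift VARCHAR(MAX) column. */
  static boolean fitsInVarchar(final String serializedJson) {
    return serializedJson.getBytes(StandardCharsets.UTF_8).length <= REDSHIFT_VARCHAR_MAX_BYTES;
  }
}
```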
* Handle big records in RedshiftInsertDestination too
The Redshift copy strategy currently has its part size set to 10 MB. Since S3 allows a file to be split into at most 10,000 parts, this results in a 100 GB table limit. A user trying to sync a 115 GB table ran into this limit.
This makes the part size configurable so users can increase it if needed.
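A minimal sketch of the arithmetic behind that limit (pure illustration, not the connector's code):

```java
public final class S3PartSizeMath {

  // S3 multipart uploads allow at most 10,000 parts per object,
  // so the maximum object size is partSize * 10,000.
  private static final int MAX_PARTS_PER_UPLOAD = 10_000;

  /** Maximum uploadable object size in bytes for a given part size in MB. */
  static long maxObjectSizeBytes(final long partSizeMb) {
    return partSizeMb * 1024L * 1024L * MAX_PARTS_PER_UPLOAD;
  }

  public static void main(final String[] args) {
    // Fixed 10 MB parts: 10 MB * 10,000 = ~100 GB ceiling.
    System.out.println(maxObjectSizeBytes(10) / (1024L * 1024 * 1024) + " GB");
    // A configurable 12 MB part size clears a 115 GB table.
    System.out.println(maxObjectSizeBytes(12) / (1024L * 1024 * 1024) + " GB");
  }
}
```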
Release all connectors affected by namespace change. Includes all JDBC sources and destinations.
Also add documentation for normalization. Prerequisite to actually releasing 0.21.0-alpha.
* Add logging info when writing local data files
* Make Local CSV/JSON destinations always write to /local (no need to specify it in the configs)
* Bump version of Local CSV and JSON destinations
(they also throw exceptions on failure)
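A minimal sketch of the path handling this implies, assuming user paths are resolved under /local and escapes are rejected (helper names are hypothetical):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public final class LocalDestinationPath {

  private static final Path LOCAL_ROOT = Paths.get("/local");

  /**
   * Resolves a user-supplied destination path under /local, so configs no
   * longer need to spell out the /local prefix, and throws if the resolved
   * path would escape the mounted root.
   */
  static Path resolve(final String userPath) {
    final Path resolved = LOCAL_ROOT.resolve(userPath.replaceFirst("^/+", "")).normalize();
    if (!resolved.startsWith(LOCAL_ROOT)) {
      throw new IllegalArgumentException("Destination must stay under /local: " + userPath);
    }
    return resolved;
  }

  public static void main(final String[] args) {
    System.out.println(resolve("csv_data")); // /local/csv_data
    try {
      resolve("../etc/passwd");              // escapes the root
    } catch (final IllegalArgumentException e) {
      System.out.println("rejected: " + e.getMessage());
    }
  }
}
```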
* initial attempt at generating local kube setup from docker compose
* update current state
* mounts not working
* working mounts, failing CORS
* working UI
* add remaining todos
* update todos
* A
* use kustomize to select image versions
* kube process builder factory
* fix misalignment
* don't allow any retries for requested jobs
* fix log waiting and path handling
* update todos
* local volume handling
* propagate return code correctly
* update todos
* update docs
* fmt
* add to docs
* fix conflicting config file bug
* fmt
* delete unused file
* remove comment
* add job id and attempt as inputs
* rename to WorkerEnvironment
* fix example custom overlay
* less trigger-happy docs
* rename mounts
* show local CSV as not working in kube in the docs
* use config maps for everything
* fix paths
* fix build
* fix Stripe integration test usage
* fix papercups on kube