* Update README icon links
* Update airbyte-specification doc
* Extend base connector
* Remove redundant region
* Separate warning from info
* Implement s3 destination
* Run format
* Clarify logging message
* Rename variables and functions
* Update documentation
* Rename and annotate interface
* Inject formatter factory
* Remove part size
* Fix spec field names and add unit tests
* Add unit tests for csv output formatter
* Format code
* Complete acceptance test and fix bugs
* Fix uuid
* Remove generator template files
They belong to another PR.
* Add unhappy test case
* Check in airbyte state message
* Adjust stream transfer manager parameters
* Use underscore in filename
* Create csv sheet generator to handle data processing
* Format code
* Add partition id to filename
* Rename date format variable
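The filename-related commits above (underscores, partition id, date format) can be sketched as one small helper; the function name and exact filename pattern here are assumptions for illustration, not the actual S3 destination code:

```python
from datetime import datetime, timezone

def object_filename(stream_name: str, partition_id: int, ext: str = "csv") -> str:
    """Build an S3 object filename: underscores instead of spaces, an
    upload timestamp, and a partition id so parallel uploads stay unique.
    (Hypothetical helper illustrating the commits above.)"""
    safe_stream = stream_name.replace(" ", "_")
    upload_date = datetime.now(timezone.utc).strftime("%Y_%m_%d_%H%M%S")
    return f"{safe_stream}_{upload_date}_{partition_id}.{ext}"
```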
* Asana source
* Fix creds for CI.
* Update connection status in acceptance test config
Change status from `exception` to `failed`.
* Implement change request.
Remove a few files from the /integration_tests folder.
Use the `stream_slices` and/or `request_params` functions instead of the `read_records` function.
* Update sample_config.json file.
* Update airbyte-integrations/connectors/source-asana/CHANGELOG.md
Co-authored-by: Sherif A. Nada <snadalive@gmail.com>
* Update `stream_slices` usage.
Create a generic `read_stream` function in the AsanaStream class and move the logic from the `stream_slices` function there.
* Rename functions.
Rename `read_stream` to `read_slices_from_records`.
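The refactor above, deriving slices from a parent stream's records rather than overriding `read_records`, can be sketched roughly like this; the class and field names are assumptions, not the actual source-asana code:

```python
from typing import Any, Iterable, Mapping, Optional

class AsanaStreamSketch:
    """Hypothetical sketch of producing stream slices from another
    stream's records, as in the `read_slices_from_records` commit."""

    def read_slices_from_records(self, parent_records: Iterable[Mapping[str, Any]],
                                 slice_field: str) -> Iterable[Mapping[str, Any]]:
        # Each parent record (e.g. a workspace) yields one slice whose id
        # later feeds request_params for the child stream.
        for record in parent_records:
            yield {slice_field: record["gid"]}

    def request_params(self, stream_slice: Optional[Mapping[str, Any]] = None) -> Mapping[str, Any]:
        # The slice value becomes a query parameter instead of being
        # handled inside a custom read_records override.
        return {"workspace": stream_slice["workspace_id"]} if stream_slice else {}
```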
* Changes for publishing.
Add asana source to `source_definitions.yaml`.
Add `asana.svg`.
Create connector-related file in the `STANDARD_SOURCE_DEFINITION` folder.
Co-authored-by: Sherif A. Nada <snadalive@gmail.com>
* Create new test_ephemeral and refactor with test_normalization
* Add notes in docs
* Refactor common normalization tests into DbtIntegrationTest
* Bump version of normalization image
This PR introduces the following behavior for JDBC sources:
Instead of streamName = schema.tableName, this is now streamName = tableName and namespace = schema. This means that, when replicating from these sources, data will be replicated into a form matching the source, e.g. public.users (postgres source) -> public.users (postgres destination) instead of the current behaviour of public.public_users. Since MySQL does not have schemas, the MySQL source uses the database as its namespace.
To do so:
- Make namespace a first-class concept in the Airbyte Protocol. This allows the source to propagate a namespace and destinations to write to a source-defined namespace. It also sets us up for future namespace-related configurability.
- Add an optional namespace field to the AirbyteRecordMessage. This field will be set by sources that support namespace.
- Introduce AirbyteStreamNameNamespacePair as a type-safe manner of identifying streams throughout our code base.
- Modify base_normalisation to better support source defined namespace, specifically allowing normalisation of tables with the same name to different schemas.
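The stream-name/namespace split described above can be illustrated with a minimal pair type; this Python sketch mirrors the idea behind `AirbyteStreamNameNamespacePair`, but it is not the actual (Java) implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class StreamNameNamespacePair:
    """Type-safe stream identifier: name and namespace are separate
    fields instead of a fused 'schema.tableName' string."""
    name: str
    namespace: Optional[str] = None

def from_legacy(stream_name: str) -> StreamNameNamespacePair:
    # The old behaviour fused schema and table into one stream name; the
    # new behaviour keeps them apart so destinations can mirror the source.
    if "." in stream_name:
        namespace, name = stream_name.split(".", 1)
        return StreamNameNamespacePair(name=name, namespace=namespace)
    return StreamNameNamespacePair(name=stream_name)
```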
* Zendesk Talk #2346: full refresh/incremental sync connector, adopting best practices
Co-authored-by: ykurochkin <y.kurochkin@zazmic.com>
Co-authored-by: Sherif A. Nada <snadalive@gmail.com>
Last step (besides documentation) of namespace changes. This is a follow-up to #2767.
After this change, the following JDBC sources will change their behaviour to the behaviour described in the above document.
Namely, instead of streamName = schema.tableName, this will become streamName = tableName and namespace = schema. This means that, when replicating from these sources, data will be replicated into a form matching the source, e.g. public.users (postgres source) -> public.users (postgres destination) instead of the current behaviour of public.public_users. Since MySQL does not have schemas, the MySQL source uses the database as its namespace.
I cleaned up some bits of the CatalogHelpers. This affected the destinations, so I'm also running the destination tests.
* #2166 Issue: create Instagram connector and implement all relevant streams as full refresh
* #2166 Issue: add Insights streams
* #2273 Issue: add Incremental for streams
* #2273 Issue: code clean up
* update code after review
* add check on error for Story Insight
* add comments to code
* Source Instagram: adopt best practices, add docs, pull data from all IG business accounts (#2373)
* #2276 Issue: adopt best practices, add separate integration test for Insights streams, create docs, update version of SDK library
* #2304 Issue: pull data from all IG business accounts
* add BASE_DIRECTORY to integration_test.py
* format configured_catalog(s)
* add credentials variables
* implement separated incremental states for different account_id, update docs
* Update instagram.md
* simplify state format
Co-authored-by: ykurochkin <y.kurochkin@zazmic.com>
Co-authored-by: Sherif A. Nada <snadalive@gmail.com>
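The per-account incremental state mentioned above can be sketched as a state dict keyed by account id; the field names here are assumptions, not the actual source-instagram state shape:

```python
from typing import Any, Dict, Mapping

def update_state(state: Dict[str, Any], account_id: str,
                 latest_record: Mapping[str, Any]) -> Dict[str, Any]:
    """Keep a separate cursor per IG business account so one account's
    sync progress never overwrites another's. (Hypothetical sketch.)"""
    current = state.get(account_id, {}).get("date", "")
    record_date = latest_record["date"]
    # Only advance the cursor; ISO dates compare correctly as strings.
    if record_date > current:
        state[account_id] = {"date": record_date}
    return state
```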
* add sample_config.json file
Co-authored-by: ykurochkin <y.kurochkin@zazmic.com>
Co-authored-by: Sherif A. Nada <snadalive@gmail.com>
* Google directory source #2110 - creating new source
* Google Directory #2110 - implementing new source
* Google directory #2110 - handling rate limit
* Google Directory #2110 - handling errors and rate limits
* Google Directory #2110 - reformat
* Google Directory #2110 - adding CI credentials
* Google Directory #2110 - adding to the source definition registry
* Google Directory #2110 - adding to the source definition registry(fix)
* Google Directory #2110 - injecting the config into the build environment
* Update google-directory.md
* Update google-directory.md
* Google directory #2110 - rename max_results to results_per_page and increase it to 100, fixing setup.py
Co-authored-by: Sherif A. Nada <snadalive@gmail.com>
* add initialization for status dashboard static api
* working status reporter for integration tests
* try different formatting for date
* actually run tests
* only report build status for master
* some level of functions + datetime switching by architecture
* mixpanel integration
* add gcc to build signer tap
* fix source
* update catalogs
* fix registration name
* use base classes for source and tests
+ fix issues with creds and incremental in configured_catalog.json
* restore json config file
* set default start_date to 1 year ago
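A default start_date of one year ago can be computed roughly like this (a sketch; the actual connector may use a different library or date format):

```python
from datetime import datetime, timedelta, timezone

def default_start_date() -> str:
    # Fall back to one year before now when the config omits start_date.
    one_year_ago = datetime.now(timezone.utc) - timedelta(days=365)
    return one_year_ago.strftime("%Y-%m-%d")
```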
* format
* fix logging of errors in BaseSingerSource