A few changes to improve debuggability:
1) We now print the stack trace that created an orphan thread, in addition to the current stack trace.
2) We add the ability for a connector to tell the integration runner to ignore certain threads (by adding a thread filter).
3) We change the logs of connector containers to include the name of the test that created them (usually each test creates and closes a container).
4) We also log when the timeout timer is triggered, to help understand why some tests still take far more time than their timeout should allow.
Another change that was initially included but has since been removed and might be worth revisiting:
Today, only Write checks for live threads on shutdown. We might want to do the same for other operations as well.
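The creation-trace idea in (1) could be sketched as below. This is a hypothetical `ThreadFactory`, not the actual Airbyte watcher; the class and method names are illustrative assumptions:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadFactory;

// Hypothetical sketch: remember the stack trace that created each thread, so
// an orphan-thread report can show both where the thread is now and where it
// came from.
public class TrackedThreadFactory implements ThreadFactory {

  private final Map<Long, StackTraceElement[]> creationTraces = new ConcurrentHashMap<>();

  @Override
  public Thread newThread(final Runnable r) {
    // Captured at creation time, i.e. the stack of whoever asked for the thread.
    final StackTraceElement[] creationTrace = Thread.currentThread().getStackTrace();
    final Thread thread = new Thread(r);
    creationTraces.put(thread.getId(), creationTrace);
    return thread;
  }

  // The orphan-thread watcher would log this alongside the thread's current trace.
  public StackTraceElement[] creationTraceOf(final Thread thread) {
    return creationTraces.get(thread.getId());
  }
}
```

A thread filter, as in (2), would then just be a `Predicate<Thread>` the connector registers so the watcher skips known-benign threads.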
As the title says, remove the GCS/S3 staging methods.
There isn't much usage, so we can remove this. Internal staging is also what Snowflake recommends, so using it is both cheaper and faster.
Co-authored-by: davinchia <davinchia@users.noreply.github.com>
Co-authored-by: Evan Tahler <evan@airbyte.io>
Co-authored-by: Pedro S. Lopez <pedroslopez@me.com>
* csv sheet generator supports 1s1t
* create+insert raw tables 1s1t
* add skeletons
* start writing tests
* progress in creating raw tables
* fix tests
* add s3 test; better csv generation
* handle case-sensitive column names
* also add gcs test
* hook T+D into the destination
* fix redshift; simplify
* Delete unused files?
* disable test; enable cleanup
* initialize config singleton in tests
* logistics
* header
* simplify
* fix unit tests
* correctly disable tests
* use default null for loaded_at
* fix test
* autoformat
* cython >.>
* more singleton init
* literally how?
* basic destinationhandler impl
* use raw string for type >.>
* add toDialectType
* basic createTable impl
* better sql query
* comment
* unused variables
* recorddiffer can be case-sensitive
* misc fixes
* add expected_records
* move constants to base-java
* use ternary
* fix tests
* resolve todo
* T+D can trigger on first commit
* fix test teardown
* implement softReset
* implement overwriteFinalTable
* better type stuff; check table schema
* fix
* derp
* implement updateTable?
* derp
* random wip stuff
* fix insertRaw
* theoretically implement stuff?
* stuff
* put suffix at the end
* different uuids
* fix expected records
* move tdtest resources into dat folder
* use resource files
* stuff
* move code around
* more stuff
* rename final table
* stuff
* cdc immediate deletion
* cdcComplexUpdate
* cleanup
* botched rebase
* more tests
* move back to old file
* Automated Commit - Format and Process Resources Changes
* add comments
* Automated Commit - Format and Process Resources Changes
* fix merge
* move expected_records into dat folder
* wip implement sqlgenerator test
* basic implementation
* tons of fixes, still tons more to go
* more stuff
* fix more things
* hacky convert temporal types to varchar
* test data fix
* fix variant parsing
* fix number
* fix time parsing; fix test data
* typo
* fix input data
* progress
* switch back to float
* add more test files
* swap int -> number
* fix PK null check
* fix overwriteTable
* better test
* Automated Commit - Format and Process Resources Changes
* type aliases, one more test
* also verify numeric precision/scale
* logistics
---------
Co-authored-by: edgao <edgao@users.noreply.github.com>
* Proof of concept parallel source stream reading implementation for MySQL
* Automated Change
* Add read method that supports concurrent execution to Source interface
* Remove parallel iterator
* Ensure that executor service is stopped
* Automated Commit - Format and Process Resources Changes
* Expose method to fix compilation issue
* Use concurrent map to avoid access issues
* Automated Commit - Format and Process Resources Changes
* Ensure concurrent streams finish before closing source
* Fix compile issue
* Formatting
* Exclude concurrent stream threads from orphan thread watcher
* Automated Commit - Format and Process Resources Changes
* Refactor orphaned thread logic to account for concurrent execution
* PR feedback
* Implement readStreams in wrapper source
* Automated Commit - Format and Process Resources Changes
* Add readStream override
* Automated Commit - Format and Process Resources Changes
* 🤖 Auto format source-mysql code [skip ci]
* Debug logging
* Reduce logging level
* Replace synchronized calls to System.out.println when concurrent
* Close consumer
* Flush before close
* Automated Commit - Format and Process Resources Changes
* Remove charset
* Use ASCII and flush periodically for parallel streams
* Test performance harness patch
* Automated Commit - Format and Process Resources Changes
* Cleanup
* Logging to identify concurrent read enabled
* Mark parameter as final
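The commits above describe fanning streams out to an executor, collecting results in a concurrent map, and making sure all streams finish before the source closes. A minimal sketch of that shape (class and method names are illustrative, not the real connector API):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Function;

// Illustrative sketch: read several streams concurrently and wait for all of
// them to finish before the source is closed.
public class ConcurrentStreamReader {

  public static <T> Map<String, List<T>> readStreams(
      final List<String> streamNames,
      final Function<String, List<T>> readOneStream,
      final int parallelism) throws InterruptedException {
    final ExecutorService executor = Executors.newFixedThreadPool(parallelism);
    // Concurrent map avoids access issues when workers write results in parallel.
    final Map<String, List<T>> results = new ConcurrentHashMap<>();
    try {
      final CountDownLatch done = new CountDownLatch(streamNames.size());
      for (final String name : streamNames) {
        executor.submit(() -> {
          try {
            results.put(name, readOneStream.apply(name));
          } finally {
            done.countDown();
          }
        });
      }
      done.await(); // ensure concurrent streams finish before closing the source
    } finally {
      executor.shutdown(); // ensure the executor service is stopped
    }
    return results;
  }
}
```

The worker threads spawned here would also be the ones excluded from the orphan-thread watcher, per the "Exclude concurrent stream threads" commit.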
---------
Co-authored-by: jdpgrailsdev <jdpgrailsdev@users.noreply.github.com>
Co-authored-by: octavia-squidington-iii <octavia-squidington-iii@users.noreply.github.com>
Co-authored-by: Rodi Reich Zilberman <867491+rodireich@users.noreply.github.com>
Co-authored-by: rodireich <rodireich@users.noreply.github.com>
* Add common async methods
* Automated Commit - Format and Process Resources Changes
---------
Co-authored-by: benmoriceau <benmoriceau@users.noreply.github.com>
While running the Snowflake certification test, we noticed an NPE on this line, indicating the state id queue was empty.
This should never happen, as we always keep an 'open' state id with a running counter to associate with the next state the source emits.
I eventually realised this happens because of a race condition in the FlushWorkers: multiple threads flush the same queue, without realising that the state id the flushing logic was operating on had already been flushed by another thread.
Partly closes airbytehq/oncall#2437 - a rare condition that can occur when comparator values change mid-comparison.
Closes #28154 - an NPE when retrieving from the queue.
- For the comparator error, we are unfortunately unable to write a test case that reproduces it. It's exceedingly difficult to force the java.lang.IllegalArgumentException: Comparison method violates its general contract! error. We do know the current implementation contains an error, which this PR fixes.
- The NPE is straightforward: the peek returns null, and we were not accounting for that before.
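Both fixes can be sketched in isolation. This is not the actual FlushWorkers code; the helper names below are assumptions. The idea: snapshot the comparator's inputs so they cannot change mid-sort, and treat a null peek() as "nothing to flush":

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.Queue;
import java.util.stream.Collectors;

// Hypothetical sketch of the two fixes described above.
public class FlushSafety {

  // Comparator fix: sorting queues by a live size() can throw "Comparison
  // method violates its general contract!" when producers mutate the queues
  // mid-sort. Snapshot the sizes once, then sort on the immutable snapshot.
  public static <K> List<K> queuesBySizeDescending(final Map<K, ? extends Queue<?>> queues) {
    final Map<K, Integer> sizes = queues.entrySet().stream()
        .collect(Collectors.toMap(Map.Entry::getKey, e -> e.getValue().size()));
    final List<K> keys = new ArrayList<>(sizes.keySet());
    keys.sort(Comparator.<K>comparingInt(sizes::get).reversed());
    return keys;
  }

  // NPE fix: peek() returns null on an empty queue (e.g. when another worker
  // already flushed it); skip the flush instead of dereferencing the result.
  public static Optional<Long> nextStateId(final Queue<Long> stateIds) {
    return Optional.ofNullable(stateIds.peek());
  }
}
```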
* Adds data as JsonNode to pass through; ran into memory issues, so adds JVM args to attach VisualVM
* JsonNode
* Lowers the optimal batch size to see if this improves movement
* Fixes NPE by checking if PartialAirbyteMessage contains a PartialAirbyteRecord
* Fixes config switch when config is not explicitly set (config migration needed)
* Adds logic to check if queue has elements before getting timeOfLastMessage
* Add PartialSerialisedMessage test. (#27452)
* Test deserialise.
* Add tests.
* Simplify and fix tests.
* Format.
* Adds tests for deserializeAirbyteMessage
* Adds tests for deserializeAirbyteMessage with bad data
* Cleans up deserializeAirbyteMessage and throws Exception when invalid message
* More code cleanup
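A rough sketch of the partial-message validation described in the commits above, with an illustrative class (the real PartialAirbyteMessage is more involved, and the "missing payload" check here is simplified): keep the original serialized string for pass-through, and reject a RECORD message that lacks its record payload instead of letting it NPE later:

```java
// Hypothetical sketch: a "partial" message keeps the raw serialized string and
// only the fields the destination needs, and fails fast on an invalid message.
public class PartialAirbyteMessage {

  public final String type;       // e.g. "RECORD" or "STATE"
  public final String serialized; // original string, passed through untouched
  public final Long emittedAt;    // simplified stand-in for the record payload

  public PartialAirbyteMessage(final String type, final String serialized, final Long emittedAt) {
    // Per the commits above: check the partial record actually exists, and
    // throw on an invalid message rather than NPE-ing downstream.
    if ("RECORD".equals(type) && emittedAt == null) {
      throw new IllegalArgumentException("RECORD message without record payload: " + serialized);
    }
    this.type = type;
    this.serialized = serialized;
    this.emittedAt = emittedAt;
  }
}
```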
---------
Co-authored-by: ryankfu <ryan.fu@airbyte.io>
* 🤖 Auto format destination-snowflake code [skip ci]
* Cleans up code w/o JVM args & rebase
* 🤖 Auto format destination-snowflake code [skip ci]
* Adds breadcrumb on the STATE message deviation and where the deserialize/serialize is done to unpack
* 🤖 Auto format destination-snowflake code [skip ci]
* Adds back the line formatter that was removed, plus a comment describing the rationale for the lower batchSize
* Bumps Snowflake version and type checks
* Added note to remove PartialAirbyteRecordMessage with low resource testing
* Automated Commit - Format and Process Resources Changes
* Fix issue with multiple namespaces in snowflake not writing to the correct staging schema
* Fix issue with multiple namespaces in snowflake not writing to the correct staging schema
* remove stage name manipulating method
* update readme
* Source Stripe: update credit_notes expected records (#27941)
* Source Zendesk Talk: update expected records (#27942)
* Source Xero: update expected records (#27943)
* Metadata: Persist Registry entries (#27766)
* DNC
* Update poetry
* Update dagster
* Apply partition
* Get metadata entry
* Use helpers
* Write registry entry to appropriate location
* Delete when registry removed
* Update to use new file (broken)
* Render registry from registry entries
* Run format
* Fix plural issue
* Update to all metadata file blobs
* Fix test
* Update to all blobs
* Add ignore validation error for version logic
* Rename to max_run_request
* Pedro's review
* Ella's suggestions
Co-authored-by: Ella Rohm-Ensing <erohmensing@gmail.com>
* Update airbyte-ci/connectors/metadata_service/orchestrator/orchestrator/assets/registry_entry.py
Co-authored-by: Ella Rohm-Ensing <erohmensing@gmail.com>
* Update naming
* Add tests for connector type and deletion
* Test safe parse
* Format
---------
Co-authored-by: Octavia Squidington III <octavia-squidington-iii@users.noreply.github.com>
Co-authored-by: Ella Rohm-Ensing <erohmensing@gmail.com>
* Fix Dagster Deploy Failure (#27955)
* Add pydantic
* Add pydantic to orchestration deploy pipeline
* 🐛 Source Jira: update expected records (#27951)
* Source Jira: update expected records
* Update issues expected records
* Source Zendesk Chat: update expected records (#27965)
* 🐛 Source Pipedrive: update expected records (#27967)
* 🐛 Source Pinterest: update expected records (#27964)
* ✨ Source Amazon-Ads: Add streams for portfolios and sponsored brands v3 (#27607)
* Add stream for sponsored brands v3
* Add new stream Portfolios
* Source Google Search Console: added discover and googleNews to searchType (#27952)
* added discover and googleNews to searchType
* updated changelog
* fixed types for streams
* 🎉 Source Instagram: Improve, refactor `STATE` management (#27908)
* add test for enabling
* update versions
* fix test
* update other snowflake loading method types
* remove standard
---------
Co-authored-by: ryankfu <ryan.fu@airbyte.io>
Co-authored-by: Davin Chia <davinchia@gmail.com>
Co-authored-by: octavia-squidington-iii <octavia-squidington-iii@users.noreply.github.com>
Co-authored-by: ryankfu <ryankfu@users.noreply.github.com>
Co-authored-by: Augustin <augustin@airbyte.io>
Co-authored-by: Arsen Losenko <20901439+arsenlosenko@users.noreply.github.com>
Co-authored-by: Ben Church <ben@airbyte.io>
Co-authored-by: Octavia Squidington III <octavia-squidington-iii@users.noreply.github.com>
Co-authored-by: Ella Rohm-Ensing <erohmensing@gmail.com>
Co-authored-by: Anatolii Yatsuk <35109939+tolik0@users.noreply.github.com>
Co-authored-by: Daryna Ishchenko <80129833+darynaishchenko@users.noreply.github.com>
Co-authored-by: Baz <oleksandr.bazarnov@globallogic.com>
* copy files from edgao branch
* start writing create table statement
* add basic unit test setup
* create a table, probably
* remove outdated todo
* derp, one more column
* ugh
* add partitioning+clustering
* use StringSubstitutor
* substitutions in updateTable
* wip generate update/insert statement
* split up into smaller methods
* handle json types correctly
* rename stuff
* more json_query vs _value stuff
* minor tweak
* super basic test setup
* laying foundation for type parsing
* more stuff
* tweaks
* more progress on type parsing
* fix json_value stuff?
* misc fixes in insert
* fix dedupFinalTable
* add testDedupRaw
* full e2e test
* type parsing: gave up and mirrored the dbt code structure to avoid bugs
* type parsing - more cleanup
* handle column name collisions
* handle tablename collisions...?
* comments
* remove original ns/name from quotedstream
* also javadoc
* remove redundant method
* fix table rename
* add incremental append test
* add full refresh append test
* comment
* call T+D sql in a reasonable location for standard inserts
* add config option
* use config option here
* type parsing - fix fromJsonSchema
* gate everything
* log query + runtime
* add spec option temporarily
* Raw Table Updates
* fix more stuff
* first big pass at toDialectType
* no quotes
* wrap everything in quotes
* resolve some TODOs
* log sql statement in tests
* overwriteFinalTable returns optional
* minor clean up
* add raw dataset override
* try to preserve the original namespace for t+d?
* write to the raw table correctly
* update todos
* write directly to raw table
this is kind of dumb because we're still trying to do tmp table operations,
and we still don't ack state until the end of the entire sync.
* standard inserts write to raw table correctly
* imports + log statements
* move logs + add comment
* explicitly create raw table
* move comment to better place
* Typing issues
* bash attempt
* formatting updates
* formatting updates
* write to the airbyte schema by default unless overridden by config options
* standard inserts truncate raw table at start of sync
* full refresh overwrite will overwrite correctly!
* fix avro record schema parsing
* better raw table recreate
* rename raw table to match standard inserts
* full refresh overwrite does tmp table things
* small clean up
* small clean up
* remove errors entry if no errors
* pull out destination config into singleton
* clean up singleton stuff
* make sure dest config exists when trying to do lookups
* avoid stringifying null
* quick thoughts on alter table
* add basic cdc testcase
* tweak cdc test setup
* rename raw table to match standard inserts
* minor tweak
* delete exact sql string assertions
* switch to JSON type
* minor cleanup
* sql whitespace changes
* explain cdc deletions
* GCS Staging Full Refresh create temp table
* assert schema
* first out of order cdc test
* add another cdc test case (currently failing)
* better test structure
* make this work
* oops, fix test
* stop trying to delete deletion records
* minor improvements to code+test
* enable concurrent test runs on integration test
* move stuff to static initializer
* extract utility method
* formatting
* Move conditional to the base java package; replace conditionals which did not use the typing and deduping flag but should have
* 🤖 Auto format destination-bigquery code [skip ci]
* 🤖 Auto format destination-gcs code [skip ci]
* switch back to empty list; write big assert
* minor wording tweaks
* 🤖 Auto format destination-bigquery code [skip ci]
* 🤖 Auto format destination-gcs code [skip ci]
* DestinationConfigTest
* 🤖 Auto format destination-bigquery code [skip ci]
* 🤖 Auto format destination-gcs code [skip ci]
* formatting
* remove ParsedType
* 🤖 Auto format destination-gcs code [skip ci]
* 🤖 Auto format destination-bigquery code [skip ci]
* tests verify every data type
* 🤖 Auto format destination-bigquery code [skip ci]
* 🤖 Auto format destination-gcs code [skip ci]
* full update with all data types
* 🤖 Auto format destination-bigquery code [skip ci]
* 🤖 Auto format destination-gcs code [skip ci]
* move stuff to new base lib
* 🤖 Auto format destination-gcs code [skip ci]
* Automated Commit - Formatting Changes
* 🤖 Auto format destination-bigquery code [skip ci]
* fix test
* 🤖 Auto format destination-bigquery code [skip ci]
* 🤖 Auto format destination-bigquery code [skip ci]
* 🤖 Auto format destination-gcs code [skip ci]
* asserts in dedupFinalTable
* better asserts in dedupRawTable
* [wip] test case for all data types
* 🤖 Auto format destination-gcs code [skip ci]
* 🤖 Auto format destination-bigquery code [skip ci]
* AirbyteTypeTest
* Automated Commit - Formatting Changes
* remove comments
* test chooseOneOf
* slightly better test output
* Automated Commit - Formatting Changes
* add some awful pretty print code
* more comment
* minor tweaks
* verify array/object type
* fix test
* handle deletions more correctly
* test toDialectType
* Destinations v2: better namespace handling (#27682)
* [wip] better namespace handling
* 🤖 Auto format destination-bigquery code [skip ci]
* wip also implement in gcs
* get gcs working (?)
* 🤖 Auto format destination-bigquery code [skip ci]
* remove duplicate method
* 🤖 Auto format destination-bigquery code [skip ci]
* fixed my code style settings
* make ci happy?
* 🤖 Auto format destination-bigquery code [skip ci]
* make ci happy?
* remove incorrect test
* blank line change
* initialize singleton
---------
Co-authored-by: octavia-squidington-iii <octavia-squidington-iii@users.noreply.github.com>
* reset args correctly
* Automated Commit - Formatting Changes
* more bash stuff
* parse implicit structs
* initialize singleton in more tests
* Automated Commit - Formatting Changes
* I missed this namespace handling thing
* test more schemas
* fix singular types specified in arrays
* Automated Commit - Formatting Changes
* disable test for unimplemented feature
* initialize singleton
* remove spec options; changelogs+metadata
* randomize namespace
* also bump dockerfile
* unremove namespace sanitizing in legacy mode
* ... disable the correct test
* even more unit test fixes!
* move integration test to integration tests
---------
Co-authored-by: Cynthia Yin <cynthia@airbyte.io>
Co-authored-by: Joe Bell <joseph.bell@airbyte.io>
Co-authored-by: octavia-squidington-iii <octavia-squidington-iii@users.noreply.github.com>
Co-authored-by: edgao <edgao@users.noreply.github.com>
Co-authored-by: cynthiaxyin <cynthiaxyin@users.noreply.github.com>
* No Op change to see if Connectors Base is unrelated
* Isolated test to see the memory used after processing CSV writer
* Changes record writer
* Removed Json.serialize on string
* Turn on Snowflake Async
* Fixed emittedAt timestamp in milliseconds
* Add comments. Undo Evan build changes.
* Allow remote debugging.
* Adds calc for PartialAirbyteRecordMessage memory overhead
* Revert back airbyte build architecture
* Passes in the parsed emittedAt to the CSVwriter
* Cleans up lingering comments
* Performance enhancement - increase the available JVM memory known by the container
* Serializes the data earlier to avoid normalization issues and lowers optimal batch size
* Passes messageString instead of serializing the data, since the memory overhead of serializing caused early OOMs
* Removes memory overhead of record data
* Cleans up PR, fixes memory issues & increases throughput by lowering optimal batch size
* Add calculation comment & removes procps from docker install
* Automated Commit - Format and Process Resources Changes
* Turns on Async for testing purposes
* Moves the implementation higher up the class hierarchy to remove an unnecessary no-op
* Automated Commit - Format and Process Resources Changes
* Adds TODOs for migrating buffers to use only serialized records
* Breaks down calc & throws UnsupportedOperationException
* Adds TODO for migrating all destinations to use the getDataRow(id, formattedString, emittedAt) to avoid unnecessary ser-de overhead
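The getDataRow(id, formattedString, emittedAt) idea cuts ser-de overhead by carrying the already-serialized record string end to end. A hedged sketch of the shape (field and method names beyond that signature are assumptions, not the real buffer API):

```java
import java.util.UUID;

// Hypothetical sketch of the "serialize once" idea in the commits above: keep
// the record's original serialized string and pass it straight to the CSV
// writer instead of deserializing and re-serializing every record.
public class DataRow {

  public final UUID id;
  public final String formattedString; // already-serialized record data
  public final long emittedAt;         // parsed once, in milliseconds

  public DataRow(final UUID id, final String formattedString, final long emittedAt) {
    this.id = id;
    this.formattedString = formattedString;
    this.emittedAt = emittedAt;
  }

  // What a buffer would write: no ser-de round trip on the data payload,
  // only CSV quote-escaping on the raw string.
  public String toCsvLine() {
    return id + "," + emittedAt + ",\"" + formattedString.replace("\"", "\"\"") + "\"";
  }
}
```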
---------
Co-authored-by: Davin Chia <davinchia@gmail.com>
Co-authored-by: ryankfu <ryankfu@users.noreply.github.com>
* snowflake at end of coding retreat week
* turn off async snowflake
* add comment
* Fixed test to point to the ShimMessageConsumer instead of AsyncMessageConsumer
* Automated Change
---------
Co-authored-by: ryankfu <ryan.fu@airbyte.io>
Co-authored-by: ryankfu <ryankfu@users.noreply.github.com>