* rename for clarity
* fix cleanup method
* giant commit because I'm irresponsible
* rename constant
* better raw table creation
* fix build?
* move code around
* tweaks
* more code shuffling
* Automated Commit - Format and Process Resources Changes
* add tests
* minor tweak
* remove unimportant methods
* cleanup
* Automated Commit - Format and Process Resources Changes
* derp
* clean up tests
* some more fixes post-merge
* botched merge
* create NoopTyperDeduper
* try and update everything to work?
* tweak comment
* move suffix args to end of list
* fix exception message
* Automated Commit - Format and Process Resources Changes
* add sqlgenerator test for softReset
* only prepare once
* update log message
* do what intellij says
* implement one more test
* less indirection
* Automated Commit - Format and Process Resources Changes
* rename test
* use noop in test
* version bump + changelog
* use stringutils
* fix typo
* flip if-statement
* typo
* simplify logic
* fix schema change logic
* typo
* use spy for clarity
* Automated Commit - Format and Process Resources Changes
* better test teardown
* slightly better logs
* fix exception message
* softReset returns single string
* Automated Commit - Format and Process Resources Changes
* simplify if chain
---------
Co-authored-by: edgao <edgao@users.noreply.github.com>
* Add ability to detect differences in expected Schemas and perform soft resets
* Remove alter table for overwrite syncs since it's unnecessary
* Updates after testing
* pr reorganize
* comments
* add collection util test
* Add Tests
* bump version
* Automated Commit - Format and Process Resources Changes
* Destination BigQuery - Reduce amount of typing and deduping for GCS staging (#28489)
* undo comment out
* centralize t&d logic for staging and standard, add valve to staging
* Share more logic for typing and deduping
* Remove record checking logic and use only time for staging inserts
* Add Javadoc
* Automated Commit - Format and Process Resources Changes
---------
Co-authored-by: jbfbell <jbfbell@users.noreply.github.com>
* Change TableNotMigratedException to extend runtime exception, remove SqlGenerator interface method
* Make Lambda slightly more readable
* add test for validating v2 schemas
* change soft reset to single string
* convert back to list, update dockerfile
* remove needless default
---------
Co-authored-by: jbfbell <jbfbell@users.noreply.github.com>
* `destination-redshift` will fail syncs if records or properties are too large, rather than silently skipping records and succeeding
* Bump version
* remove tests that don't matter any more
* more test removal
* more test removal
---------
Co-authored-by: Augustin <augustin@airbyte.io>
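The fail-loudly behavior above can be sketched as a simple size check that throws instead of dropping the record. This is a hypothetical illustration, not the connector's actual implementation; the class name, method, and the 16 MB limit are all assumptions.

```java
import java.nio.charset.StandardCharsets;

// Hypothetical sketch: instead of silently dropping a record that exceeds the
// destination's size limit, fail the sync loudly so the user sees the problem.
// The 16 MB limit is illustrative, not Redshift's exact constraint.
public class RecordSizeCheck {
    static final int MAX_RECORD_BYTES = 16 * 1024 * 1024;

    public static void validate(final String serializedRecord) {
        final int size = serializedRecord.getBytes(StandardCharsets.UTF_8).length;
        if (size > MAX_RECORD_BYTES) {
            // Failing fast surfaces the problem instead of producing a
            // "successful" sync with missing rows.
            throw new IllegalArgumentException(
                "Record size " + size + " bytes exceeds limit of " + MAX_RECORD_BYTES);
        }
    }
}
```

The point of the change is visibility: a sync that skips data while reporting success is worse than one that fails.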
Partly closes airbytehq/oncall#2437 - a rare condition that can occur when comparator values change mid-comparison.
Closes #28154 - an NPE when retrieving from the queue.
- For the comparator error, we are unfortunately unable to write a test case to reproduce the error. It's exceedingly difficult to force the `java.lang.IllegalArgumentException: Comparison method violates its general contract!` error. We do know the current implementation contains an error that this PR fixes.
- The NPE is straightforward: the peek returns null, and we were not accounting for that before.
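The null-peek fix can be sketched as follows; this is a minimal illustration of the pattern, assuming hypothetical names, not the actual queue code from this PR. `peek()` on Java queues returns null when the queue is empty, so wrapping the result in `Optional` forces callers to handle that case.

```java
import java.util.Optional;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical sketch of the null-safe peek fix: ConcurrentLinkedQueue.peek()
// returns null when the queue is empty, so callers must not dereference it blindly.
public class QueuePeekFix {
    private final ConcurrentLinkedQueue<String> queue = new ConcurrentLinkedQueue<>();

    public void add(final String item) {
        queue.add(item);
    }

    public Optional<String> peekSafely() {
        // Optional.ofNullable maps the empty-queue null into Optional.empty(),
        // so the empty case must be handled explicitly at the call site.
        return Optional.ofNullable(queue.peek());
    }
}
```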
* use LSN as default cursor for postgres CDC
* Fixed static constant
* Set lsn default cursor value for postgres sync
* Bumped metadata and dockerfile versions
* Disable acceptance backwards compatibility discovery test as this is a breaking change
---------
Co-authored-by: Conor <cpdeethree@users.noreply.github.com>
* wip
* wip 2
* undo unwanted change
* some refactoring
* fix few tests
* more fixing
* more fixes
* Automated Commit - Format and Process Resources Changes
* more improvements
* 1 more test
* another test
* add flag for ctid enabling
* fix conflicts
* else block is not required
* use emittedAt
* skip WAL processing if streams under vacuuming
---------
Co-authored-by: subodh1810 <subodh1810@users.noreply.github.com>
Co-authored-by: Augustin <augustin@airbyte.io>
* Adds data as JsonNode to pass through, running into memory issues so add JVM args to attach VisualVM
* JsonNode
* Lowers the optimal batch size to see if this improves movement
* Fixes NPE by checking if PartialAirbyteMessage contains a PartialAirbyteRecord
* Fixes config switch when config is not explicitly set (config migration needed)
* Adds logic to check if queue has elements before getting timeOfLastMessage
* Add PartialSerialisedMessage test. (#27452)
* Test deserialise.
* Add tests.
* Simplify and fix tests.
* Format.
* Adds tests for deserializeAirbyteMessage
* Adds tests for deserializeAirbyteMessage with bad data
* Cleans up deserializeAirbyteMessage and throws Exception when invalid message
* More code cleanup
---------
Co-authored-by: ryankfu <ryan.fu@airbyte.io>
* 🤖 Auto format destination-snowflake code [skip ci]
* Cleans up code w/o JVM args & rebase
* 🤖 Auto format destination-snowflake code [skip ci]
* Adds breadcrumb on the STATE message deviation and where the deserialize/serialize is done to unpack
* 🤖 Auto format destination-snowflake code [skip ci]
* Adds back the line the formatter removed and a comment describing the rationale for the lower batchSize
* Bumps Snowflake version and type checks
* Added note to remove PartialAirbyteRecordMessage with low resource testing
* Automated Commit - Format and Process Resources Changes
* Fix issue with multiple namespaces in snowflake not writing to the correct staging schema
* Fix issue with multiple namespaces in snowflake not writing to the correct staging schema
* remove stage name manipulating method
* update readme
* Source Stripe: update credit_notes expected records (#27941)
* Source Zendesk Talk: update expected records (#27942)
* Source Xero: update expected records (#27943)
* Metadata: Persist Registry entries (#27766)
* DNC
* Update poetry
* Update dagster
* Apply partition
* Get metadata entry
* Use helpers
* Write registry entry to appropriate location
* Delete when registry removed
* Update to use new file (broken)
* Render registry from registry entries
* Run format
* Fix plural issue
* Update to all metadata file blobs
* Fix test
* Update to all blobs
* Add ignore validation error for version logic
* Rename to max_run_request
* Pedros review
* Ella suggestions
Co-authored-by: Ella Rohm-Ensing <erohmensing@gmail.com>
* Update airbyte-ci/connectors/metadata_service/orchestrator/orchestrator/assets/registry_entry.py
Co-authored-by: Ella Rohm-Ensing <erohmensing@gmail.com>
* Update naming
* Add tests for connector type and deletion
* Test safe parse
* Format
---------
Co-authored-by: Octavia Squidington III <octavia-squidington-iii@users.noreply.github.com>
Co-authored-by: Ella Rohm-Ensing <erohmensing@gmail.com>
* Fix Dagster Deploy Failure (#27955)
* Add pydantic
* Add pydantic to orchestration deploy pipeline
* 🐛 Source Jira: update expected records (#27951)
* Source Jira: update expected records
* Update issues expected records
* Source Zendesk Chat: update expected records (#27965)
* 🐛 Source Pipedrive: update expected records (#27967)
* 🐛 Source Pinterest: update expected records (#27964)
* ✨ Source Amazon-Ads: Add streams for portfolios and sponsored brands v3 (#27607)
* Add stream for sponsored brands v3
* Add new stream Portfolios
* Source Google Search Console: added discover and googleNews to searchType (#27952)
* added discover and googleNews to searchType
* updated changelog
* fixed types for streams
* 🎉 Source Instagram: Improve, refactor `STATE` management (#27908)
* add test for enabling
* update versions
* fix test
* update other snowflake loading method types
* remove standard
---------
Co-authored-by: ryankfu <ryan.fu@airbyte.io>
Co-authored-by: Davin Chia <davinchia@gmail.com>
Co-authored-by: octavia-squidington-iii <octavia-squidington-iii@users.noreply.github.com>
Co-authored-by: ryankfu <ryankfu@users.noreply.github.com>
Co-authored-by: Augustin <augustin@airbyte.io>
Co-authored-by: Arsen Losenko <20901439+arsenlosenko@users.noreply.github.com>
Co-authored-by: Ben Church <ben@airbyte.io>
Co-authored-by: Octavia Squidington III <octavia-squidington-iii@users.noreply.github.com>
Co-authored-by: Ella Rohm-Ensing <erohmensing@gmail.com>
Co-authored-by: Anatolii Yatsuk <35109939+tolik0@users.noreply.github.com>
Co-authored-by: Daryna Ishchenko <80129833+darynaishchenko@users.noreply.github.com>
Co-authored-by: Baz <oleksandr.bazarnov@globallogic.com>
* copy files from edgao branch
* start writing create table statement
* add basic unit test setup
* create a table, probably
* remove outdated todo
* derp, one more column
* ugh
* add partitioning+clustering
* use StringSubstitutor
* substitutions in updateTable
* wip generate update/insert statement
* split up into smaller methods
* handle json types correctly
* rename stuff
* more json_query vs _value stuff
* minor tweak
* super basic test setup
* laying foundation for type parsing
* more stuff
* tweaks
* more progress on type parsing
* fix json_value stuff?
* misc fixes in insert
* fix dedupFinalTable
* add testDedupRaw
* full e2e test
* type parsing: gave up and mirrored the dbt code structure to avoid bugs
* type parsing - more cleanup
* handle column name collisions
* handle tablename collisions...?
* comments
* remove original ns/name from quotedstream
* also javadoc
* remove redundant method
* fix table rename
* add incremental append test
* add full refresh append test
* comment
* call T+D sql in a reasonable location for standard inserts
* add config option
* use config option here
* type parsing - fix fromJsonSchema
* gate everything
* log query + runtime
* add spec option temporarily
* Raw Table Updates
* fix more stuff
* first big pass at toDialectType
* no quotes
* wrap everything in quotes
* resolve some TODOs
* log sql statement in tests
* overwriteFinalTable returns optional
* minor clean up
* add raw dataset override
* try to preserve the original namespace for t+d?
* write to the raw table correctly
* update todos
* write directly to raw table
this is kind of dumb because we're still trying to do tmp table operations,
and we still don't ack state until the end of the entire sync.
* standard inserts write to raw table correctly
* imports + log statements
* move logs + add comment
* explicitly create raw table
* move comment to better place
* Typing issues
* bash attempt
* formatting updates
* formatting updates
* write to the airbyte schema by default unless overridden by config options
* standard inserts truncate raw table at start of sync
* full refresh overwrite will overwrite correctly!
* fix avro record schema parsing
* better raw table recreate
* rename raw table to match standard inserts
* full refresh overwrite does tmp table things
* small clean up
* small clean up
* remove errors entry if no errors
* pull out destination config into singleton
* clean up singleton stuff
* make sure dest config exists when trying to do lookups
* avoid stringifying null
* quick thoughts on alter table
* add basic cdc testcase
* tweak cdc test setup
* rename raw table to match standard inserts
* minor tweak
* delete exact sql string assertions
* switch to JSON type
* minor cleanup
* sql whitespace changes
* explain cdc deletions
* GCS Staging Full Refresh create temp table
* assert schema
* first out of order cdc test
* add another cdc test case (currently failing)
* better test structure
* make this work
* oops, fix test
* stop trying to delete deletion records
* minor improvements to code+test
* enable concurrent test runs on integration test
* move stuff to static initializer
* extract utility method
* formatting
* Move conditional to the base java package, replace conditionals that should have used the typing and deduping flag but did not
* 🤖 Auto format destination-bigquery code [skip ci]
* 🤖 Auto format destination-gcs code [skip ci]
* switch back to empty list; write big assert
* minor wording tweaks
* 🤖 Auto format destination-bigquery code [skip ci]
* 🤖 Auto format destination-gcs code [skip ci]
* DestinationConfigTest
* 🤖 Auto format destination-bigquery code [skip ci]
* 🤖 Auto format destination-gcs code [skip ci]
* formatting
* remove ParsedType
* 🤖 Auto format destination-gcs code [skip ci]
* 🤖 Auto format destination-bigquery code [skip ci]
* tests verify every data type
* 🤖 Auto format destination-bigquery code [skip ci]
* 🤖 Auto format destination-gcs code [skip ci]
* full update with all data types
* 🤖 Auto format destination-bigquery code [skip ci]
* 🤖 Auto format destination-gcs code [skip ci]
* move stuff to new base lib
* 🤖 Auto format destination-gcs code [skip ci]
* Automated Commit - Formatting Changes
* 🤖 Auto format destination-bigquery code [skip ci]
* fix test
* 🤖 Auto format destination-bigquery code [skip ci]
* 🤖 Auto format destination-bigquery code [skip ci]
* 🤖 Auto format destination-gcs code [skip ci]
* asserts in dedupFinalTable
* better asserts in dedupRawTable
* [wip] test case for all data types
* 🤖 Auto format destination-gcs code [skip ci]
* 🤖 Auto format destination-bigquery code [skip ci]
* AirbyteTypeTest
* Automated Commit - Formatting Changes
* remove comments
* test chooseOneOf
* slightly better test output
* Automated Commit - Formatting Changes
* add some awful pretty print code
* more comment
* minor tweaks
* verify array/object type
* fix test
* handle deletions more correctly
* test toDialectType
* Destinations v2: better namespace handling (#27682)
* [wip] better namespace handling
* 🤖 Auto format destination-bigquery code [skip ci]
* wip also implement in gcs
* get gcs working (?)
* 🤖 Auto format destination-bigquery code [skip ci]
* remove duplicate method
* 🤖 Auto format destination-bigquery code [skip ci]
* fixed my code style settings
* make ci happy?
* 🤖 Auto format destination-bigquery code [skip ci]
* make ci happy?
* remove incorrect test
* blank line change
* initialize singleton
---------
Co-authored-by: octavia-squidington-iii <octavia-squidington-iii@users.noreply.github.com>
* reset args correctly
* Automated Commit - Formatting Changes
* more bash stuff
* parse implicit structs
* initialize singleton in more tests
* Automated Commit - Formatting Changes
* I missed this namespace handling thing
* test more schemas
* fix singular types specified in arrays
* Automated Commit - Formatting Changes
* disable test for unimplemented feature
* initialize singleton
* remove spec options; changelogs+metadata
* randomize namespace
* also bump dockerfile
* unremove namespace sanitizing in legacy mode
* ... disable the correct test
* even more unit test fixes!
* move integration test to integration tests
---------
Co-authored-by: Cynthia Yin <cynthia@airbyte.io>
Co-authored-by: Joe Bell <joseph.bell@airbyte.io>
Co-authored-by: octavia-squidington-iii <octavia-squidington-iii@users.noreply.github.com>
Co-authored-by: edgao <edgao@users.noreply.github.com>
Co-authored-by: cynthiaxyin <cynthiaxyin@users.noreply.github.com>
* Incorrect way to do this
* Working
* Make tests pretty
* Revert "Incorrect way to do this"
This reverts commit f8e29594c1b5fa07bad805806f2571af883d27fd.
* Add backwards compatibility docs
* bump version
* format
---------
Co-authored-by: Octavia Squidington III <octavia-squidington-iii@users.noreply.github.com>
* No Op change to see if Connectors Base is unrelated
* Isolated test to see the memory used after processing CSV writer
* Changes record writer
* Removed Json.serialize on string
* Turn on Snowflake Async
* Fixed emittedAt timestamp in milliseconds
* Add comments. Undo Evan build changes.
* Allow remote debugging.
* Adds calc for PartialAirbyteRecordMessage memory overhead
* Revert back airbyte build architecture
* Passes in the parsed emittedAt to the CSVwriter
* Cleans up lingering comments
* Performance enhancement - increase the available JVM memory known by the container
* Serializes the data earlier to avoid normalization issues and lowers optimal batch size
* Passes messageString instead of serializing the data, since the memory overhead causes OOM early
* Removes memory overhead of record data
* Cleans up PR, fixes memory issues & increases throughput by lowering optimal batch size
* Add calculation comment & removes procps from docker install
* Automated Commit - Format and Process Resources Changes
* Turns on Async for testing purposes
* Moves implementation higher up the class hierarchy to remove an unnecessary no-op
* Automated Commit - Format and Process Resources Changes
* Adds TODOs for migrating buffers to use only serialized records
* Breaks down calc & throws UnsupportedOperationException
* Adds TODO for migrating all destinations to use getDataRow(id, formattedString, emittedAt) to avoid unnecessary ser-de overhead
---------
Co-authored-by: Davin Chia <davinchia@gmail.com>
Co-authored-by: ryankfu <ryankfu@users.noreply.github.com>
* Ensure we only generate iso8601
* Remove format in case of pattern
* Remove unused types
---------
Co-authored-by: Octavia Squidington III <octavia-squidington-iii@users.noreply.github.com>
* snowflake at end of coding retreat week
* turn off async snowflake
* add comment
* Fixed test to point to the ShimMessageConsumer instead of AsyncMessageConsumer
* Automated Change
---------
Co-authored-by: ryankfu <ryan.fu@airbyte.io>
Co-authored-by: ryankfu <ryankfu@users.noreply.github.com>
* Multi-architecture normalization build (local)
When building and testing normalization locally, we need to force the base images to match the local host OS.
This is not a problem when publishing the connectors, as `airbyte-ci`/dagger handles this for us.
* Update build.gradle
* initial commit
* cleanup
* cleanup
* Throw a configuration exception in case of bad jdbc url param
* End-to-End test session variable jdbc param
* refactoring sanity
* sanity
* bump version and update note
* cherry pick fix to unrelated build error
* fix failing test
---------
Co-authored-by: Ryan Fu <ryan.fu@airbyte.io>
Hiding the actual Java Queue as an inner class avoids the chance that someone tries to use native queue methods that we haven't overridden. Good thing that I did this too, because one of the changes we made during hack dayz wasn't reflected in our current feature branch: we need to override poll(time, unit), not just poll(). This PR makes sure we won't make that mistake again!
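The wrapper pattern described above can be sketched like this; class and method names are hypothetical, not the actual PR code. The key idea is that the delegate queue is private, so every retrieval path, including the timed `poll(timeout, unit)` overload that was originally missed, funnels through our own logic.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: hide the real queue behind a wrapper so callers can't
// reach native methods we haven't overridden. Both poll overloads go through
// the same bookkeeping, closing the gap poll(timeout, unit) used to leave.
public class HiddenQueue<E> {
    private final LinkedBlockingQueue<E> delegate = new LinkedBlockingQueue<>();
    private long itemsRemoved = 0;

    public boolean offer(final E e) {
        return delegate.offer(e);
    }

    public E poll() {
        return recordRemoval(delegate.poll());
    }

    // The timed variant must route through the same accounting, otherwise a
    // caller picking this overload would silently bypass it.
    public E poll(final long timeout, final TimeUnit unit) throws InterruptedException {
        return recordRemoval(delegate.poll(timeout, unit));
    }

    private E recordRemoval(final E e) {
        if (e != null) {
            itemsRemoved++;
        }
        return e;
    }

    public long getItemsRemoved() {
        return itemsRemoved;
    }
}
```

Because the delegate is never exposed, forgetting to override a method shows up as a compile error at the call site rather than a silent accounting bug.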
Incorporate the changes from #26178 .
- Update queue method from setMaxMemory to addMaxMemory.
- Update work retrieval logic to only assign work
- if there are free worker threads
- to account for in-progress worker threads
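The two work-retrieval rules above (only assign work when a worker is free, and count in-progress workers) can be sketched with a semaphore tracking free worker slots. This is an illustrative sketch with hypothetical names, not the actual dispatcher from this change.

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.Semaphore;

// Hypothetical sketch of the work-retrieval rule: only hand out work when a
// worker thread is actually free, tracking in-progress workers via a semaphore.
public class WorkDispatcher {
    private final Semaphore freeWorkers;
    private final Queue<Runnable> pending = new ArrayDeque<>();

    public WorkDispatcher(final int workerCount) {
        this.freeWorkers = new Semaphore(workerCount);
    }

    public void submit(final Runnable work) {
        pending.add(work);
    }

    // Returns the next unit of work only if a worker slot is free; otherwise
    // null, so in-progress workers are never oversubscribed.
    public Runnable tryAssign() {
        if (pending.isEmpty() || !freeWorkers.tryAcquire()) {
            return null;
        }
        return pending.poll();
    }

    // Called by a worker when it finishes, releasing its slot.
    public void complete() {
        freeWorkers.release();
    }
}
```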