## What
In the migration to update connector open source licenses, we missed a
couple of references. This change is necessary to pass our pre-release
checks, since we assert that the license in the connector's
pyproject.toml file matches the license in the connector's metadata
file. I don't believe this warrants bumping connector versions.
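For context, the check in question can be sketched roughly as follows. The field locations (`tool.poetry.license` in `pyproject.toml`, `data.license` in `metadata.yaml`) are assumptions for illustration, not the actual pre-release check code.

```python
# Rough sketch of the license consistency assertion described above.
# The field paths used here are assumptions for illustration only.
import tomllib  # Python 3.11+ standard library
import yaml

def licenses_match(connector_dir: str) -> bool:
    with open(f"{connector_dir}/pyproject.toml", "rb") as f:
        pyproject_license = tomllib.load(f)["tool"]["poetry"]["license"]
    with open(f"{connector_dir}/metadata.yaml") as f:
        metadata_license = yaml.safe_load(f)["data"]["license"]
    return pyproject_license.strip().lower() == metadata_license.strip().lower()
```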
## Can this PR be safely reverted and rolled back?
- [x] YES 💚
- [ ] NO ❌
# Update source-iterable
This PR was autogenerated by running `airbyte-ci connectors
--name=source-iterable up_to_date --pull`
We've set the `auto-merge` label on it, so it will be automatically
merged if the CI pipelines pass.
If you don't want it to merge automatically, please remove the
`auto-merge` label.
Please reach out to the Airbyte Connector Tooling team if you have any
questions or concerns.
## Operations
- Upgrade the base image to the latest version in metadata.yaml: Skipped
- Update versions of libraries in poetry: Successful
- PATCH bump source-iterable version to 0.6.52: Successful
- Build source-iterable docker image for platform(s) linux/amd64,
linux/arm64: Successful
- Get dependency updates: Successful
- Create or update pull request on Airbyte repository: Successful
- Add changelog entry: Successful
## Dependency updates
We use [`syft`](https://github.com/anchore/syft) to generate an SBOM for
the latest released connector version and for the version built in this PR.
This lets us spot dependencies that have been updated at any level and
of any type (system, Python, Java, etc.).
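As a rough illustration (not the actual `airbyte-ci` implementation), diffing two syft JSON SBOMs boils down to comparing the name/version pairs listed under syft's `artifacts` key:

```python
# Illustrative only: compare package versions between two syft JSON SBOMs.
import json

def load_packages(sbom_path: str) -> dict[str, str]:
    with open(sbom_path) as f:
        sbom = json.load(f)
    # syft's JSON output lists discovered packages under "artifacts"
    return {a["name"]: a["version"] for a in sbom.get("artifacts", [])}

def updated_packages(previous_sbom: str, current_sbom: str) -> list[tuple[str, str, str]]:
    prev, curr = load_packages(previous_sbom), load_packages(current_sbom)
    return [
        (name, prev[name], curr[name])
        for name in sorted(prev.keys() & curr.keys())
        if prev[name] != curr[name]
    ]
```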
Here are the dependencies that have been updated compared to
`airbyte/source-iterable:latest`.
Keep in mind that `:latest` does not always match the connector code on
the main branch: it is the latest released connector image at the time
the head commit of this branch was created.
| Type | Name | State | Previous Version | New Version |
|------|------|-------|-------------|------------------|
| python | MarkupSafe | updated | 3.0.2 | **3.0.3** |
| python | PyYAML | updated | 6.0.2 | **6.0.3** |
| python | anyio | updated | 4.10.0 | **4.11.0** |
| python | attrs | updated | 25.3.0 | **25.4.0** |
| python | cachetools | updated | 6.1.0 | **6.2.1** |
| python | cattrs | updated | 25.1.1 | **25.3.0** |
| python | certifi | updated | 2025.8.3 | **2025.7.9** |
| python | cffi | updated | 1.17.1 | **2.0.0** |
| python | charset-normalizer | updated | 3.4.3 | **3.4.4** |
| python | idna | updated | 3.10 | **3.11** |
| python | orjson | updated | 3.11.2 | **3.11.3** |
| python | platformdirs | updated | 4.3.8 | **4.4.0** |
| python | pycparser | updated | 2.22 | **2.23** |
| python | pydantic | updated | 2.11.7 | **2.12.1** |
| python | pydantic_core | updated | 2.33.2 | **2.41.3** |
| python | requests | updated | 2.32.4 | **2.32.5** |
| python | typing-inspection | updated | 0.4.1 | **0.4.2** |
| python | typing_extensions | updated | 4.14.1 | **4.15.0** |
| python | source-iterable | removed | 0.6.46 | **not present** |
> [!IMPORTANT]
> **Auto-merge enabled.**
>
> _This PR is set to merge automatically when all requirements are met._
Co-authored-by: octavia-bot-hoard[bot] <230633153+octavia-bot-hoard[bot]@users.noreply.github.com>
## What
Fixes array column serialization errors in source-iterable streams by
properly defining array item types in JSONSchema definitions.
**Problem:** Array columns (e.g., `campaigns.labels`, `emailListIds`,
`channelIds`, `categories`) were being written as null values to the S3
Data Lake destination with `DESTINATION_SERIALIZATION_ERROR` in the
`_airbyte_meta` column.
**Root Cause:** Multiple stream schemas had array definitions with empty
`items: {}`, which is ambiguous and prevents proper type mapping to
destinations like Iceberg/Glue.
**Affected Streams:**
- `email_unsubscribe` (emailListIds, channelIds)
- `email_send` (categories)
- `email_send_skip` (categories)
- `email_subscribe` (emailListIds)
- `campaigns` (listIds, suppressionListIds, labels)
## How
Updated JSONSchema definitions across affected stream schema files to
specify explicit item types for all array fields:
**Before:**
```
"emailListIds": {
"type": ["null", "array"],
"items": {}
}
```
**After:**
```
"emailListIds": {
"type": ["null", "array"],
"items": {
"type": "integer"
}
}
```
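As a quick sanity check (not part of the connector code), the tightened schema can be exercised with the `jsonschema` library to confirm that integer arrays and nulls still validate while string items are now rejected:

```python
# Not part of the connector: a quick demonstration of the stricter schema.
from jsonschema import Draft7Validator

email_list_ids_schema = {
    "type": ["null", "array"],
    "items": {"type": "integer"},
}
validator = Draft7Validator(email_list_ids_schema)

print(validator.is_valid([12345, 67890]))  # True: integer items match
print(validator.is_valid(None))            # True: the field is still nullable
print(validator.is_valid(["12345"]))       # False: string items are rejected
```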
## Review guide
- `source_iterable/schemas/campaigns.json` - check `listIds`, `suppressionListIds`, `labels`
- `source_iterable/schemas/email_unsubscribe.json` - check `emailListIds`, `channelIds`
- `source_iterable/schemas/email_send.json` - check `categories` (in transactional data)
- `source_iterable/schemas/email_send_skip.json` - check `categories` (in transactional data)
- `source_iterable/schemas/email_subscribe.json` - check `emailListIds`

Verify that all `"items": {}` instances have been replaced with proper
type definitions.
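A small throwaway script along these lines (schema directory taken from the file names above) can flag any remaining empty `items` definitions:

```python
# Throwaway review helper: flag array definitions that still use "items": {}.
import json
from pathlib import Path

def find_empty_items(node, path=""):
    if isinstance(node, dict):
        if node.get("items") == {}:
            yield path or "<root>"
        for key, value in node.items():
            yield from find_empty_items(value, f"{path}/{key}")
    elif isinstance(node, list):
        for index, value in enumerate(node):
            yield from find_empty_items(value, f"{path}[{index}]")

for schema_file in Path("source_iterable/schemas").glob("*.json"):
    for location in find_empty_items(json.loads(schema_file.read_text())):
        print(f"{schema_file.name}: empty items at {location}")
```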
## User Impact
None expected - this is a schema clarification that aligns with actual
data types.
## Can this PR be safely reverted and rolled back?
- [X] YES 💚
- [ ] NO ❌
---------
Co-authored-by: Octavia Squidington III <octavia-squidington-iii@users.noreply.github.com>