
🎉 Source HubSpot: Set Primary keys for streams with an identifier (#11121)

* Set partition key on streams with id field

* reset to master

* Update readme with primary key

* This can be incremental

* I think this can also support incremental

* incremental streams

* incremental

* Missing comma

* Everything can be incremental

* set pk

* Add a primary key

* Add missing pk

* format

* Update doc

* Bump version

* Not everything can be incremental

* fix field

* Update pk

* Update source_specs
This commit is contained in:
girarda
2022-03-15 15:06:28 -07:00
committed by GitHub
parent eff79e4cb3
commit b19f2415a9
7 changed files with 440 additions and 222 deletions

View File

@@ -328,7 +328,7 @@
- name: HubSpot
sourceDefinitionId: 36c891d9-4bd9-43ac-bad2-10e12756272c
dockerRepository: airbyte/source-hubspot
dockerImageTag: 0.1.46
dockerImageTag: 0.1.47
documentationUrl: https://docs.airbyte.io/integrations/sources/hubspot
icon: hubspot.svg
sourceType: api

View File

@@ -3331,7 +3331,7 @@
supportsNormalization: false
supportsDBT: false
supported_destination_sync_modes: []
- dockerImage: "airbyte/source-hubspot:0.1.46"
- dockerImage: "airbyte/source-hubspot:0.1.47"
spec:
documentationUrl: "https://docs.airbyte.io/integrations/sources/hubspot"
connectionSpecification:

View File

@@ -34,5 +34,5 @@ COPY source_hubspot ./source_hubspot
ENV AIRBYTE_ENTRYPOINT "python /airbyte/integration_code/main.py"
ENTRYPOINT ["python", "/airbyte/integration_code/main.py"]
LABEL io.airbyte.version=0.1.46
LABEL io.airbyte.version=0.1.47
LABEL io.airbyte.name=airbyte/source-hubspot

View File

@@ -1,51 +1,93 @@
# HubSpot Source
This is the repository for the HubSpot source connector, written in Python.
For information about how to use this connector within Airbyte, see [the documentation](https://docs.airbyte.io/integrations/sources/hubspot).
This is the repository for the HubSpot source connector, written in Python. For information about how to use this connector within Airbyte, see [the documentation](https://docs.airbyte.io/integrations/sources/hubspot).
## Primary keys
The primary key for the following streams is `id`:
- campaigns
- companies
- contacts
- deals
- email_events
- engagements
- engagements_calls
- engagements_emails
- engagements_meetings
- engagements_notes
- engagements_tasks
- feedback_submissions
- forms
- line_items
- marketing_emails
- owners
- products
- tickets
- ticket_pipelines
- workflows
- quotes
The primary key for the following streams is `canonical-vid`:
- contacts_list_memberships
The primary key for the following streams is `pipelineId`:
- deal_pipelines
The following streams do not have a primary key:
- contact_lists (The primary key could potentially be a composite key (portalId, listId) - https://legacydocs.hubspot.com/docs/methods/lists/get_lists)
- form_submissions (The entities returned by this endpoint do not have an identifier field - https://legacydocs.hubspot.com/docs/methods/forms/get-submissions-for-a-form)
- subscription_changes (The entities returned by this endpoint do not have an identifier field - https://legacydocs.hubspot.com/docs/methods/email/get_subscriptions_timeline)
- property_history (The entities returned by this endpoint do not have an identifier field - https://legacydocs.hubspot.com/docs/methods/contacts/get_contacts)
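The stream-to-key mapping above can be sketched in code. The snippet below is a hypothetical helper, not the connector's actual implementation: it derives a hashable deduplication key per stream from the primary keys listed above, and uses the README's guessed composite key `(portalId, listId)` for `contact_lists`.

```python
# Hypothetical sketch: dedup keys per stream, based on the primary keys
# listed above. Not the connector's actual code.
from typing import Any, Dict, Optional, Tuple

# Stream name -> primary key field(s); a tuple denotes a composite key.
PRIMARY_KEYS: Dict[str, Any] = {
    "campaigns": "id",
    "deal_pipelines": "pipelineId",
    "contacts_list_memberships": "canonical-vid",
    # contact_lists has no official pk; (portalId, listId) is only a guess
    "contact_lists": ("portalId", "listId"),
}

def record_key(stream: str, record: Dict[str, Any]) -> Optional[Tuple]:
    """Return a hashable dedup key for a record, or None when the
    stream has no primary key."""
    pk = PRIMARY_KEYS.get(stream)
    if pk is None:
        return None
    fields = pk if isinstance(pk, tuple) else (pk,)
    return tuple(record.get(f) for f in fields)
```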
## Local development
### Prerequisites
**To iterate on this connector, make sure to complete the prerequisites section.**
#### Minimum Python version required `= 3.7.0`
#### Build & Activate Virtual Environment and install dependencies
From this connector directory, create a virtual environment:
```
python -m venv .venv
```
This will generate a virtualenv for this module in `.venv/`. Make sure this venv is active in your
development environment of choice. To activate it from the terminal, run:
This will generate a virtualenv for this module in `.venv/`. Make sure this venv is active in your development environment of choice. To activate it from the terminal, run:
```
source .venv/bin/activate
pip install -r requirements.txt
```
If you are in an IDE, follow your IDE's instructions to activate the virtualenv.
Note that while we are installing dependencies from `requirements.txt`, you should only edit `setup.py` for your dependencies. `requirements.txt` is
used for editable installs (`pip install -e`) to pull in Python dependencies from the monorepo and will call `setup.py`.
If this is mumbo jumbo to you, don't worry about it, just put your deps in `setup.py` but install using `pip install -r requirements.txt` and everything
should work as you expect.
Note that while we are installing dependencies from `requirements.txt`, you should only edit `setup.py` for your dependencies. `requirements.txt` is used for editable installs (`pip install -e`) to pull in Python dependencies from the monorepo and will call `setup.py`. If this is mumbo jumbo to you, don't worry about it, just put your deps in `setup.py` but install using `pip install -r requirements.txt` and everything should work as you expect.
#### Building via Gradle
From the Airbyte repository root, run:
```
./gradlew :airbyte-integrations:connectors:source-hubspot:build
```
#### Create credentials
**If you are a community contributor**, follow the instructions in the [documentation](https://docs.airbyte.io/integrations/sources/hubspot)
to generate the necessary credentials. Then create a file `secrets/config.json` conforming to the `source_hubspot/spec.json` file.
Note that the `secrets` directory is gitignored by default, so there is no danger of accidentally checking in sensitive information.
See `sample_files/sample_config.json` for a sample config file.
to generate the necessary credentials. Then create a file `secrets/config.json` conforming to the `source_hubspot/spec.json` file. Note that the `secrets` directory is gitignored by default, so there is no danger of accidentally checking in sensitive information. See `sample_files/sample_config.json` for a sample config file.
**If you are an Airbyte core member**, copy the credentials in Lastpass under the secret name `source hubspot test creds`
and place them into `secrets/config.json`.
### Locally running the connector
```
python main.py spec
python main.py check --config secrets/config.json
@@ -54,32 +96,39 @@ python main.py read --config secrets/config.json --catalog sample_files/configur
```
## Testing
Make sure to familiarize yourself with [pytest test discovery](https://docs.pytest.org/en/latest/goodpractices.html#test-discovery) to know how your test files and methods should be named.
First install test dependencies into your virtual environment:
Make sure to familiarize yourself with [pytest test discovery](https://docs.pytest.org/en/latest/goodpractices.html#test-discovery) to know how your test files and methods should be named. First install test dependencies into your virtual environment:
```
pip install .[tests]
```
### Unit Tests
To run unit tests locally, from the connector directory run:
```
python -m pytest unit_tests
```
### Integration Tests
There are two types of integration tests: Acceptance Tests (Airbyte's test suite for all source connectors) and custom integration tests (which are specific to this connector).
#### Custom Integration tests
Place custom tests inside `integration_tests/` folder, then, from the connector root, run
```
python -m pytest integration_tests
```
#### Acceptance Tests
Customize `acceptance-test-config.yml` file to configure tests. See [Source Acceptance Tests](https://docs.airbyte.io/connector-development/testing-connectors/source-acceptance-tests-reference) for more information.
If your connector requires creating or destroying resources for use during acceptance tests, create fixtures for them and place them inside integration_tests/acceptance.py.
Customize the `acceptance-test-config.yml` file to configure tests. See [Source Acceptance Tests](https://docs.airbyte.io/connector-development/testing-connectors/source-acceptance-tests-reference) for more information. If your connector requires creating or destroying resources for use during acceptance tests, create fixtures for them and place them inside integration_tests/acceptance.py.
To run your integration tests with acceptance tests, from the connector root, run
```
python -m pytest integration_tests -p integration_tests.acceptance
```
@@ -87,13 +136,15 @@ python -m pytest integration_tests -p integration_tests.acceptance
To run your integration tests with docker
### Using gradle to run tests
All commands should be run from airbyte project root.
To run unit tests:
All commands should be run from airbyte project root. To run unit tests:
```
./gradlew :airbyte-integrations:connectors:source-hubspot:unitTest
```
To run acceptance and custom integration tests:
```
./gradlew :airbyte-integrations:connectors:source-hubspot:integrationTest
```
@@ -101,20 +152,25 @@ To run acceptance and custom integration tests:
### Locally running the connector docker image
#### Build
First, make sure you build the latest Docker image:
```
docker build . -t airbyte/source-hubspot:dev
```
You can also build the connector image via Gradle:
```
./gradlew :airbyte-integrations:connectors:source-hubspot:airbyteDocker
```
When building via Gradle, the docker image name and tag, respectively, are the values of the `io.airbyte.name` and `io.airbyte.version` `LABEL`s in
the Dockerfile.
When building via Gradle, the docker image name and tag, respectively, are the values of the `io.airbyte.name` and `io.airbyte.version` `LABEL`s in the Dockerfile.
#### Run
Then run any of the connector commands as follows:
```
docker run --rm airbyte/source-hubspot:dev spec
docker run --rm -v $(pwd)/secrets:/secrets airbyte/source-hubspot:dev check --config /secrets/config.json
@@ -123,15 +179,18 @@ docker run --rm -v $(pwd)/secrets:/secrets -v $(pwd)/sample_files:/sample_files
```
### Integration Tests
1. From the airbyte project root, run `./gradlew :airbyte-integrations:connectors:source-hubspot:integrationTest` to run the standard integration test suite.
2. To run additional integration tests, place your integration tests in a new directory `integration_tests` and run them with `python -m pytest -s integration_tests`.
Make sure to familiarize yourself with [pytest test discovery](https://docs.pytest.org/en/latest/goodpractices.html#test-discovery) to know how your test files and methods should be named.
2. To run additional integration tests, place your integration tests in a new directory `integration_tests` and run them with `python -m pytest -s integration_tests`. Make sure to familiarize yourself with [pytest test discovery](https://docs.pytest.org/en/latest/goodpractices.html#test-discovery) to know how your test files and methods should be named.
## Dependency Management
All of your dependencies should go in `setup.py`, NOT `requirements.txt`. The requirements file is only used to connect internal Airbyte dependencies in the monorepo for local development.
### Publishing a new version of the connector
You've checked out the repo, implemented a million dollar feature, and you're ready to share your changes with the world. Now what?
1. Make sure your changes are passing unit and integration tests
2. Bump the connector version in `Dockerfile` -- just increment the value of the `LABEL io.airbyte.version` appropriately (we use SemVer).
3. Create a Pull Request
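Step 2 is exactly what this commit does (`0.1.46` -> `0.1.47`). A minimal sketch of that SemVer patch bump, as a hypothetical helper:

```python
def bump_patch(version: str) -> str:
    """Increment the patch component of a SemVer string, e.g. the
    0.1.46 -> 0.1.47 bump performed by this commit (sketch only)."""
    major, minor, patch = version.split(".")
    return f"{major}.{minor}.{int(patch) + 1}"
```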

View File

@@ -4,7 +4,9 @@
"stream": {
"name": "campaigns",
"json_schema": {},
"supported_sync_modes": ["full_refresh"]
"supported_sync_modes": [
"full_refresh"
]
},
"sync_mode": "full_refresh",
"destination_sync_mode": "overwrite"
@@ -13,43 +15,66 @@
"stream": {
"name": "companies",
"json_schema": {},
"supported_sync_modes": ["full_refresh", "incremental"],
"supported_sync_modes": [
"full_refresh",
"incremental"
],
"source_defined_cursor": true,
"default_cursor_field": ["updatedAt"]
"default_cursor_field": [
"updatedAt"
]
},
"sync_mode": "incremental",
"cursor_field": ["updatedAt"],
"cursor_field": [
"updatedAt"
],
"destination_sync_mode": "append"
},
{
"stream": {
"name": "contact_lists",
"json_schema": {},
"supported_sync_modes": ["full_refresh", "incremental"],
"supported_sync_modes": [
"full_refresh",
"incremental"
],
"source_defined_cursor": true,
"default_cursor_field": ["updatedAt"]
"default_cursor_field": [
"updatedAt"
]
},
"sync_mode": "incremental",
"cursor_field": ["updatedAt"],
"cursor_field": [
"updatedAt"
],
"destination_sync_mode": "append"
},
{
"stream": {
"name": "contacts",
"json_schema": {},
"supported_sync_modes": ["full_refresh", "incremental"],
"supported_sync_modes": [
"full_refresh",
"incremental"
],
"source_defined_cursor": true,
"default_cursor_field": ["updatedAt"]
"default_cursor_field": [
"updatedAt"
]
},
"sync_mode": "incremental",
"cursor_field": ["updatedAt"],
"cursor_field": [
"updatedAt"
],
"destination_sync_mode": "append"
},
{
"stream": {
"name": "deal_pipelines",
"json_schema": {},
"supported_sync_modes": ["full_refresh"]
"supported_sync_modes": [
"full_refresh"
]
},
"sync_mode": "full_refresh",
"destination_sync_mode": "overwrite"
@@ -58,115 +83,180 @@
"stream": {
"name": "deals",
"json_schema": {},
"supported_sync_modes": ["full_refresh", "incremental"],
"supported_sync_modes": [
"full_refresh",
"incremental"
],
"source_defined_cursor": true,
"default_cursor_field": ["updatedAt"]
"default_cursor_field": [
"updatedAt"
]
},
"sync_mode": "incremental",
"cursor_field": ["updatedAt"],
"cursor_field": [
"updatedAt"
],
"destination_sync_mode": "append"
},
{
"stream": {
"name": "email_events",
"json_schema": {},
"supported_sync_modes": ["full_refresh", "incremental"],
"supported_sync_modes": [
"full_refresh",
"incremental"
],
"source_defined_cursor": true,
"default_cursor_field": ["created"]
"default_cursor_field": [
"created"
]
},
"sync_mode": "incremental",
"cursor_field": ["created"],
"cursor_field": [
"created"
],
"destination_sync_mode": "append"
},
{
"stream": {
"name": "engagements",
"json_schema": {},
"supported_sync_modes": ["full_refresh", "incremental"],
"supported_sync_modes": [
"full_refresh",
"incremental"
],
"source_defined_cursor": true,
"default_cursor_field": ["lastUpdated"]
"default_cursor_field": [
"lastUpdated"
]
},
"sync_mode": "incremental",
"cursor_field": ["lastUpdated"],
"cursor_field": [
"lastUpdated"
],
"destination_sync_mode": "append"
},
{
"stream": {
"name": "engagements_calls",
"json_schema": {},
"supported_sync_modes": ["full_refresh", "incremental"],
"supported_sync_modes": [
"full_refresh",
"incremental"
],
"source_defined_cursor": true,
"default_cursor_field": ["updatedAt"]
"default_cursor_field": [
"updatedAt"
]
},
"sync_mode": "incremental",
"cursor_field": ["updatedAt"],
"cursor_field": [
"updatedAt"
],
"destination_sync_mode": "append"
},
{
"stream": {
"name": "engagements_emails",
"json_schema": {},
"supported_sync_modes": ["full_refresh", "incremental"],
"supported_sync_modes": [
"full_refresh",
"incremental"
],
"source_defined_cursor": true,
"default_cursor_field": ["updatedAt"]
"default_cursor_field": [
"updatedAt"
]
},
"sync_mode": "incremental",
"cursor_field": ["updatedAt"],
"cursor_field": [
"updatedAt"
],
"destination_sync_mode": "append"
},
{
"stream": {
"name": "engagements_meetings",
"json_schema": {},
"supported_sync_modes": ["full_refresh", "incremental"],
"supported_sync_modes": [
"full_refresh",
"incremental"
],
"source_defined_cursor": true,
"default_cursor_field": ["updatedAt"]
"default_cursor_field": [
"updatedAt"
]
},
"sync_mode": "incremental",
"cursor_field": ["updatedAt"],
"cursor_field": [
"updatedAt"
],
"destination_sync_mode": "append"
},
{
"stream": {
"name": "engagements_notes",
"json_schema": {},
"supported_sync_modes": ["full_refresh", "incremental"],
"supported_sync_modes": [
"full_refresh",
"incremental"
],
"source_defined_cursor": true,
"default_cursor_field": ["updatedAt"]
"default_cursor_field": [
"updatedAt"
]
},
"sync_mode": "incremental",
"cursor_field": ["updatedAt"],
"cursor_field": [
"updatedAt"
],
"destination_sync_mode": "append"
},
{
"stream": {
"name": "engagements_tasks",
"json_schema": {},
"supported_sync_modes": ["full_refresh", "incremental"],
"supported_sync_modes": [
"full_refresh",
"incremental"
],
"source_defined_cursor": true,
"default_cursor_field": ["updatedAt"]
"default_cursor_field": [
"updatedAt"
]
},
"sync_mode": "incremental",
"cursor_field": ["updatedAt"],
"cursor_field": [
"updatedAt"
],
"destination_sync_mode": "append"
},
{
"stream": {
"name": "feedback_submissions",
"json_schema": {},
"supported_sync_modes": ["full_refresh", "incremental"],
"supported_sync_modes": [
"full_refresh",
"incremental"
],
"source_defined_cursor": true,
"default_cursor_field": ["updatedAt"]
"default_cursor_field": [
"updatedAt"
]
},
"sync_mode": "incremental",
"cursor_field": ["updatedAt"],
"cursor_field": [
"updatedAt"
],
"destination_sync_mode": "append"
},
{
"stream": {
"name": "forms",
"json_schema": {},
"supported_sync_modes": ["full_refresh"]
"supported_sync_modes": [
"full_refresh"
]
},
"sync_mode": "full_refresh",
"destination_sync_mode": "overwrite"
@@ -175,16 +265,9 @@
"stream": {
"name": "form_submissions",
"json_schema": {},
"supported_sync_modes": ["full_refresh"]
},
"sync_mode": "full_refresh",
"destination_sync_mode": "overwrite"
},
{
"stream": {
"name": "form_submissions",
"json_schema": {},
"supported_sync_modes": ["full_refresh"]
"supported_sync_modes": [
"full_refresh"
]
},
"sync_mode": "full_refresh",
"destination_sync_mode": "overwrite"
@@ -193,21 +276,32 @@
"stream": {
"name": "line_items",
"json_schema": {},
"supported_sync_modes": ["full_refresh", "incremental"],
"supported_sync_modes": [
"full_refresh",
"incremental"
],
"source_defined_cursor": true,
"default_cursor_field": ["updatedAt"]
"default_cursor_field": [
"updatedAt"
]
},
"sync_mode": "incremental",
"cursor_field": ["updatedAt"],
"cursor_field": [
"updatedAt"
],
"destination_sync_mode": "append"
},
{
"stream": {
"name": "marketing_emails",
"json_schema": {},
"supported_sync_modes": ["full_refresh"],
"supported_sync_modes": [
"full_refresh"
],
"source_defined_cursor": false,
"default_cursor_field": ["updated"]
"default_cursor_field": [
"updated"
]
},
"sync_mode": "full_refresh",
"cursor_field": null,
@@ -217,7 +311,9 @@
"stream": {
"name": "owners",
"json_schema": {},
"supported_sync_modes": ["full_refresh"]
"supported_sync_modes": [
"full_refresh"
]
},
"sync_mode": "full_refresh",
"destination_sync_mode": "overwrite"
@@ -226,66 +322,103 @@
"stream": {
"name": "products",
"json_schema": {},
"supported_sync_modes": ["full_refresh", "incremental"],
"supported_sync_modes": [
"full_refresh",
"incremental"
],
"source_defined_cursor": true,
"default_cursor_field": ["updatedAt"]
"default_cursor_field": [
"updatedAt"
]
},
"sync_mode": "incremental",
"cursor_field": ["updatedAt"],
"cursor_field": [
"updatedAt"
],
"destination_sync_mode": "append"
},
{
"stream": {
"name": "property_history",
"json_schema": {},
"supported_sync_modes": ["full_refresh", "incremental"],
"default_cursor_field": ["timestamp"]
"supported_sync_modes": [
"full_refresh",
"incremental"
],
"default_cursor_field": [
"timestamp"
]
},
"sync_mode": "full_refresh",
"cursor_field": ["timestamp"],
"cursor_field": [
"timestamp"
],
"destination_sync_mode": "overwrite"
},
{
"stream": {
"name": "quotes",
"json_schema": {},
"supported_sync_modes": ["full_refresh", "incremental"],
"supported_sync_modes": [
"full_refresh",
"incremental"
],
"source_defined_cursor": true,
"default_cursor_field": ["updatedAt"]
"default_cursor_field": [
"updatedAt"
]
},
"sync_mode": "incremental",
"cursor_field": ["updatedAt"],
"cursor_field": [
"updatedAt"
],
"destination_sync_mode": "append"
},
{
"stream": {
"name": "subscription_changes",
"json_schema": {},
"supported_sync_modes": ["full_refresh", "incremental"],
"supported_sync_modes": [
"full_refresh",
"incremental"
],
"source_defined_cursor": true,
"default_cursor_field": ["timestamp"]
"default_cursor_field": [
"timestamp"
]
},
"sync_mode": "incremental",
"cursor_field": ["timestamp"],
"cursor_field": [
"timestamp"
],
"destination_sync_mode": "append"
},
{
"stream": {
"name": "tickets",
"json_schema": {},
"supported_sync_modes": ["full_refresh", "incremental"],
"supported_sync_modes": [
"full_refresh",
"incremental"
],
"source_defined_cursor": true,
"default_cursor_field": ["updatedAt"]
"default_cursor_field": [
"updatedAt"
]
},
"sync_mode": "incremental",
"cursor_field": ["updatedAt"],
"cursor_field": [
"updatedAt"
],
"destination_sync_mode": "append"
},
{
"stream": {
"name": "ticket_pipelines",
"json_schema": {},
"supported_sync_modes": ["full_refresh"]
"supported_sync_modes": [
"full_refresh"
]
},
"sync_mode": "full_refresh",
"destination_sync_mode": "overwrite"
@@ -294,7 +427,9 @@
"stream": {
"name": "workflows",
"json_schema": {},
"supported_sync_modes": ["full_refresh"]
"supported_sync_modes": [
"full_refresh"
]
},
"sync_mode": "full_refresh",
"destination_sync_mode": "overwrite"

View File

@@ -3,24 +3,27 @@
#
import backoff
import pendulum as pendulum
import requests
import sys
import time
import urllib.parse
from abc import ABC, abstractmethod
from functools import lru_cache, partial
from http import HTTPStatus
from typing import Any, Dict, Iterable, Iterator, List, Mapping, MutableMapping, Optional, Tuple, Union
import backoff
import pendulum as pendulum
import requests
from airbyte_cdk.entrypoint import logger
from airbyte_cdk.models import SyncMode
from airbyte_cdk.sources.streams.http import HttpStream
from airbyte_cdk.sources.streams.http.requests_native_auth import Oauth2Authenticator
from airbyte_cdk.sources.streams.http.requests_native_auth import \
Oauth2Authenticator
from airbyte_cdk.sources.utils.sentry import AirbyteSentry
from functools import lru_cache, partial
from http import HTTPStatus
from requests import codes
from source_hubspot.errors import HubspotAccessDenied, HubspotInvalidAuth, HubspotRateLimited, HubspotTimeout
from typing import Any, Dict, Iterable, Iterator, List, Mapping, MutableMapping, \
Optional, Tuple, Union
from source_hubspot.errors import HubspotAccessDenied, HubspotInvalidAuth, \
HubspotRateLimited, HubspotTimeout
# The value is obtained experimentally, HubSpot allows the URL length up to ~16300 symbols,
# so it was decided to limit the length of the `properties` parameter to 15000 characters.
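The comment above explains why the `properties` parameter is capped at 15000 characters. A minimal sketch of that chunking, assuming the properties are serialized as a comma-separated list (a hypothetical `split_properties` helper, not the connector's exact code):

```python
from typing import Iterable, List

# Budget taken from the comment above; HubSpot accepts URLs up to ~16300
# symbols, so the properties parameter is capped below that.
PROPERTIES_PARAM_MAX_LENGTH = 15000

def split_properties(
    properties: Iterable[str],
    max_length: int = PROPERTIES_PARAM_MAX_LENGTH,
) -> List[List[str]]:
    """Split a property list into chunks whose comma-joined length stays
    within max_length (sketch; assumes comma-separated encoding)."""
    chunks: List[List[str]] = []
    current: List[str] = []
    current_len = 0
    for prop in properties:
        added = len(prop) + (1 if current else 0)  # +1 for the joining comma
        if current and current_len + added > max_length:
            chunks.append(current)
            current, current_len = [], 0
            added = len(prop)
        current.append(prop)
        current_len += added
    if current:
        chunks.append(current)
    return chunks
```

Each chunk can then be requested separately and the per-property results merged back into a single record.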
@@ -181,13 +184,13 @@ class API:
@retry_connection_handler(max_tries=5, factor=5)
@retry_after_handler(max_tries=3)
def get(
self, url: str, params: MutableMapping[str, Any] = None
self, url: str, params: MutableMapping[str, Any] = None
) -> Tuple[Union[MutableMapping[str, Any], List[MutableMapping[str, Any]]], requests.Response]:
response = self._session.get(self.BASE_URL + url, params=params)
return self._parse_and_handle_errors(response), response
def post(
self, url: str, data: Mapping[str, Any], params: MutableMapping[str, Any] = None
self, url: str, data: Mapping[str, Any], params: MutableMapping[str, Any] = None
) -> Tuple[Union[Mapping[str, Any], List[Mapping[str, Any]]], requests.Response]:
response = self._session.post(self.BASE_URL + url, params=params, json=data)
return self._parse_and_handle_errors(response), response
@@ -221,11 +224,11 @@ class Stream(HttpStream, ABC):
"""Default URL to read from"""
def path(
self,
*,
stream_state: Mapping[str, Any] = None,
stream_slice: Mapping[str, Any] = None,
next_page_token: Mapping[str, Any] = None,
self,
*,
stream_state: Mapping[str, Any] = None,
stream_slice: Mapping[str, Any] = None,
next_page_token: Mapping[str, Any] = None,
) -> str:
return self.url
@@ -242,7 +245,7 @@ class Stream(HttpStream, ABC):
return float(response.headers.get("Retry-After", 3))
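The line above falls back to 3 seconds when HubSpot omits the `Retry-After` header. The same parsing, as a standalone hypothetical helper that also guards against a non-numeric header value:

```python
from typing import Mapping

def backoff_time(headers: Mapping[str, str], default: float = 3.0) -> float:
    """Parse a Retry-After header as seconds, falling back to a default
    when the header is absent or not numeric (sketch mirroring the
    response.headers.get("Retry-After", 3) pattern above)."""
    value = headers.get("Retry-After")
    if value is None:
        return default
    try:
        return float(value)
    except ValueError:
        return default
```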
def request_headers(
self, stream_state: Mapping[str, Any], stream_slice: Mapping[str, Any] = None, next_page_token: Mapping[str, Any] = None
self, stream_state: Mapping[str, Any], stream_slice: Mapping[str, Any] = None, next_page_token: Mapping[str, Any] = None
) -> Mapping[str, Any]:
return {
"Content-Type": "application/json",
@@ -256,11 +259,11 @@ class Stream(HttpStream, ABC):
return json_schema
def handle_request(
self,
stream_slice: Mapping[str, Any] = None,
stream_state: Mapping[str, Any] = None,
next_page_token: Mapping[str, Any] = None,
params: Mapping[str, Any] = None,
self,
stream_slice: Mapping[str, Any] = None,
stream_state: Mapping[str, Any] = None,
next_page_token: Mapping[str, Any] = None,
params: Mapping[str, Any] = None,
) -> requests.Response:
request_headers = self.request_headers(stream_state=stream_state, stream_slice=stream_slice, next_page_token=next_page_token)
request_params = self.request_params(stream_state=stream_state, stream_slice=stream_slice, next_page_token=next_page_token)
@@ -290,11 +293,11 @@ class Stream(HttpStream, ABC):
return response
def _read_stream_records(
self,
properties_list: List[str],
stream_slice: Mapping[str, Any] = None,
stream_state: Mapping[str, Any] = None,
next_page_token: Mapping[str, Any] = None,
self,
properties_list: List[str],
stream_slice: Mapping[str, Any] = None,
stream_state: Mapping[str, Any] = None,
next_page_token: Mapping[str, Any] = None,
) -> Tuple[dict, Any]:
# TODO: Additional processing was added due to the fact that users receive 414 errors while syncing their streams (issues #3977 and #5835).
@@ -321,11 +324,11 @@ class Stream(HttpStream, ABC):
return stream_records, response
def read_records(
self,
sync_mode: SyncMode,
cursor_field: List[str] = None,
stream_slice: Mapping[str, Any] = None,
stream_state: Mapping[str, Any] = None,
self,
sync_mode: SyncMode,
cursor_field: List[str] = None,
stream_slice: Mapping[str, Any] = None,
stream_state: Mapping[str, Any] = None,
) -> Iterable[Mapping[str, Any]]:
stream_state = stream_state or {}
pagination_complete = False
@@ -483,10 +486,10 @@ class Stream(HttpStream, ABC):
yield record
def request_params(
self,
stream_state: Mapping[str, Any],
stream_slice: Mapping[str, Any] = None,
next_page_token: Mapping[str, Any] = None,
self,
stream_state: Mapping[str, Any],
stream_slice: Mapping[str, Any] = None,
next_page_token: Mapping[str, Any] = None,
) -> MutableMapping[str, Any]:
default_params = {self.limit_field: self.limit}
params = {**default_params}
@@ -498,12 +501,12 @@ class Stream(HttpStream, ABC):
return self._api._parse_and_handle_errors(response)
def parse_response(
self,
response: requests.Response,
*,
stream_state: Mapping[str, Any],
stream_slice: Mapping[str, Any] = None,
next_page_token: Mapping[str, Any] = None,
self,
response: requests.Response,
*,
stream_state: Mapping[str, Any],
stream_slice: Mapping[str, Any] = None,
next_page_token: Mapping[str, Any] = None,
) -> Iterable[Mapping]:
response = self._parse_response(response)
@@ -630,11 +633,11 @@ class IncrementalStream(Stream, ABC):
"""Name of the field associated with the state"""
def read_records(
self,
sync_mode: SyncMode,
cursor_field: List[str] = None,
stream_slice: Mapping[str, Any] = None,
stream_state: Mapping[str, Any] = None,
self,
sync_mode: SyncMode,
cursor_field: List[str] = None,
stream_slice: Mapping[str, Any] = None,
stream_state: Mapping[str, Any] = None,
) -> Iterable[Mapping[str, Any]]:
records = super().read_records(sync_mode, cursor_field=cursor_field, stream_slice=stream_slice, stream_state=stream_state)
latest_cursor = None
@@ -687,7 +690,7 @@ class IncrementalStream(Stream, ABC):
self._start_date = self._state
def stream_slices(
self, *, sync_mode: SyncMode, cursor_field: List[str] = None, stream_state: Mapping[str, Any] = None
self, *, sync_mode: SyncMode, cursor_field: List[str] = None, stream_state: Mapping[str, Any] = None
) -> Iterable[Optional[Mapping[str, Any]]]:
chunk_size = pendulum.duration(days=30)
slices = []
@@ -709,10 +712,10 @@ class IncrementalStream(Stream, ABC):
return slices
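`stream_slices` above chunks the sync window into 30-day slices via `pendulum.duration(days=30)`. A dependency-free sketch of the same idea using the standard library, assuming half-open `[start, end)` windows (the slice key names are illustrative, not the connector's exact payload):

```python
from datetime import datetime, timedelta
from typing import Dict, Iterator

def date_slices(
    start: datetime, end: datetime, days: int = 30
) -> Iterator[Dict[str, datetime]]:
    """Yield consecutive windows of at most `days` days covering
    start..end (sketch; the connector uses pendulum.duration(days=30))."""
    chunk = timedelta(days=days)
    cursor = start
    while cursor < end:
        yield {"startTimestamp": cursor,
               "endTimestamp": min(cursor + chunk, end)}
        cursor += chunk
```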
def request_params(
self,
stream_state: Mapping[str, Any],
stream_slice: Mapping[str, Any] = None,
next_page_token: Mapping[str, Any] = None,
self,
stream_state: Mapping[str, Any],
stream_slice: Mapping[str, Any] = None,
next_page_token: Mapping[str, Any] = None,
) -> MutableMapping[str, Any]:
params = super().request_params(stream_state=stream_state, stream_slice=stream_slice, next_page_token=next_page_token)
if stream_slice:
@@ -721,7 +724,6 @@ class IncrementalStream(Stream, ABC):
class CRMSearchStream(IncrementalStream, ABC):
limit = 100 # This value is used only when state is None.
state_pk = "updatedAt"
updated_at_field = "updatedAt"
@@ -733,9 +735,9 @@ class CRMSearchStream(IncrementalStream, ABC):
return f"/crm/v3/objects/{self.entity}/search" if self.state else f"/crm/v3/objects/{self.entity}"
def __init__(
self,
include_archived_only: bool = False,
**kwargs,
self,
include_archived_only: bool = False,
**kwargs,
):
super().__init__(**kwargs)
self._state = None
@@ -744,7 +746,7 @@ class CRMSearchStream(IncrementalStream, ABC):
@retry_connection_handler(max_tries=5, factor=5)
@retry_after_handler(fixed_retry_after=1, max_tries=3)
def search(
self, url: str, data: Mapping[str, Any], params: MutableMapping[str, Any] = None
self, url: str, data: Mapping[str, Any], params: MutableMapping[str, Any] = None
) -> Tuple[Union[Mapping[str, Any], List[Mapping[str, Any]]], requests.Response]:
# We can safely retry this POST call, because it's a search operation.
# Given Hubspot does not return any Retry-After header (https://developers.hubspot.com/docs/api/crm/search)
@@ -753,11 +755,11 @@ class CRMSearchStream(IncrementalStream, ABC):
return self._api.post(url=url, data=data, params=params)
def _process_search(
self,
properties_list: List[str],
stream_slice: Mapping[str, Any] = None,
stream_state: Mapping[str, Any] = None,
next_page_token: Mapping[str, Any] = None,
self,
properties_list: List[str],
stream_slice: Mapping[str, Any] = None,
stream_state: Mapping[str, Any] = None,
next_page_token: Mapping[str, Any] = None,
) -> Tuple[dict, requests.Response]:
stream_records = {}
payload = (
@@ -780,11 +782,11 @@ class CRMSearchStream(IncrementalStream, ABC):
return stream_records, raw_response
def read_records(
self,
sync_mode: SyncMode,
cursor_field: List[str] = None,
stream_slice: Mapping[str, Any] = None,
stream_state: Mapping[str, Any] = None,
self,
sync_mode: SyncMode,
cursor_field: List[str] = None,
stream_slice: Mapping[str, Any] = None,
stream_state: Mapping[str, Any] = None,
) -> Iterable[Mapping[str, Any]]:
stream_state = stream_state or {}
pagination_complete = False
@@ -836,10 +838,10 @@ class CRMSearchStream(IncrementalStream, ABC):
yield from []
def request_params(
self,
stream_state: Mapping[str, Any],
stream_slice: Mapping[str, Any] = None,
next_page_token: Mapping[str, Any] = None,
self,
stream_state: Mapping[str, Any],
stream_slice: Mapping[str, Any] = None,
next_page_token: Mapping[str, Any] = None,
) -> MutableMapping[str, Any]:
params = {"archived": str(self._include_archived_only).lower(), "associations": self.associations, "limit": self.limit}
if next_page_token:
@@ -858,7 +860,7 @@ class CRMSearchStream(IncrementalStream, ABC):
return {"params": params, "payload": payload}
def stream_slices(
self, *, sync_mode: SyncMode, cursor_field: List[str] = None, stream_state: Mapping[str, Any] = None
self, *, sync_mode: SyncMode, cursor_field: List[str] = None, stream_state: Mapping[str, Any] = None
) -> Iterable[Optional[Mapping[str, Any]]]:
return [None]
@@ -899,10 +901,10 @@ class CRMObjectIncrementalStream(CRMObjectStream, IncrementalStream):
super().__init__(**kwargs)
def request_params(
self,
stream_state: Mapping[str, Any],
stream_slice: Mapping[str, Any] = None,
next_page_token: Mapping[str, Any] = None,
self,
stream_state: Mapping[str, Any],
stream_slice: Mapping[str, Any] = None,
next_page_token: Mapping[str, Any] = None,
) -> MutableMapping[str, Any]:
params = IncrementalStream.request_params(
self, stream_state=stream_state, stream_slice=stream_slice, next_page_token=next_page_token
@@ -916,11 +918,11 @@ class CRMObjectIncrementalStream(CRMObjectStream, IncrementalStream):
return params
def read_records(
self,
sync_mode: SyncMode,
cursor_field: List[str] = None,
stream_slice: Mapping[str, Any] = None,
stream_state: Mapping[str, Any] = None,
) -> Iterable[Mapping[str, Any]]:
records = IncrementalStream.read_records(
self,
@@ -943,13 +945,14 @@ class Campaigns(Stream):
data_field = "campaigns"
limit = 500
updated_at_field = "lastUpdatedTime"
primary_key = "id"
def read_records(
self,
sync_mode: SyncMode,
cursor_field: List[str] = None,
stream_slice: Mapping[str, Any] = None,
stream_state: Mapping[str, Any] = None,
) -> Iterable[Mapping[str, Any]]:
for row in super().read_records(sync_mode, cursor_field=cursor_field, stream_slice=stream_slice, stream_state=stream_state):
record, response = self._api.get(f"/email/public/v1/campaigns/{row['id']}")
@@ -986,6 +989,7 @@ class ContactsListMemberships(Stream):
data_field = "contacts"
page_filter = "vidOffset"
page_field = "vid-offset"
primary_key = "canonical-vid"
def _transform(self, records: Iterable) -> Iterable:
"""Extracting list membership records from contacts
@@ -999,10 +1003,10 @@ class ContactsListMemberships(Stream):
yield {"canonical-vid": canonical_vid, **item}
def request_params(
self,
stream_state: Mapping[str, Any],
stream_slice: Mapping[str, Any] = None,
next_page_token: Mapping[str, Any] = None,
) -> MutableMapping[str, Any]:
params = super().request_params(stream_state=stream_state, stream_slice=stream_slice, next_page_token=next_page_token)
params.update({"showListMemberships": True})
@@ -1015,6 +1019,7 @@ class Deals(CRMSearchStream):
entity = "deal"
last_modified_field = "hs_lastmodifieddate"
associations = ["contacts", "companies"]
primary_key = "id"
class DealPipelines(Stream):
@@ -1026,6 +1031,7 @@ class DealPipelines(Stream):
url = "/crm-pipelines/v1/pipelines/deals"
updated_at_field = "updatedAt"
created_at_field = "createdAt"
primary_key = "pipelineId"
class TicketPipelines(Stream):
@@ -1037,6 +1043,7 @@ class TicketPipelines(Stream):
url = "/crm/v3/pipelines/tickets"
updated_at_field = "updatedAt"
created_at_field = "createdAt"
primary_key = "id"
class EmailEvents(IncrementalStream):
@@ -1049,6 +1056,7 @@ class EmailEvents(IncrementalStream):
more_key = "hasMore"
updated_at_field = "created"
created_at_field = "created"
primary_key = "id"
class Engagements(IncrementalStream):
@@ -1062,6 +1070,7 @@ class Engagements(IncrementalStream):
limit = 250
updated_at_field = "lastUpdated"
created_at_field = "createdAt"
primary_key = "id"
@property
def url(self):
@@ -1073,10 +1082,10 @@ class Engagements(IncrementalStream):
yield from super()._transform({**record.pop("engagement"), **record} for record in records)
def request_params(
self,
stream_state: Mapping[str, Any],
stream_slice: Mapping[str, Any] = None,
next_page_token: Mapping[str, Any] = None,
) -> MutableMapping[str, Any]:
params = {self.limit_field: self.limit}
if self.state:
@@ -1094,6 +1103,7 @@ class Forms(Stream):
url = "/marketing/v3/forms"
updated_at_field = "updatedAt"
created_at_field = "createdAt"
primary_key = "id"
class FormSubmissions(Stream):
@@ -1107,11 +1117,11 @@ class FormSubmissions(Stream):
updated_at_field = "updatedAt"
def path(
self,
*,
stream_state: Mapping[str, Any] = None,
stream_slice: Mapping[str, Any] = None,
next_page_token: Mapping[str, Any] = None,
) -> str:
return f"{self.url}/{stream_slice['form_id']}"
@@ -1131,7 +1141,7 @@ class FormSubmissions(Stream):
yield record
def stream_slices(
self, *, sync_mode: SyncMode, cursor_field: List[str] = None, stream_state: Mapping[str, Any] = None
) -> Iterable[Optional[Mapping[str, Any]]]:
slices = []
seen = set()
@@ -1144,11 +1154,11 @@ class FormSubmissions(Stream):
return slices
def read_records(
self,
sync_mode: SyncMode,
cursor_field: List[str] = None,
stream_slice: Mapping[str, Any] = None,
stream_state: Mapping[str, Any] = None,
) -> Iterable[Mapping[str, Any]]:
for record in super().read_records(sync_mode, cursor_field=cursor_field, stream_slice=stream_slice, stream_state=stream_state):
record["formId"] = stream_slice["form_id"]
@@ -1165,6 +1175,7 @@ class MarketingEmails(Stream):
limit = 250
updated_at_field = "updated"
created_at_field = "created"
primary_key = "id"
class Owners(Stream):
@@ -1175,6 +1186,7 @@ class Owners(Stream):
url = "/crm/v3/owners"
updated_at_field = "updatedAt"
created_at_field = "createdAt"
primary_key = "id"
class PropertyHistory(IncrementalStream):
@@ -1241,67 +1253,80 @@ class Workflows(Stream):
data_field = "workflows"
updated_at_field = "updatedAt"
created_at_field = "insertedAt"
primary_key = "id"
class Companies(CRMSearchStream):
entity = "company"
last_modified_field = "hs_lastmodifieddate"
associations = ["contacts"]
primary_key = "id"
class Contacts(CRMSearchStream):
entity = "contact"
last_modified_field = "lastmodifieddate"
associations = ["contacts", "companies"]
primary_key = "id"
class EngagementsCalls(CRMSearchStream):
entity = "calls"
last_modified_field = "hs_lastmodifieddate"
associations = ["contacts", "deal", "company"]
primary_key = "id"
class EngagementsEmails(CRMSearchStream):
entity = "emails"
last_modified_field = "hs_lastmodifieddate"
associations = ["contacts", "deal", "company"]
primary_key = "id"
class EngagementsMeetings(CRMSearchStream):
entity = "meetings"
last_modified_field = "hs_lastmodifieddate"
associations = ["contacts", "deal", "company"]
primary_key = "id"
class EngagementsNotes(CRMSearchStream):
entity = "notes"
last_modified_field = "hs_lastmodifieddate"
associations = ["contacts", "deal", "company"]
primary_key = "id"
class EngagementsTasks(CRMSearchStream):
entity = "tasks"
last_modified_field = "hs_lastmodifieddate"
associations = ["contacts", "deal", "company"]
primary_key = "id"
class FeedbackSubmissions(CRMObjectIncrementalStream):
entity = "feedback_submissions"
associations = ["contacts"]
primary_key = "id"
class LineItems(CRMObjectIncrementalStream):
entity = "line_item"
primary_key = "id"
class Products(CRMObjectIncrementalStream):
entity = "product"
primary_key = "id"
class Tickets(CRMObjectIncrementalStream):
entity = "ticket"
associations = ["contacts", "deals", "companies"]
primary_key = "id"
class Quotes(CRMObjectIncrementalStream):
entity = "quote"
primary_key = "id"
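The `primary_key` attributes added throughout this diff follow one pattern: each stream class declares the field that uniquely identifies its records. A minimal sketch of how such an attribute can drive record de-duplication downstream (the class names are reused for illustration only; this is not the connector's actual base class):

```python
# Illustrative sketch, not the connector's real Stream base class: a
# `primary_key` class attribute names the field that uniquely identifies
# each record, and de-duplication keeps the last record seen per key.
from typing import Any, Dict, Iterable, List, Optional


class Stream:
    primary_key: Optional[str] = None  # default: stream has no primary key

    def deduplicate(self, records: Iterable[Dict[str, Any]]) -> List[Dict[str, Any]]:
        """Keep the last record seen for each primary-key value."""
        if self.primary_key is None:
            return list(records)
        by_key: Dict[Any, Dict[str, Any]] = {}
        for record in records:
            # Later records overwrite earlier ones with the same key.
            by_key[record[self.primary_key]] = record
        return list(by_key.values())


class DealPipelines(Stream):
    primary_key = "pipelineId"


records = [
    {"pipelineId": "p1", "label": "default"},
    {"pipelineId": "p2", "label": "sales"},
    {"pipelineId": "p1", "label": "default (renamed)"},
]
deduped = DealPipelines().deduplicate(records)
print(len(deduped))  # 2 — the duplicate "p1" collapses to its latest version
```

This mirrors why the PR sets a key on every stream whose records carry a stable identifier: without one, repeated syncs cannot safely upsert.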


@@ -37,6 +37,7 @@ This source is capable of syncing the following tables and their data:
* [Workflows](https://legacydocs.hubspot.com/docs/methods/workflows/v3/get_workflows)
### A note on the `engagements` stream
Objects in the `engagements` stream can have one of the following types: `note`, `email`, `task`, `meeting`, `call`.
Depending on the type of engagement, different properties will be set for that object in the `engagements_metadata` table in the destination.
@@ -47,7 +48,6 @@ Depending on the type of engagement, different properties will be set for that o
* A `note` engagement will have a corresponding `engagements_metadata` object with non-null values in the `body` column.
* A `task` engagement will have a corresponding `engagements_metadata` object with non-null values in the `body`, `status`, and `forObjectType` columns.
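A rough sketch of the per-type expectations above, expressed as a lookup table (only the `note` and `task` rules visible here are taken from the notes; any other engagement type would need its own entry and is an assumption):

```python
# Illustrative only: expected non-null engagements_metadata columns per
# engagement type, as described in the notes above. Column sets for types
# not listed here are not guaranteed by this document.
EXPECTED_METADATA_COLUMNS = {
    "note": {"body"},
    "task": {"body", "status", "forObjectType"},
}


def missing_columns(engagement_type: str, row: dict) -> set:
    """Return expected metadata columns that are null or absent in this row."""
    expected = EXPECTED_METADATA_COLUMNS.get(engagement_type, set())
    return {col for col in expected if row.get(col) is None}


row = {"body": "Follow up with customer", "status": None}
print(sorted(missing_columns("task", row)))  # ['forObjectType', 'status']
```

A check like this can be useful when validating destination data against the expectations above.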
**Note**: The HubSpot API currently supports the `quotes` endpoint only when authenticating with an API Key; the stream cannot be accessed using OAuth (as reported on [community.hubspot.com](https://community.hubspot.com/t5/APIs-Integrations/Help-with-using-Feedback-CRM-API-and-Quotes-CRM-API/m-p/449104/highlight/true#M44411)).
## Getting Started \(Airbyte Open-Source / Airbyte Cloud\)
@@ -58,9 +58,7 @@ Depending on the type of engagement, different properties will be set for that o
* API credentials
* If using OAuth, [scopes](https://legacydocs.hubspot.com/docs/methods/oauth2/initiate-oauth-integration#scopes) enabled for the streams you want to sync
{% hint style="info" %} HubSpot's API will [rate limit](https://developers.hubspot.com/docs/api/usage-details) the amount of records you can sync daily, so make sure that you are on the appropriate plan if you are planning on syncing more than 250,000 records per day. {% endhint %}
This connector supports only authentication with an API Key. To obtain an API key for the account, go to Settings -> Integrations \(under the account banner\) -> API Key. If you already have an API key, you can use it; otherwise, generate a new one. See the [docs](https://knowledge.hubspot.com/integrations/how-do-i-get-my-hubspot-api-key) for more details.
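As a sketch of the API Key flow described above: HubSpot's legacy endpoints accepted the key as a `hapikey` query parameter. The path and key below are placeholders, not real credentials, and the exact endpoint is an assumption for illustration:

```python
# Sketch of building an API-key-authenticated HubSpot request URL.
# "demo-key" and the object path are placeholders, not real values.
from urllib.parse import urlencode


BASE_URL = "https://api.hubapi.com"


def build_url(path: str, api_key: str, **params) -> str:
    """Attach the API key as the `hapikey` query parameter (legacy auth style)."""
    query = urlencode({"hapikey": api_key, **params})
    return f"{BASE_URL}{path}?{query}"


url = build_url("/crm/v3/objects/contacts", "demo-key", limit=100)
print(url)  # https://api.hubapi.com/crm/v3/objects/contacts?hapikey=demo-key&limit=100
```

No request is made here; this only shows how the key would be carried on each call.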
@@ -112,21 +110,22 @@ If you are using Oauth, most of the streams require the appropriate [scopes](htt
| Version | Date | Pull Request | Subject |
|:--------|:-----------| :--- |:-----------------------------------------------------------------------------------------------------------------------------------------------|
| 0.1.47 | 2022-03-15 | [11121](https://github.com/airbytehq/airbyte/pull/11121) | Add partition keys where appropriate |
| 0.1.46 | 2022-03-14 | [10700](https://github.com/airbytehq/airbyte/pull/10700) | Handle 10k+ records reading in Hubspot streams |
| 0.1.45 | 2022-03-04 | [10707](https://github.com/airbytehq/airbyte/pull/10707) | Remove stage history from deals stream to increase efficiency |
| 0.1.44 | 2022-02-24 | [9027](https://github.com/airbytehq/airbyte/pull/9027) | Add associations companies to deals, ticket and contact stream |
| 0.1.43 | 2022-02-24 | [10576](https://github.com/airbytehq/airbyte/pull/10576) | Cast timestamp to date/datetime |
| 0.1.42 | 2022-02-22 | [10492](https://github.com/airbytehq/airbyte/pull/10492) | Add `date-time` format to datetime fields |
| 0.1.41 | 2022-02-21 | [10177](https://github.com/airbytehq/airbyte/pull/10177) | Migrate to CDK |
| 0.1.40 | 2022-02-10 | [10142](https://github.com/airbytehq/airbyte/pull/10142) | Add associations to ticket stream |
| 0.1.39 | 2022-02-10 | [10055](https://github.com/airbytehq/airbyte/pull/10055) | Bug fix: reading not initialized stream |
| 0.1.38 | 2022-02-03 | [9786](https://github.com/airbytehq/airbyte/pull/9786) | Add new streams for engagements(calls, emails, meetings, notes and tasks) |
| 0.1.37 | 2022-01-27 | [9555](https://github.com/airbytehq/airbyte/pull/9555) | Getting form_submission for all forms |
| 0.1.36 | 2022-01-22 | [7784](https://github.com/airbytehq/airbyte/pull/7784) | Add Property History Stream |
| 0.1.35 | 2021-12-24 | [9081](https://github.com/airbytehq/airbyte/pull/9081) | Add Feedback Submissions stream and update Ticket Pipelines stream |
| 0.1.34 | 2022-01-20 | [9641](https://github.com/airbytehq/airbyte/pull/9641) | Add more fields for `email_events` stream |
| 0.1.33 | 2022-01-14 | [8887](https://github.com/airbytehq/airbyte/pull/8887) | More efficient support for incremental updates on Companies, Contact, Deals and Engagement streams |
| 0.1.32 | 2022-01-13 | [8011](https://github.com/airbytehq/airbyte/pull/8011) | Add new stream form_submissions |
| 0.1.31 | 2022-01-11 | [9385](https://github.com/airbytehq/airbyte/pull/9385) | Remove auto-generated `properties` from `Engagements` stream |
| 0.1.30 | 2021-01-10 | [9129](https://github.com/airbytehq/airbyte/pull/9129) | Created Contacts list memberships streams |
| 0.1.29 | 2021-12-17 | [8699](https://github.com/airbytehq/airbyte/pull/8699) | Add incremental sync support for `companies`, `contact_lists`, `contacts`, `deals`, `line_items`, `products`, `quotes`, `tickets` streams |
@@ -151,4 +150,4 @@ If you are using Oauth, most of the streams require the appropriate [scopes](htt
| 0.1.10 | 2021-08-17 | [5463](https://github.com/airbytehq/airbyte/pull/5463) | Fix fail on reading stream using `API Key` without required permissions |
| 0.1.9 | 2021-08-11 | [5334](https://github.com/airbytehq/airbyte/pull/5334) | Fix empty strings inside float datatype |
| 0.1.8 | 2021-08-06 | [5250](https://github.com/airbytehq/airbyte/pull/5250) | Fix issue with printing exceptions |
| 0.1.7 | 2021-07-27 | [4913](https://github.com/airbytehq/airbyte/pull/4913) | Update fields schema |