Compare commits

470 Commits

Author SHA1 Message Date
jzhao-talend
9cef13b946 fix(TDI-46297):FCB 'EnableRegionalDisco' is disabled error when using
tMicrosoftCrmInput
2021-06-23 12:05:06 +08:00
pyzhou
d040c3b781 fix(TDI-44917):error message compile error (#6337) 2021-06-16 11:00:16 +08:00
Dmytro Grygorenko
e0e744bfd7 fix(TDI-46065): Redshift - add "Credential Provider" option to XML layout. (#6207) (#6326) 2021-06-11 11:43:14 +03:00
jiezhang-tlnd
e6a05e0738 Jzhang/73/tup 31122 (#6311)
* feat(TUP-31122)new Data Center in AWS Australia
https://jira.talendforge.org/browse/TUP-31122

* update login ui

Co-authored-by: jie.zhang <jie.zhang@LT-DDC8R73.talend.com>
2021-06-10 17:59:12 +08:00
Emmanuel GALLOIS
2e5d89b14a feat(TDI-46215): bump component-runtime to 1.33.1 (#6293) 2021-06-10 09:04:33 +02:00
jzhao
a0bf8ea8b7 feat(TDI-45575):Support Parquet File format for regular DI (#6320)
* add talend-parquet lib
2021-06-10 14:43:04 +08:00
pyzhou
cb2d011370 fix(TDI-44917):Fix Compile Error (#6318)
* fix(TDI-44917):Fix Compile Error

* fix(TDI-44917):Fix Compile Error
2021-06-10 11:36:09 +08:00
vyu-talend
fe9f23eee5 Vyu/tdi 45797 s3put enable object lock feature (#6285)
* feat(TDI-45797):add object lock feature to s3put.

* feat(TDI-45797):optimize code.

* feat(TDI-45797):make some changes.

* feat(TDI-45797):fix code generation error.

* feat(TDI-45797):fix context value issue.

* feat(TDI-45797):fix the common file issue in ts3copy.

* feat(TDI-45797):fix the common file path issue in s3copy.

Co-authored-by: Balázs Gunics <bgunics@talend.com>
2021-06-09 11:17:59 +08:00
pyzhou
e7903640b2 fix(TDI-46062):add checkbox Preserve last modified time tFileCopy (#6261)
* fix(TDI-46062):add checkbox Preserve last modified time tFileCopy

fix(TDI-46062): add Junit test

* MigrationTask

* Hide preserve last modified time when check copy directory
2021-06-09 11:14:14 +08:00
ypiel
b6676e4fbd feat(TDI-45155): ms crm support odata expand 7.3 backport (#6312)
* feat(TDI-45155): ms crm support odata expand 7.3 backport

* feat(TDI-45155): Bump lib in MicrosoftCrmOutput_java.xml

* feat(TDI-45155): formatting issue
2021-06-08 16:16:35 +02:00
Dmytro Grygorenko
5ec26c5514 fix(TDI-46109): update XStream to 1.4.17 (#6283) 2021-06-08 09:20:02 +03:00
pyzhou
aee262c30d fix(TDI-46152):tmap die on error issue (#6291)
* fix(TDI-46152):tmap die on error issue

* fix NPE
2021-06-08 10:29:44 +08:00
bkatiukhov
4b68070278 APPINT-32851 Fix cSplitter with JSonPath is not working as expected (#6232)
Co-authored-by: bohdan.katiukhov <bohdan.katiukhov@KBP1-LHP-A00125.synapse.com>
2021-06-07 13:34:47 +03:00
Jill Yan
c7cc06102f Fix/jill/APPINT-32940 (maintenance/7.3.1) (#6286)
* APPINT-32940

* APPINT-32940

correct logic and if (unselectList.size() > 0), or it will add all dependencies into manifest

* APPINT-32940 format

* APPINT-32940 format

* APPINT-32940 compare maven uri

* APPINT-32940

format

* APPINT-32940 refector

* APPINT-32940  refactor

* APPINT-32940 correct check logic

Co-authored-by: jillyan <yan955599@gmail.com>
2021-06-07 18:01:07 +08:00
wang wei
cfe68fa443 fix(TDI-45979): commons-compress-1.18 to 1.19 (#6274) 2021-06-07 10:22:57 +08:00
pyzhou
a961d53357 Pyzhou/tdi 44917 check components error massage 7.3 (#5448)
* fix(TDI-44917):format

* fix(TDI-44917):add error message

* fix tdie and tCreate table

* fix some compile error

* fix tmap

* fix error

* fix toracleOutput

* fix error

* fix compile error tFileInputMSPositional

* add googleStorageConnection.javajet

* add omission (#5909)

* fix bug

* fix compile error of tNetezzaOutput
2021-06-04 09:27:54 +08:00
mbasiuk-talend
f9004ebd4c chore(TDI-46053): upgrade snakeyaml to 1.26 (#6254) 2021-06-03 11:20:44 +03:00
sbliu
e43b872877 feat(TUP-30834) repace axis1 with axis2. (#6152)
remove axis1 export code for export job. ref TUP-19079.
remove dependency to axis1 for repository metadata, remove dependency to salesforce.
for WSDL2JAVAController, still using axis1 to translate wsdl.
2021-06-02 18:28:38 +08:00
chmyga
387871fb59 fix(TDI-46100): connection support (#6253)
* fix(TDI-46100): connection support

* Add reuse connection support to standalone connectors

* fix(TDI-46100): connection support

* Add comment explaining reflection

Co-authored-by: Dmytro Chmyga <dmytro.chmyga@globallogic.com>
2021-05-28 16:21:21 +03:00
hcyi
159cdc9c9d fix(TUP-31553):Hadoop Metadata Wizard when using custom distro dialog (#6258)
box doesnt pop up to import dependencies.
2021-05-28 14:37:43 +08:00
bhe-talendbj
09bcc66d09 chore(TUP-31617): remove commons-compress-1.18 (#6277)
* chore(TUP-31617): remove commons-compress-1.18

* chore(TUP-31617): update build.properties
2021-05-28 11:42:42 +08:00
hzhao-talendbj
3bd63f5795 feat(TUP-26184): add the same features to filter fields, same as in the (#6213)
* feat(TUP-26184): add the same features to filter fields, same as in the
tMap

* TUP-26184:  some temp code changes

* TUP-26184 add special checkbox for eltmap table column

* TUP-36184 remove useless code

* TUP-26184 remove useless code

* add line back

* TUP-26184 remove some useless code

* TUP-26184 remove useless code

* TUP-26184 remove useless code

* TUP-26184 fix NPE issue

* TUP-26184 remove useless code

* TUP-26184 fix link can't auto change position issue

* TUP-26184 fix link display issue

* TUP-26184 add Enable/disable column name filter

* TUP-26184 fix some NPE errors when filter is on and select below columns

* TUP-26184 change filter icon position to sync with tmap
2021-05-28 09:59:39 +08:00
Jane Ding
32d256d666 fix(TUP-31316):Error connecting to Azure SQL database with Azure Active (#6270)
directory method
https://jira.talendforge.org/browse/TUP-31316
2021-05-26 17:30:09 +08:00
zyuan-talend
b96ee6514b feat(TUP-30343):have the "Export Dependencies" option checked by default.(#6273) 2021-05-26 11:38:25 +08:00
jiezhang-tlnd
fa08aef33c fix(TUP-31228)Netsuite tck guess schema when use existing connection and (#6241)
* fix(TUP-31228)Netsuite tck guess schema when use existing connection and
Token-based login type

* fix(TUP-31228)Netsuite tck guess schema when use existing connection and
2021-05-26 10:05:04 +08:00
Jane Ding
f2325c166d fix(TUP-30849):Improve build Job performance (#6014)
* fix(TUP-30849):Improve build Job performance
https://jira.talendforge.org/browse/TUP-30849

Signed-off-by: jding-tlnd <jding@talend.com>

* feat(TUP-31117):Improve performances related to recursive jobs (#6243)

https://jira.talendforge.org/browse/TUP-31117
2021-05-25 18:10:53 +08:00
jiezhang-tlnd
36f23162bb fix(TUP-TUP-31164)Guess schema button on the informix tDBinput component (#6221)
* fix(TUP-TUP-31164)Guess schema button on the informix tDBinput component
returns zero length datatype

* Guess schema button on the informix tDBinput component returns zero
length datatype
2021-05-25 11:28:41 +08:00
Oleksandr Zhelezniak
7118b02042 feat(TDI-45963): after variables (#5933)
* handle specific metakey
* feature uses metadata endpoint in the framework
* extract metadata in javajet
2021-05-24 12:20:24 +03:00
apoltavtsev
d2ae45d2df bugfix(APPINT-33067) Backport fix for NPE 2021-05-21 10:53:43 +02:00
bkatiukhov
dfa91dd61e APPINT-32843 Fix error while deploying a route with tAzureStorageQueue (#6156)
* APPINT-32843 Fix error while deploying a route with tAzureStorageQueue

* Update osgi-exclude.properties

Co-authored-by: Bohdan Katiukhov <bohdan.katiukhov@synapse.com>
2021-05-21 10:14:34 +03:00
jiezhang-tlnd
f75b7895db fix(TUP-31213)tNetSuiteV2019Input failed to guess schema because preview (#6237)
subjob can't work
https://jira.talendforge.org/browse/TUP-31213
2021-05-21 11:16:53 +08:00
apoltavtsev
e05955934f chore(APPINT-32953) fix the NPE when using DI license 2021-05-20 08:51:01 +02:00
SunChaoqun
38acaab6e1 TESB-31720:[7.4.1]Build a job as OSGi will get a jar with only META-INF (#5756) (#6248) 2021-05-19 20:05:39 +02:00
hzhao-talendbj
19300112e8 chore(TUP-29079): remove some urlpath (#6224) 2021-05-19 15:31:27 +08:00
pyzhou
25ace64c68 Pyzhou/tdi 41535 refactor scp components (#6188)
* fix(TDI-41535):change footer

* fix(TDI-41535):connection and close

* fix(TDI-41535):Filelist

* fix(TDI-41535):tSCPDelete

* fix(TDI-41535):tSCPFileExist

* fix(TDI-41535):tSCPGet & tSCPPut

* fix(TDI-41535):tSCPPut remove duplicate

* fix(TDI-41535):tSCPClose bug

* fix(TDI-41535):tSCPTruncate

* fix(TDI-41535):fix public key compile error

* fix(TDI-41535):tSCPFileList count 0 line

* fix(TDI-41535):tSCPGet overwrite warning

* fix(TDI-41535):tSCPGet error message

* fix(TDI-41535):tSCPGet escape space

* fix(TDI-41535):tSCPGet tSCPPut wildCard

* fix(TDI-41535):tSCPGet nb_line

* fix(TDI-41535):tSCPPut error

* fix(TDI-41535):tSCPPut truncate throw Exception

* fix(TDI-41535):jar for scp components

* fix(TDI-41535):add distributionManagement
2021-05-19 14:14:26 +08:00
apoltavtsev
a188cd0e07 fix(APPINT-32953) NPE is corrected 2021-05-19 04:51:05 +02:00
apoltavtsev
e3473f4aa5 fix(APPINT-32953) Routelets are built before Route 2021-05-18 09:02:45 +02:00
Dmytro Sylaiev
7ac39ecd46 fix(TDI-46060): Fix compile error for tFTPFileList (#6219) 2021-05-14 18:19:00 +03:00
sponomarova
bfe5e903c6 fix(TBD-12358): cfx lib change (#6228) 2021-05-14 13:58:54 +03:00
apoltavtsev
e18a8f48a0 fix(APPINT-32995) Root poms installed in CI mode 2021-05-14 10:54:49 +02:00
vdrokov
f7937c3710 Vdrokov fix appint 32987 maintenance/7.3 (#6202)
* APPINT-32905: Issue with Rest service flow

* APPINT-32987: Fix dublicate variable
2021-05-13 12:09:04 +03:00
hcyi
85e8040773 feat(TUP-26747):improve Outline for joblet and subjob. (#6211)
* feat(TUP-26747):improve Outline for joblet and subjob.

* feat(TUP-26747):improve outline for joblet and subjob.

* feat(TUP-26747):improve outline for joblet and subjob.

* feat(TUP-26747):improve outline for joblet.

* feat(TUP-26747):switch to ComponentSettingsView.
2021-05-13 16:48:24 +08:00
qiongli
70908ad2df chore(TDQ-19297): Upgrade 'cxf' to 3.3.10 (#6199) 2021-05-13 16:21:09 +08:00
Max
a7f1809476 fix(TBD-12358): CVE: Update CXF to 3.3.10 (#6158) 2021-05-13 09:13:52 +03:00
zyuan-talend
2d04f97a64 feat(TUP-25494):Correct dialog title. (#6217) 2021-05-13 10:30:41 +08:00
bkatiukhov
3d5992f017 APPINT-32254 Fix bean-validation ignored when deployed in Runtime (#6174)
* APPINT-32254 Fix bean-validation ignored when deployed in Runtime

* Add specific version for validation constraints

Co-authored-by: Bohdan Katiukhov <bohdan.katiukhov@synapse.com>
Co-authored-by: bohdan.katiukhov <bohdan.katiukhov@KBP1-LHP-A00125.mshome.net>
2021-05-12 19:02:36 +03:00
Max
6b11676a66 fix(TBD-12142): CVE: jersey-core-1.4 and jersey-1.9 (#6168) 2021-05-12 13:38:59 +03:00
zyuan-talend
a102775762 fix(TUP-30430): Clone the connection's UNIQUE_NAME for some links (#6194)
instead of generating a new one in joblet node container.
2021-05-12 17:44:27 +08:00
vyu-talend
2721082b75 feat(TDI-45122):add charset to filefetch's parameters (#6149)
* Added encoding option for the upload parameters. Default behavior is unchanged.
Used OPENED_LIST so context parameters can be used.

* feat(TDI-45122):add charset to filefetch's parameters

* feat(TDI-45122):fix issues.

* feat(TDI-45122):fix issue.

Co-authored-by: Balázs Gunics <bgunics@talend.com>
2021-05-12 16:08:09 +08:00
Liu Xinquan
0c9629ef55 fix(APPINT-32593) java.lang.ClassNotFoundException: org.apache.cxf.message.Message (#6200) 2021-05-12 13:55:30 +08:00
ovladyka
14da1383e1 fix(TDI-45910):Jakarta-ORO imported (#6166)
* fix(TDI-45910):Jakarta-ORO imported

* fix(TDI-45910):Jakarta-ORO updated the mvn GAV and required
2021-05-11 18:41:52 +03:00
mbasiuk-talend
01d97c8f63 chore(TDI-45882): bump cxf 3 3 10 (#6167)
* chore(TDI-45882): bump cxf to 3.3.10

* chore(TDI-45882): remove unused cxf version variable
2021-05-11 16:17:21 +03:00
Oleksandr Zhelezniak
59a1b91e4a chore(TDI-45929): bump talend-mscrm (#6101)
* bump in org.talend.libraries.crm
2021-05-11 15:41:26 +03:00
Oleksandr Zhelezniak
e01d4de5c3 fix(TDI-45968): fix commons-codec module name (#6136) 2021-05-11 15:34:58 +03:00
chmyga
d343213ecb fix(TDI-45859): tFileCopy with different fs (#6083)
* fix(TDI-45859): tFileCopy with different fs

* Add force copy parameter

* Copy and delete file instead of moving

* fix(TDI-45859): tFileCopy with different fs

* Add test

* Fix PR comment

* fix(TDI-45859): tFileCopy with different fs

* Fix UI issue

Co-authored-by: Dmytro Chmyga <dmytro.chmyga@globallogic.com>
2021-05-11 14:32:45 +03:00
Oleksandr Zhelezniak
1bd7157e10 fix(TDI-46005): cve bump httpclient to 4.5.13 (#6164)
* bump httpclient to 4.5.13
* fix groupId for talend-bonita-client in javajet
* update httpcore
2021-05-11 13:11:44 +03:00
Dmytro Sylaiev
b68c9ef23f fix(TDI-45824): Bump jersey for tRest (#6137) 2021-05-11 13:03:33 +03:00
Oleksandr Zhelezniak
bf8ae50d77 fix(TDI-45973): cve bump httpclient to 4.5.13 (#6148) 2021-05-11 12:49:09 +03:00
Dmytro Grygorenko
0757392e0a fix(TDI-46004): tEXist components and tXMLRPCInput dependency import. (#6178)
* fix(TDI-46004): fix for dependency import from Nexus.

* fix(TDI-46004): fixed GAV to match the ones from Maven Central.

* fix(TDI-46004): adding all tEXist components.
2021-05-11 10:33:24 +03:00
wang wei
bdb2545a42 fix(TDI-45491): Contextualized configuration properties for S3 component (#6143) 2021-05-11 14:26:35 +08:00
hcyi
3e46ca4dee feat(TUP-26747):Clickable hyperlink from Outline listed component to the job canvas. (#6121)
* feat(TUP-26747):
Clickable hyperlink from Outline listed component to the job canvas.

* feat(TUP-26747):Clickable hyperlink from Outline listed component to the
job canvas.

* feat(TUP-26747):Clickable hyperlink from Outline listed component to the
job canvas.

* feat(TUP-26747):Clickable hyperlink from Outline listed component to the
job canvas.

* feat(TUP-26747):Clickable hyperlink from Outline listed component to the
job canvas.

* feat(TUP-26747):Clickable hyperlink from Outline listed component to the
job canvas.

* feat(TUP-26747):moved the link with editor button to the first one.

* feat(TUP-26747):link with editor if selected component variable.
2021-05-11 09:51:17 +08:00
Denis Sergent
650b50420b Revert "APPINT-32905: Issue with Rest service flow (#6171)" (#6192)
This reverts commit d5386d1114.
2021-05-10 11:36:29 +02:00
kjwang
76137c4b3c Fix:TUP-31429 Fail to add reference project (#6183)
Fix:TUP-31429 Fail to add reference project
https://jira.talendforge.org/browse/TUP-31429
2021-05-10 17:23:41 +08:00
kjwang
371908919b Fix TUP-31096 Could not find or load main class error on the jobs (#6176)
Fix TUP-31096 Could not find or load main class error on the jobs created on the Feature Branch which has #
https://jira.talendforge.org/browse/TUP-31096
2021-05-10 17:22:56 +08:00
Jane Ding
8a20a15f9f fix(TUP-31237):Invalid username or password when creating a Snowflake (#6132)
Metadata Connection with a Snowflake password that has a slash character
https://jira.talendforge.org/browse/TUP-31237
2021-05-10 16:04:03 +08:00
wang wei
40e5c5f7fd fix(TDI-45663): tS3List: Adds Missing File Details (#6146) 2021-05-10 13:39:02 +08:00
wang wei
bcb2d60a99 fix(TDI-45580): Contextualize multipart threshold parameter for S3 components (#6141) 2021-05-10 13:29:50 +08:00
chmyga
3d9d6734c2 feat(TDI-45836): Standalone connectors support (#6018)
* feat(TDI-45836): Standalone connectors support

* Integrate TCK Standalone connectors to studio

* feat(TDI-45836): Standalone connectors support

* remove NB_LINE after var for Standalone connectors

Co-authored-by: Dmytro Chmyga <dmytro.chmyga@globallogic.com>
2021-05-10 11:16:17 +08:00
wang wei
a54823f72d fix(TDI-45821): CVE: json-smart-2.2.1.jar (#6161) 2021-05-10 11:02:24 +08:00
wang wei
2a4167eb4f fix(TDI-45913): Enforce that only Strings, Maps and HashMaps can be loaded from the crcMap file(#6098) 2021-05-10 10:47:42 +08:00
wang wei
a3a53e8447 fix(TDI-45912): Enforce that System path separator character is indeed a character for tRunjob(#6092) 2021-05-10 10:39:09 +08:00
bhe-talendbj
545bc72afa fix(TUP-31346): add default branch for remote uninitialized git repo (#6182)
* fix(TUP-31346): always show selected item

* fix(TUP-31346): select branch
2021-05-08 16:58:54 +08:00
Max
dbc2f213c2 fix/TBD-12115: CVE: derby-10.11.1.1.jar (#6159) 2021-05-06 12:51:23 +03:00
kjwang
2a5cb99f75 Feat:TUP-30377 Move the "Allow specific characters (UTF8,...)" (#6139)
Feat:TUP-30377 Move the "Allow specific characters (UTF8,...) preference setting to project setting.
https://jira.talendforge.org/browse/TUP-30377
2021-05-06 14:46:29 +08:00
ovladyka
73a00f14bb Fix(TUP-30413:Comma missing for tELTMap with multiple inputs, when aliases are used) (#5844) 2021-05-06 10:04:04 +08:00
vdrokov
d5386d1114 APPINT-32905: Issue with Rest service flow (#6171) 2021-05-05 12:31:19 +03:00
Dmytro Sylaiev
2b90106385 fix(TDI-45642): Fix codegen error for Greenplum (#6102)
* fix(TDI-45642): Fix codegen error for Greenplum

* chore(TDI-45642): Add missing empty line
2021-05-05 11:54:00 +03:00
kjwang
cfc6477b33 Revert "TUP-31096 Could not find or load main class error on the jobs created on the Feature Branch which has # (#6082)" (#6169)
This reverts commit 5f5c92a766.
2021-04-30 18:36:19 +08:00
Dmytro Grygorenko
df55122199 fix(TDI-45879): save initial value of NB_LINE to globalMap. (#6076) 2021-04-28 17:19:24 +03:00
ovladyka
98a1bed1e1 fix(TDI-45900): Encoding doesn't work for byte[] type when tFileOutputDelimited use CSV option (#6140)
Updated two javajet files tFileOutputDelimited_begin and tFileOutputDelimited_main
2021-04-28 10:43:13 +03:00
zyuan-talend
194ac012c4 feat(TUP-25494): provide branch search and memory from launcher. (#6138)
* feat(TUP-25494): provide branch search and memory from launcher.
2021-04-28 14:42:22 +08:00
kjwang
5f5c92a766 TUP-31096 Could not find or load main class error on the jobs created on the Feature Branch which has # (#6082)
* TUP-31096 Could not find or load main class error on the jobs created on
the Feature Branch which has #
https://jira.talendforge.org/browse/TUP-31096
2021-04-27 17:57:17 +08:00
OleksiiNimych
d46b547fc9 fix(TDI-45551): SingleStore fix blob type processing (#6084) 2021-04-26 12:32:43 +03:00
Oleksandr Zhelezniak
b05d599f3f fix(TDI-43931) pass full date to independand child job (#6094)
* Convert long as date from context args
* Update file root to be up to date
* Implement fix for tRunJob
* Mention string parse exception in log warn

(cherry picked from commit 87d3fd7c7d)

Co-authored-by: Dmytro Sylaiev <dmytro.sylaiev@globallogic.com>
2021-04-26 11:00:03 +03:00
ypiel
816d395f2d chore: bump to tck:1.32.0 2021-04-22 12:13:35 +02:00
Zhiwei Xue
bfe02643b3 fix(TUP-31246):Inner routine node disappeared after refresh repository (#6130)
view
2021-04-21 17:52:04 +08:00
kjwang
f99d97538f TUP-21090 Support connection component for tacokit (Fix command line (#6124)
* TUP-21090 Support connection component for tacokit (Fix command line
load image error)
https://jira.talendforge.org/browse/TUP-21090
2021-04-20 11:29:36 +08:00
Jane Ding
8d2ff69e40 fix(TUP-30992):CVE: junit-4.11,4.12,4.13 (#6123)
https://jira.talendforge.org/browse/TUP-30992
2021-04-19 19:29:22 +08:00
Zhiwei Xue
f23c9b02ee fix(TUP-31027):[performance] studio will hang when import a special job (#6114) 2021-04-19 15:35:56 +08:00
hcyi
d794cc9a7b fix(TUP-30793):to fix the password problem of RabbitMQ. (#6116) 2021-04-19 15:24:43 +08:00
Jane Ding
ac5cc1ee1d fix(TUP-30992):CVE: junit-4.11,4.13 (#6110)
https://jira.talendforge.org/browse/TUP-30992

Signed-off-by: jding-tlnd <jding@talend.com>
2021-04-16 19:19:19 +08:00
wang wei
70a75cf790 fix(TDI-45577): Job using snowflake dynamic schema fails with special char (#6095) 2021-04-16 18:11:34 +08:00
Jane Ding
db870ecc30 fix(TUP-30992):CVE: junit-4.11,4.13 (#6106)
https://jira.talendforge.org/browse/TUP-30992
fix(TUP-29033):Fail to run testcase in studio and CI
https://jira.talendforge.org/browse/TUP-29033

Signed-off-by: jding-tlnd <jding@talend.com>
2021-04-16 16:30:14 +08:00
Oleksandr Zhelezniak
78f9b554eb feat(TDI-45323): new auth provider web token (#6060)
* replace checkbox with drop-down credential provider list
* migration task
2021-04-15 12:57:13 +03:00
apoltavtsev
7146bdf26c bugfix(APPINT-32288) Propagate "bundleVersion" option 2021-04-15 10:28:05 +02:00
jiezhang-tlnd
4655c0a059 fix(TUP-30992)CVE: junit-4.11,4.13 (#6090) 2021-04-15 16:13:14 +08:00
Max
49658a28d3 fix(TBD-12112): commons-beanutils-core-1.8.0.jar to 1.9.4 (#6012)
* fix(TBD-12112): commons-beanutils-core-1.8.0.jar to 1.9.4

* fix(TBD-12112): dead code elimination

* fix(TBD-12112): additional code cleanup
2021-04-15 10:53:11 +03:00
SunChaoqun
cda46bb231 APPINT-32688:R2021-03 issue with tDB*** using datasource (#6085) 2021-04-15 10:25:13 +08:00
Zhiwei Xue
9df3a48b78 fix(TUP-30791): remove setup code dependencies action for testcase (#6089) 2021-04-14 16:54:13 +08:00
hcyi
9cce21a3bd fix(TUP-30438):Issue when updating snowflake table using tELTOutput. (#5883)
* fix(TUP-30438):Issue when updating snowflake table using tELTOutput.

* fix(TUP-30438):add junts.

* fix(TUP-30438):Issue when updating snowflake table using tELTOutput.
2021-04-14 16:14:48 +08:00
kjwang
174ea89be9 TUP-31145 TCK:Guess schema use an exist connection will overwrite the parameters of component tNetSuiteV2019Input (#6081)
* TUP-31145 TCK:Guess schema use an exist connection will overwrite the
parameters of component tNetSuiteV2019Input
https://jira.talendforge.org/browse/TUP-31145

* TUP-31145 TCK:Guess schema use an exist connection will overwrite the
parameters of component tNetSuiteV2019Input (Fix guess schema issue
cause by TDI-45246)
https://jira.talendforge.org/browse/TUP-31145
2021-04-14 15:26:20 +08:00
kjwang
b4f2124a60 kjwang/Feat TUP-21090 Support connection component for tacokit (#5977)
* kjwang/Feat TUP-21090 Support connection component for tacokit
https://jira.talendforge.org/browse/TUP-21090
2021-04-13 18:06:53 +08:00
Richard Lecomte
5ac16bb7cc TDI-45014 : SFTP auth with password and public key (#6030)
* TDI-45014 : SFTP auth with password and public key

* TDI-45014 : SFTP auth with password and public key

* TDI-45014 : SFTP auth with password and public key

* TDI-45014 : SFTP auth with password and public key

Added parenthesis

* TDI-45014 : SFTP auth with password and public key

Smarter getPassword method
2021-04-13 11:56:15 +02:00
Emmanuel GALLOIS
bc445f065c feat(TDI-45842): bump component-runtime to 1.31.2 (#6029)
* feat(TDI-45842): bump component-runtime to 1.31.1-SNAPSHOT

* feat(TDI-45842): bump component-runtime to 1.31.2

Co-authored-by: jzhao-talend <jzhao@talend.com>
Co-authored-by: mbasiuk <mbasiuk@talend.com>
2021-04-13 12:40:03 +03:00
zyuan-talend
891e6a9d5e fix(TUP-29284): only show proposals for enabled categories(proposal (#6070)
kinds).
2021-04-13 17:20:38 +08:00
hzhao-talendbj
c6e4e79411 fix(TUP-30625): update maven project after convert jobscript to job (#6069) 2021-04-13 14:49:16 +08:00
sbliu
47ffb3d242 feat(TUP-30358) Enhance Data Collector - Route usage details 2021-04-13 10:57:20 +08:00
jiezhang-tlnd
ae30bc1fb3 fix(TUP-30954)CVE_xstream-1.4.15.jar (#6035) 2021-04-13 10:54:41 +08:00
Emmanuel GALLOIS
6b7fce2f78 feat(TDI-45246): do not put in configuration hidden parameters 2021-04-12 12:02:03 +02:00
hcyi
6049577e03 feat(TUP-30291):Add Suggestable support for Table options (List<Row>) in Studio. (#6013)
* feat(TUP-30291):Add Suggestable support for Table options (List<Row>) in
Studio.

* feat(TUP-30291):Add Suggestable support for Table options (List<Row>) in
Studio.

* feat(TUP-30291):improve for the implementation.

* feat(TUP-30291):fix TUP-31031 Add condition, NOT select field, directly
select operator, then throw errors

* feat(TUP-30291):fix TUP-31031 Add condition, NOT select field, directly
select operator, then throw errors

* feat(TUP-30291):fix TUP-31032 [random] Studio can't save field value.

* feat(TUP-30291):fix TUP-31032 [random] Studio can't save field value.
2021-04-12 15:24:26 +08:00
jiezhang-tlnd
186fcafb29 feat(TUP-30381)Support greenplum driver for Greenplum Database in studio (#5995)
* feat(TUP-30381)Support greenplum driver for Greenplum Database in studio
metadata
https://jira.talendforge.org/browse/TUP-30381

Conflicts:
	main/plugins/org.talend.repository/plugin.xml

* remove hard code

* add dbversion for greenplum

* Add REPOSITORY_VALUE for Greenplum components

* set right dbversionString
2021-04-12 10:02:40 +08:00
wang wei
7f3d3b7a59 fix(TDI-45650): [7.3.1] tDeltaLakeOutput- not handling the updates when we are using Dynamic schema(#5908) 2021-04-12 09:35:14 +08:00
Zhiwei Xue
9a11a94043 fix(TUP-30783):Support inner routine with the same name in different (#6065)
custom jar.
2021-04-09 15:50:36 +08:00
Zhiwei Xue
a47de9821f feat(TUP-29952):Change "Assign Routine to" action to "Copy Routine to". (#6059) 2021-04-09 15:49:34 +08:00
Dmytro Sylaiev
f6114ef000 fix(TDI-45642): Count key columns (#6039)
* fix(TDI-45642): Count key columns

* fix(TDI-45642): Fix another regression related to dynamic schema
2021-04-08 19:29:23 +03:00
Zhiwei Xue
9d93ff1652 fix(TUP-30786):Avoid to click finish button more than once when do (#6053)
create custom routine jar/bean jar.
2021-04-08 18:32:57 +08:00
clesaec
51a97c8b24 TDI-45786 - add charset on dynamic (#6027)
* TDI-45786 - add charset on dynamic
2021-04-08 11:38:21 +02:00
Dmytro Sylaiev
29ec16e725 fix(TDI-42478): tFTPConnection : SSL/TLS Client Authentication does not work : no suitable certificate found - continuing without client authentication (#5971)
Co-authored-by: s.bovsunovskyi <s.bovsunovskyi@globallogic.com>
2021-04-08 10:56:55 +03:00
hcyi
6e5e7d1e0a fix(TUP-30731):tELTPostgresqloutput context not work when checked "use (#6021)
update statement without subqueries"
2021-04-08 14:57:12 +08:00
hcyi
5dda69da6a fix(TUP-30793):TCK Datastore on studio Metadata. (#6049) 2021-04-08 11:20:44 +08:00
hzhao-talendbj
e534bed3e0 chore(TUP-27039): Update Commons Compress to 1.19 backport to 7.3 (#5996) 2021-04-08 10:51:44 +08:00
wang wei
56bc8ee766 fix(TDI-45815): CVE: xstream-1.4.15.jar (#6040) 2021-04-08 09:34:57 +08:00
Zhiwei Xue
71413a41dc fix(TUP-30780):Only check used custom jars when run/build Job (#6041) 2021-04-06 15:52:30 +08:00
Dmytro Sylaiev
6240c4331e fix(TDI-45776): Bump slf4j-jdk14 version to make it downloadable (#6017)
* fix(TDI-45776): Bump slf4j-jdk14 version to make it downloadable

* fix(TDI-45776): Apply also for tBonitaInstantiateProcess
2021-04-06 10:48:53 +03:00
Zhiwei Xue
92fac62ac0 fix(TUP-30977):test run map using custom routines and beans doesn't work (#6023)
after switch branch
2021-04-06 12:05:56 +08:00
Laurent BOURGEOIS
8bdca657d4 fix(TBD-11968):CVE commons-collections 3.2.1 (#5866) 2021-04-02 15:43:30 +02:00
bhe-talendbj
ea33bcd37e feat(TUP-30047): Need support of tRunJob with Dynamic Job option enabled on test cases / CI/CD (#6007)
* fix(TUP-30047): Correct classpath for tRunJob in CI mode

* feat(TUP-30047): support running dynamic jobs

* feat(TUP-30047): change location of classpath.jar if running ci test
2021-04-02 16:14:27 +08:00
AlixMetivier
61b2b21833 feat(TBD-11317): allow tS3Configuration to work with assume role in joblet (#5980) 2021-04-01 14:11:27 +02:00
pyzhou
399ae80700 fix(TDI-45834):tFileCopy change module name (#6019) 2021-03-31 16:49:31 +08:00
jiezhang-tlnd
10fd426856 chore(TUP-29381): add dependency for assembly (#5574) (#5839)
* chore(TUP-29381): add dependency for assembly

* add dependency

* add to template

Co-authored-by: hzhao-talendbj <49395568+hzhao-talendbj@users.noreply.github.com>
2021-03-30 15:57:13 +08:00
bkatiukhov
c113df2c41 TESB-32307 tESBConsumer - wrong header content-type (#5976)
Co-authored-by: bohdan.katiukhov <bohdan.katiukhov@KBP1-LHP-A00125.synapse.com>
2021-03-29 10:18:38 +02:00
pyzhou
4eb679c6e9 fix(TDI-45727):CVE jackson-mapper-asl (#5946) 2021-03-26 18:49:01 +08:00
Dmytro Sylaiev
1cf44a07ec fix(TDI-45642): Throw an warning exception when every column is a key… (#5855)
* fix(TDI-45642): Throw an warning exception when every column is a key for update

* Refactor, deduplicate code

* fix(TDI-45642): Fix codegen error for mssql

* fix(TDI-45642): Throw an error for update on duplicate mysql

* fix(TDI-45642): Warn message instead of exception for insert or update
2021-03-25 11:23:49 +02:00
Zhiwei Xue
0bdf41d228 fix(TUP-30813):Add Junits for dependency management feature (#6003) 2021-03-25 09:39:19 +08:00
Laurent BOURGEOIS
c64fec7601 fix(TBD-12776): Fix testAddLog4jToModuleList unit test (#5998) 2021-03-24 11:05:56 +01:00
Oleksandr Zhelezniak
8ab6492011 feat(TDI-45732): extend not dieoneerror area (#5991)
* extend try-block that includes "Input tables (lookups)"
2021-03-24 10:28:51 +02:00
Zhiwei Xue
780ce47ad7 fix(TUP-30779):Custom jar resource unload issue after git pull&merge. (#5982)
* fix(TUP-30779):Custom jar resource unload issue after git pull&merge.

* fix(TUP-30845): fix refreshing NPE
2021-03-24 16:21:11 +08:00
clesaec
9d04099b86 TDI-45772 : unarchiv correction (#5981) 2021-03-23 09:29:59 +01:00
sbliu
e3775bacfe fix(TUP-25417) JDK11: run a job call a child job use twebservice meet error.
using relative path for building parent job(whose child job contains esb cxf components) on jdk11
2021-03-23 14:55:04 +08:00
Jane Ding
0e1a65b82f fix(TUP-30615):Schema Update Detection popping up everytime upon opening (#5973)
the job
https://jira.talendforge.org/browse/TUP-30615

Signed-off-by: jding-tlnd <jding@talend.com>
2021-03-23 11:30:23 +08:00
Oleksandr Zhelezniak
75c51b6dec feat(TDI-45746): fix date context variable (#5943)
* fix the logic of passing date variable to sub jobs
2021-03-19 10:58:02 +02:00
Dmytro Sylaiev
f809f597b4 fix(TDI-45741): Fix checkbox visibility (#5924) 2021-03-19 09:53:22 +02:00
sbliu
97bad0d5ca chore(TUP-30522) add test case for version contains 'SNAPSHOT' 2021-03-18 14:16:56 +08:00
apoltavtsev
a74a54214e fix(TESB-32507) Correct manifest generation for org.talend.esb.authorization.xacml.rt.pep 2021-03-17 20:21:23 +01:00
Jane Ding
bbc2e81686 fix(TUP-30758):tSingleStoreOutputBulkExec can't work (#5964)
https://jira.talendforge.org/browse/TUP-30758

Signed-off-by: jding-tlnd <jding@talend.com>
2021-03-17 09:43:47 +08:00
Max
6bf37640b9 fix(TBD-12011): tDBclose connection deltalake on error - moved to correct package (#5905) 2021-03-16 11:50:49 +02:00
clesaec
b5d8c8d0f3 Clesaec/tdi 45301 t s3 acl (#5829)
* TDI-45301 - ACL canned options added
2021-03-16 10:41:20 +01:00
Oleksandr Zhelezniak
95afb4904e feat(TCOMP-1877): clean cached libs studio-integration (#5961)
* clean cached libs during clean phase for studio-integration plugin
* help to avoid using the out-to-date version of jars in plugin
2021-03-16 10:45:16 +02:00
Zhiwei Xue
322a55e751 feat(TUP-29014): bugfix 2021-03-15 (#5951) 2021-03-15 15:33:36 +08:00
SunChaoqun
028578141e TESB-32453:DemoServiceConsumer/DemoRESTConsumer fail to deploy to (#5952)
runtime with message"[statistics] disconnected"
2021-03-15 15:04:23 +08:00
Dmytro Grygorenko
d7c09e2d71 feat(TDI-45590): migration task for CosmosDB (#5861) 2021-03-14 08:25:45 +02:00
AlixMetivier
24ae727858 feat(TBD-11882): update tCollectAndCheck for BD (#5919) 2021-03-12 16:49:57 +01:00
hzhao-talendbj
1e39f1e09c TUP-30589 fix junit failed (missing jar beanutils-1.9.2 & axis-1.4) (#5939) 2021-03-12 10:30:56 +08:00
SunChaoqun
3c58d86789 TESB-31465:[7.3.1] Studio ESB build performance issue (#5931)
* TESB-31465:[7.3.1] Studio ESB build performance issue

* TESB-32391
[7.3.1] Incorrect OSGi manifest for Routes

* TESB-32391
[7.3.1] Incorrect OSGi manifest for Routes

* TESB-32391:[7.3.1] Incorrect OSGi manifest for Routes

* TESB-32391:[7.3.1] Incorrect OSGi manifest for Routes

* TESB-32391:[7.3.1] Incorrect OSGi manifest for Routes
2021-03-11 23:01:59 +08:00
SunChaoqun
d51d53c3b5 TESB-30792:Upgrade maven plugins (#5932) 2021-03-11 23:01:39 +08:00
Chao MENG
2ca61108c6 fix(TUP-30651): Login page won't be refreshed after delete project (#5940)
https://jira.talendforge.org/browse/TUP-30651
2021-03-11 16:44:43 +08:00
Zhiwei Xue
8c2ea5dd99 feat(TUP-29014):disable rename of custom jar (#5942) 2021-03-11 16:24:51 +08:00
sbliu
ba7c5e45c2 fix(TUP-30257) TMAP - Java & Traces Debug preview not working 2021-03-11 15:03:00 +08:00
zyuan-talend
ba7830ad5c fix(TUP-30589): Mockito cannot mock this class. (#5937) 2021-03-11 11:29:23 +08:00
sbliu
82bc2123f1 fix(TUP-30109) Log4J preferences does not save
backport of TUP-26197: no need to force Log4j 2 when importing an old project that is inactive and still on Log4j 1.
resolved the problem that, after restoring default values, preference changes could not be saved on the Log4j preference page.
2021-03-11 10:54:27 +08:00
pyzhou
5f70c22c91 fix(TDI-45727): replace default IP for proxy (#5917)
* fix(TDI-45727): replace default IP for proxy

* add migration/RemoveDefaultProxyIPTask.java

* add debug

* remove debug
2021-03-10 16:34:54 +08:00
kjwang
9f48439f53 TUP-30648 Migration: tRun job cannot run if Install 731 R02 monthly (#5929)
TUP-30648 Migration: tRun job cannot run if Install 731 R02 monthly patch plus R03 temp patch
https://jira.talendforge.org/browse/TUP-30648
2021-03-10 15:44:50 +08:00
bhe-talendbj
cf25104e30 bugfix(TUP-30378): Talend 7.3.1 tDBInput or tMSSqlInput component will not open query editor (#5886)
* fix(TUP-30378): add exception log

* fix(TUP-30378): add log

* fix(TUP-30378): run open sqlbuilder in background
2021-03-10 10:55:46 +08:00
Jane Ding
ea33684b50 feat(TUP-30169):adapt the tjdbc bulk components in TDI-45487 for (#5862)
* feat(TUP-30169):adapt the tjdbc bulk components in TDI-45487 for
studio and support singlestore database only with a whitelist
https://jira.talendforge.org/browse/TUP-30169

Signed-off-by: jding-tlnd <jding@talend.com>

* feat(TUP-30169):adapt the tjdbc bulk components in TDI-45487 for
studio and support singlestore database only with a whitelist
https://jira.talendforge.org/browse/TUP-30169

Signed-off-by: jding-tlnd <jding@talend.com>

* fix(TUP-30169):adapt the tjdbc bulk components in TDI-45487 for studio
and support singlestore database only with a whitelist
https://jira.talendforge.org/browse/TUP-30169

Signed-off-by: jding-tlnd <jding@talend.com>

* fix(TUP-28699):[Bug] The mapping is wrong after dragging (#5514)

tELTMap/tJDBCSCDELT from the created metadata
https://jira.talendforge.org/browse/TUP-28699

Signed-off-by: jding-tlnd <jding@talend.com>

Conflicts:
	main/plugins/org.talend.designer.core/src/main/java/org/talend/designer/core/ui/editor/cmd/ChangeValuesFromRepository.java


Signed-off-by: jding-tlnd <jding@talend.com>

* fix(TUP-28699):[Bug] The mapping is wrong after dragging (#5514)

tELTMap/tJDBCSCDELT from the created metadata
https://jira.talendforge.org/browse/TUP-28699

Signed-off-by: jding-tlnd <jding@talend.com>

* feat(TUP-30169):adapt the tjdbc bulk components in TDI-45487 for
studio
and support singlestore database only with a whitelist
https://jira.talendforge.org/browse/TUP-30169

Signed-off-by: jding-tlnd <jding@talend.com>
2021-03-09 17:39:55 +08:00
vyu-talend
61c03b2eda fix(TDI-45702):fix the syntax error in eltoutput (#5895) 2021-03-09 17:12:39 +08:00
Zhiwei Xue
127c703af5 feat(TUP-29014): Add only compile code projects function for TDM 2021-03-09 16:08:54 +08:00
Zhiwei Xue
0176cb23ca feat(TUP-29014): Add only compile code projects function for TDM (#5921) 2021-03-09 15:51:29 +08:00
sbliu
d38412eb01 fix(TUP-30250) Not able to share snowflake connection between job and joblet. 2021-03-09 14:25:50 +08:00
Zhiwei Xue
f041bee6b8 feat(TUP-29014): Only build and package beans/routines that are used in Route/Job (#5794)
* feat(TUP-29018): Rearrange single routines into jars

* feat(TUP-29017):Setup routine dependencies for Job

* feat(TUP-29019): Generation and Build: Build routine jars in different
maven projects.

* feat(TUP-29019): Build routine jars in different maven projects

* feat(TUP-29019): fix codesjar cache and update job maven project problem

* feat(TUP-29019): fix codesjar cache and update job maven project problem

* feat(TUP-29019): fix several issues

* feat(TUP-29943):should not have create routines action when object in (#5715)

recycle bin
https://jira.talendforge.org/browse/TUP-29943

Signed-off-by: jding-tlnd <jding@talend.com>

* feat(TUP-29014): refactor codesjar resource cache

* feat(TUP-29019): fix wrong codesjar groupid of ref project in classpath

* feat(TUP-29943):codeJar import items issue (#5725)

* feat(TUP-29943):codeJar import items issue
https://jira.talendforge.org/browse/TUP-29943

Signed-off-by: jding-tlnd <jding@talend.com>

* feat(TUP-29943):codeJar import items issue
https://jira.talendforge.org/browse/TUP-29943

Signed-off-by: jding-tlnd <jding@talend.com>

* feat(TUP-29014): fix codesjar dependencies can't set required issue

* TESB-31500:[Dependency Management] Only build and package beans that are
used in Route

* feat(TUP-29014): fix NPE of data preview

* feat(TUP-29943): export codeJar with lib modules, import codeJar with (#5740)

lib modules
https://jira.talendforge.org/browse/TUP-29943

Signed-off-by: jding-tlnd <jding@talend.com>

* feat(TUP-29019): improve update codesjar project logic

* feat(TUP-29014):fix assign routine wrong package issue.

* feat(TUP-29014):fix empty class files of codesjar m2 jar issue.

* feat(TUP-29014):fix several assign to action issues and NPE of data
preview

* feat(TUP-29014):support edit codes dependencies for routelet and set
this action read only for testcases

* feat(TUP-29014):fix several import issue and build code jar problem

* feat(TUP-29014): fix nl and some spell issues

* feat(TUP-29943):add codejar check delete reference (#5773)

https://jira.talendforge.org/browse/TUP-29943

Signed-off-by: jding-tlnd <jding@talend.com>

* feat(TUP-29943):export and import to deploy libs issue (#5779)

https://jira.talendforge.org/browse/TUP-29943

Signed-off-by: jding-tlnd <jding@talend.com>

* feat(TUP-29014):fix i18n issue

* feat(TUP-29014):revert the change for building project

* feat(TUP-29014): update ref code projects after resolved dependencies

* Revert "feat(TUP-29014): update ref code projects after resolved dependencies"

This reverts commit 5a93e784e7.

* feat(TUP-29014): support short class name for custom jars in component

* feat(TUP-29943):rename issues (#5823)

* feat(TUP-29943):rename issues
https://jira.talendforge.org/browse/TUP-29943

Signed-off-by: jding-tlnd <jding@talend.com>

* feat(TUP-29943):rename innercode; won't store name for codejar
https://jira.talendforge.org/browse/TUP-29943

Signed-off-by: jding-tlnd <jding@talend.com>

* feat(TUP-29943):remove old module for properties change and delete
forever
https://jira.talendforge.org/browse/TUP-29943

Signed-off-by: jding-tlnd <jding@talend.com>

* feat(TUP-29943):check if exist relationship, warn user re-generate all
pom
https://jira.talendforge.org/browse/TUP-29943

Signed-off-by: jding-tlnd <jding@talend.com>

* feat(TUP-29943):reinstall codejar after inner code rename
https://jira.talendforge.org/browse/TUP-29943

Signed-off-by: jding-tlnd <jding@talend.com>

* feat(TUP-29014): improve update codesjar project performance and fix
regression

* feat(TUP-29014):fix rename regressions and improve import performance

* feat(TUP-29943):build out job not include inner code items (#5852)

https://jira.talendforge.org/browse/TUP-29943

Signed-off-by: jding-tlnd <jding@talend.com>

* feat(TUP-29943):disable export action for innercode, and duplicate (#5854)

import issue
https://jira.talendforge.org/browse/TUP-29943
Signed-off-by: jding-tlnd <jding@talend.com>

* feat(TUP-29014):fix bd generate code and data preview issues

* feat(TUP-30584):Do not show custom routine/bean jars in TOS (#5898)

* fix(TUP-30597):Fix junit failures caused by dependency management (#5899)

feature

* feat(TUP-29014):fix several git reset issues

* feat(TUP-29014): bugfix 2021-03-05

* feat(TUP-29014): bugfix 2021-03-08

* feat(TUP-29014): bugfix 2021-03-08 2

* feat(TUP-29014):bugfix 2021-03-09

Co-authored-by: Jane Ding <jding@talend.com>
Co-authored-by: SunChaoqun <csun@talend.com>
2021-03-09 12:13:05 +08:00
hcyi
dfcd6e3f2d fix(TUP-30043):Add a new option for tELTOracleMap to generate column alias on selection. (#5806)
* fix(TUP-30043):Add a new option for tELTOracleMap to generate column
alias on selection.

* fix(TUP-30043):Add a new option for tELTOracleMap to generate column
alias on selection.

* fix(TUP-30043):Add a new option for tELTOracleMap to generate column
alias on selection.

* fix(TUP-30043):update title for the new option for tELTOracleMap.

* fix(TUP-30043):Add a new option for tELTOracleMap to generate column
alias on selection.
2021-03-08 18:28:54 +08:00
sbliu
0a7e0e56e4 fix(TUP-30041) do not show warning message if db type column does not show. (#5723) 2021-03-08 14:04:17 +08:00
hzhao-talendbj
bd2e612a44 TUP-30333 tMap with Lookup model Reload at each row freezes Studio (#5817) (#5874)
* TUP-30333 tMap with Lookup model Reload at each row freezes Studio

* TUP-30333 add comments

* TUP-30333 add condition for refresh background
2021-03-08 09:50:43 +08:00
SunChaoqun
d561e36a7e TESB-31465:[7.3.1] Studio ESB build performance issue (#5864)
* TESB-31465:[7.3.1] Studio ESB build performance issue

* TESB-31465:[7.3.1] Studio ESB build performance issue
2021-03-05 18:40:08 +08:00
jiezhang-tlnd
2fd9e82220 TUP-26534 (#4813) (#5769)
Co-authored-by: hzhao-talendbj <49395568+hzhao-talendbj@users.noreply.github.com>
Co-authored-by: hzhao-talendbj <hzhao@talend.com>
2021-03-05 14:43:05 +08:00
kjwang
11a41a331e Fix: TUP-26185 Merge GIT branches - Conflict resolution - the "Compare (#5876)
Fix: TUP-26185 Merge GIT branches - Conflict resolution - the "Compare Result" view does not display differences with a tELTMap component
https://jira.talendforge.org/browse/TUP-26185
2021-03-04 14:12:06 +08:00
Jane Ding
e9fa81a1c8 fix(TUP-30548):Debugger does not work in 7.3 if the installation path (#5894)
* fix(TUP-30548):Debugger does not work in 7.3 if the installation path
contains space
https://jira.talendforge.org/browse/TUP-30548

Signed-off-by: jding-tlnd <jding@talend.com>

* fix(TUP-30548):Debugger does not work in 7.3 if the installation path
contains space
https://jira.talendforge.org/browse/TUP-30548

Signed-off-by: jding-tlnd <jding@talend.com>
2021-03-03 22:54:28 +08:00
Emmanuel GALLOIS
9153d30f6e feat(TDI-45704): bump component-runtime to 1.30.0 (#5891)
* feat(TDI-45704): bump component-runtime to 1.30.0
* feat(TDI-45704): fix tests
* feat(TDI-45704): cleanup imports
2021-03-03 10:06:21 +01:00
wang wei
dd863cfd15 fix(TDI-45561): Getting Permission denied error in tFileInputExcel in (#5784)
Co-authored-by: qyliu <qyliu@talend.com>
2021-03-03 10:52:01 +08:00
pyzhou
55d48cfe91 fix(TDI-45418):Upgrade Jackson libraries (#5884) 2021-03-03 10:48:44 +08:00
vyu-talend
7b325e8707 fix(TDI-45613):fix the issue in md5. (#5851) 2021-03-02 18:25:49 +08:00
jiezhang-tlnd
7465b41a34 TUP-27851 Upgrade xstream to xstream 1.4.12 (#5788) 2021-03-01 18:33:37 +08:00
jiezhang-tlnd
79fb201844 chore(TUP-27224)Update Daikon Crypto Utils to 1.15.0 (#5807)
* chore(TUP-27224)Update Daikon Crypto Utils to 1.15.0

* chore(TUP-27224)add migration
2021-03-01 15:50:40 +08:00
pyzhou
45edbf18a1 fix(TDI-45668) CVE ant tfileUnactive (#5868) 2021-03-01 09:30:38 +08:00
bhe-talendbj
f347a16522 chore(TUP-30230): Remove org.talend.libraries.apache.batik (#5766) 2021-02-26 14:39:26 +08:00
apoltavtsev
b9e4faf2bd fix(TESB-32252) Ignore "SNAPSHOT" during dependencies comparison 2021-02-25 07:12:29 +01:00
Chao MENG
1ab7eaeca6 feat(TUP-29801): Improve logon dialog loading time (#5731)
* feat(TUP-29801): Improve logon dialog loading time
https://jira.talendforge.org/browse/TUP-29801

* feat(TUP-29801): Improve logon dialog loading time
https://jira.talendforge.org/browse/TUP-29801

* feat(TUP-29801): Improve logon dialog loading time
https://jira.talendforge.org/browse/TUP-29801
2021-02-24 10:35:43 +08:00
jiezhang-tlnd
069a7b26c3 fix(TUP-30273)tDBOutput component compilation error (#5797)
https://jira.talendforge.org/browse/TUP-30273
2021-02-23 16:33:16 +08:00
pyzhou
6a3651d3b6 fix(TDI-45436):upgrade xstream 2021-02-20 17:59:43 +08:00
Laurent BOURGEOIS
029c0ccb5c fix(TBD-11971):Spark joblets keeps defaulting to HDFS (#5824) 2021-02-12 11:24:12 +01:00
Emmanuel GALLOIS
1382b4efb4 feat(TDI-45225): bump component-runtime to 1.29.1 (#5801)
* feat(TDI-45225): bump component-runtime to 1.28.2 for bouncycastle CVE
2021-02-12 09:55:20 +01:00
hzhao-talendbj
b0d3a70cf3 fix(TUP-30373): Implicit Context with field separator tab "\t" doesn't work (#5825)
* fix(TUP-30373): Implicit Context with field separator tab "\t" doesn't
work anymore

* TUP-30373  code change + add junit
2021-02-09 17:33:32 +08:00
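The TUP-30373 fix above concerns a field separator typed as the literal two-character string "\t" in the implicit-context settings, which must be interpreted as a real tab when splitting context lines. A minimal sketch of that behavior in Python — `unescape_separator` and `parse_context_line` are hypothetical helpers, not the Studio's actual code:

```python
def unescape_separator(sep: str) -> str:
    # Map escape sequences typed in the preference field ("\t", "\n", "\r")
    # to their real characters before splitting context lines.
    return sep.replace("\\t", "\t").replace("\\n", "\n").replace("\\r", "\r")

def parse_context_line(line: str, sep: str) -> tuple[str, str]:
    # Split one "key<sep>value" line of an implicit-context file.
    key, _, value = line.partition(unescape_separator(sep))
    return key, value

# A separator typed as backslash-t must match a real tab in the file.
print(parse_context_line("db_host\tlocalhost", "\\t"))
```

Without the unescape step, the two-character string "\t" never matches the tab byte in the file, and every line parses as a key with an empty value — the symptom the fix addresses.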
Dmytro Sylaiev
1f20471fbe fix(TDI-45476): Remove adding second list (#5838) 2021-02-09 11:08:34 +02:00
vdrokov
d28ffc5736 TESB-32126: Null pointer exception when accessing job properties (#5837) 2021-02-08 18:01:19 +02:00
vdrokov
146cf1dc0e TESB-31783: Failed to get nested archive for entry BOOT-INF/lib/aws-j… (#5745)
* TESB-31783: Failed to get nested archive for entry BOOT-INF/lib/aws-java-sdk-1.11.848.jar Caused by: java.lang.IllegalStateException: Zip64 archives are not support

* TESB-31783: Failed to get nested archive for entry BOOT-INF/lib/aws-java-sdk-1.11.848.jar Caused by: java.lang.IllegalStateException: Zip64 archives are not supported
2021-02-08 09:26:41 +01:00
hcyi
6c03ca588a fix(TUP-30067):Dataproc Hive metadata not in sync with Hive component. (#5815) 2021-02-08 15:39:19 +08:00
pyzhou
b528ca9a0d fix(TDI-45529):CVE bouncy castle tcloudXXX (#5805) 2021-02-07 16:05:50 +08:00
vyu-talend
888765461c Vyu/tdi 43232 improve where clause for eltoutput m3 (#5774)
* feat(TDI-43232):improve where clause for eltoutput

* feat(TDI-43232):fix some mistakes.

* feat(TDI-43232):fix some issues found by QA.

* feat(TDI-43232):optimize code

* feat(TDI-43232):fix some errors.
2021-02-07 15:17:20 +08:00
ovladyka
ae2f00f1a8 Revert "fix(TDI-45503):Comma missing for tELTMap with multiple inputs, when aliases are used (#5821)" (#5826)
This reverts commit a92fedb9b4.
2021-02-05 11:44:58 +02:00
sbliu
704b63d59c fix(TUP-30186) fix unit test failure. (#5819) 2021-02-05 10:37:00 +08:00
ovladyka
a92fedb9b4 fix(TDI-45503):Comma missing for tELTMap with multiple inputs, when aliases are used (#5821)
Co-authored-by: Oleksandr Vladyka <oleksandr.vladyka@synapse.com>
2021-02-04 16:31:25 +02:00
Hanna Liashchuk
f1a7d2f235 fix(TBD-11964): migration task for use local timezone (#5802)
* fix(TBD-11964): migration task for use local timezone

* fix(TBD-11964): correct date
2021-02-04 13:15:13 +02:00
Dmytro Sylaiev
1702b27493 fix(TDI-45476): Increate tFileOutputMSXML performance (#5741)
* fix(TDI-45476): Increate tFileOutputMSXML performance

* fix(TDI-45476): Change generic type for list to avoid compile error

* fix(TDI-45476): Store sublist into the resourceMap

* fix(TDI-45476): Fix code compile error
2021-02-04 13:02:17 +02:00
bhe-talendbj
cf55315820 fix(TUP-29764): encrypt all PASSWORD fields of job in new format (#5760)
* fix(TUP-29764): migrate all PASSWORD fields of job

* fix(TUP-29764): Resolve comments
2021-02-03 18:02:38 +08:00
vdrokov
d25b125c08 TESB-31044: Data service's endpoint can't be updated if passed as a context variable (#5772) 2021-02-03 09:58:30 +02:00
clesaec
290d9566ed TDI-40364 - change encoding buffer (#5764)
* TDI-40364 : adapt buffer size
2021-02-03 08:23:19 +01:00
pyzhou
0c85bdc4be Pyzhou/tdi 45463 t elt components support delta lake (#5758)
* feat(TDI-45463):tELT components support Delta Lake

* Add delta lake to xml

* add property name

* change logic to hive

* disable Delta Lake for tSQLTemplateMerge

* Revert "disable Delta Lake for tSQLTemplateMerge"

This reverts commit febd9b7e55.

* remove useless code

* correct partition

* correct partition

* add space

* correct mapping and create table

* revert change for tCreateTable_java.xml

* deactive update mode for Delta Lake tELTOutput

* format

* set not-required for partition

* remove test code
2021-02-02 17:36:45 +08:00
jiezhang-tlnd
a4cb0d13d2 fix(TUP-28519):Update org.talend.libraries.apache.google jars (#5199) (#5800)
remove org.talend.libraries.apache.google plugin
https://jira.talendforge.org/browse/TUP-28519

Co-authored-by: Jane Ding <jding@talend.com>
2021-02-02 16:16:20 +08:00
sbliu
502742bad2 fix(TUP-30186) On studio, java.lang.RuntimeException: Illegal hexadecimal character m at index 0.
fix encode problem when saving a job; fix job-load problem caused by decoding a wrong hex value.
handle the wrong migration of hex values, add a new migration task to handle wrong jar mvn GAVs when importing data during the last migration, and append a unit test.
2021-02-02 14:04:52 +08:00
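The "Illegal hexadecimal character m at index 0" in TUP-30186 is the classic symptom of hex-decoding a value that was never hex-encoded (here, a plain Maven GAV starting with "m"). A tolerant decode can be sketched as follows — `decode_maybe_hex` is a hypothetical illustration of the idea, not the Studio's migration code:

```python
import binascii

def decode_maybe_hex(value: str) -> str:
    # Decode a hex-encoded string, but tolerate values that were saved
    # un-encoded. Caveat: a plain value that happens to be valid hex would
    # still be decoded; a real migration needs extra context to decide.
    try:
        return binascii.unhexlify(value).decode("utf-8")
    except (binascii.Error, UnicodeDecodeError):
        return value  # not hex (odd length or non-hex digit): keep as-is

print(decode_maybe_hex("6d766e"))     # hex-encoded bytes for "mvn"
print(decode_maybe_hex("mvn:group"))  # not hex, returned unchanged
```

`binascii.unhexlify` raises on both odd-length input and non-hex digits, which is exactly the failure the strict decoder surfaced as a RuntimeException.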
Laurent BOURGEOIS
83a64d3d2c feat(TBD-10921):Add Spark local 3.0.x (#5658) 2021-01-29 18:17:51 +01:00
kjwang
30bbb27e87 Fix TUP-29885 ERROR: Some patches are not compatible with current product (#5714)
* Fix TUP-29885 ERROR: Some patches are not compatible with current
product
https://jira.talendforge.org/browse/TUP-29885
2021-01-29 17:29:56 +08:00
pyzhou
b554d94736 Pyzhou/tdi 45542 t redshift bulk exec support parquet 7.3 (#5795)
* feat(TDI-45542):tRedshiftBulkExec support parquet

* Add STATUPDATE checkbox

* fix show if
2021-01-29 16:07:21 +08:00
Jane Ding
13bbfcca1d fix(TUP-23738):The couchbase icon is not correct when import (#5713)
https://jira.talendforge.org/browse/TUP-23738

Signed-off-by: jding-tlnd <jding@talend.com>
2021-01-29 10:05:04 +08:00
zyuan-talend
6bebbba8ee fix(TUP-23477):fix TreeToTable connection line issue on Mac with BigSur. (#5786) 2021-01-29 09:54:14 +08:00
hcyi
37dc7ca816 fix(TUP-30108):tELTMSSqlInput & tELTMSSqlMap context not recognized after migrating to 7.3.1. (#5751)
* fix(TUP-30108):tELTMSSqlInput & tELTMSSqlMap context not recognized
after migrating to 7.3.1.

* fix(TUP-30108):add more junits .

* fix(TUP-30108):tELTMSSqlInput & tELTMSSqlMap context not recognized
after migrating to 7.3.1
2021-01-28 15:31:23 +08:00
jzhao
242fa6c729 fix(TDI-45400):Dynamic Schema has default length of 100 even after setting to different amount (#5770)
* fix(TDI-45400):Dynamic Schema has default length of 100 even after
setting to different amount.

* fix(TDI-45400):add migration task
2021-01-28 10:57:34 +08:00
wang wei
d1d4dcd7f6 fix(TDI-45432): tBigQueryOutput fails when the checkbox "Create the table if it doesn't exist" is ticked (#5691) 2021-01-26 11:02:06 +08:00
pyzhou
885d14671a fix(TDI-45447): Upgrade bouncycastle to 1.68 2021-01-25 16:17:10 +08:00
clesaec
8fa189bd31 TDI-29308 - Json (#5726) 2021-01-22 07:50:49 +01:00
vdrokov
fe86a1ef43 TESB-31657: AWS SQS not able to deploy to Runtime (#5742) 2021-01-21 18:17:52 +01:00
vdrokov
8751efe56e TESB-31563: Microservice: ClassNotFoundException: org.eclipse.jetty.client.HttpClientTransport (#5670) 2021-01-21 16:55:52 +02:00
mbasiuk-talend
236fb7fc65 fix(TDI-45455): excel output exceeding characters (#5730)
* fix(TDI-45455): truncate exceeding characters, use property

* fix(TDI-45455): add one missing place

* fix(TDI-45455): improve property wording
2021-01-20 20:43:19 +02:00
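The TDI-45455 commit above adds a property to truncate oversized cell values; the XLSX format caps a cell at 32,767 characters, so writing more fails. A minimal sketch of the truncate-or-fail choice — `prepare_cell` and the property name are hypothetical, not the component's actual API:

```python
EXCEL_CELL_MAX = 32767  # hard limit on characters per cell in XLSX

def prepare_cell(text: str, truncate: bool) -> str:
    # With the (hypothetical) "truncate exceeding characters" property on,
    # cut the value at the limit; otherwise fail like the original write.
    if len(text) <= EXCEL_CELL_MAX:
        return text
    if truncate:
        return text[:EXCEL_CELL_MAX]
    raise ValueError(f"cell value exceeds {EXCEL_CELL_MAX} characters")

print(len(prepare_cell("x" * 40000, truncate=True)))  # 32767
```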
Chao MENG
c55409088f fix(TUP-30170): [7.3.1] tHbaseInput error with R2020-11 or higher (#5749)
java.lang.String cannot be cast to java.util.List
https://jira.talendforge.org/browse/TUP-30170

Conflicts:
	main/plugins/org.talend.designer.core/src/main/java/org/talend/designer/core/ui/editor/cmd/ChangeValuesFromRepository.java
2021-01-19 17:18:42 +08:00
jzhao
7ead0130ac fix(TDI-45513):tGreenplumGPLoad global variable "RUNTIME" should be Long type(#5746) 2021-01-19 14:28:04 +08:00
clesaec
562d3700b1 TDI-32744 : sub process (#5724) 2021-01-18 11:16:52 +01:00
Emmanuel GALLOIS
3d5fb83763 feat(TDI-45225): upgrade component-runtime to 1.1.29 (#5600)
* feat(TDI-45225): fix test import errors due by bump xbean to 4.18
* feat(TDI-45225): fix deps svc
* feat(TDI-45225): bump component-runtime to 1.28.1
* feat(TDI-45225): fix microservices classloading issues with SPI
2021-01-14 10:24:26 +01:00
zyuan-talend
41f1df71d7 fix(TUP-29784): fix WebService display issues on Mac with BigSur. (#5720) 2021-01-13 17:22:43 +08:00
jzhao
0f7596dcb8 feat(TDI-45144): Support For GreenPlum 6.x (#5716)
* correct and update greenplum driver version
* deprecate postgresql driver
2021-01-12 10:13:46 +08:00
jzhao
520e379e48 feat(TDI-45007 & TDI-31602): Enhance component tGreenplumGPLoad (#5709)
* feat(TDI-45007 & TDI-31602): Enhance component tGreenplumGPLoad

## tGreenplumGPLoad/tGreenplumGPLoad_begin.javajet /
tGreenplumGPLoad/tGgreenplumGPLoad_java.xml

Added guessColumnList so the YAML will contain the columns of the Talend
schema; this is extremely useful if the target table has extra columns
(for example, to load only the primary keys to a staging table). This will
generate the "  COLUMNS:" section of the YAML file.

Added parameters to the advanced settings, more information for these
can be found at:
https://gpdb.docs.pivotal.io/6-12/utility_guide/ref/gpload.html#topic1__section7
Parameters include:
  LOG_ERRORS:
  MAX_LINE_LENGTH:
  EXTERNAL:
    -SCHEMA:
  PRELOAD:
    -TRUNCATE:
    -REUSE_TABLES:
    -STAGING_TABLE:
    -FAST_MATCH:
  SQL:
    -BEFORE:
    -AFTER:

Changed the way the command is built, so Log4j now displays a command
that can be copy-pasted to the console for testing.
The YAML file was missing from it.
The PythonPath was also not printed to the Log4j logs.

Sometimes a database connection was opened and closed without executing
any actions.

Password was never passed to the gpload utility, so it had to be managed
separately; it is now passed via a context variable, which means that
when executed by Talend it won't prompt for a password, and when
executed from the console it will prompt for one.

GPLoad will terminate if there was an error (for example, the target table
does not exist)

Extra after variables, for the gpload details:
		NB_LINE_INSERTED	Integer
		NB_LINE_UPDATED		Integer
		NB_DATA_ERRORS		Integer
		STATUS				String
		RUNTIME				String

Option to GZip-compress the data file; this will increase CPU usage but
decreases disk usage (50-90% smaller files).
Remove the data file after successful execution. If there were *any*
errors, the data file is not removed.

## tGreenplumGPLoad/tGreenplumGPLoad_main.javajet
Added byte[] type support. Byte[] is converted to escaped OCTET
characters, which is compatible with all PostgreSQL versions from 8 to
13.

Fixed handling of newline / escape characters appearing in the data.

Postgresql uses null terminated bytes so those have to be removed from
the data to avoid errors.

## tGreenplumGPLoad/tGreenplumGPLoad_end.javajet
Bugfix: don't overwrite the actual inserted record number with 0

* feat(TDI-45007 & TDI-31602):Refactor, removed unused code.

* feat(TDI-45007 & TDI-31602):Enhanced log4j logs, so errors / warnings
are easier to spot.
Removed the duplicate log entries for the errors.

* gpload logs can be distinguished properly: gpload says:

* feat(TDI-45007 & TDI-31602):Reworked the PRELOAD and other sections

* feat(TDI-45007 & TDI-31602):Exit loop when external schema is found

* feat(TDI-45007 & TDI-31602):use schema column based the schema dbtype
setting and remove system.out code

* feat(TDI-45007 & TDI-31602): Only rename file when we have an input row
and the filename doesn't end with .gz

* feat(TDI-45007 & TDI-31602): Added proper escape for delimiters such as
vertical tab "\013"

Co-authored-by: Balázs Gunics <bgunics@talend.com>
2021-01-11 15:32:46 +08:00
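The tGreenplumGPLoad commit above describes rendering byte[] data as escaped octal characters while stripping NUL bytes (PostgreSQL text fields cannot contain them). That conversion can be sketched as follows — `escape_bytes_for_gpload` is a hypothetical Python illustration of the approach, not the generated javajet code:

```python
def escape_bytes_for_gpload(data: bytes) -> str:
    # Render a byte[] column as backslash-octal escapes in a text-format
    # load file; printable ASCII passes through, everything else (and the
    # backslash itself) becomes \ooo, and NUL bytes are dropped.
    out = []
    for b in data:
        if b == 0:
            continue  # null-terminated bytes must be removed
        if 0x20 <= b < 0x7F and b != 0x5C:  # printable, not backslash
            out.append(chr(b))
        else:
            out.append("\\%03o" % b)  # e.g. tab -> \011
    return "".join(out)

print(escape_bytes_for_gpload(b"a\x00b\tc"))
```

Escaping delimiters such as tab and newline this way is what keeps embedded control characters (including the vertical tab "\013" mentioned above) from being misread as field or row separators.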
Dmytro Grygorenko
28ae35dec6 fix(TDI-45440): fix CVE for Hibernate-Core-3.6.10 (#5707) 2021-01-08 10:35:32 +02:00
AlixMetivier
21245a297a fix(TBD-10689): deprecate old BD distrib (#5694)
* fix(TBD-10689): deprecate old BD distrib

* fix UI for tHiveConnection
2021-01-05 09:33:40 +01:00
bhe-talendbj
5131d00675 feat(TUP-29700): initial implement (#5660) 2021-01-05 10:00:22 +08:00
clesaec
f256fc6f38 TDI-45383 : replace without regexp (#5641) 2021-01-04 07:54:31 +01:00
zyuan-talend
30fa41dd46 fix(TUP-29782): fixed ELTDBMap display issues on Mac with BigSur. (#5688)
* fix(TUP-29775): use CTabFolder instead of TabFolder to resolve the display issue in BigSur.

* fix(TUP-29782): fixed ELTDBMap display issues on Mac with BigSur.
2020-12-31 15:32:41 +08:00
zyuan-talend
8a3a1a0289 fix(TUP-29775): use CTabFolder instead of TabFolder to resolve the display issue in BigSur. (#5684) 2020-12-31 15:28:00 +08:00
zyuan-talend
b3468eba73 fix(TUP-29350): fixed the tMap display issues with Big Sur (#5616)
* fix(TUP-29350): fixed the tMap display issues with Big Sur

* fix(TUP-29350): fixed the tMap display issues with Big Sur

* fix(TUP-29641): fixed tAdvancedFileOutputXML display issues with MacOS BigSur (#5647)

* fix(TUP-29774): fixed scroll display issue with many columns.

* fix(TUP-29767): fixed Expression Builder can't show value and can't edit

* fix(TUP-29766): fixed blank expression issue with Tab key.
2020-12-30 16:01:46 +08:00
sbliu
9d9d6b135b fix(TUP-29550) fix problem that "rename" Job in property sheet page duplicates item in Pom file (#5663) 2020-12-29 11:00:15 +08:00
Dmytro Grygorenko
7cbcd2967b fix(TDI-45385): fix CVE for tBonita* components. (#5649)
* fix(TDI-45385): fix CVE for tBonita* components.

* fix(TDI-45385): added missing double quotes
2020-12-28 08:53:33 +02:00
wang wei
918ca363fa fix(TDI-45404): tFileOutputExcel component gives The maximum number of cell styles was exceeded error (#5662) 2020-12-28 10:24:06 +08:00
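The "maximum number of cell styles was exceeded" error in TDI-45404 is the well-known workbook-wide cap on distinct cell styles; the usual cure is to create one style per distinct format and reuse it across cells rather than creating a style per cell. A hypothetical sketch of that caching pattern — `StyleCache` and the 64,000 limit constant stand in for the real Apache POI objects:

```python
MAX_CELL_STYLES = 64000  # workbook-wide style limit that triggers the error

class StyleCache:
    # Reuse one style object per distinct format key instead of allocating
    # a new style per cell; real code would cache POI CellStyle objects
    # the same way.
    def __init__(self):
        self._styles = {}

    def get(self, fmt: str):
        if fmt not in self._styles:
            if len(self._styles) >= MAX_CELL_STYLES:
                raise RuntimeError("maximum number of cell styles exceeded")
            self._styles[fmt] = object()  # placeholder for a CellStyle
        return self._styles[fmt]

cache = StyleCache()
a = cache.get("dd/MM/yyyy")
b = cache.get("dd/MM/yyyy")
print(a is b)  # the same style object is reused across cells
```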
sbliu
a473b624b7 fix(TUP-29590) fix httpcore sometimes not included when parent job has tRunjob, child job has tHiveConnection , and routine has dependency to httpcore. (#5648) 2020-12-28 10:13:49 +08:00
zshen-talend
1f6128dfd7 feat(TDQ-18757): Support SAP Hana on DQ profiling in Studio (#5596) 2020-12-25 17:22:04 +08:00
SunChaoqun
6fa4c42df1 TESB-31188:[7.3] LinkageError when resolving method (#5611)
"javax.xml.soap.SOAPPart.setContent(Ljavax/xml/transform/Source;)V"
2020-12-23 15:56:15 +01:00
Dmytro Grygorenko
1e61474fc0 fix(TDI-45366): fix for CVE issue with Apache Axis library. (#5642) 2020-12-23 11:33:25 +02:00
pyzhou
d6242e11cd fix(TDI-45377):tGPGDecrypt fail with gpg2 (#5634)
* fix(TDI-45377):tGPGDecrypt fail with gpg2

* fix(TDI-45377):add option

* add migration task
2020-12-23 15:10:10 +08:00
Jane Ding
d007d3740a fix(TUP-28657):Wrong behavior while Job Setting project config to choose (#5632)
From Database
https://jira.talendforge.org/browse/TUP-28657

Signed-off-by: jding-tlnd <jding@talend.com>
2020-12-21 09:20:24 +08:00
mbasiuk-talend
765accb8a3 fix(TDI-45124): enable transfer mode for ftps type (#5520) 2020-12-16 21:46:35 +02:00
mbasiuk-talend
b51c1282f7 fix(TDI-45260): update components with new library version (#5602) 2020-12-16 16:55:54 +02:00
hcyi
c98e18a58c fix(TUP-29072):improve: when renaming a link (table) between teltinput and teltmap, the generated SQL query is not updated. (#5628)
* fix(TUP-29072):improve: when renaming a link (table) between teltinput
and teltmap, the generated SQL query is not updated.

* fix(TUP-29072):improve: when renaming a link (table) between teltinput
and teltmap, the generated SQL query is not updated.

* fix(TUP-29072):improve: when renaming a link (table) between teltinput
and teltmap, the generated SQL query is not updated.
2020-12-16 15:49:59 +08:00
bhe-talendbj
e576293e58 fix(TUP-29693): Do not replace maven uri for context inside jarname (#5625)
* fix(TUP-29693): Do not replace maven uri for context inside jarname

* fix(TUP-29693): fix context inside jar name

* fix(TUP-29693): fix context inside jar name

* fix(TUP-2969): fix migrate maven uri for javajet components

* fix(TUP-2969): fix migrate maven uri for javajet components

* fix(TUP-2969): add check for project settings as well

* fix(TUP-29693): Keep main logic unchanged
2020-12-15 09:50:17 +08:00
pyzhou
55812da6b6 fix(TDI-45346):tFileFetch does not accept 202 (#5612) 2020-12-11 16:25:56 +08:00
Jane Ding
074c13dd4e fix(TUP-28657):Wrong behavior while Job Setting project config to choose (#5597)
* fix(TUP-28657):Wrong behavior while Job Setting project config to choose
From Database
https://jira.talendforge.org/browse/TUP-28657

Signed-off-by: jding-tlnd <jding@talend.com>

* fix(TUP-28657):Wrong behavior while Job Setting project config to choose
From Database
https://jira.talendforge.org/browse/TUP-28657

Signed-off-by: jding-tlnd <jding@talend.com>

* fix(TUP-28657):Wrong behavior while Job Setting project config to choose
From Database
https://jira.talendforge.org/browse/TUP-28657

Signed-off-by: jding-tlnd <jding@talend.com>

* fix(TUP-28657):Wrong behavior while Job Setting project config to choose
From Database
https://jira.talendforge.org/browse/TUP-28657
Signed-off-by: jding-tlnd <jding@talend.com>

* fix(TUP-28657):Wrong behavior while Job Setting project config to choose
From Database
https://jira.talendforge.org/browse/TUP-28657

Signed-off-by: jding-tlnd <jding@talend.com>
2020-12-11 11:37:28 +08:00
AlixMetivier
6088e2c2ca fix(TBD-11616): set dataset parameter to true for job already importe… (#5620)
* fix(TBD-11616): set dataset parameter to true for job already imported to 7.3 studios

* refacto

* fix
2020-12-10 17:31:47 +01:00
zshen-talend
1637e6220c Zshen/bugfix/tdq 18791 fix conflict2 (#5619)
* fix(TDQ-18817): support context in Confidence Weight on 7.3 and 7.4

* fix(TDQ-18791): change migration version
2020-12-10 18:09:50 +08:00
AlixMetivier
e96d2af073 fix(TBD-11616): parameterize dataset API utilization (#5562)
* fix(TBD-11616): parameterize dataset API utilization

* apply migration to only 7.3 and 7.4 studios

* fix for future studio versions

* fix bad merge
2020-12-10 09:57:35 +01:00
hcyi
a1e453e79a feat(TUP-25346):tELTTeradataOutput - aliases are mandatory for calculated columns (#5572)
* feat(TUP-25346):tELTTeradataOutput - aliases are mandatory for
calculated columns

* feat(TUP-25346):add junits for calculated columns.

* feat(TUP-25346):NEW RULE for when the output column name is different from the
output db column.

* fix(TUP-29598):[BUG] The "property setting" setting is missing after
migrating data(TUP-25346)

* feat(TUP-25346):NEW RULE if the alias option is checked.

* fix(TUP-29636):[BUG] Studio does not support SQL scripts with alias when
execute update database(TUP-25346)

* feat(TUP-25346):change the junits since the NEW RULE applies when the
alias option is checked.

* feat(TUP-25346):remove some junits since no need.

* feat(TUP-25346):format the code

* feat(TUP-25346):format the code
2020-12-10 15:17:08 +08:00
wchen-talend
c8c4a586d0 fix(TESB-31294):move velocity.log to configuration folder (#5586) 2020-12-10 15:00:15 +08:00
Dmytro Sylaiev
89435124fe fix(TDI-45135): Reuse same outputStream when 2 outputDelimited writing to the same file (#5584) 2020-12-09 09:23:50 +02:00
wang wei
ae8e41f36b fix(TDI-45310): trim the label string when logging it to avoid the compiler issue (#5589) 2020-12-09 14:09:22 +08:00
Mike Yan
e6fd8b0614 fix(TESB-30734): Backport from patch 7.2.1 (#5468) 2020-12-08 18:07:03 +08:00
Andrii Medvedenko
f4223f7a2b fix(TBD-11724): manually CPing changes from 7.2, TBD-9864 (#5599)
* fix(TBD-11724): manually CPing changes from 7.2, TBD-9864

* fix(TBD-11724): manually CPing changes from 7.2, TBD-9864
2020-12-08 10:36:26 +01:00
wang wei
64dcafb080 fix(TDI-45331): Exit code for SSH component is not working as expected (#5598) 2020-12-08 16:14:40 +08:00
Dmytro Grygorenko
402fe1ffbc fix(TDI-45006): Hide "LIKE" operator from CRM-2016 On-Premise setup. (#5566)
* fix(TDI-45006): Removed unavailable 'CRM_2018' option so that the whole condition could be evaluated.

* fix(TDI-45006): correct way to fix.
2020-12-08 10:00:00 +02:00
hcyi
02c66a7e93 fix(TUP-29366):[bug] studio can rename alias to an existing name. (#5587)
* fix(TUP-29366):[bug] studio can rename alias to an existing name.

* fix(TUP-29366):[bug] studio can rename alias to an existing name.
2020-12-04 16:57:47 +08:00
sbliu
207d1c9635 fix(TUP-29391) fix problem that "Save As" loses changes in the studio for such as job, joblet. (#5560) 2020-12-02 15:38:06 +08:00
Jane Ding
a4d0adb671 fix(TUP-29383):DBInput component miss delta lake when use TDI.license (#5525)
* fix(TUP-29383):DBInput component miss delta lake when use TDI.license
https://jira.talendforge.org/browse/TUP-29383

Signed-off-by: jding-tlnd <jding@talend.com>

* fix(TUP-29383):DBInput component miss delta lake when use TDI.license
https://jira.talendforge.org/browse/TUP-29383
Delta lake doesn't support SP, should not list tDBSP

Signed-off-by: jding-tlnd <jding@talend.com>
2020-12-02 14:42:54 +08:00
sbliu
0ce6a06f8e fix(TUP-29224) auto increment the index according to the drag/drop column numbers. (#5555) 2020-12-02 12:00:02 +08:00
hcyi
15cbbf362c fix(TUP-29090):improve since broken some junits . (#5578) 2020-11-30 20:15:08 +08:00
Oleksandr Zhelezniak
c139b0893d fix(TDI-45089): fix date for excel event mode (#5466)
* use the date pattern from the Studio schema when parsing the date column in event mode
* update version of simpleexcel to [2.5-20201119] (the source code of the lib in master branch)
* apply inter-exchange pattern to avoid date rounding
2020-11-30 13:37:53 +02:00
Richard Lecomte
4c2e419bc0 Rlecomte/tdi 45147 t gs copy target issue (#5502)
* TDI-45147 : tGSCopy issue with path

* TDI-45147 : tGSCopy issue with path

Add checkbox to keep legacy behavior by default

* TDI-45147 : tGSCopy issue with path

Add checkbox to keep legacy behavior by default

* TDI-45147 : tGsCopy issue

Resolved conflicts
2020-11-30 11:18:36 +01:00
Dmytro Grygorenko
adc91e4169 fix(TDI-45162): Rearrange imports in tELTPostgresql* and tELTTeradata components. (#5501)
* fix(TDI-45162): Rearrange imports in tELTPostgresql* and tELTTeradata* components.

* fix(TDI-45162): some corrections after branch conflict resolve.

* fix(TDI-45162): more cleanup.
2020-11-30 11:18:25 +02:00
bhe-talendbj
09607ed581 fix(TUP-29166): Remove decorationfigure (#5549)
* fix(TUP-29166): Remove decorationfigure

* fix(TUP-29166): add back arrows

* fix(TUP-29166): remove drawString

* fix(TUP-29166): remove unused method
2020-11-30 10:58:47 +08:00
pyzhou
0a5b925dc4 Pyzhou/tdi 45160 add encording t file unarchive (#5540)
* fix(TDI-45160):add encoding for tFileUnarchive

* fix error

* typo

* add encoding for pass

* upgrade version

* remove import

* remove useless code

* Keep the old behavior

* correct mvn path
2020-11-30 10:38:20 +08:00
ypiel
414ab39434 feat(TDI-44873) : cve - bump olingo odata to 4.7.1 - maintenance
* feat(TDI-44873) : update odata/olingo to v4.7.1

* feat(TDI-44873) : bump to olingo 4.7.1

* feat(TDI-44873) : fix odata lib names

* feat(TDI-44873) : add version to odata MODULE

* feat(TDI-44873) : add version to odata IMPORT name
2020-11-27 17:27:11 +01:00
Dmytro Grygorenko
a269f74a30 fix(TDI-45224): Review and update tJasper* dependencies. (#5545)
* fix(TDI-45224): Review and update tJasper* dependencies.

* fix(TDI-45224): additional dependencies reviewed and updated
2020-11-27 12:15:18 +02:00
clesaec
530814c490 TDI-45193 : dynamic col on tFileInputPositional (#5511)
* TDI-45193 : dynamic col on tFileInputPositional
2020-11-27 10:35:19 +01:00
hcyi
1da69fb285 fix(TUP-29072):When renaming link (table) between teltinput and teltmap , the generated SQL query is not updated (#5467)
* fix(TUP-29072):When renaming link (table) between teltinput and teltmap
, the generated SQL query is not updated

* fix(TUP-29072):When renaming link (table) between teltinput and teltmap
, the generated SQL query is not updated.

* fix(TUP-29072):add junits

* fix(TUP-29072):add more junits
2020-11-27 17:13:04 +08:00
mbasiuk-talend
3c00488dc8 feat(TDI-44915) access token feature (#5473)
* feat(TDI-44915): integrate Balazs POC

* feat(TDI-44915): implement other BigQuery component with access token

* feat(TDI-44915): update BigQuery bulkexec with new common jet

* feat(TDI-44915): update tGSBucketCreate

* feat(TDI-44915): update tGSBucketDelete

* feat(TDI-44915): update tGSBucketExist

* feat(TDI-44915): update tGSBucketList

* feat(TDI-44915): update connection and close part

* feat(TDI-44915): update tGSCopy

* feat(TDI-44915): update tGSDelete

* feat(TDI-44915): update tGSGet

* feat(TDI-44915): update tGSList

* feat(TDI-44915): update tGSPut

* feat(TDI-44915): use proxy to communicate with GS

* feat(TDI-44915): update code due to PR comments

* feat(TDI-44915): update Input code generation

* feat(TDI-44915): fix tGSPut xml definition

* feat(TDI-44915): update BigQuery Output and generic connection

* feat(TDI-44915): fix _end javajet parts

* feat(TDI-44915): fix bigqueryoutput config mappings

* feat(TDI-44915): fix tGSBuckerCreate dependencies

* feat(TDI-44915): fix PR comments
2020-11-27 10:04:14 +02:00
kjwang
17f54191cf Kjwang/feat tup 28891 temp folder (#5465)
TUP-28891:Shared Studio: Check which functions will write data into
folder "temp" of Studio installation folder from code
https://jira.talendforge.org/browse/TUP-28891
2020-11-27 14:47:48 +08:00
kjwang
ed99155812 Fix : TUP-29358 Performance: It takes more than 1 hour to show "Update Detection" after clicking "detect and update all jobs" icon in a special project (#5548)
* Fix : TUP-29358 Performance: It takes more than 1 hour to show "Update
Detection" after clicking "detect and update all jobs" icon in a special
project
https://jira.talendforge.org/browse/TUP-29358
2020-11-27 14:38:28 +08:00
wang wei
2276f4b51a fix(TDI-45227): talendStats_STATSProcess Error about ELTComponents and Stat&Log tables creation (#5537) 2020-11-27 12:02:29 +08:00
hcyi
da5744d1e5 fix(TUP-29090):[7.2.1] Extra plus '+' signs in generated SQL (#5453) 2020-11-25 10:30:46 +08:00
vyu-talend
3c04002b5e fix(TDI-45159):fix bug in xml of azuresynabe. (#5508) 2020-11-24 15:49:24 +08:00
sbliu
8b67961ade fix(TUP-26486): Can't quickly refresh the GUI when switching format.
When creating an Azure connection, refresh the wizard GUI only once when switching format. Also fixes 'Netsuite/tck: in Metadata credentials fields are not changed based on Auth type selected'.
2020-11-24 15:44:01 +08:00
clesaec
ec914f50fe TDI-45161 : tFile input delimited correction (#5510) 2020-11-24 08:19:06 +01:00
bhe-talendbj
9ab7f01201 fix(TUP-29424): fix junit failures related to Parameter change (#5551) 2020-11-23 11:57:26 +08:00
bhe-talendbj
af79e71c25 fix(TUP-29424): OutputSchemaParameterTest (#5546) 2020-11-23 10:09:59 +08:00
Emmanuel GALLOIS
c50e437c59 feat(TCOMP-1761): Support of complete schema definition in Studio (#5270)
* feat(TCOMP-1761): add nestedProperties for metadata
* feat(TCOMP-1761): update configuration.javajet
* feat(TCOMP-1761): change temporary variable name
2020-11-20 14:17:32 +01:00
Zhiwei Xue
ca07dd16cf fix(TUP-29360): Missing log4j2 jar on user routines (#5529)
* fix(TUP-29360): Missing log4j2 jar on user routines

* fix(TUP-29360): fix switch log4j level problem
2020-11-20 17:12:54 +08:00
ovladyka
64794a596c Fix/TDI45204_IncorrectOutlineFortFileFetch (#5532)
Co-authored-by: Oleksandr Vladyka <oleksandr.vladyka@synapse.com>
2020-11-19 14:08:00 +02:00
pyzhou
ff595fd205 fix(TDI-45167):Close pre Workbook for tFileOutputExcel (#5505) 2020-11-18 11:26:19 +08:00
apoltavtsev
c035091f88 fix(TESB-29553) Publishing a route with cTalendJob from Studio and commandline gives different results 2020-11-12 11:06:56 +01:00
SunChaoqun
3d8c28928a TESB-28330:cConfig/Beans - Support customer groupid and artifact name (#5458)
* TESB-28330:cConfig/Beans - Support customer groupid and artifact name
(additional to custom version)

* TESB-30961:[7.3.1 cConfig] The external jar name and mvn uri is changed
when import a route with cConfig of 721
2020-11-12 16:29:39 +08:00
sbliu
54af03d3ef Revert "feat(TUP-26486) quickly refresh the GUI only one time when switch format."
This reverts commit 1adad9fe93.
2020-11-11 16:05:43 +08:00
sbliu
1adad9fe93 feat(TUP-26486) quickly refresh the GUI only one time when switch format. 2020-11-11 15:38:45 +08:00
Jane Ding
a20ea15f4d fix(TUP-28934):JDBC connection created under folder can not generate the (#5495)
all the components for jdbc when drag to job.
https://jira.talendforge.org/browse/TUP-28934

Signed-off-by: jding-tlnd <jding@talend.com>
2020-11-11 10:45:23 +08:00
apoltavtsev
1df79674e1 feat(TESB-29949) Pass the data source to a job using a context variable 2020-11-10 15:57:31 +01:00
hcyi
c0fedf4ef4 feat(TUP-25235):tELTTeradataMap : allow to rename the alias. (#5491) 2020-11-10 11:59:47 +08:00
hcyi
f3b45bf229 feat(TUP-25235):tELTTeradataMap : allow to rename the alias. (#5284)
* feat(TUP-25235):tELTTeradataMap : allow to rename the alias.

* feat(TUP-25235):tELTTeradataMap : allow to rename the alias.

* feat(TUP-25235):improve and add some junits for rename alias.

* fix(TUP-25235):update some text messages for tELTTeradataMap : allow to
rename the alias

* fix(TUP-25235):update some text messages for tELTTeradataMap : allow to
rename the alias

* fix(TUP-25235):TUP-29225 [bug]ELTMAP should check alias name validation
.

* fix(TUP-25235):TUP-29225 [bug]ELTMAP should check alias name validation

* fix(TUP-25235):TUP-29225 [bug]ELTMAP should check alias name validation
2020-11-10 11:20:41 +08:00
wang wei
243a1f3326 fix(TDI-44993): tS3Get performance issue using multipart target to EFS/NAS(#5429) 2020-11-09 21:41:12 +08:00
wang wei
4abf245ca1 fix(TDI-44910): Add memsql support in studio (#5479) 2020-11-09 21:38:42 +08:00
Jane Ding
d71b4b148a Fix junit failure (#5489)
Signed-off-by: jding-tlnd <jding@talend.com>
2020-11-09 19:24:56 +08:00
Chao MENG
45f94c22be feat(TUP-28790): Enhance detection of localhost (#5379)
* feat(TUP-28790): Enhance detection of localhost

https://jira.talendforge.org/browse/TUP-28790

* feat(TUP-28790): Enhance detection of localhost

https://jira.talendforge.org/browse/TUP-28790
2020-11-09 17:31:55 +08:00
chmyga
1c41f0c05d chore(TDI-44004): update CXF and talend-ws version (#5485)
Co-authored-by: Dmytro Chmyga <dmytro.chmyga@synapse.com>
2020-11-09 10:34:51 +02:00
Chao MENG
d70ad09a49 feat(TUP-29126): Github Renaming the default branch from master (#5457)
* feat(TUP-29126): Github Renaming the default branch from master
https://jira.talendforge.org/browse/TUP-29126

* feat(TUP-29126): Github Renaming the default branch from master
https://jira.talendforge.org/browse/TUP-29126
2020-11-09 10:10:45 +08:00
bhe-talendbj
6e440ed726 fix(TUP-29164): Remove duplicated invokes of checkStartNodes (#5454) 2020-11-06 16:44:28 +08:00
kjwang
d8ace9d577 temp commit (#5411)
TUP-28833 Multi-User: support custom javajet component in shared studio
https://jira.talendforge.org/browse/TUP-28833
2020-11-06 15:02:22 +08:00
Jane Ding
8b3bbf7dcb feat(TUP-29076):support memsql in studio metadata and components (#5441)
* feat(TUP-29076):support memsql in studio metadata and components
https://jira.talendforge.org/browse/TUP-29076

Signed-off-by: jding-tlnd <jding@talend.com>

* feat(TUP-29076):support memsql in studio metadata and components
https://jira.talendforge.org/browse/TUP-29076
fix(TUP-29101):[bug] data viewer failed when export to context
https://jira.talendforge.org/browse/TUP-29101
Signed-off-by: jding-tlnd <jding@talend.com>

* feat(TUP-29076):support memsql in studio metadata and components
https://jira.talendforge.org/browse/TUP-29076

Signed-off-by: jding-tlnd <jding@talend.com>
2020-11-06 10:52:34 +08:00
pyzhou
0d0a67be0c Pyzhou/tdi 45062 t file copy last modified time 7.3 (#5472)
* fix(TDI-45062):add back last modification time

* add copy
2020-11-05 17:04:50 +08:00
clesaec
3e410b5373 TDI-44941 : ftp put connectors, renames (#5366)
* TDI-44941 : ftp put connectors, renames
2020-11-04 12:17:06 +01:00
pyzhou-talend
aad3651c38 fix(TDI-44924):change UI string 2020-11-04 16:04:59 +08:00
Colm O hEigeartaigh
6515a196e2 TDI-44685 - Close streams after calling getResourceAsStream in generated code (#5065) 2020-11-04 15:58:28 +08:00
mbasiuk-talend
cdc51a076c chore(TDI-44994): bump library version (#5427)
* chore(TDI-44994): bump library version

* chore(TDI-44994): change version to lower version, avoid bumping other libs

* chore(TDI-44994): update correct library
2020-11-03 09:49:45 +02:00
pyzhou
5e327d255d fix(TDI-44622):Correct Bonita mvn URL (#4992)
* fix(TDI-44622):Correct Bonita mvn URL

* revert default version
2020-11-03 14:47:28 +08:00
kjwang
63fffa6d6c Kjwang/fix tup 28050 avoid possible actions (#5283)
Fix TUP-28050 Multi-User: Avoid possible actions that are not supported on shared studio
https://jira.talendforge.org/browse/TUP-28050
2020-11-03 11:23:27 +08:00
vyu-talend
9a974a67c0 fix(TDI-45054):fix the protected file error. (#5428)
* fix(TDI-45054):fix the protected file error.

* fix(TDI-45054):update the excel jar in component.

* fix(TDI-45054):update lib version.
2020-11-03 11:17:29 +08:00
bhe-talendbj
2a8a6c074a bugfix(TUP-29131) Replace jar name by maven uri for spark jobs (#5444)
* fix(TUP-29131): Replace jar name by uri for spark job

* fix(TUP-29131): Replace jar name by maven uri for spark jobs
2020-11-02 15:11:42 +08:00
pyzhou
53435511fb fix(TDI-45098):tHttpRequest accept 2XX as success (#5442) 2020-11-02 13:53:21 +08:00
Dmytro Grygorenko
7840e1783e fix(TDI-45023): Update tJasper* components to use Apache POI 4.1.2 (#5426)
* fix(TDI-45023): Update libraries for tJasper* components.

* fix(TDI-45023): fix for HTML report generation.
2020-10-29 15:01:22 +02:00
Jane Ding
48c26b2d21 fix(TUP-28945):For Snowflake connection, the tDBConnection which is (#5407)
* fix(TUP-28945):For Snowflake connection, the tDBConnection which is
displayed in the joblet (in tDBInput) is not retained in the job from
where the joblet is invoked
https://jira.talendforge.org/browse/TUP-28945

Signed-off-by: jding-tlnd <jding@talend.com>

* fix(TUP-28945):For Snowflake connection, the tDBConnection which is
displayed in the joblet (in tDBInput) is not retained in the job from
where the joblet is invoked
https://jira.talendforge.org/browse/TUP-28945

Signed-off-by: jding-tlnd <jding@talend.com>
2020-10-29 20:59:28 +08:00
bhe-talendbj
8027965b4c fix(TUP-28978): Add migration (#5422)
* fix(TUP-28978): Add migration

* fix(TUP-28978): show jar for tELMap, show uri for routine/bean dependencies
2020-10-29 15:55:55 +08:00
qiongli
e9bad7dd32 fix(TDQ-18826)Set required to true for Dynamic jars (#5425) 2020-10-28 18:02:47 +08:00
Dmytro Grygorenko
593a717084 fix(TDI-44087): Update Xstream to 1.4.11.1 version (#5430) 2020-10-28 10:39:29 +02:00
cbadillo1603
fb884595dd fix(TBD-11405): there's no button to browse credential files for tHiveXXX components in DI job (#5333) 2020-10-28 09:10:25 +01:00
clesaec
756ba629ed TDI-44686 : potential NPE (#5164)
* TDI-44686 : potential NPE
2020-10-26 09:59:02 +01:00
Roman
aac496bf8f chore(TDI-44004): update talend-ws lib (#5415)
* chore(TDI-44004): update talend-ws lib

* chore(TDI-44004): update cxf for MicrosoftCRM components
2020-10-26 09:33:28 +02:00
Andrii Medvedenko
7ac98d48a3 fix(TBD-11378): tHiveInput in DI has authentication type for all distributions (#5313)
* fix(TBD-11378): wrong property fix

* another part of the fix
2020-10-23 15:37:34 +03:00
Dmytro Grygorenko
8752a75abd fix(TDI-45015): tFileInputMail - slow attachments retrieval. (#5399)
* fix(TDI-45015): fix for low Input/OutputStreams performance.

* fix(TDI-45015): increase buffer size to align sizes of internal and external buffers.

* fix(TDI-45015): moved flush() out of cycle.
2020-10-23 11:24:33 +03:00
pyzhou
9e85699b85 fix(TDI-44924):tFileFetch should not trust all servers by default (#5328)
* fix(TDI-44924):tFileFetch should not trust all servers by default

* add migration task

* correct class name

* fix plugin.xml
2020-10-23 10:41:45 +08:00
bhe-talendbj
c09016f435 fix(TUP-28066): remove the lower jar based on gavct (#5409) 2020-10-22 14:17:38 +08:00
bhe-talendbj
028b6681c3 bugfix(TUP-29022) Fix detection warning popup (#5400) (#5401)
* fix(TUP-29022): fix change warning popup

* fix(TUP-2902): show changes warning popup for implicit context and jobsettings
2020-10-21 14:07:00 +08:00
bhe-talendbj
22fbf0ff0b fix(TUP-29022): fix jobsettings jdbc change detector (#5394) 2020-10-20 18:14:28 +08:00
Oleksandr Zhelezniak
ec74ac0f35 fix(TDI-44935): reuse custom query expression (#5349)
* make query expression idempotent in the generated code
* use a variable instead of duplicating the custom query expression
2020-10-20 12:33:53 +03:00
Mike Yan
c5d44ab48b fix(TESB-30713): Beans folder is missing in routines.jar for route (#5391) 2020-10-20 15:43:04 +08:00
Chao MENG
1db9854428 fix(TUP-29012): If I open spark job before di job, on di job run tab I (#5387)
can see spark configuration tab

https://jira.talendforge.org/browse/TUP-29012
2020-10-20 11:31:51 +08:00
Jane Ding
5d1ab30b34 fix(TUP-28952):TOS: NPE when delete items to recycle bin (#5383)
https://jira.talendforge.org/browse/TUP-28952

Signed-off-by: jding-tlnd <jding@talend.com>
2020-10-20 10:15:34 +08:00
Dmytro Sylaiev
3502a8e79a fix(TDI-44896): Get nbline from stdout of gpload (#5304)
* fix(TDI-44896): Get nbline from stdout of gpload

* fix(TDI-44896): Fix the error message for failing get NBLine

* fix(TDI-44896): Fix codegen error

* fix(TDI-44896): Fix log printing 0 despite the real result

* fix(TDI-44897): Add exit code of GPLoad (#5307)
2020-10-19 18:22:46 +03:00
ypiel
f3b91a3cac feat(TDI-44950) : Support oauth 2.0 in mscrm 2016 on-premise (#5300)
* feat(TDI-44950) : Support oauth 2.0 in mscrm 2016 on-premise

* feat(TDI-44950) : some adjustement after meeting + qa

* feat(TDI-44950) : update talend-mscrm version
2020-10-19 14:50:45 +02:00
AlixMetivier
e1bceeea2d fix(TBD-11446): add tFileOutputDelimited to UpdateSeparatorAndEscapeForDatasetAPI migration task (#5374) 2020-10-19 11:24:23 +02:00
cbadillo1603
967a3d94ce fix(TBD-5167): tBigQueryInput project label should same as tBigQueryConfiguration (#5375) 2020-10-16 15:38:21 +02:00
bhe-talendbj
69a5234730 feat(TUP-28342):Save maven url for components with parameter MODULE_LIST (#5278)
* feat(TUP-25246): support custom maven uri for components and dbwizard

* feat(TUP-28342):Save maven url for components with parameter MODULE_LIST

* feat(TUP-28342): Save driver jar as mvnurl for implicit context.

* feat(TUP-25246): Fix necessary UI updates

* feat(TUP-25246): add migration to replace jar name by uri

* feat(TUP-25246): add migration

* feat(TUP-25246): generate poms after migration

* feat(TUP-25246): fix migrate driver jar path

* feat(TUP-25246): Fix context jar path and disable migration and show jar name instead of uri for modulelist

* feat(TUP-25246): fix stats and logs

* feat(TUP-25246): show jar name instead of uri for module table

* feat(TUP-25246): remove quotes

* feat(TUP-25246): fix parse maven uri from context

* feat(TUP-25246): use maven uri of component instead of driver jar path

* feat(TUP-25246): add workaround for components

Co-authored-by: Zhiwei Xue <zwxue@talend.com>
2020-10-16 16:22:59 +08:00
pyzhou
c8f4dd2e6d fix(TDI-44997): fix compile error (#5355) 2020-10-16 14:20:28 +08:00
mbasiuk-talend
a0b0366bcb fix(TDI-44893): update component, and localprovider (#5372)
* fix(TDI-44893): update component, and localprovider

* fix(TDI-44893): use latest soap library
2020-10-16 07:28:01 +03:00
Jane Ding
f24633190e feat(TUP-28640):Improve JDBC database support framework to load (#5368)
supported DBs and components automatically
https://jira.talendforge.org/browse/TUP-28640
2020-10-16 01:37:17 +08:00
jiezhang-tlnd
8e4e04c9cd feat(TUP-28758)Add a warning when we login to a project and if there is (#5295)
migration to do
2020-10-15 15:18:09 +08:00
Liu Xinquan
05a0815778 fix(TDQ-18784) add a new migration task (#5363) 2020-10-15 10:08:01 +08:00
Jane Ding
965c02c58d feat(TUP-28640):Improve JDBC database support framework to load (#5367)
supported DBs and components automatically
https://jira.talendforge.org/browse/TUP-28640
2020-10-15 08:48:14 +08:00
Emmanuel GALLOIS
5eb4100d98 feat(TCOMP-1757): support context in design time actions (#5220)
* feat(TCOMP-1757): support context in design time actions
* feat(TCOMP-1757): add context selection when multiple contexts
2020-10-14 16:53:38 +02:00
vdrokov
b0059fdb12 TESB-30427: Unable to utilize the Runtime datasource in DI Jobs with DB credentials blank (#5271)
Discover jndi source by different keys from the job note.
2020-10-14 16:30:12 +02:00
ypiel
061f3dc431 fix(TDI-44866) : update mscrm lib to 3.4-20200923 2020-10-14 15:40:21 +02:00
Chao MENG
e85afa939c fix(TUP-28612): [TCK] Guess Schema button runs the subjob instead of (#5345)
calling discover schema method
https://jira.talendforge.org/browse/TUP-28612
2020-10-14 15:39:40 +08:00
jzhao
cb475f4e5e fix(TDI-45008):tFileOutputDelimited, 'Use OS line separator as row separator when CSV Row separator is set to CR, LF or CRLF' does not work properly (#5364) 2020-10-14 15:37:49 +08:00
Xilai Dai
a76b52f3b9 fix(TESB-30136) javax.activation.UnsupportedDataTypeException (#5329) 2020-10-14 10:21:20 +08:00
bhe-talendbj
4689d45132 fix(TUP-28659): migrate jobs because remove snapshot jars (#5294) 2020-10-14 09:45:15 +08:00
Jane Ding
9b5eccc67c feat(TUP-28640):Improve JDBC database support framework to load (#5287)
* feat(TUP-28640):Improve JDBC database support framework to load
supported DBs and components automatically
https://jira.talendforge.org/browse/TUP-28640

* feat(TUP-28640):Improve JDBC database support framework to load
supported DBs and components automatically
https://jira.talendforge.org/browse/TUP-28640

* feat(TUP-28640):Improve JDBC database support framework to load
supported DBs and components automatically
https://jira.talendforge.org/browse/TUP-28640
fix(TUP-28746):[Bug] after do "save the property to metadata" studio
will throw error logs.
cf: connection infos with quotes

* feat(TUP-28640):Improve JDBC database support framework to load
supported DBs and components automatically
https://jira.talendforge.org/browse/TUP-28640
2020-10-13 16:22:42 +08:00
Jane Ding
53d4b392bc fix(TUP-28618):[Bug] db type dont show Delta in impact page. (#5275)
https://jira.talendforge.org/browse/TUP-28618
2020-10-13 16:20:48 +08:00
Dmytro Grygorenko
a2510f5e2a fix(TDI-44957): tSSH timeouts use (#5303)
* fix(TDI-44957): add timeouts use and configuration.

* fix(TDI-44957): replicate old component's behavior (unlimited session if no timeout specified)

* fix(TDI-44957): set NIO2_READ_TIMEOUT to be infinite too.
2020-10-13 11:01:48 +03:00
Dmytro Grygorenko
539710015b fix(TDI-44839): update POSTGRESQL driver (#5273)
Co-authored-by: wwang-talend <wwang@talend.com>
2020-10-13 10:40:46 +03:00
Mike Yan
a398a56e95 Yyan/feat/tesb 29271 route debug 731 (#5351)
* fix(TESB-29271): Fix NPE for Spark job debug type

* fix(TESB-29271): Hide add breakpoint menu in routelet editor
2020-10-13 15:15:36 +08:00
hzhao-talendbj
ba647bde38 tup-28783 (#5331) 2020-10-13 14:57:05 +08:00
sbliu
3b37b58fd0 TUP-28778 Add 'if' judgment before type conversion. 2020-10-13 10:53:12 +08:00
kjwang
cc07722ebb kjwang/Feat_TUP-27762_new_version_of_ci (#5248)
kjwang/Feat_TUP-27762_new_version_of_ci
2020-10-13 10:50:05 +08:00
hzhao-talendbj
625792e472 tup 27356 (#5279)
* tup 27356

* modify code
2020-10-12 16:09:23 +08:00
mbasiuk-talend
b55e6c1c02 fix(TDI-44893): upgrade libraries, update code, remove old libs (#5229)
* fix(TDI-44893): upgrade libraries, update code, remove old libs

* fix(TDI-44893): update local provider pom with newest version
2020-10-12 10:33:23 +03:00
Andreas Mattes
b43b149ba4 TESB-30623 Ensure routine classes are compiled before the library is created. (#5322) 2020-10-10 10:43:53 +02:00
Mike Yan
918cf4eed5 fix(TESB-29271): Fix NPE for Spark job debug type (#5321) 2020-10-09 23:01:05 +08:00
Mike Yan
5fcaae4e48 fix(TESB-29271): Fix for Spark job debug type (#5320) 2020-10-09 17:48:22 +08:00
Mike Yan
41231a43b6 fix(TESB-29271): Fix for headless issue (#5319) 2020-10-09 10:47:47 +08:00
Mike Yan
03eeaac507 feat(TESB-29271): Route debugging feature (#5280)
* feat(TESB-29271): Initial update for route debugger

* feat(TESB-29271): conditional breakpoint component tab

* feat(TESB-29271): Route debugging feature

* fix(TESB-29271): Cumulative fixes and code improvement

* fix(TESB-29271): Added default view(DI)

* fix(TESB-29271): Code improvements by code review

* feat(TESB-29271): code updated by review
2020-10-07 21:50:35 +08:00
chmyga
ad7316b2ce Dchmyga/tdi 44868 tgsput huge files (#5230)
* fix(TDI-44868): tGSPut with huge files

* Add property to set part size to upload files in parts

* fix(TDI-44868): tGSPut with huge files

* add migration task

* fix(TDI-44868): tGSPut with huge files

* Fix PR comment

* fix(TDI-44868): tGSPut with huge files

* Fix overflow problem

Co-authored-by: Dmytro Chmyga <dmytro.chmyga@synapse.com>
2020-10-05 17:41:49 +03:00
Dmytro Grygorenko
31e27f97e0 fix(TDI-44880): set ERROR_MESSAGE in components. (#5207) 2020-10-05 17:20:34 +03:00
Dmytro Grygorenko
09fd069243 fix(TDI-44883): set value of ERROR_MESSAGE (#5216) 2020-09-29 16:26:04 +03:00
Dmytro Sylaiev
f5ea29812e fix(TDI-44771): Fix error code for prejob in multithread run (#5208) 2020-09-28 09:33:05 +03:00
Jane Ding
5acbcd229e fix(TUP-26156)tCreateTable change DBType and Property Type not work (#4465) (#5276)
* fix(TUP-26156)tCreateTable: change "DBType" and "Property Type" not work
https://jira.talendforge.org/browse/TUP-26156

* fix(TUP-26156)tCreateTable: change "DBType" and "Property Type" not work
https://jira.talendforge.org/browse/TUP-26156

* fix(TUP-26156)tCreateTable change DBType and Property Type not work
https://jira.talendforge.org/browse/TUP-26156
2020-09-26 17:11:27 +08:00
Zhiwei Xue
203187c05f Revert "feat(TUP-28342):Save maven url for components with parameter MODULE_LIST"
This reverts commit 50e7057eb9.
2020-09-25 21:35:06 +08:00
Dmytro Grygorenko
39996404a7 fix(TDI-44568): Cumulative changes for POI update task (all-in-one) (#5269)
* fix(TDI-44568): cumulative commit for all changes

* fix(TDI-44568): Distribution plugin added

* fix(TDI-44568): all latest changes

* fix(TDI-44568): recent changes

* fix(TDI-44568): latest + build fix

* fix(TDI-44568): group name changed, versions updated

* fix(TDI-44568): classpath and manifest files updated
2020-09-25 14:32:39 +03:00
Zhiwei Xue
50e7057eb9 feat(TUP-28342):Save maven url for components with parameter MODULE_LIST 2020-09-25 17:45:08 +08:00
pyzhou
94c39f79ff fix(TDI-44911):tFileInputExcel has malposition header when enabling (#5244)
column
2020-09-25 14:10:51 +08:00
SunChaoqun
a1b85f19c1 Bugfix/maintenance/7.3/tesb 30423 (#5267)
* TESB-30423:Install Patch_20200925_R2020-09_v1-7.3.1 with TDI.license,
studio export job fail

* TESB-30423:Install Patch_20200925_R2020-09_v1-7.3.1 with TDI.license,
studio export job fail
2020-09-24 13:33:18 +08:00
SunChaoqun
66755538e8 TESB-30423:Install Patch_20200925_R2020-09_v1-7.3.1 with TDI.license, (#5259)
studio export job fail
2020-09-23 18:32:09 +08:00
Emmanuel GALLOIS
a2209548ae fix(TCOMP-1772): typo 2020-09-23 11:43:37 +02:00
Emmanuel GALLOIS
d8e604bdb0 fix(TCOMP-1772): fix MEMO_X type in configuration.javajet (#5257) 2020-09-23 09:41:20 +02:00
Jane Ding
e4194eea26 feat(TUP-27654):Databricks Delta Lake - Support ELT & MERGE (#5255)
* feat(TUP-27654):Databricks Delta Lake - Support ELT & MERGE
https://jira.talendforge.org/browse/TUP-27654

* feat(TUP-27654):Databricks Delta Lake - Support ELT & MERGE
https://jira.talendforge.org/browse/TUP-27654

* feat(TUP-27654):Databricks Delta Lake - Support ELT & MERGE
https://jira.talendforge.org/browse/TUP-27654
2020-09-23 14:20:12 +08:00
Dmytro Grygorenko
3a9ea6fafd Revert "fix(TDI-44568): Update TalendExcel library. (#5001)" (#5254)
This reverts commit e8c2b70985.
2020-09-22 17:06:25 +03:00
Dmytro Grygorenko
d71fce6268 Revert "fix(TDI-44568): Update talendMsgMailUtil library (#5040)" (#5253)
This reverts commit 75fb043c1c.
2020-09-22 17:06:18 +03:00
Dmytro Grygorenko
11368d3910 Revert "fix(TDI-44568): Update simpleexcel library (#5041)" (#5252)
This reverts commit 9b70682a48.
2020-09-22 17:06:12 +03:00
Dmytro Grygorenko
d01477ab11 Revert "fix(TDI-44568): Update org.talend.libraries.excel plugin (#5042)" (#5251)
This reverts commit e0b54b7df4.
2020-09-22 17:06:03 +03:00
Dmytro Grygorenko
7900e44e13 Revert "fix(TDI-44568): Update Apache POI version in components (#5043)" (#5250)
This reverts commit 04988ed52b.
2020-09-22 17:05:50 +03:00
Dmytro Grygorenko
04988ed52b fix(TDI-44568): Update Apache POI version in components (#5043)
* fix(TDI-44568): Update Apache POI version in components

* fix(TDI-44568): use custom POI libraries in components

* fix(TDI-44568): "simpleexcel" version updated.

* fix(TDI-44568): more complete version update

* fix(TDI-44568): Jasper components to use previous POI version (could not be updated that easy).
2020-09-22 13:28:11 +03:00
Dmytro Grygorenko
e0b54b7df4 fix(TDI-44568): Update org.talend.libraries.excel plugin (#5042)
* fix(TDI-44568): Talend libraries versions updated.

* fix(TDI-44568): remove unnecessary files.

* fix(TDI-44568): classpath file restored

* fix(TDI-44568): use custom POI libraries

* fix(TDI-44568): Manifest file restored, versions updated.
2020-09-22 13:27:48 +03:00
Dmytro Grygorenko
9b70682a48 fix(TDI-44568): Update simpleexcel library (#5041)
* fix(TDI-44568): Update simpleexcel library

* fix(TDI-44568): Aligning code with "master" branch

* fix(TDI-44568): version updated

* fix(TDI-44568): code alignment reverted, library version changed.
2020-09-22 13:27:32 +03:00
Dmytro Grygorenko
75fb043c1c fix(TDI-44568): Update talendMsgMailUtil library (#5040) 2020-09-22 13:26:16 +03:00
Dmytro Grygorenko
e8c2b70985 fix(TDI-44568): Update TalendExcel library. (#5001)
* fix(TDI-44568): Bump up versions, remove unused dependencies, some refactoring.

* fix(TDI-44568): addressed issue with preWb.

* fix(TDI-44568): GAV update, removed "version = 6.0.0"

* fix(TDI-44568): Version updated
2020-09-22 13:25:56 +03:00
wang wei
e727191389 fix(TDI-44908): pass TMC function jar for trunjob independent process case and dynamic job case (#5235) 2020-09-22 18:14:48 +08:00
Jane Ding
340b59a17e feat(TUP-27654):Databricks Delta Lake - Support ELT & MERGE (#5240)
https://jira.talendforge.org/browse/TUP-27654
2020-09-21 19:59:33 +08:00
OleksiiNimych
6a2257f107 feat(TDI-39689): tTeradataTPT support for multiple schemas (#5193)
* feat(TDI-39689): tTeradataTPT support for multiple schemas

* feat(TDI-39689): tTeradataTPTUtility

* feat(TDI-39689): tTeradataTPT add migration task

* feat(TDI-39689): rename new property

* feat(TDI-39689): tTeradataTPT fix migration task
2020-09-21 11:36:14 +03:00
Jane Ding
d4751f4cfb feat(TUP-27654):Databricks Delta Lake - Support ELT & MERGE (#5233)
https://jira.talendforge.org/browse/TUP-27654
2020-09-21 14:39:27 +08:00
hcyi
9e9406e2fa fix(TUP-25171):Issue when using components inside Joblet. (#5119)
* fix(TUP-25171):Issue when using components inside Joblet.

* fix(TUP-25171):Issue when using components inside Joblet.

* fix(TUP-25171):code format for Issue when using components inside
Joblet.
2020-09-18 15:55:09 +08:00
Chao MENG
647dbe0676 fix(TUP-26413): Be able to compare metadata connection conflicts (#5222)
https://jira.talendforge.org/browse/TUP-26413
2020-09-18 12:25:18 +08:00
Jane Ding
775e5a59f6 feat(TUP-27654):Databricks Delta Lake - Support ELT & MERGE (#5078)
* feat(TUP-27654):Databricks Delta Lake - Support ELT & MERGE
https://jira.talendforge.org/browse/TUP-27654

Signed-off-by: jding-tlnd <jding@talend.com>

* feat(TUP-27654):Databricks Delta Lake - Support ELT & MERGE
https://jira.talendforge.org/browse/TUP-27654

Signed-off-by: jding-tlnd <jding@talend.com>

* feat(TUP-27654):Databricks Delta Lake - Support ELT & MERGE
https://jira.talendforge.org/browse/TUP-27654

* feat(TUP-27654):Databricks Delta Lake - Support ELT & MERGE
https://jira.talendforge.org/browse/TUP-27654

Signed-off-by: jding-tlnd <jding@talend.com>

* feat(TUP-27654):Databricks Delta Lake - Support ELT & MERGE
https://jira.talendforge.org/browse/TUP-27654

Signed-off-by: jding-tlnd <jding@talend.com>

* feat(TUP-27654):Databricks Delta Lake - Support ELT & MERGE
https://jira.talendforge.org/browse/TUP-27654

Signed-off-by: jding-tlnd <jding@talend.com>

* feat(TUP-27654):Databricks Delta Lake - Support ELT & MERGE
https://jira.talendforge.org/browse/TUP-27654

Signed-off-by: jding-tlnd <jding@talend.com>

* feat(TUP-27654):Databricks Delta Lake - Support ELT & MERGE
https://jira.talendforge.org/browse/TUP-27654

Signed-off-by: jding-tlnd <jding@talend.com>

* fix(TUP-27654):Databricks Delta Lake - Support ELT & MERGE
https://jira.talendforge.org/browse/TUP-27654

Signed-off-by: jding-tlnd <jding@talend.com>

* feat(TUP-27654):Databricks Delta Lake - Support ELT & MERGE
https://jira.talendforge.org/browse/TUP-27654
TUP-28598:[Bug] driver class show as mvn:SparkJDBC42-2.6.14.1018.jar is
wrong.

* feat(TUP-27654):Databricks Delta Lake - Support ELT & MERGE
TUP-28615:[Bug] Save the property to metadata will change Delta Lake to
JDBC.
https://jira.talendforge.org/browse/TUP-27654

* feat(TUP-27654):Databricks Delta Lake - Support ELT & MERGE
TUP-28610:[Bug] drag Delta Lake component from Palette to job meet NPE.
https://jira.talendforge.org/browse/TUP-27654

* feat(TUP-27654):Databricks Delta Lake - Support ELT & MERGE
https://jira.talendforge.org/browse/TUP-27654
Hard code to filter unsupported components
Set useAutoCommit and autocommit default value as true

* feat(TUP-27654):Databricks Delta Lake - Support ELT & MERGE
TUP-28616:[Bug] in stat&logs/extra page, when select Delta Lake DB, it
shows as JDBC.
To fit the supported databases, Delta Lake should be filtered
https://jira.talendforge.org/browse/TUP-27654

* feat(TUP-27654):Databricks Delta Lake - Support ELT & MERGE
set delta lake/jdbc default mapping
https://jira.talendforge.org/browse/TUP-27654

* feat(TUP-27654):Databricks Delta Lake - Support ELT & MERGE
https://jira.talendforge.org/browse/TUP-27654

* feat(TUP-27654):Databricks Delta Lake - Support ELT & MERGE
https://jira.talendforge.org/browse/TUP-27654

* feat(TUP-27654):Databricks Delta Lake - Support ELT & MERGE
https://jira.talendforge.org/browse/TUP-27654

* feat(TUP-27654):Databricks Delta Lake - Support ELT & MERGE
https://jira.talendforge.org/browse/TUP-27654
2020-09-18 12:01:14 +08:00
wang wei
9a18f18a76 fix(TDI-44745): FileInputDelimited fails to properly consume CipherInputStream (#5140) 2020-09-17 14:15:43 +08:00
Chao MENG
7a66f2715c Cmeng/patch/7.3/tup 28063 improve start (#5092)
* workitem(TUP-28063): Improve studio bundles reloading logic
https://jira.talendforge.org/browse/TUP-28063

* workitem(TUP-28063): Improve studio bundles reloading logic
https://jira.talendforge.org/browse/TUP-28063

* workitem(TUP-28063): Improve studio bundles reloading logic
https://jira.talendforge.org/browse/TUP-28063
2020-09-16 16:40:47 +08:00
Zhiwei Xue
33c569d7cb feat(TUP-28323):Improve the maven build time spent during login project (#5166) 2020-09-16 16:35:24 +08:00
bhe-talendbj
41d611a140 fix(TUP-28628): NPE because of context (#5206) 2020-09-16 15:46:28 +08:00
vdrokov
7381677f63 fix(TESB-29853): TESB-29819 Studio doesn't select model beans during exporting (#5002)
* fix(TESB-29819): Studio doesn't select model beans during exporting

* fix(TESB-29819) Studio doesn't select model beans during exporting
Alternative fix
2020-09-16 08:19:49 +02:00
pyzhou
0685203687 fix(TDI-44855):fix index out of bound error (#5212) 2020-09-16 12:26:04 +08:00
wang wei
02bd1c98e7 fix(TDI-44824): Observability: Update daikon-audit library version (#5205) 2020-09-15 20:26:41 +08:00
apoltavtsev
2115adb42e fix(TESB-29963) MQ Dependency issues in Runtime (#5104)
* fix(TESB-29963) MQ Dependency issues in Runtime

* Update JobJavaScriptOSGIForESBManager.java
2020-09-15 11:38:27 +02:00
wang wei
cfd4a491ea fix(TDI-44806): Observability: Add customization for event frequency (#5200)
* fix(TDI-44806): Observability: Add customization for event frequency

* adjust the key to "audit.interval"
2020-09-15 17:30:15 +08:00
hcyi
c1c0c5481b fix(TUP-28635):[bug] output path always be changed when path contains \ (#5209) 2020-09-15 14:51:49 +08:00
Zhiwei Xue
9b6eb4d7ab fix(TUP-28576):Reference projects routine POM file is modified and can't (#5204)
be committed to GIT
2020-09-15 09:56:07 +08:00
hcyi
ebb4502761 feat(TUP-23827):Runtime Lineage for studio. (#5003)
* feat(TUP-23827):Runtime Lineage for studio.

* feat(TUP-23827):improve the GUI of Runtime Lineage for studio.

* feat(TUP-23827):improve the GUI of Runtime Lineage for studio.

* feat(TUP-23827):TUP-28186[bug]runtimelineage.prefs better remove job
version

* feat(TUP-23827):update the text Runtime Lineage to lower case after
review

* fix(TDI-44670): Runtime Lineage - Finalize component part

* split audit log and runtime lineage log

* add a custom api for other feature and future

* feat(TUP-23827):only EE products can have runtime lineage.

* fix something

* fix trunjob

* fix(TDI-44735): remove the duplicated parameters info in message, user
can fetch it from customInfo json object

* fix(TDI-44734): Enhance output path system for logging of runtime json

* fix the compiler issue

* filter the special table field

* fix the compiler and NPE risk

* feat(TUP-23827):only support for DI jobs if selected "Use runtime
lineage for all jobs" .

* feat(TUP-28539):Enhance GUI of Runtime Lineage.

* feat(TUP-28539):improve for Enhance GUI of Runtime Lineage.

* fix the compiler issue

* feat(TUP-28539):remove Output path test and verify.

Co-authored-by: wwang-talend <wwang@talend.com>
2020-09-14 18:42:55 +08:00
Emmanuel GALLOIS
38ce7af3e9 fix(TCOMP-1770): bump component-runtime to 1.1.25 2020-09-14 11:10:26 +02:00
pyzhou
e5c07ac253 fix(TDI-44855):tSSH append new line in the result (#5190) 2020-09-14 10:58:32 +08:00
AlixMetivier
176e3489ce fix(TBD-10885): refacto date and timestamp handling with dataset (#5149) 2020-09-10 14:48:45 +02:00
vyu-talend
65c980efdf fix(TDI-44846):fix the compilation error. (#5187) 2020-09-10 14:04:58 +08:00
nrousseau
2fb8abe293 fix(TUP-28487): fix performance issue (#5152)
* fix(TUP-28487): fix performance issue
2020-09-10 09:52:59 +08:00
vyu-talend
036736f589 fix(TDI-44803):attachment issue in tfileinputmail. (#5176) 2020-09-09 18:08:02 +08:00
OleksiiNimych
5b8b168860 fix(TDI-44759): tTeradataTPTExec fix SHOW_IF statement (#5147)
* fix(TDI-44759): tTeradataTPTExec fix SHOW_IF statement

* fix(TDI-44759): tTeradataTPTExec fix display of properties
2020-09-09 10:20:36 +03:00
Oleksandr Zhelezniak
ea82a11f32 chore(TDI-44765): remove guava:13.0 (#5168)
* remove guava:13.0 dependency use 20.0 instead of it
2020-09-09 09:58:49 +03:00
chmyga
81e7e49dcf fix(TCOMP-1749): validation with ActiveIf broken (#5121)
* Don't show validation results for hidden parameters

* Revalidate parameters on show/hide property change

Co-authored-by: Dmytro Chmyga <dmytro.chmyga@synapse.com>
2020-09-08 15:58:28 +03:00
Emmanuel GALLOIS
e5f2b94511 feat(TCOMP-1651): handle Dynamic for component-runtime components (#4861)
* feat(TCOMP-1651): bump component-runtime to 1.1.23-SNAPSHOT
- should be changed after the 1.1.23 release
* feat(TCOMP-1651): javajet update to Dynamic
* feat(TCOMP-1651): copyright update
* feat(TCOMP-1651): update javajet stubs
* feat(TCOMP-1651): update configuration javajet
* feat(TCOMP-1651): fix errors on configuration javajet
* feat(TCOMP-1651): bump component-runtime to 1.1.23
* feat(TCOMP-1651): bump component-runtime to 1.1.24-SNAPSHOT
* feat(TCOMP-1651): pass dynamic column metadatas to RecordConverters
* feat(TCOMP-1651): bump component-runtime to 1.1.24
2020-09-08 10:19:37 +02:00
Andrii Medvedenko
1b6f6cb3cb feat(TBD-10473): Hive input components for Dataproc distribution to work with Service account (#5114) 2020-09-08 02:34:40 +03:00
wang wei
e7a58886dc fix(TDI-44760): TMomOutput not changing the value of the queue (#5145) 2020-09-07 16:59:17 +08:00
wang wei
84d87eb37a fix(TDI-44678): tFileOutputXML make unexpected empty file (#5052) 2020-09-07 15:37:20 +08:00
vyu-talend
3506e3b76a feat(TDI-44720):implement pause and resume for redshift cluster. (#5141)
* feat(TDI-44720):implement pause and resume for redshift cluster.

* feat(TDI-44720):change show-if.
2020-09-04 14:28:02 +08:00
vyu-talend
60b1216d84 feat(TDI-44755):upgrade aws sdk version. (#5139) 2020-09-04 13:55:49 +08:00
hcyi
38635cfa1d fix(TUP-28258):tETLInput and tETLMap does not show datatypes for (#5124)
Snowflake DB on Import Metadata
2020-09-04 11:28:09 +08:00
Emmanuel GALLOIS
73c93317fd (TCOMP-1759): fix random order for columns in guessSchema in Studio (#5151) 2020-09-03 11:36:46 +02:00
Oleksandr Zhelezniak
584dd71b8b fix(TDI-44654): fixed NPE for tFTP (#5071)
* add info message about nullity of the ssl session context
* update version of commons-net-ftps-proxy [3.6.1-talend-20190819] -> [3.6.1-talend-20200902]
* update version of 'commons-net-ftps-proxy' in javajet ftp's components
* remove slf4j-log4j12 dependency
2020-09-03 10:17:24 +03:00
jzhao
7179453025 fix(TDI-30522): revert all fixes. (#5163)
* Revert "fix(TDI-30522): extra '(' removed (#5161)"

This reverts commit 9497eee89e.

* Revert "fix(TDI-30522): reworked fix. (#5109)"

This reverts commit 478a707a7c.
2020-09-02 15:00:08 +08:00
Dmytro Grygorenko
9497eee89e fix(TDI-30522): extra '(' removed (#5161)
* fix(TDI-30522): extra '(' removed

* fix(TDI-30522): missing '<%' restored
2020-09-01 20:42:10 +03:00
Dmytro Grygorenko
b8bae80459 TalendExcel: read password protected files. (#5132)
* fix(TDI-44743): read password protected files.

* fix(TDI-44743): use new library.

* fix(TDI-44743): Error message changed.
2020-09-01 16:11:08 +03:00
Oleksandr Zhelezniak
8827109f7f chore(TDI-44145): update commons-codec:1.14 (#5154)
* update commons-codec to 1.14 in javajets
* update internal libraries(libs_src) where was updated commons-codec
2020-09-01 15:26:43 +03:00
jzhao
a2ffcbba3e fix(TDI-44748):Error in method "add" for the component tFileOutputMSXML (#5133) 2020-09-01 15:07:13 +08:00
mbasiuk-talend
6c5c7bcfec feat(TDI-44730): upgrade snowflake driver to latest (#5150) 2020-09-01 09:52:41 +03:00
Dmytro Sylaiev
c5f8b26687 fix(TDI-44074): Fix performance for date mssql (#4638) (#5155) 2020-08-31 17:32:58 +03:00
Oleksandr Zhelezniak
614556ce8c fix(TDI-31162): remove interfere condition (#5120)
* remove interfere SPECIFY_DATASOURCE_ALIAS from the import condition of the tOracle components
2020-08-31 10:42:48 +03:00
Dmytro Grygorenko
478a707a7c fix(TDI-30522): reworked fix. (#5109)
* fix(TDI-30522): reworked fix.

* fix(TDI-30522): Closing bracket restored
2020-08-28 21:03:49 +03:00
Jane Ding
a7c9258846 Jding/tup 28440 (#5138)
* Add test suffix for test case support (#5136)

Co-authored-by: Michael Verrilli <mverrilli@talend.com>

* Mverrilli patch 7.3.1 iterate test fix (#5137)

* Add test suffix for test case support

* Fixed missing reference to className

* Update iterate_subprocess_header.javajet

Co-authored-by: Michael Verrilli <mverrilli@talend.com>

Co-authored-by: Michael Verrilli <mverrilli@talend.com>
2020-08-28 15:03:11 +08:00
Jane Ding
4f63e93995 Jding/73/tup 28085 error while compiling a kafka component with big data studio (#5089)
* fix(TUP-28085):Error while compiling a Kafka component with Big Data
studio
https://jira.talendforge.org/browse/TUP-28085
Signed-off-by: jding-tlnd <jding@talend.com>

* fix(TUP-28085):Error while compiling a Kafka component with Big Data
studio
https://jira.talendforge.org/browse/TUP-28085

Signed-off-by: jding-tlnd <jding@talend.com>
2020-08-28 14:01:41 +08:00
Emmanuel GALLOIS
9a44a3006f patch(TPS-4101): bump component-runtime to 1.1.15.2 (#5127) 2020-08-25 12:42:02 +02:00
Roman
59da4a3e14 fix(TDI-44063): add transitive xml-apis lib (#5128) 2020-08-25 11:48:42 +03:00
Dmytro Sylaiev
27928d664f fix(TDI-44394): Fix migration tcomp for javajet (#5129) 2020-08-25 11:07:09 +03:00
bhe-talendbj
e2bc6699fa fix(TUP-28308): build error and joblet maven xml file has incorrect values and Couldn't parse data error message (#5116)
* fix(TUP-28308): Check multiple version of joblets

* fix(TUP-28308): Revert unnecessary changes
2020-08-25 10:27:22 +08:00
hcyi
d840d36182 fix(TUP-28134):TCK Components shown as hidden by default in the palette (#5045)
settings
2020-08-25 09:57:02 +08:00
vdrokov
1492cf8130 fix(TESB-29980): Issue of tsendemail component deployed in runtime wi… (#5108)
* fix(TESB-29980): Issue of tsendemail component deployed in runtime with attachment

* TESB-29980 Simplified fix as it should cover Java 8 and 11.

Co-authored-by: Andreas Mattes <andreasmattes@operamail.com>
2020-08-24 19:00:21 +02:00
jiezhang-tlnd
bce81ea615 fix(TUP-28078)metadata folder generated under .../poms/jobs when create (#5075)
new connection in remote project
https://jira.talendforge.org/browse/TUP-28078
2020-08-24 10:28:58 +08:00
Emmanuel GALLOIS
c49a50b971 fix(TDI-44164): revert changes on talend.component.manager.m2.repository (#5111) 2020-08-21 17:19:39 +02:00
Roman
1f66234efa fix(TDI-44063): update xercesImpl dependency (#5004)
* fix(TDI-44063): update xercesImpl dependency (#4857)

* fix(TDI-44063): bump talend soap lib in maven dependency plugin

* fix(TDI-44063): back bundleID

* fix(TDI-44063): change GAV for soap lib
2020-08-21 17:46:20 +03:00
Dmytro Grygorenko
fd1fd16679 fix(TDI-44439): backport to "maintenance/7.3" (#5029)
* fix(TDI-44439): backport to "maintenance/7.3"

* fix(TDI-44439): in case dbName is null
2020-08-21 13:14:05 +03:00
Jane Ding
c7213657a5 fix(TUP-28338):[7.3.1] configuring Implicit tContextLoad in the job Goes (#5101)
* fix(TUP-28338):[7.3.1] configuring Implicit tContextLoad in the job Goes
on error after installing the patch 07_2020
https://jira.talendforge.org/browse/TUP-28338

Signed-off-by: jding-tlnd <jding@talend.com>

* fix(TUP-28338):[7.3.1] configuring Implicit tContextLoad in the job Goes
on error after installing the patch 07_2020
https://jira.talendforge.org/browse/TUP-28338

* fix(TUP-28338):[7.3.1] configuring Implicit tContextLoad in the job Goes
on error after installing the patch 07_2020
https://jira.talendforge.org/browse/TUP-28338

Signed-off-by: jding-tlnd <jding@talend.com>

* fix(TUP-28338):[7.3.1] configuring Implicit tContextLoad in the job Goes
on error after installing the patch 07_2020
https://jira.talendforge.org/browse/TUP-28338

Signed-off-by: jding-tlnd <jding@talend.com>
2020-08-20 18:35:55 +08:00
Pierre Teyssier
d362662fa1 fix(TDI-42668): replace hardcoded value 2020-08-20 04:27:31 +08:00
Andrii Medvedenko
5d53d880ed fix of tHiveInput (#5100)
tHiveInput with DataProc 1.4 in DI: no rows retrieved
2020-08-19 17:02:30 +03:00
Jane Ding
1d508903ea Jding/tup 28338 implicit context issue (#5097)
* fix(TUP-28338):[7.3.1] configuring Implicit tContextLoad in the job Goes
on error after installing the patch 07_2020
https://jira.talendforge.org/browse/TUP-28338

* fix(TUP-28338):[7.3.1] configuring Implicit tContextLoad in the job Goes
on error after installing the patch 07_2020
https://jira.talendforge.org/browse/TUP-28338

* fix(TUP-28338):[7.3.1] configuring Implicit tContextLoad in the job Goes
on error after installing the patch 07_2020
https://jira.talendforge.org/browse/TUP-28338
2020-08-19 18:49:00 +08:00
Mike Yan
87c312f5b5 fix(TESB-29957): Added properties for metrics (#5095)
* fix(TESB-29957):CI Fail to publish Microservice with Prometheus param

* fix(TESB-29957): Added contexts properties

* fix(TESB-29957): Added properties for metrics
2020-08-19 17:05:21 +08:00
Chao MENG
f206247506 fix(TUP-28295): UI Problem of TCOMPV0 in the recent 741NB and 731 (#5079)
Release + Aug. Nightly Build Patch
https://jira.talendforge.org/browse/TUP-28295
2020-08-17 15:43:56 +08:00
Mike Yan
c8ee01160c fix(TESB-29957): Added contexts properties (#5082)
* fix(TESB-29957):CI Fail to publish Microservice with Prometheus param

* fix(TESB-29957): Added contexts properties
2020-08-14 17:33:45 +08:00
Dmytro Grygorenko
8dd10f9d2d Update libraries: "httpclient-4.5.12" and "httpcore-4.4.13" (#4889)
* fix(TDI-44486): library update - "httpclient-4.5.12.jar"

* fix(TDI-44486): Updated httpcore and httpclient for all components.

* fix(TDI-44486): recent changes + "commons-codec-1.14.jar"
2020-08-14 11:53:53 +03:00
vyu-talend
fb83719828 feat(TDI-44630):support amazon EMR HA. (#5059)
* feat(TDI-44630):support amazon EMR HA.

* feat(TDI-44630):improve the error message.
2020-08-14 15:55:57 +08:00
Dmytro Sylaiev
3fa500b3ac tS3Connection setPathStyleAccess for better MinIO support (#5012) (#5076)
* Added a checkbox under Advanced settings of tS3Connection when a custom endpoint is used.
When MinIO is used via DNS/hostname this must be checked otherwise the operations fail.

* PathStyleAccess should only be used if a custom endpoint is enabled.

Co-authored-by: bgunics-talend <63251373+bgunics-talend@users.noreply.github.com>
2020-08-13 13:53:28 +03:00
Mike Yan
219c78267e fix(TESB-29957):CI Fail to publish Microservice with Prometheus param (#5073) 2020-08-13 17:38:54 +08:00
AlixMetivier
f43e648cd8 feat(TBD-10769): delta Features for R08 (#5067) 2020-08-13 10:16:58 +02:00
hzhao-talendbj
c69c81bba7 Hengzhao/backport73/tup 25103 (#5033)
* bugfix(TUP-25103):tELTMap - ctrl+space shortcut is not working to
retrieve easily job context

* TUP-25103

* Tup-25103

* tup-25103  use styleText

* tup-25103 add exception handler

* remove comments

* solve issues

* add doit false to avoid new line

* try replace "\r\n" to " " in tELTMssqlMap_main.javajet

* fix edit expression builder but sql tab not refresh

* remove comments and format

* improve code

* to fix other eltmap about multi-line expression  and import/export
expression from file

* remove javajet and remove buttons

* remove expressionbuilder

Co-authored-by: hwang <hwang@talend.com>
2020-08-13 13:56:20 +08:00
pyzhou
82a927aed4 feat(TDI-42668):update jar of tssh component branch 7.3 (#5038)
* feat(TDI-42668):update jar of tssh component

* fix status code

* fix error interactive

* support dsa

* fix compile error
2020-08-13 12:25:37 +08:00
Jane Ding
b507c601f2 fix(TUP-28180):[bug]Only latest version deleted from root pom when check (#5044)
"Exclude deleted items" and delete to recycle bin
https://jira.talendforge.org/browse/TUP-28180

Signed-off-by: jding-tlnd <jding@talend.com>
2020-08-13 10:42:52 +08:00
SunChaoqun
b7c7689554 Csun/tesb 29322/microservice monitoring expose camel metrics m73 (#5060)
Merge into maintenance/7.3 for patch  7.3.1-R2020-08 - 20200821, since master branch has been validated.
2020-08-13 10:42:06 +08:00
Dmytro Grygorenko
1dce8ebdb8 Revert "fix(TDI-30522): backport to "maintenance/7.3" branch (#4898)" (#5070)
This reverts commit 4380916dcf.
2020-08-12 11:20:05 +03:00
bhe-talendbj
dfb1157d92 fix(TUP-27374): Fix test case (#5055) (#5061) 2020-08-11 14:53:32 +08:00
SunChaoqun
3ee0dd2559 TESB-29133 Update dom4j to 2.1.3 (#5000) (#5053)
Co-authored-by: Maksym Sheverda <maksym.sheverda@synapse.com>

Co-authored-by: msheverda-talend <53863455+msheverda-talend@users.noreply.github.com>
Co-authored-by: Maksym Sheverda <maksym.sheverda@synapse.com>
2020-08-11 13:56:33 +08:00
wang wei
225ce06412 fix(TDI-44192): Update dom4j to 2.1.3 (#4899) 2020-08-11 10:21:07 +08:00
AlixMetivier
0a18ad8c80 fix(TBD-11052): fix dataset migration for csv value (#5051)
* fix(TBD-11052): wrong value was checked

* fix value
2020-08-10 09:39:02 +02:00
1265 changed files with 38740 additions and 21571 deletions


@@ -584,13 +584,11 @@ EParameterName.jdbcURL=JDBC URL
EParameterName.driverJar=Driver jar
EParameterName.className=Class name
EParameterName.mappingFile=Mapping file
SetupProcessDependenciesRoutinesAction.title=Setup routine dependencies
SetupProcessDependenciesRoutinesAction.title=Setup Codes Dependencies
SetupProcessDependenciesRoutinesDialog.systemRoutineLabel=System routines
SetupProcessDependenciesRoutinesDialog.userRoutineLabel=User routines
PerformancePreferencePage.addAllSystemRoutines=Add all system routines to job dependencies, when creating a new job
PerformancePreferencePage.addAllUserRoutines=Add all user routines to job dependencies, when creating a new job
ShowRoutineItemsDialog.systemTitle=Select Sytem Routines
ShowRoutineItemsDialog.title=Select Routines
AbstractMultiPageTalendEditor_pleaseWait=Saving Please Wait....
DocumentationPreferencePage.use_css_template=Use CSS file as a template when export to HTML
DocumentationPreferencePage.css_file=CSS File


@@ -1,5 +1,5 @@
NavigatorContent.contexts=Contexts
NavigatorContent.routines=Routines
NavigatorContent.routines=Global Routines
NavigatorContent.sqlTemplates=SQL Templates
NavigatorContent.documentation=Documentation
NavigatorContent.activation=di.fake.for.activation


@@ -6,7 +6,7 @@
<license url="http://www.example.com/license">[Enter License Description here.]</license>
<requires>
<import feature="org.eclipse.test" version="0.0.0" match="greaterOrEqual"/>
<import plugin="org.junit" version="0.0.0" match="greaterOrEqual"/>
<import plugin="org.junit" version="4.13.2" match="greaterOrEqual"/>
<import plugin="org.talend.commons.runtime" version="0.0.0" match="greaterOrEqual"/>
<import plugin="org.talend.commons.ui" version="0.0.0" match="greaterOrEqual"/>
<import plugin="org.talend.core" version="0.0.0" match="greaterOrEqual"/>


@@ -16,11 +16,9 @@
</requires>
<plugin id="org.talend.libraries.apache" download-size="0" install-size="0" version="0.0.0"/>
<plugin id="org.talend.libraries.apache.axis2" download-size="0" install-size="0" version="0.0.0"/>
<plugin id="org.talend.libraries.apache.batik" download-size="0" install-size="0" version="0.0.0"/>
<plugin id="org.talend.libraries.apache.chemistry" download-size="0" install-size="0" version="0.0.0"/>
<plugin id="org.talend.libraries.apache.common" download-size="0" install-size="0" version="0.0.0"/>
<plugin id="org.talend.libraries.apache.cxf" download-size="0" install-size="0" version="0.0.0"/>
<plugin id="org.talend.libraries.apache.google" download-size="0" install-size="0" version="0.0.0"/>
<plugin id="org.talend.libraries.apache.http" download-size="0" install-size="0" version="0.0.0"/>
<plugin id="org.talend.libraries.apache.lucene" download-size="0" install-size="0" version="0.0.0"/>
<plugin id="org.talend.libraries.apache.xml" download-size="0" install-size="0" version="0.0.0"/>
@@ -52,5 +50,4 @@
<plugin id="org.talend.libraries.slf4j" download-size="0" install-size="0" version="0.0.0"/>
<plugin id="org.talend.libraries.xml" download-size="0" install-size="0" version="0.0.0"/>
<plugin id="org.talend.libraries.zmq" download-size="0" install-size="0" version="0.0.0"/>
<plugin id="org.talend.libraries.zookeeper" download-size="0" install-size="0" version="0.0.0"/>
</feature>


@@ -50,8 +50,11 @@ List<IConnection> allSubProcessConnection = codeGenArgument.getAllMainSubTreeCon
String cid = node.getUniqueName();
List<? extends INode> jobCatcherNodes = process.getNodesOfType("tJobStructureCatcher");
boolean enableLogStash = jobCatcherNodes != null && !jobCatcherNodes.isEmpty();
boolean logstashCurrent = !cid.startsWith("tJobStructureCatcher") && !cid.startsWith("talend") && enableLogStash;
boolean jobCatcherExists = jobCatcherNodes != null && !jobCatcherNodes.isEmpty();
INode jobCatcherNode = jobCatcherExists ? jobCatcherNodes.get(0) : null;
boolean enableLogStash = !Boolean.getBoolean("deactivate_extended_component_log") && jobCatcherExists;
boolean logstashCurrent = enableLogStash && !cid.startsWith("tJobStructureCatcher") && !cid.startsWith("talend");
if((codePart.equals(ECodePart.END))&&(stat || logstashCurrent)){
boolean iterateInVFComp = (node.getVirtualLinkTo() != null && node.getVirtualLinkTo() == EConnectionType.ITERATE);
@@ -88,10 +91,10 @@ if((codePart.equals(ECodePart.END))&&(stat || logstashCurrent)){
String sourceNodeId = source.getUniqueName();
String sourceLabel = ElementParameterParser.getValue(source, "__LABEL__");
String sourceNodeLabel = ((sourceLabel==null || "__UNIQUE_NAME__".equals(sourceLabel) || sourceLabel.contains("\"")) ? sourceNodeId : sourceLabel);
String sourceNodeLabel = ((sourceLabel==null || "__UNIQUE_NAME__".equals(sourceLabel) || sourceLabel.contains("\"")) ? sourceNodeId : sourceLabel.trim());
String targetLabel = ElementParameterParser.getValue(node, "__LABEL__");
String targetNodeLabel = ((targetLabel==null || "__UNIQUE_NAME__".equals(targetLabel) || targetLabel.contains("\"")) ? node.getUniqueName() : targetLabel);
String targetNodeLabel = ((targetLabel==null || "__UNIQUE_NAME__".equals(targetLabel) || targetLabel.contains("\"")) ? node.getUniqueName() : targetLabel.trim());
String sourceNodeComponent = source.getComponent().getName();
for (INode jobStructureCatcher : jobCatcherNodes) {
@@ -122,10 +125,10 @@ if((codePart.equals(ECodePart.END))&&(stat || logstashCurrent)){
String sourceNodeId = source.getUniqueName();
String sourceLabel = ElementParameterParser.getValue(source, "__LABEL__");
String sourceNodeLabel = ((sourceLabel==null || "__UNIQUE_NAME__".equals(sourceLabel) || sourceLabel.contains("\"")) ? sourceNodeId : sourceLabel);
String sourceNodeLabel = ((sourceLabel==null || "__UNIQUE_NAME__".equals(sourceLabel) || sourceLabel.contains("\"")) ? sourceNodeId : sourceLabel.trim());
String targetLabel = ElementParameterParser.getValue(node, "__LABEL__");
String targetNodeLabel = ((targetLabel==null || "__UNIQUE_NAME__".equals(targetLabel) || targetLabel.contains("\"")) ? node.getUniqueName() : targetLabel);
String targetNodeLabel = ((targetLabel==null || "__UNIQUE_NAME__".equals(targetLabel) || targetLabel.contains("\"")) ? node.getUniqueName() : targetLabel.trim());
String sourceNodeComponent = source.getComponent().getName();
for (INode jobStructureCatcher : jobCatcherNodes) {
@@ -164,6 +167,7 @@ if((codePart.equals(ECodePart.END))&&(stat || logstashCurrent)){
}
}
}
List<IMetadataTable> metadatas = node.getMetadataList();
if ((!node.isSubProcessStart())&&(NodeUtil.isDataAutoPropagated(node))) {
if (inputColName!=null) {
@@ -189,6 +193,117 @@ if((codePart.equals(ECodePart.END))&&(stat || logstashCurrent)){
}
}
}
//log runtime lineage
boolean enable_runtime_lineage_log = NodeUtil.isJobUsingRuntimeLineage(process) && jobCatcherExists && !cid.startsWith("tJobStructureCatcher") && !cid.startsWith("talend");
if(enable_runtime_lineage_log) {//}
List<? extends IConnection> outConns = node.getOutgoingConnections();
if(!outConns.isEmpty()) {
%>
if(tos_count_<%=node.getUniqueName() %> == 0) {
<%
//}
}
for (IConnection conn : outConns) {
if(!conn.getLineStyle().equals(EConnectionType.FLOW_MAIN) && !conn.getLineStyle().equals(EConnectionType.FLOW_MERGE) && !conn.getLineStyle().equals(EConnectionType.FLOW_REF)) {
continue;
}
IMetadataTable metadata = conn.getMetadataTable();
if (metadata==null) {
continue;
}
List<IMetadataColumn> columns = metadata.getListColumns();
if(columns == null || columns.isEmpty()) {
continue;
}
%>
class SchemaUtil_<%=conn.getUniqueName()%>_<%=conn.getMetadataTable().getTableName()%> {
public java.util.List<java.util.Map<String, String>> getSchema(final <%=NodeUtil.getPrivateConnClassName(conn) %>Struct <%=conn.getName()%>) {
java.util.List<java.util.Map<String, String>> schema = new java.util.ArrayList<>();
if(<%=conn.getName()%> == null) {
return schema;
}
java.util.Map<String, String> field = null;
<%
for(IMetadataColumn column : columns){
if("id_Dynamic".equals(column.getTalendType())) {
%>
routines.system.Dynamic dynamic = <%=conn.getName()%>.<%=column.getLabel()%>;
if(dynamic != null) {
for(routines.system.DynamicMetadata metadata : dynamic.metadatas) {
field = new java.util.HashMap<>();
field.put("name", metadata.getName());
field.put("origin_name", metadata.getDbName());
field.put("iskey", "" + metadata.isKey());
field.put("talend_type", metadata.getType());
field.put("type", metadata.getDbType());
field.put("nullable", "" + metadata.isNullable());
field.put("pattern", metadata.getFormat());
field.put("length", "" + metadata.getLength());
field.put("precision", "" + metadata.getPrecision());
schema.add(field);
}
}
<%
continue;
}
String pattern = column.getPattern();
if(pattern == null || pattern.isEmpty() || pattern.equals("\"\"")) {
pattern = "\"\"";
}
%>
field = new java.util.HashMap<>();
field.put("name", "<%=column.getLabel()%>");
field.put("origin_name", "<%=column.getOriginalDbColumnName()%>");
field.put("iskey", "<%=column.isKey()%>");
field.put("talend_type", "<%=column.getTalendType()%>");
field.put("type", "<%=column.getType()%>");
field.put("nullable", "<%=column.isNullable()%>");
field.put("pattern", <%=pattern%>);
field.put("length", "<%=column.getLength()%>");
field.put("precision", "<%=column.getPrecision()%>");
schema.add(field);
<%
}
%>
return schema;
}
}
java.util.List<java.util.Map<String, String>> schema_<%=conn.getUniqueName()%>_<%=conn.getMetadataTable().getTableName()%> = new SchemaUtil_<%=conn.getUniqueName()%>_<%=conn.getMetadataTable().getTableName()%>().getSchema(<%=conn.getName()%>);
<%
INode target = conn.getTarget();
String targetNodeId = target.getUniqueName();
String targetNodeComponent = target.getComponent().getName();
%>
<%=jobCatcherNode.getUniqueName()%>.addConnectionSchemaMessage("<%=node.getUniqueName()%>","<%=node.getComponent().getName()%>",
"<%=targetNodeId%>","<%=targetNodeComponent%>", "<%=conn.getUniqueName()%>" + iterateId, schema_<%=conn.getUniqueName()%>_<%=conn.getMetadataTable().getTableName()%>);
<%=jobCatcherNode.getDesignSubjobStartNode().getUniqueName() %>Process(globalMap);
<%
}
if(!outConns.isEmpty()) {
//{
%>
}
<%
}
//{
}
//======================================TDI-17183 end=====================================
boolean traceCodeGenerated = false;
for (IConnection conn : node.getOutgoingConnections()) {
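The `!Boolean.getBoolean("deactivate_extended_component_log")` guard introduced in this hunk is a JVM-property kill switch, not a string parse: `Boolean.getBoolean(name)` returns `true` only when the system property `name` is set and equals `"true"` (case-insensitive); anything else, including an unset or malformed property, yields `false`. A minimal standalone sketch (class and method names are illustrative, only the property name comes from the diff):

```java
// Sketch of the extended-log kill switch used by the generated code.
public class LogSwitchDemo {

    // Mirrors the diff's guard: logging stays ON unless the user explicitly
    // sets -Ddeactivate_extended_component_log=true on the JVM command line.
    public static boolean extendedLogEnabled() {
        return !Boolean.getBoolean("deactivate_extended_component_log");
    }

    public static void main(String[] args) {
        // Property unset: Boolean.getBoolean(...) is false, so logging is enabled.
        System.out.println(extendedLogEnabled());

        // Exactly "true" (case-insensitive) disables it.
        System.setProperty("deactivate_extended_component_log", "true");
        System.out.println(extendedLogEnabled());

        // Any other value ("yes", "1", ...) does NOT count as true.
        System.setProperty("deactivate_extended_component_log", "yes");
        System.out.println(extendedLogEnabled());
    }
}
```

This is why the switch is safe as a default-on feature: a typo in the property value simply leaves the extended component log active.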


@@ -8,6 +8,9 @@
org.talend.core.model.metadata.IMetadataColumn
org.talend.core.model.process.EConnectionType
org.talend.core.model.process.ElementParameterParser
org.talend.core.model.process.EParameterFieldType
org.talend.designer.core.model.components.EParameterName
org.talend.designer.core.model.components.ElementParameter
org.talend.designer.codegen.config.CodeGeneratorArgument
org.talend.core.model.utils.NodeUtil
org.talend.core.model.process.IConnectionCategory
@@ -140,10 +143,14 @@
connSet.addAll(node.getIncomingConnections(EConnectionType.FLOW_MAIN));
connSet.addAll(node.getIncomingConnections(EConnectionType.FLOW_MERGE));
List<? extends INode> jobCatcherNodes = node.getProcess().getNodesOfType("tJobStructureCatcher");
boolean enableLogStash = jobCatcherNodes != null && !jobCatcherNodes.isEmpty();
String cid = node.getUniqueName();
boolean logstashCurrent = !cid.startsWith("tJobStructureCatcher") && !cid.startsWith("talend") && enableLogStash;
List<? extends INode> jobCatcherNodes = node.getProcess().getNodesOfType("tJobStructureCatcher");
boolean jobCatcherExists = jobCatcherNodes != null && !jobCatcherNodes.isEmpty();
INode jobCatcherNode = jobCatcherExists ? jobCatcherNodes.get(0) : null;
boolean enableLogStash = !Boolean.getBoolean("deactivate_extended_component_log") && jobCatcherExists;
boolean logstashCurrent = enableLogStash && !cid.startsWith("tJobStructureCatcher") && !cid.startsWith("talend");
//about performance monitor, no way to support more than one job catcher component, also that is not necessary
final String subprocessName4Catcher = logstashCurrent ? jobCatcherNodes.get(0).getDesignSubjobStartNode().getUniqueName() : null;
@@ -190,10 +197,10 @@
String sourceNodeId = source.getUniqueName();
String sourceLabel = ElementParameterParser.getValue(source, "__LABEL__");
String sourceNodeLabel = ((sourceLabel==null || "__UNIQUE_NAME__".equals(sourceLabel) || sourceLabel.contains("\"")) ? sourceNodeId : sourceLabel);
String sourceNodeLabel = ((sourceLabel==null || "__UNIQUE_NAME__".equals(sourceLabel) || sourceLabel.contains("\"")) ? sourceNodeId : sourceLabel.trim());
String targetLabel = ElementParameterParser.getValue(node, "__LABEL__");
String targetNodeLabel = ((targetLabel==null || "__UNIQUE_NAME__".equals(targetLabel) || targetLabel.contains("\"")) ? node.getUniqueName() : targetLabel);
String targetNodeLabel = ((targetLabel==null || "__UNIQUE_NAME__".equals(targetLabel) || targetLabel.contains("\"")) ? node.getUniqueName() : targetLabel.trim());
String sourceNodeComponent = source.getComponent().getName();
%>
@@ -226,10 +233,10 @@
String sourceNodeId = source.getUniqueName();
String sourceLabel = ElementParameterParser.getValue(source, "__LABEL__");
String sourceNodeLabel = ((sourceLabel==null || "__UNIQUE_NAME__".equals(sourceLabel) || sourceLabel.contains("\"")) ? sourceNodeId : sourceLabel);
String sourceNodeLabel = ((sourceLabel==null || "__UNIQUE_NAME__".equals(sourceLabel) || sourceLabel.contains("\"")) ? sourceNodeId : sourceLabel.trim());
String targetLabel = ElementParameterParser.getValue(node, "__LABEL__");
String targetNodeLabel = ((targetLabel==null || "__UNIQUE_NAME__".equals(targetLabel) || targetLabel.contains("\"")) ? node.getUniqueName() : targetLabel);
String targetNodeLabel = ((targetLabel==null || "__UNIQUE_NAME__".equals(targetLabel) || targetLabel.contains("\"")) ? node.getUniqueName() : targetLabel.trim());
String sourceNodeComponent = source.getComponent().getName();
%>
@@ -253,10 +260,10 @@
String sourceNodeId = source.getUniqueName();
String sourceLabel = ElementParameterParser.getValue(source, "__LABEL__");
String sourceNodeLabel = ((sourceLabel==null || "__UNIQUE_NAME__".equals(sourceLabel) || sourceLabel.contains("\"")) ? sourceNodeId : sourceLabel);
String sourceNodeLabel = ((sourceLabel==null || "__UNIQUE_NAME__".equals(sourceLabel) || sourceLabel.contains("\"")) ? sourceNodeId : sourceLabel.trim());
String targetLabel = ElementParameterParser.getValue(node, "__LABEL__");
String targetNodeLabel = ((targetLabel==null || "__UNIQUE_NAME__".equals(targetLabel) || targetLabel.contains("\"")) ? node.getUniqueName() : targetLabel);
String targetNodeLabel = ((targetLabel==null || "__UNIQUE_NAME__".equals(targetLabel) || targetLabel.contains("\"")) ? node.getUniqueName() : targetLabel.trim());
String sourceNodeComponent = source.getComponent().getName();
@@ -289,10 +296,10 @@
String sourceNodeId = source.getUniqueName();
String sourceLabel = ElementParameterParser.getValue(source, "__LABEL__");
String sourceNodeLabel = ((sourceLabel==null || "__UNIQUE_NAME__".equals(sourceLabel) || sourceLabel.contains("\"")) ? sourceNodeId : sourceLabel);
String sourceNodeLabel = ((sourceLabel==null || "__UNIQUE_NAME__".equals(sourceLabel) || sourceLabel.contains("\"")) ? sourceNodeId : sourceLabel.trim());
String targetLabel = ElementParameterParser.getValue(node, "__LABEL__");
String targetNodeLabel = ((targetLabel==null || "__UNIQUE_NAME__".equals(targetLabel) || targetLabel.contains("\"")) ? node.getUniqueName() : targetLabel);
String targetNodeLabel = ((targetLabel==null || "__UNIQUE_NAME__".equals(targetLabel) || targetLabel.contains("\"")) ? node.getUniqueName() : targetLabel.trim());
String sourceNodeComponent = source.getComponent().getName();
@@ -331,11 +338,104 @@
<%
log.startWork();
log.logCompSetting();
boolean enable_runtime_lineage_log = NodeUtil.isJobUsingRuntimeLineage(node.getProcess()) && jobCatcherExists && !cid.startsWith("tJobStructureCatcher") && !cid.startsWith("talend");
if(enable_runtime_lineage_log) {
%>
class ParameterUtil_<%=cid%>{
public java.util.Map<String, String> getParameter() throws Exception{
java.util.Map<String, String> component_parameters = new java.util.HashMap<>();
<%
java.util.Set<org.talend.core.model.process.EParameterFieldType> ignoredParamsTypes = new java.util.HashSet<org.talend.core.model.process.EParameterFieldType>();
ignoredParamsTypes.addAll(
java.util.Arrays.asList(
org.talend.core.model.process.EParameterFieldType.SCHEMA_TYPE,
org.talend.core.model.process.EParameterFieldType.SCHEMA_REFERENCE,
org.talend.core.model.process.EParameterFieldType.LABEL,
org.talend.core.model.process.EParameterFieldType.EXTERNAL,
org.talend.core.model.process.EParameterFieldType.MAPPING_TYPE,
org.talend.core.model.process.EParameterFieldType.IMAGE,
org.talend.core.model.process.EParameterFieldType.TNS_EDITOR,
org.talend.core.model.process.EParameterFieldType.WSDL2JAVA,
org.talend.core.model.process.EParameterFieldType.GENERATEGRAMMARCONTROLLER,
org.talend.core.model.process.EParameterFieldType.GENERATE_SURVIVORSHIP_RULES_CONTROLLER,
org.talend.core.model.process.EParameterFieldType.REFRESH_REPORTS,
org.talend.core.model.process.EParameterFieldType.BROWSE_REPORTS,
org.talend.core.model.process.EParameterFieldType.PALO_DIM_SELECTION,
org.talend.core.model.process.EParameterFieldType.GUESS_SCHEMA,
org.talend.core.model.process.EParameterFieldType.MATCH_RULE_IMEX_CONTROLLER,
org.talend.core.model.process.EParameterFieldType.MEMO_PERL,
org.talend.core.model.process.EParameterFieldType.DBTYPE_LIST,
org.talend.core.model.process.EParameterFieldType.VERSION,
org.talend.core.model.process.EParameterFieldType.TECHNICAL,
org.talend.core.model.process.EParameterFieldType.ICON_SELECTION,
org.talend.core.model.process.EParameterFieldType.JAVA_COMMAND,
org.talend.core.model.process.EParameterFieldType.TREE_TABLE,
org.talend.core.model.process.EParameterFieldType.VALIDATION_RULE_TYPE,
org.talend.core.model.process.EParameterFieldType.DCSCHEMA,
org.talend.core.model.process.EParameterFieldType.SURVIVOR_RELATION,
org.talend.core.model.process.EParameterFieldType.REST_RESPONSE_SCHEMA_TYPE,
org.talend.core.model.process.EParameterFieldType.BUTTON
)
);
for(org.talend.core.model.process.IElementParameter ep : org.talend.core.model.utils.NodeUtil.getDisplayedParameters(node)){
if(!ep.isLog4JEnabled() || ignoredParamsTypes.contains(ep.getFieldType())){
continue;
}
ElementParameter p = (ElementParameter)ep;
Object pluginValue = p.getTaggedValue("org.talend.sdk.component.source");
if(pluginValue != null && String.class.cast(pluginValue).equalsIgnoreCase("tacokit")) {
try {
if (!(Boolean) org.talend.core.runtime.IAdditionalInfo.class.cast(p).func("isPersisted")) {
continue;
}
} catch (Exception ex) {
//do nothing
}
%>
<%@ include file="./tacokit_runtime_log.javajet"%>
<%
continue;
}
String name = ep.getName();
java.util.Set<String> ignoredParamsNames = new java.util.HashSet<String>();
ignoredParamsNames.add("SQLPATTERN_VALUE");
ignoredParamsNames.add("ADDITIONAL_INSERT_COLUMNS");
ignoredParamsNames.add("ADDITIONAL_UPDATE_COLUMNS");
ignoredParamsNames.add("SELECTION_TABLE");
ignoredParamsNames.add("DIFFER_MESSAGE");
ignoredParamsNames.add("NO_DIFFER_MESSAGE");
if(ignoredParamsNames.contains(name)) {
//do nothing
} else if(org.talend.core.model.process.EParameterFieldType.PASSWORD.equals(ep.getFieldType())){
//not log password
}else{
String value = org.talend.core.model.utils.NodeUtil.getRuntimeParameterValue(node, ep);
%>
component_parameters.put("<%=name%>", String.valueOf(<%=value%>));
<%
}
}
%>
return component_parameters;
}
}
<%=jobCatcherNode.getUniqueName()%>.addComponentParameterMessage("<%=node.getUniqueName()%>", "<%=node.getComponent().getName()%>",
new ParameterUtil_<%=cid%>().getParameter());
<%=jobCatcherNode.getDesignSubjobStartNode().getUniqueName() %>Process(globalMap);
<%
}
if(logstashCurrent) {
for (INode jobStructureCatcher : jobCatcherNodes) {
String label = ElementParameterParser.getValue(node, "__LABEL__");
String nodeLabel = ((label==null || "__UNIQUE_NAME__".equals(label) || label.contains("\"")) ? node.getUniqueName() : label);
String nodeLabel = ((label==null || "__UNIQUE_NAME__".equals(label) || label.contains("\"")) ? node.getUniqueName() : label.trim());
%>
if(enableLogStash) {
<%=jobStructureCatcher.getUniqueName() %>.addCM("<%=node.getUniqueName()%>", "<%=nodeLabel%>", "<%=node.getComponent().getName()%>");

@@ -24,6 +24,7 @@
org.talend.core.model.process.ProcessUtils
org.talend.core.model.components.IComponent
org.talend.core.model.components.EComponentType
org.talend.core.model.utils.NodeUtil
"
class="Footer"
skeleton="footer_java.skeleton"
@@ -125,8 +126,16 @@
boolean exist_tSCP = false;
List<INode> scpComponentsList = (List<INode>)process.getNodesOfType("tSCPConnection");
if (scpComponentsList.size() > 0) {
String parameterNames = "";
int scpsize = scpComponentsList.size();
if (scpsize > 0) {
exist_tSCP = true;
for (int i = 0; i < scpsize; i++) {
parameterNames += "\"conn_" + scpComponentsList.get(i).getUniqueName() + "\"";
if(i < scpsize-1){
parameterNames += ",";
}
}
}
boolean exist_tCassandra = false;
@@ -518,8 +527,63 @@
break;
}
}
if(jobCatcherNode!=null) {
String location = ElementParameterParser.getValue(jobCatcherNode, "__LOCATION__");
boolean enableLogStash = !Boolean.getBoolean("deactivate_extended_component_log") && (jobCatcherNode!=null);
boolean enable_runtime_lineage_log = NodeUtil.isJobUsingRuntimeLineage(process) && (jobCatcherNode!=null);
if(enable_runtime_lineage_log) {
%>
java.util.Properties p_<%=jobCatcherNode.getUniqueName()%> = new java.util.Properties();
p_<%=jobCatcherNode.getUniqueName()%>.setProperty("root.logger", "runtime");
p_<%=jobCatcherNode.getUniqueName()%>.setProperty("encoding", "UTF-8");
p_<%=jobCatcherNode.getUniqueName()%>.setProperty("application.name", "Talend Studio");
p_<%=jobCatcherNode.getUniqueName()%>.setProperty("service.name", "Talend Studio Job");
p_<%=jobCatcherNode.getUniqueName()%>.setProperty("instance.name", "Talend Studio Job Instance");
p_<%=jobCatcherNode.getUniqueName()%>.setProperty("propagate.appender.exceptions", "none");
p_<%=jobCatcherNode.getUniqueName()%>.setProperty("log.appender", "file");
p_<%=jobCatcherNode.getUniqueName()%>.setProperty("appender.file.path", "runtime.json");
p_<%=jobCatcherNode.getUniqueName()%>.setProperty("appender.file.maxsize", "52428800");
p_<%=jobCatcherNode.getUniqueName()%>.setProperty("appender.file.maxbackup", "20");
p_<%=jobCatcherNode.getUniqueName()%>.setProperty("host", "false");
final String runtime_dir_<%=jobCatcherNode.getUniqueName()%> = System.getProperty("runtime.lineage.outputpath");
final String runtime_path_<%=jobCatcherNode.getUniqueName()%> = System.getProperty("runtime.lineage.appender.file.path");
if(runtime_path_<%=jobCatcherNode.getUniqueName()%>==null || runtime_path_<%=jobCatcherNode.getUniqueName()%>.isEmpty()) {
if(runtime_dir_<%=jobCatcherNode.getUniqueName()%>!=null && !runtime_dir_<%=jobCatcherNode.getUniqueName()%>.isEmpty()) {
System.setProperty("runtime.lineage.appender.file.path",
new StringBuilder().append(runtime_dir_<%=jobCatcherNode.getUniqueName()%>)
.append((runtime_dir_<%=jobCatcherNode.getUniqueName()%>.endsWith("/") || runtime_dir_<%=jobCatcherNode.getUniqueName()%>.endsWith("\\")) ? "" : java.io.File.separator)
.append(projectName)
.append(java.io.File.separatorChar)
.append(jobName)
.append(java.io.File.separatorChar)
.append(jobVersion)
.append(java.io.File.separatorChar)
.append("runtime_log_")
.append(new java.text.SimpleDateFormat("yyyyMMddHHmmss").format(new java.util.Date()))
.append(".json")
.toString()
);
}
}
System.getProperties().stringPropertyNames().stream()
.filter(it -> it.startsWith("runtime.lineage.") && !"runtime.lineage.outputpath".equals(it))
.forEach(key -> p_<%=jobCatcherNode.getUniqueName()%>.setProperty(key.substring("runtime.lineage.".length()), System.getProperty(key)));
<%if(isLog4j1Enabled) {%>
org.apache.log4j.Logger.getLogger(p_<%=jobCatcherNode.getUniqueName()%>.getProperty("root.logger")).setLevel(org.apache.log4j.Level.DEBUG);
<%}%>
<%if(isLog4j2Enabled) {%>
org.apache.logging.log4j.core.config.Configurator.setLevel(p_<%=jobCatcherNode.getUniqueName()%>.getProperty("root.logger"), org.apache.logging.log4j.Level.DEBUG);
<%}%>
runtime_lineage_logger_<%=jobCatcherNode.getUniqueName()%> = org.talend.job.audit.JobEventAuditLoggerFactory.createJobAuditLogger(p_<%=jobCatcherNode.getUniqueName()%>);
<%
}
if(enableLogStash) {
String location = ElementParameterParser.getValue(jobCatcherNode, "__LOCATION__");
%>
if(enableLogStash) {
java.util.Properties properties_<%=jobCatcherNode.getUniqueName()%> = new java.util.Properties();
@@ -550,7 +614,7 @@
}
<%
}
%>
%>
if(clientHost == null) {
clientHost = defaultClientHost;
@@ -586,6 +650,16 @@
}
%>
boolean inOSGi = routines.system.BundleUtils.inOSGi();
if (inOSGi) {
java.util.Dictionary<String, Object> jobProperties = routines.system.BundleUtils.getJobProperties(jobName);
if (jobProperties != null) {
contextStr = (String)jobProperties.get("context");
}
}
try {
//call job/subjob with an existing context, like: --context=production. if without this parameter, there will use the default context instead.
java.io.InputStream inContext = <%=className%>.class.getClassLoader().getResourceAsStream("<%=jobClassPackageFolder%>/contexts/" + contextStr + ".properties");
@@ -593,13 +667,15 @@
inContext = <%=className%>.class.getClassLoader().getResourceAsStream("config/contexts/" + contextStr + ".properties");
}
if (inContext != null) {
//defaultProps is in order to keep the original context value
if(context != null && context.isEmpty()) {
try {
//defaultProps is in order to keep the original context value
if(context != null && context.isEmpty()) {
defaultProps.load(inContext);
context = new ContextProperties(defaultProps);
}
} finally {
inContext.close();
}
inContext.close();
} else if (!isDefaultContext) {
//print info and job continue to run, for case: context_param is not empty.
System.err.println("Could not find the context " + contextStr);
@@ -665,34 +741,39 @@
<%
} else if(typeToGenerate.equals("java.util.Date")) {
%>
try{
String context_<%=ctxParam.getName()%>_value = context.getProperty("<%=ctxParam.getName()%>");
if (context_<%=ctxParam.getName()%>_value == null){
context_<%=ctxParam.getName()%>_value = "";
}
int context_<%=ctxParam.getName()%>_pos = context_<%=ctxParam.getName()%>_value.indexOf(";");
String context_<%=ctxParam.getName()%>_pattern = "yyyy-MM-dd HH:mm:ss";
if(context_<%=ctxParam.getName()%>_pos > -1){
context_<%=ctxParam.getName()%>_pattern = context_<%=ctxParam.getName()%>_value.substring(0, context_<%=ctxParam.getName()%>_pos);
context_<%=ctxParam.getName()%>_value = context_<%=ctxParam.getName()%>_value.substring(context_<%=ctxParam.getName()%>_pos + 1);
}
try{
if (context_<%=ctxParam.getName()%>_value == null){
context_<%=ctxParam.getName()%>_value = "";
}
int context_<%=ctxParam.getName()%>_pos = context_<%=ctxParam.getName()%>_value.indexOf(";");
String context_<%=ctxParam.getName()%>_pattern = "yyyy-MM-dd HH:mm:ss";
if(context_<%=ctxParam.getName()%>_pos > -1){
context_<%=ctxParam.getName()%>_pattern = context_<%=ctxParam.getName()%>_value.substring(0, context_<%=ctxParam.getName()%>_pos);
context_<%=ctxParam.getName()%>_value = context_<%=ctxParam.getName()%>_value.substring(context_<%=ctxParam.getName()%>_pos + 1);
}
context.<%=ctxParam.getName()%>=(java.util.Date)(new java.text.SimpleDateFormat(context_<%=ctxParam.getName()%>_pattern).parse(context_<%=ctxParam.getName()%>_value));
context.<%=ctxParam.getName()%>=(java.util.Date)(new java.text.SimpleDateFormat(context_<%=ctxParam.getName()%>_pattern).parse(context_<%=ctxParam.getName()%>_value));
} catch(ParseException e) {
} catch(ParseException e) {
try { <% /*try to check if date passed as long also*/ %>
long context_<%=ctxParam.getName()%>_longValue = Long.valueOf(context_<%=ctxParam.getName()%>_value);
context.<%=ctxParam.getName()%> = new java.util.Date(context_<%=ctxParam.getName()%>_longValue);
} catch (NumberFormatException cantParseToLongException) {
<%
if (isLog4jEnabled) {
if (isLog4jEnabled) {
%>
log.warn(String.format("<%=warningMessageFormat %>", "<%=ctxParam.getName() %>", e.getMessage()));
log.warn(String.format("<%=warningMessageFormat %>", "<%=ctxParam.getName() %>", "Can't parse date string: " + e.getMessage() + " and long: " + cantParseToLongException.getMessage()));
<%
} else {
} else {
%>
System.err.println(String.format("<%=warningMessageFormat %>", "<%=ctxParam.getName() %>", e.getMessage()));
System.err.println(String.format("<%=warningMessageFormat %>", "<%=ctxParam.getName() %>", "Can't parse date string: " + e.getMessage() + " and long: " + cantParseToLongException.getMessage()));
<%
}
%>
context.<%=ctxParam.getName()%>=null;
}
}
%>
context.<%=ctxParam.getName()%>=null;
}
<%
} else if(typeToGenerate.equals("Object")||typeToGenerate.equals("String")||typeToGenerate.equals("java.lang.String")) {
%>
@@ -946,7 +1027,7 @@ this.globalResumeTicket = true;//to run tPreJob
<%
}
if(jobCatcherNode!=null) {
if(enableLogStash) {
%>
if(enableLogStash) {
<%=jobCatcherNode.getUniqueName() %>.addJobStartMessage();
@@ -1099,7 +1180,7 @@ this.globalResumeTicket = true;//to run tPostJob
<%
}
if(jobCatcherNode!=null) {
if(enableLogStash) {
%>
if(enableLogStash) {
<%=jobCatcherNode.getUniqueName() %>.addJobEndMessage(startTime, end, status);
@@ -1117,9 +1198,12 @@ this.globalResumeTicket = true;//to run tPostJob
closeJmsConnections();
<% } %>
<% if (exist_tSCP) { %>
closeScpConnections();
<% } %>
<% if (exist_tSCP) {
%>
closeCloseableConnections(<%=parameterNames%>);
<%
}
%>
<%
if (stats) {
@@ -1138,6 +1222,24 @@ if (execStat) {
}
%>
int returnCode = 0;
<%
if (isRunInMultiThread) {
%>
Integer localErrorCode = (Integer)(((java.util.Map)threadLocal.get()).get("errorCode"));
String localStatus = (String)(((java.util.Map)threadLocal.get()).get("status"));
if (localErrorCode != null) {
if (errorCode == null || localErrorCode.compareTo(errorCode) > 0) {
errorCode = localErrorCode;
}
}
if (localStatus != null && !status.equals("failure")){
status = localStatus;
}
<%
}
%>
if(errorCode == null) {
returnCode = status != null && status.equals("failure") ? 1 : 0;
} else {
@@ -1155,7 +1257,7 @@ if (execStat) {
closeJmsConnections();
<% } %>
<% if(exist_tSCP) { %>
closeScpConnections();
closeCloseableConnections(<%=parameterNames%>);
<% } %>
<% if (exist_tSQLDB) { %>
closeSqlDbConnections();
@@ -1223,22 +1325,17 @@ if (execStat) {
<%
if(exist_tSCP) {
%>
private void closeScpConnections() {
try {
Object obj_conn;
<%
for (INode scpNode : scpComponentsList) {
%>
obj_conn = globalMap.remove("conn_<%=scpNode.getUniqueName() %>");
if (null != obj_conn) {
((ch.ethz.ssh2.Connection) obj_conn).close();
private void closeCloseableConnections(String... names) {
java.util.Arrays.stream(names).forEach(name-> {
try {
Object obj_conn = globalMap.remove(name);
if(obj_conn != null){
((java.io.Closeable)obj_conn).close();
}
} catch (IOException ioException) {
}
<%
}
%>
} catch (java.lang.Exception e) {
}
}
});
}
<%
}
%>

@@ -56,10 +56,28 @@ if ((metadatas != null) && (metadatas.size() > 0)) { // metadata
// Set up the component definition, and the properties for all types of
// components.
List<? extends IConnection> allInLineJobConns = NodeUtil.getFirstIncomingLineConnectionsOfType(node, "tRESTRequestIn");
%>
boolean doesNodeBelongToRequest_<%=cid%> = <%= allInLineJobConns.size() %> == 0;
@SuppressWarnings("unchecked")
java.util.Map<String, Object> restRequest_<%=cid%> = (java.util.Map<String, Object>)globalMap.get("restRequest");
String currentTRestRequestOperation_<%=cid%> = (String)(restRequest_<%=cid%> != null ? restRequest_<%=cid%>.get("OPERATION") : null);
<%
for (IConnection inLineConn : allInLineJobConns) {
%>
if("<%= inLineConn.getName() %>".equals(currentTRestRequestOperation_<%=cid%>)) {
doesNodeBelongToRequest_<%=cid%> = true;
}
<%
}
%>
org.talend.components.api.component.ComponentDefinition def_<%=cid %> =
new <%= def.getClass().getName()%>();
org.talend.components.api.component.runtime.Writer writer_<%=cid%> = null;
org.talend.components.api.component.runtime.Reader reader_<%=cid%> = null;
<%
List<Component.CodegenPropInfo> propsToProcess = component.getCodegenPropInfos(componentProps);
%>
@@ -145,7 +163,7 @@ globalMap.put("TALEND_COMPONENTS_VERSION", "<%=component.getVersion()%>");
boolean isParallelize ="true".equalsIgnoreCase(ElementParameterParser.getValue(node, "__PARALLELIZE__"));
if (isParallelize) {
%>
final String buffersSizeKey_<%=cid%> = "buffersSizeKey_<%=cid%>_" + Thread.currentThread().getId();
final String buffersSizeKey_<%=cid%> = "buffersSizeKey_<%=cid%>_" + Thread.currentThread().getId();
<%
}
%>
@@ -215,9 +233,11 @@ if(componentRuntime_<%=cid%> instanceof org.talend.components.api.component.runt
org.talend.components.api.component.runtime.SourceOrSink sourceOrSink_<%=cid%> = null;
if(componentRuntime_<%=cid%> instanceof org.talend.components.api.component.runtime.SourceOrSink) {
sourceOrSink_<%=cid%> = (org.talend.components.api.component.runtime.SourceOrSink)componentRuntime_<%=cid%>;
org.talend.daikon.properties.ValidationResult vr_<%=cid%> = sourceOrSink_<%=cid%>.validate(container_<%=cid%>);
if (vr_<%=cid%>.getStatus() == org.talend.daikon.properties.ValidationResult.Result.ERROR ) {
throw new RuntimeException(vr_<%=cid%>.getMessage());
if (doesNodeBelongToRequest_<%=cid%>) {
org.talend.daikon.properties.ValidationResult vr_<%=cid%> = sourceOrSink_<%=cid%>.validate(container_<%=cid%>);
if (vr_<%=cid%>.getStatus() == org.talend.daikon.properties.ValidationResult.Result.ERROR ) {
throw new RuntimeException(vr_<%=cid%>.getMessage());
}
}
}
@@ -240,11 +260,11 @@ if(isTopologyNone) {
if (hasOutputOnly || asInputComponent) {
%>
org.talend.components.api.component.runtime.Source source_<%=cid%> =
(org.talend.components.api.component.runtime.Source)sourceOrSink_<%=cid%>;
org.talend.components.api.component.runtime.Reader reader_<%=cid%> =
source_<%=cid%>.createReader(container_<%=cid%>);
reader_<%=cid%> = new org.talend.codegen.flowvariables.runtime.FlowVariablesReader(reader_<%=cid%>, container_<%=cid%>);
if (sourceOrSink_<%=cid%> instanceof org.talend.components.api.component.runtime.Source) {
org.talend.components.api.component.runtime.Source source_<%=cid%> =
(org.talend.components.api.component.runtime.Source)sourceOrSink_<%=cid%>;
reader_<%=cid%> = source_<%=cid%>.createReader(container_<%=cid%>);
reader_<%=cid%> = new org.talend.codegen.flowvariables.runtime.FlowVariablesReader(reader_<%=cid%>, container_<%=cid%>);
<%
IConnection main = null;
@@ -266,19 +286,19 @@ if (hasOutputOnly || asInputComponent) {
IConnection schemaSourceConnector = (main!=null ? main : reject);
String schemaSourceConnectorName = schemaSourceConnector.getMetadataTable().getAttachedConnector();
%>
boolean multi_output_is_allowed_<%=cid%> = false;
boolean multi_output_is_allowed_<%=cid%> = false;
<% //take care SourceOrSink.validate will change the schema if it contains include-all-fields, so need to get design Avro schema before validate %>
org.talend.components.api.component.Connector c_<%=cid%> = null;
for (org.talend.components.api.component.Connector currentConnector : props_<%=cid %>.getAvailableConnectors(null, true)) {
if (currentConnector.getName().equals("<%=schemaSourceConnectorName%>")) {
c_<%=cid%> = currentConnector;
}
org.talend.components.api.component.Connector c_<%=cid%> = null;
for (org.talend.components.api.component.Connector currentConnector : props_<%=cid %>.getAvailableConnectors(null, true)) {
if (currentConnector.getName().equals("<%=schemaSourceConnectorName%>")) {
c_<%=cid%> = currentConnector;
}
if (currentConnector.getName().equals("REJECT")) {//it's better to move the code to javajet
multi_output_is_allowed_<%=cid%> = true;
if (currentConnector.getName().equals("REJECT")) {//it's better to move the code to javajet
multi_output_is_allowed_<%=cid%> = true;
}
}
}
org.apache.avro.Schema schema_<%=cid%> = props_<%=cid %>.getSchema(c_<%=cid%>, true);
org.apache.avro.Schema schema_<%=cid%> = props_<%=cid %>.getSchema(c_<%=cid%>, true);
<%
irToRow = new IndexedRecordToRowStructGenerator(cid, null, columnList);
@@ -286,117 +306,123 @@ if (hasOutputOnly || asInputComponent) {
}
%>
// Iterate through the incoming data.
boolean available_<%=cid%> = reader_<%=cid%>.start();
// Iterate through the incoming data.
boolean available_<%=cid%> = reader_<%=cid%>.start();
resourceMap.put("reader_<%=cid%>", reader_<%=cid%>);
resourceMap.put("reader_<%=cid%>", reader_<%=cid%>);
for (; available_<%=cid%>; available_<%=cid%> = reader_<%=cid%>.advance()) {
nb_line_<%=cid %>++;
for (; available_<%=cid%>; available_<%=cid%> = reader_<%=cid%>.advance()) {
nb_line_<%=cid %>++;
<%if(hasDataOutput) {%>
if (multi_output_is_allowed_<%=cid%>) {
<%if(main!=null){%>
<%=main.getName()%> = null;
<%}%>
<%if(hasDataOutput) {%>
if (multi_output_is_allowed_<%=cid%>) {
<%if(main!=null){%>
<%=main.getName()%> = null;
<%}%>
<%if(reject!=null){%>
<%=reject.getName()%> = null;
<%}%>
}
<%}%>
<%if(reject!=null){%>
<%=reject.getName()%> = null;
<%}%>
}
<%}%>
try {
Object data_<%=cid%> = reader_<%=cid%>.getCurrent();
<%
if (main != null) {
%>
try {
Object data_<%=cid%> = reader_<%=cid%>.getCurrent();
<%
if (main != null) {
%>
if(multi_output_is_allowed_<%=cid%>) {
<%=main.getName()%> = new <%=main.getName() %>Struct();
}
if(multi_output_is_allowed_<%=cid%>) {
<%=main.getName()%> = new <%=main.getName() %>Struct();
}
<%
irToRow.generateConvertRecord("data_" + cid, main.getName(), main.getMetadataTable().getListColumns());
}
%>
} catch (org.talend.components.api.exception.DataRejectException e_<%=cid%>) {
java.util.Map<String,Object> info_<%=cid%> = e_<%=cid%>.getRejectInfo();
<%
if (reject!=null) {
%>
Object data_<%=cid%> = info_<%=cid%>.get("talend_record");
<%
irToRow.generateConvertRecord("data_" + cid, main.getName(), main.getMetadataTable().getListColumns());
}
%>
} catch (org.talend.components.api.exception.DataRejectException e_<%=cid%>) {
java.util.Map<String,Object> info_<%=cid%> = e_<%=cid%>.getRejectInfo();
<%
if (reject!=null) {
%>
Object data_<%=cid%> = info_<%=cid%>.get("talend_record");
if (multi_output_is_allowed_<%=cid%>) {
<%=reject.getName()%> = new <%=reject.getName() %>Struct();
}
try{
<%
irToRow.generateConvertRecord("data_" + cid, reject.getName());
%>
}catch(java.lang.Exception e){
// do nothing
}
<%
Set<String> commonColumns = new HashSet<String>();
if (multi_output_is_allowed_<%=cid%>) {
<%=reject.getName()%> = new <%=reject.getName() %>Struct();
}
try{
<%
irToRow.generateConvertRecord("data_" + cid, reject.getName());
%>
}catch(java.lang.Exception e){
// do nothing
}
<%
Set<String> commonColumns = new HashSet<String>();
for (IMetadataColumn column : columnList) {
commonColumns.add(column.getLabel());
}
for (IMetadataColumn column : columnList) {
commonColumns.add(column.getLabel());
}
//pass error columns
List<IMetadataColumn> rejectColumns = reject.getMetadataTable().getListColumns();
for(IMetadataColumn column : rejectColumns) {
String columnName = column.getLabel();
//pass error columns
List<IMetadataColumn> rejectColumns = reject.getMetadataTable().getListColumns();
for(IMetadataColumn column : rejectColumns) {
String columnName = column.getLabel();
// JavaType javaType = JavaTypesManager.getJavaTypeFromId(column.getTalendType());
String typeToGenerate = JavaTypesManager.getTypeToGenerate(column.getTalendType(), column.isNullable());
// JavaType javaType = JavaTypesManager.getJavaTypeFromId(column.getTalendType());
String typeToGenerate = JavaTypesManager.getTypeToGenerate(column.getTalendType(), column.isNullable());
//error columns
if(!commonColumns.contains(columnName)) {
%>
<%=reject.getName()%>.<%=columnName%> = (<%=typeToGenerate%>)info_<%=cid%>.get("<%=columnName%>");
<%
}
}
} else {
%>
//TODO use a method instead of getting method by the special key "error/errorMessage"
Object errorMessage_<%=cid%> = null;
if(info_<%=cid%>.containsKey("error")){
errorMessage_<%=cid%> = info_<%=cid%>.get("error");
}else if(info_<%=cid%>.containsKey("errorMessage")){
errorMessage_<%=cid%> = info_<%=cid%>.get("errorMessage");
}else{
errorMessage_<%=cid%> = "Rejected but error message missing";
}
errorMessage_<%=cid%> = "Row "+ nb_line_<%=cid %> + ": "+errorMessage_<%=cid%>;
System.err.println(errorMessage_<%=cid%>);
<%
}
if (main != null) {
%>
// If the record is reject, the main line record should put NULL
<%=main.getName()%> = null;
<%
}
%>
}
//error columns
if(!commonColumns.contains(columnName)) {
%>
<%=reject.getName()%>.<%=columnName%> = (<%=typeToGenerate%>)info_<%=cid%>.get("<%=columnName%>");
<%
}
}
} else {
%>
//TODO use a method instead of getting method by the special key "error/errorMessage"
Object errorMessage_<%=cid%> = null;
if(info_<%=cid%>.containsKey("error")){
errorMessage_<%=cid%> = info_<%=cid%>.get("error");
}else if(info_<%=cid%>.containsKey("errorMessage")){
errorMessage_<%=cid%> = info_<%=cid%>.get("errorMessage");
}else{
errorMessage_<%=cid%> = "Rejected but error message missing";
}
errorMessage_<%=cid%> = "Row "+ nb_line_<%=cid %> + ": "+errorMessage_<%=cid%>;
System.err.println(errorMessage_<%=cid%>);
<%
}
if (main != null) {
%>
// If the record is reject, the main line record should put NULL
<%=main.getName()%> = null;
<%
}
%>
} // end of catch
<%
// The for loop around the incoming records from the reader is left open.
} else if (hasInput) {
%>
org.talend.components.api.component.runtime.Sink sink_<%=cid%> =
(org.talend.components.api.component.runtime.Sink)sourceOrSink_<%=cid%>;
org.talend.components.api.component.runtime.WriteOperation writeOperation_<%=cid%> = sink_<%=cid%>.createWriteOperation();
writeOperation_<%=cid%>.initialize(container_<%=cid%>);
org.talend.components.api.component.runtime.Writer writer_<%=cid%> = writeOperation_<%=cid%>.createWriter(container_<%=cid%>);
writer_<%=cid%>.open("<%=cid%>");
resourceMap.put("writer_<%=cid%>", writer_<%=cid%>);
org.talend.codegen.enforcer.IncomingSchemaEnforcer incomingEnforcer_<%=cid%> = null;
if (sourceOrSink_<%=cid%> instanceof org.talend.components.api.component.runtime.Sink) {
org.talend.components.api.component.runtime.Sink sink_<%=cid%> =
(org.talend.components.api.component.runtime.Sink)sourceOrSink_<%=cid%>;
org.talend.components.api.component.runtime.WriteOperation writeOperation_<%=cid%> = sink_<%=cid%>.createWriteOperation();
if (doesNodeBelongToRequest_<%=cid%>) {
writeOperation_<%=cid%>.initialize(container_<%=cid%>);
}
writer_<%=cid%> = writeOperation_<%=cid%>.createWriter(container_<%=cid%>);
if (doesNodeBelongToRequest_<%=cid%>) {
writer_<%=cid%>.open("<%=cid%>");
}
resourceMap.put("writer_<%=cid%>", writer_<%=cid%>);
} // end of "sourceOrSink_<%=cid%> instanceof ...Sink"
org.talend.components.api.component.Connector c_<%=cid%> = null;
for (org.talend.components.api.component.Connector currentConnector : props_<%=cid %>.getAvailableConnectors(null, false)) {
if (currentConnector.getName().equals("MAIN")) {
@@ -405,8 +431,7 @@ if (hasOutputOnly || asInputComponent) {
}
}
org.apache.avro.Schema designSchema_<%=cid%> = props_<%=cid %>.getSchema(c_<%=cid%>, false);
org.talend.codegen.enforcer.IncomingSchemaEnforcer incomingEnforcer_<%=cid%>
= new org.talend.codegen.enforcer.IncomingSchemaEnforcer(designSchema_<%=cid%>);
incomingEnforcer_<%=cid%> = new org.talend.codegen.enforcer.IncomingSchemaEnforcer(designSchema_<%=cid%>);
<%
List<? extends IConnection> outgoingConns = node.getOutgoingSortedConnections();
if (outgoingConns!=null){
@@ -442,7 +467,8 @@ if (hasOutputOnly || asInputComponent) {
}
}
}
%>
%>
java.lang.Iterable<?> outgoingMainRecordsList_<%=cid%> = new java.util.ArrayList<Object>();
java.util.Iterator outgoingMainRecordsIt_<%=cid%> = null;

@@ -58,13 +58,24 @@ if(isTopologyNone) {
else if(hasOutputOnly || asInputComponent){
%>
} // while
reader_<%=cid%>.close();
final java.util.Map<String, Object> resultMap_<%=cid%> = reader_<%=cid%>.getReturnValues();
<%
if (hasOutputOnly || asInputComponent) {
%>
} // end of "if (sourceOrSink_<%=cid%> instanceof ...Source)"
<% } %>
java.util.Map<String, Object> resultMap_<%=cid%> = null;
if (reader_<%=cid%> != null) {
reader_<%=cid%>.close();
resultMap_<%=cid%> = reader_<%=cid%>.getReturnValues();
}
<%
}else if(hasInput){
%>
org.talend.components.api.component.runtime.Result resultObject_<%=cid%> = (org.talend.components.api.component.runtime.Result)writer_<%=cid%>.close();
final java.util.Map<String, Object> resultMap_<%=cid%> = writer_<%=cid%>.getWriteOperation().finalize(java.util.Arrays.<org.talend.components.api.component.runtime.Result>asList(resultObject_<%=cid%>), container_<%=cid%>);
java.util.Map<String, Object> resultMap_<%=cid%> = null;
if (writer_<%=cid%> != null) {
org.talend.components.api.component.runtime.Result resultObject_<%=cid%> = (org.talend.components.api.component.runtime.Result)writer_<%=cid%>.close();
resultMap_<%=cid%> = writer_<%=cid%>.getWriteOperation().finalize(java.util.Arrays.<org.talend.components.api.component.runtime.Result>asList(resultObject_<%=cid%>), container_<%=cid%>);
}
<%
} else {
return stringBuffer.toString();


@@ -84,7 +84,7 @@ if(hasInput){
for (int i = 0; i < input_columnList.size(); i++) {
if(!input_columnList.get(i).getTalendType().equals("id_Dynamic")) {
%>
if (incomingEnforcer_<%=cid%>.getDesignSchema().getField("<%=input_columnList.get(i)%>") == null){
if (incomingEnforcer_<%=cid%> != null && incomingEnforcer_<%=cid%>.getDesignSchema().getField("<%=input_columnList.get(i)%>") == null){
incomingEnforcer_<%=cid%>.addIncomingNodeField("<%=input_columnList.get(i)%>", ((Object) <%=inputConn.getName()%>.<%=input_columnList.get(i)%>).getClass().getCanonicalName());
shouldCreateRuntimeSchemaForIncomingNode = true;
}
@@ -92,7 +92,7 @@ if(hasInput){
}
}
%>
if (shouldCreateRuntimeSchemaForIncomingNode){
if (shouldCreateRuntimeSchemaForIncomingNode && incomingEnforcer_<%=cid%> != null){
incomingEnforcer_<%=cid%>.createRuntimeSchema();
}
<%
@@ -111,7 +111,7 @@ if(hasInput){
if (dynamicPos != -1) {
%>
if (!incomingEnforcer_<%=cid%>.areDynamicFieldsInitialized()) {
if (incomingEnforcer_<%=cid%> != null && !incomingEnforcer_<%=cid%>.areDynamicFieldsInitialized()) {
// Initialize the dynamic columns when they are first encountered.
for (routines.system.DynamicMetadata dm_<%=cid%> : <%=inputConn.getName()%>.<%=input_columnList.get(dynamicPos).getLabel()%>.metadatas) {
incomingEnforcer_<%=cid%>.addDynamicField(
@@ -120,7 +120,8 @@ if(hasInput){
dm_<%=cid%>.getLogicalType(),
dm_<%=cid%>.getFormat(),
dm_<%=cid%>.getDescription(),
dm_<%=cid%>.isNullable());
dm_<%=cid%>.isNullable(),
dm_<%=cid%>.isKey());
}
incomingEnforcer_<%=cid%>.createRuntimeSchema();
}
@@ -128,22 +129,26 @@ if(hasInput){
}
%>
incomingEnforcer_<%=cid%>.createNewRecord();
if (incomingEnforcer_<%=cid%> != null) {
incomingEnforcer_<%=cid%>.createNewRecord();
}
<%
for (int i = 0; i < input_columnList.size(); i++) { // column
IMetadataColumn column = input_columnList.get(i);
if (dynamicPos != i) {
%>
//skip the put action if the input column doesn't appear in component runtime schema
if (incomingEnforcer_<%=cid%>.getRuntimeSchema().getField("<%=input_columnList.get(i)%>") != null){
if (incomingEnforcer_<%=cid%> != null && incomingEnforcer_<%=cid%>.getRuntimeSchema().getField("<%=input_columnList.get(i)%>") != null){
incomingEnforcer_<%=cid%>.put("<%=column.getLabel()%>", <%=inputConn.getName()%>.<%=column.getLabel()%>);
}
<%
} else {
%>
for (int i = 0; i < <%=inputConn.getName()%>.<%=column.getLabel()%>.getColumnCount(); i++) {
incomingEnforcer_<%=cid%>.put(<%=inputConn.getName()%>.<%=column.getLabel()%>.getColumnMetadata(i).getName(),
<%=inputConn.getName()%>.<%=column.getLabel()%>.getColumnValue(i));
if (incomingEnforcer_<%=cid%> != null) {
for (int i = 0; i < <%=inputConn.getName()%>.<%=column.getLabel()%>.getColumnCount(); i++) {
incomingEnforcer_<%=cid%>.put(<%=inputConn.getName()%>.<%=column.getLabel()%>.getColumnMetadata(i).getName(),
<%=inputConn.getName()%>.<%=column.getLabel()%>.getColumnValue(i));
}
}
<%
}
@@ -177,7 +182,11 @@ if(hasInput){
} // propInfo
%>
org.apache.avro.generic.IndexedRecord data_<%=cid%> = incomingEnforcer_<%=cid%>.getCurrentRecord();
org.apache.avro.generic.IndexedRecord data_<%=cid%> = null;
if (incomingEnforcer_<%=cid%> != null) {
data_<%=cid%> = incomingEnforcer_<%=cid%>.getCurrentRecord();
}
<%
boolean isParallelize ="true".equalsIgnoreCase(ElementParameterParser.getValue(node, "__PARALLELIZE__"));
@@ -190,8 +199,9 @@ if(hasInput){
}
}
%>
writer_<%=cid%>.write(data_<%=cid%>);
if (writer_<%=cid%> != null && data_<%=cid%> != null) {
writer_<%=cid%>.write(data_<%=cid%>);
}
nb_line_<%=cid %>++;
<%if(hasMainOutput){


@@ -73,6 +73,9 @@ import pigudf.<%=routine%>;
import routines.<%=routine%>;
<% }
}%>
<%for (String codesJar : CodeGeneratorRoutine.getRequiredCodesJarName(process)) {%>
import <%=codesJar%>;
<%}%>
import routines.system.*;
import routines.system.api.*;
import java.text.ParseException;
@@ -389,13 +392,26 @@ public <%=JavaTypesManager.getTypeToGenerate(ctxParam.getType(),true)%> get<%=Ch
break;
}
}
boolean enableLogStash = jobCatcherNode != null;
if (enableLogStash) {
boolean enableLogStash = !Boolean.getBoolean("deactivate_extended_component_log") && (jobCatcherNode!=null);
boolean enable_runtime_lineage_log = NodeUtil.isJobUsingRuntimeLineage(process) && (jobCatcherNode!=null);
if(jobCatcherNode!=null) {
%>
private final JobStructureCatcherUtils <%=jobCatcherNode.getUniqueName() %> = new JobStructureCatcherUtils(jobName, "<%=process.getId() %>", "<%=process.getVersion() %>");
<%
}
if(enable_runtime_lineage_log) {
%>
private org.talend.job.audit.JobAuditLogger runtime_lineage_logger_<%=jobCatcherNode.getUniqueName()%> = null;
<%
}
if (enableLogStash) {
%>
private org.talend.job.audit.JobAuditLogger auditLogger_<%=jobCatcherNode.getUniqueName()%> = null;
private RunStat runStat = new RunStat(<%=jobCatcherNode.getUniqueName() %>);
private RunStat runStat = new RunStat(<%=jobCatcherNode.getUniqueName() %>, System.getProperty("audit.interval"));
<%
} else if(stats) {
%>
@@ -424,6 +440,20 @@ private RunTrace runTrace = new RunTrace();
globalMap.put(KEY_DB_DATASOURCES, talendDataSources);
globalMap.put(KEY_DB_DATASOURCES_RAW, new java.util.HashMap<String, javax.sql.DataSource>(dataSources));
}
public void setDataSourceReferences(List serviceReferences) throws Exception{
java.util.Map<String, routines.system.TalendDataSource> talendDataSources = new java.util.HashMap<String, routines.system.TalendDataSource>();
java.util.Map<String, javax.sql.DataSource> dataSources = new java.util.HashMap<String, javax.sql.DataSource>();
for (java.util.Map.Entry<String, javax.sql.DataSource> entry : BundleUtils.getServices(serviceReferences, javax.sql.DataSource.class).entrySet()) {
dataSources.put(entry.getKey(), entry.getValue());
talendDataSources.put(entry.getKey(), new routines.system.TalendDataSource(entry.getValue()));
}
globalMap.put(KEY_DB_DATASOURCES, talendDataSources);
globalMap.put(KEY_DB_DATASOURCES_RAW, new java.util.HashMap<String, javax.sql.DataSource>(dataSources));
}
<%
for (INode logCatcher : process.getNodesOfType("tLogCatcher")) {


@@ -12,6 +12,7 @@
org.talend.designer.codegen.config.CodeGeneratorArgument
org.talend.designer.codegen.config.NodesSubTree
org.talend.core.model.process.IProcess
org.talend.core.model.process.ProcessUtils
org.talend.core.model.utils.NodeUtil
org.talend.core.model.process.IContextParameter
java.util.List
@@ -28,7 +29,13 @@ INode node = (INode)codeGenArgument.getArgument();
boolean containsTPartitioner = node.getProcess().getNodesOfType("tPartitioner").size() > 0 ? true : false;
boolean isRunJob = "tRunJob".equals(node.getComponent().getName());
IProcess process = node.getProcess();
boolean isTestContainer=ProcessUtils.isTestContainer(process);
String className = process.getName();
if (isTestContainer) {
className = className + "Test";
}
NodesSubTree subTree = (NodesSubTree) codeGenArgument.getSubTree();
ECodePart codePart = codeGenArgument.getCodePart();
//boolean trace = codeGenArgument.isTrace();
@@ -78,10 +85,10 @@ for (IConnection iterateConn : iterateConnSet) { //1
@Override
public Object put(String key, Object value) {
<%if(!isRunInMultiThread){%>
synchronized (<%=process.getName()%>.this.obj) {
synchronized (<%=className%>.this.obj) {
<%}%>
super.put(key, value);
return <%=process.getName()%>.this.globalMap.put(key, value);
return <%=className%>.this.globalMap.put(key, value);
<%if(!isRunInMultiThread){%>
}
<%}%>
@@ -158,7 +165,7 @@ for (IConnection iterateConn : iterateConnSet) { //1
synchronized (globalMap) {
this.globalMap = java.util.Collections.synchronizedMap(new ThreadedMap(globalMap));
<%}else{%>
synchronized (<%=process.getName()%>.this.obj) {
synchronized (<%=className%>.this.obj) {
this.globalMap = new ThreadedMap(globalMap);
<%}%>
}


@@ -44,7 +44,7 @@
INode startNode = subTree.getRootNode();
String startNodeId = startNode.getUniqueName();
if(startNodeId!=null && startNodeId.startsWith("tCollector")) {
if ("tCollector".equals( startNode.getComponent().getOriginalName() )) {
List<? extends INode> departitioners = startNode.getProcess().getNodesOfType("tDepartitioner");
if(departitioners!=null) {
for(INode departitioner : departitioners) {


@@ -0,0 +1,121 @@
<%
//copy from configuration.javajet for tacokit
%>
<%
//TODO: modify this part for Maps and nested lists.
if (p.getFieldType() == EParameterFieldType.TABLE || p.getFieldType() == EParameterFieldType.TACOKIT_SUGGESTABLE_TABLE) {
java.util.List<java.util.Map<String, String>> tableValues = ElementParameterParser.createTableValues((java.util.List<java.util.Map<String, Object>>) p.getValue(), p);
String[] items = p.getListItemsDisplayCodeName();
String tableName = p.getName().replace('$', '.');
boolean primitiveTable = items.length == 1 && items[0].equals(tableName + "[]");
String tableNamePrefix = tableName + "[]";
for (int i = 0; i < tableValues.size(); i++) {
java.util.Map<String, String> lineValues = tableValues.get(i);
for (int j = 0; j < items.length; j++) {
String key = tableName + "[" + i + "]";
if (!primitiveTable) {
final String columnName = items[j].substring(tableNamePrefix.length(), items[j].length());
key = key + columnName;
}
String value = lineValues.get(items[j]);
if (!org.talend.core.model.utils.ContextParameterUtils.isDynamic(value)) {
value = org.talend.core.model.utils.TalendTextUtils.removeQuotes(value);
value = org.talend.core.model.utils.TalendTextUtils.addQuotes(value);
}
if(value==null || "null".equals(value.trim())) {
value = "(Object)null";
}
%>
component_parameters.put("<%=key%>",String.valueOf(<%=value%>));
<%
}
}
} else if(p.getFieldType() == EParameterFieldType.SCHEMA_TYPE) {
final String parameterName = p.getName();
IConnection connection = null;
final List<? extends IConnection> connections = NodeUtil.getOutgoingConnections(node, p.getContext());
if(connections != null && !connections.isEmpty()) {
connection = connections.get(0);
}
if(connection != null) {
IMetadataTable metaTable = connection.getMetadataTable();
List<IMetadataColumn> columns = metaTable.getListColumns();
for(int i = 0; i < columns.size(); i++) {
IMetadataColumn column = columns.get(i);
%>
component_parameters.put("<%=parameterName%>[<%=i%>]", "<%=column.getLabel()%>");
<%
}
}
} else if (p.getFieldType() == EParameterFieldType.TACOKIT_INPUT_SCHEMA) {
final String parameterName = p.getName();
IConnection connection = null;
final List<? extends IConnection> connections = NodeUtil.getIncomingConnections(node, p.getContext());
if(connections != null && !connections.isEmpty()) {
connection = connections.get(0);
}
if(connection != null) {
IMetadataTable metaTable = connection.getMetadataTable();
List<IMetadataColumn> columns = metaTable.getListColumns();
for(int i = 0; i < columns.size(); i++) {
IMetadataColumn column = columns.get(i);
%>
component_parameters.put("<%=parameterName%>[<%=i%>]", "<%=column.getLabel()%>");
<%
}
}
} else {
final String key;
if(!p.getName().contains("$")){
key = p.getName();
}else{
final StringBuilder keyBuilder = new StringBuilder();
for (String part : p.getName().split("\\.")) {
if (keyBuilder.length() != 0) {
keyBuilder.append(".");
}
if (part.contains("$") && !part.startsWith("$")) {
keyBuilder.append(part.replace("$", "."));
} else {
keyBuilder.append(part);
}
}
key = keyBuilder.toString();
}
String value = null;
if(p.getFieldType() == EParameterFieldType.PASSWORD) {
continue;
} else {
value = ElementParameterParser.getStringElementParameterValue(p);
if (!org.talend.core.model.utils.ContextParameterUtils.isDynamic(value)) {
value = org.talend.core.model.utils.TalendTextUtils.removeQuotes(value);
value = org.talend.core.model.utils.TalendTextUtils.addQuotes(value);
}
}
if (value != null) {
if(key.endsWith("$maxBatchSize")){
%>
<%
} else if(p.getFieldType() == EParameterFieldType.CLOSED_LIST) {
String valueTemp = org.talend.core.model.utils.TalendTextUtils.removeQuotes(value);
if ("".equals(valueTemp)) {
String[] listItemsDisplayCodeValue = p.getListItemsDisplayCodeName();
if(listItemsDisplayCodeValue != null && listItemsDisplayCodeValue.length > 0){
valueTemp = listItemsDisplayCodeValue[0];
value = org.talend.core.model.utils.TalendTextUtils.addQuotes(valueTemp);
}
}
}
if(value==null || "null".equals(value.trim())) {
value = "(Object)null";
}
%>
component_parameters.put("<%=key%>", String.valueOf(<%=value%>));
<%
} // else do not put value in configuration
}
%>
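The key-building branch in the template above (splitting the parameter name on `.` and turning embedded `$` separators into `.` while leaving `$`-prefixed segments alone) can be exercised as a minimal standalone sketch. The class name `KeyNormalizer` is hypothetical; the logic mirrors the `keyBuilder` loop in the javajet, not the real Studio API.

```java
// Hypothetical standalone sketch of the key-normalization loop in the
// tacokit configuration template above. Parameter names such as
// "dataset$connection.endpoint" use '$' where the generated configuration
// key needs '.'; segments that merely start with '$' are kept verbatim.
public class KeyNormalizer {

    static String normalize(String name) {
        // Fast path: nothing to rewrite.
        if (!name.contains("$")) {
            return name;
        }
        StringBuilder keyBuilder = new StringBuilder();
        for (String part : name.split("\\.")) {
            if (keyBuilder.length() != 0) {
                keyBuilder.append(".");
            }
            if (part.contains("$") && !part.startsWith("$")) {
                // Embedded '$' acts as a nesting separator.
                keyBuilder.append(part.replace("$", "."));
            } else {
                // '$'-prefixed segments (e.g. "$maxBatchSize") stay as-is.
                keyBuilder.append(part);
            }
        }
        return keyBuilder.toString();
    }

    public static void main(String[] args) {
        // prints "dataset.connection.endpoint"
        System.out.println(normalize("dataset$connection.endpoint"));
    }
}
```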


@@ -68,6 +68,14 @@
id="org.talend.designer.components.model.UserComponentsProvider">
</ComponentsProvider>
</extension>
<extension
point="org.talend.core.components_provider">
<ComponentsProvider
class="org.talend.designer.codegen.components.model.SharedStudioUserComponentProvider"
folderName="user"
id="org.talend.designer.codegen.components.model.SharedStudioUserComponentProvider">
</ComponentsProvider>
</extension>
<extension
point="org.eclipse.core.runtime.preferences">
<initializer


@@ -69,6 +69,15 @@ public class JavaRoutineSynchronizer extends AbstractRoutineSynchronizer {
syncRoutineItems(getRoutines(true), true);
}
@Override
public void syncAllInnerCodes() throws SystemException {
syncInnerCodeItems(false);
}
@Override
public void syncAllInnerCodesForLogOn() throws SystemException {
syncInnerCodeItems(true);
}
private void syncRoutineItems(Collection<RoutineItem> routineObjects, boolean forceUpdate) throws SystemException {
for (RoutineItem routineItem : routineObjects) {


@@ -26,10 +26,8 @@ import java.util.HashMap;
import java.util.HashSet;
import java.util.Iterator;
import java.util.List;
import java.util.Locale;
import java.util.Map;
import java.util.Optional;
import java.util.ResourceBundle;
import java.util.Set;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
@@ -75,6 +73,7 @@ import org.talend.core.model.components.IComponentsFactory;
import org.talend.core.model.components.IComponentsHandler;
import org.talend.core.model.components.filters.ComponentsFactoryProviderManager;
import org.talend.core.model.components.filters.IComponentFactoryFilter;
import org.talend.core.runtime.util.ComponentsLocationProvider;
import org.talend.core.ui.IJobletProviderService;
import org.talend.core.ui.ISparkJobletProviderService;
import org.talend.core.ui.ISparkStreamingJobletProviderService;
@@ -83,8 +82,6 @@ import org.talend.core.ui.images.CoreImageProvider;
import org.talend.core.utils.TalendCacheUtils;
import org.talend.designer.codegen.CodeGeneratorActivator;
import org.talend.designer.codegen.i18n.Messages;
import org.talend.designer.core.ITisLocalProviderService;
import org.talend.designer.core.ITisLocalProviderService.ResClassLoader;
import org.talend.designer.core.model.components.ComponentBundleToPath;
import org.talend.designer.core.model.components.ComponentFilesNaming;
import org.talend.designer.core.model.components.EmfComponent;
@@ -164,7 +161,11 @@ public class ComponentsFactory implements IComponentsFactory {
throw new RuntimeException(e);
}
isInitialising.set(true);
removeOldComponentsUserFolder(); // not used anymore
try {
removeOldComponentsUserFolder();
} catch (IOException ex) {
ExceptionHandler.process(ex);
} // not used anymore
long startTime = System.currentTimeMillis();
// TimeMeasure.display = true;
@@ -387,10 +388,12 @@ public class ComponentsFactory implements IComponentsFactory {
ComponentManager.saveResource();
}
private void removeOldComponentsUserFolder() {
private void removeOldComponentsUserFolder() throws IOException {
String userPath = IComponentsFactory.COMPONENTS_INNER_FOLDER + File.separatorChar
+ ComponentUtilities.getExtFolder(OLD_COMPONENTS_USER_INNER_FOLDER);
File componentsLocation = getComponentsLocation(userPath);
ComponentsProviderManager componentsProviderManager = ComponentsProviderManager.getInstance();
AbstractComponentsProvider componentsProvider = componentsProviderManager.loadUserComponentsProvidersFromExtension();
File componentsLocation = getComponentsLocation(componentsProvider, userPath);
if (componentsLocation != null && componentsLocation.exists()) {
FilesUtils.removeFolder(componentsLocation, true);
}
@@ -671,114 +674,38 @@ public class ComponentsFactory implements IComponentsFactory {
*
* @param currentFolder
* @return
* @throws IOException
* @throws BusinessException
*/
private File getComponentsLocation(String folder) {
String componentsPath = IComponentsFactory.COMPONENTS_LOCATION;
IBrandingService breaningService = (IBrandingService) GlobalServiceRegister.getDefault()
.getService(IBrandingService.class);
if (breaningService.isPoweredOnlyCamel()) {
componentsPath = IComponentsFactory.CAMEL_COMPONENTS_LOCATION;
}
Bundle b = Platform.getBundle(componentsPath);
File file = null;
try {
URL url = FileLocator.find(b, new Path(folder), null);
if (url == null) {
return null;
private File getComponentsLocation(AbstractComponentsProvider componentsProvider, String folder) throws IOException {
if (componentsProvider instanceof ComponentsLocationProvider) {
return componentsProvider.getInstallationFolder();
} else {
String componentsPath = IComponentsFactory.COMPONENTS_LOCATION;
IBrandingService breaningService = (IBrandingService) GlobalServiceRegister.getDefault()
.getService(IBrandingService.class);
if (breaningService.isPoweredOnlyCamel()) {
componentsPath = IComponentsFactory.CAMEL_COMPONENTS_LOCATION;
}
URL fileUrl = FileLocator.toFileURL(url);
file = new File(fileUrl.getPath());
} catch (Exception e) {
// e.printStackTrace();
ExceptionHandler.process(e);
}
Bundle b = Platform.getBundle(componentsPath);
return file;
}
private File getComponentsLocation(String folder, AbstractComponentsProvider provider) {
File file = null;
try {
if (provider != null) {
file = provider.getInstallationFolder();
} else {
String componentsPath = IComponentsFactory.COMPONENTS_LOCATION;
Bundle b = Platform.getBundle(componentsPath);
IBrandingService breaningService = (IBrandingService) GlobalServiceRegister.getDefault()
.getService(IBrandingService.class);
if (breaningService.isPoweredOnlyCamel()) {
componentsPath = IComponentsFactory.CAMEL_COMPONENTS_LOCATION;
}
File file = null;
try {
URL url = FileLocator.find(b, new Path(folder), null);
if (url == null) {
return null;
}
URL fileUrl = FileLocator.toFileURL(url);
file = new File(fileUrl.getPath());
} catch (Exception e) {
// e.printStackTrace();
ExceptionHandler.process(e);
}
} catch (Exception e) {
ExceptionHandler.process(e);
}
return file;
}
private ResourceBundle getComponentResourceBundle(IComponent currentComp, String source, String cachedPathSource,
AbstractComponentsProvider provider) {
try {
AbstractComponentsProvider currentProvider = provider;
if (currentProvider == null) {
ComponentsProviderManager componentsProviderManager = ComponentsProviderManager.getInstance();
Collection<AbstractComponentsProvider> providers = componentsProviderManager.getProviders();
for (AbstractComponentsProvider curProvider : providers) {
String path = new Path(curProvider.getInstallationFolder().toString()).toPortableString();
if (source.startsWith(path)) {
// fix for TDI-19889 and TDI-20507 to get the correct component provider
if (cachedPathSource != null) {
if (path.contains(cachedPathSource)) {
currentProvider = curProvider;
break;
}
} else {
currentProvider = curProvider;
break;
}
}
}
}
String installPath = currentProvider.getInstallationFolder().toString();
String label = ComponentFilesNaming.getInstance().getBundleName(currentComp.getName(),
installPath.substring(installPath.lastIndexOf(IComponentsFactory.COMPONENTS_INNER_FOLDER)));
if (currentProvider.isUseLocalProvider()) {
// if the component use local provider as storage (for user / ecosystem components)
// then get the bundle resource from the current main component provider.
// note: code here to review later, service like this shouldn't be used...
ResourceBundle bundle = null;
IBrandingService brandingService = (IBrandingService) GlobalServiceRegister.getDefault()
.getService(IBrandingService.class);
if (brandingService.isPoweredOnlyCamel()) {
bundle = currentProvider.getResourceBundle(label);
} else {
ITisLocalProviderService service = (ITisLocalProviderService) GlobalServiceRegister.getDefault()
.getService(ITisLocalProviderService.class);
bundle = service.getResourceBundle(label);
}
return bundle;
} else {
ResourceBundle bundle = ResourceBundle.getBundle(label, Locale.getDefault(),
new ResClassLoader(currentProvider.getClass().getClassLoader()));
return bundle;
}
} catch (IOException e) {
ExceptionHandler.process(e);
}
return null;
return file;
}
}
private String getCodeLanguageSuffix() {
@@ -1082,5 +1009,13 @@ public class ComponentsFactory implements IComponentsFactory {
public void setComponentsHandler(IComponentsHandler componentsHandler) {
this.componentsHandler = componentsHandler;
}
public String getCustomComponentBundlePath() {
ComponentsProviderManager componentsProviderManager = ComponentsProviderManager.getInstance();
AbstractComponentsProvider componentsProvider = componentsProviderManager.loadUserComponentsProvidersFromExtension();
String bundle = componentsProvider.getComponentsBundle();
return ComponentBundleToPath.getPathFromBundle(bundle);
}
}


@@ -23,6 +23,7 @@ import org.eclipse.core.runtime.IExtensionRegistry;
import org.eclipse.core.runtime.Platform;
import org.talend.core.GlobalServiceRegister;
import org.talend.core.model.components.AbstractComponentsProvider;
import org.talend.core.runtime.util.SharedStudioInfoProvider;
import org.talend.core.ui.branding.IBrandingService;
import org.talend.designer.codegen.i18n.Messages;
@@ -69,6 +70,9 @@ public final class ComponentsProviderManager {
try {
AbstractComponentsProvider componentsProvider = (AbstractComponentsProvider) configurationElement
.createExecutableExtension("class"); //$NON-NLS-1$
if (componentsProvider instanceof SharedStudioInfoProvider && !((SharedStudioInfoProvider)componentsProvider).isSupportCurrentMode()) {
continue;
}
componentsProvider.setId(id);
componentsProvider.setFolderName(folderName);
componentsProvider.setContributer(contributerName);
@@ -81,15 +85,15 @@ public final class ComponentsProviderManager {
}
}
public AbstractComponentsProvider loadUserComponentsProvidersFromExtension() {
if (providers == null) {
loadComponentsProvidersFromExtension();
}
for (AbstractComponentsProvider provider : providers) {
if ("org.talend.designer.components.model.UserComponentsProvider".equals(provider.getId())) {
return provider;
}
}
return null;
}
public AbstractComponentsProvider loadUserComponentsProvidersFromExtension() {
if (providers == null) {
loadComponentsProvidersFromExtension();
}
for (AbstractComponentsProvider provider : providers) {
if (provider instanceof UserComponentsProvider) {
return provider;
}
}
return null;
}
}


@@ -0,0 +1,61 @@
package org.talend.designer.codegen.components.model;
//============================================================================
//
//Copyright (C) 2006-2019 Talend Inc. - www.talend.com
//
//This source code is available under agreement available at
//%InstallDIR%\features\org.talend.rcp.branding.%PRODUCTNAME%\%PRODUCTNAME%license.txt
//
//You should have received a copy of the agreement
//along with this program; if not, write to Talend SA
//9 rue Pages 92150 Suresnes, France
//
//============================================================================
import java.io.File;
import java.io.IOException;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.ResourceBundle;
import org.eclipse.core.runtime.IPath;
import org.eclipse.core.runtime.Path;
import org.eclipse.core.runtime.Platform;
import org.talend.core.model.components.ComponentUtilities;
import org.talend.core.model.components.IComponentsFactory;
import org.talend.core.runtime.util.ComponentsLocationProvider;
import org.talend.core.runtime.util.SharedStudioUtils;
import org.talend.designer.core.model.components.ComponentBundleToPath;
public class SharedStudioUserComponentProvider extends UserComponentsProvider implements ComponentsLocationProvider{
@Override
public File getInstallationFolder() throws IOException {
File componentFolder = SharedStudioUtils.getSharedStudioComponentsParentFolder();
IPath path = new Path(IComponentsFactory.COMPONENTS_INNER_FOLDER);
path = path.append(IComponentsFactory.EXTERNAL_COMPONENTS_INNER_FOLDER).append(ComponentUtilities.getExtFolder(getFolderName()));
File installationFolder = new File (componentFolder, path.toOSString());
return installationFolder;
}
public String getComponentsBundle() {
return ComponentBundleToPath.SHARED_STUDIO_CUSTOM_COMPONENT_BUNDLE;
}
public boolean isSupportCurrentMode() {
if (SharedStudioUtils.isSharedStudioMode()) {
return true;
}
return false;
}
@Override
public ResourceBundle getResourceBundle(String label) {
URL configFolderUrl = Platform.getConfigurationLocation().getURL();
URLClassLoader urlLoader = new URLClassLoader(new java.net.URL[]{configFolderUrl});
java.util.ResourceBundle bundle = java.util.ResourceBundle.getBundle( label ,
java.util.Locale.getDefault(), urlLoader );
return bundle;
}
}


@@ -34,13 +34,15 @@ import org.talend.core.model.components.ComponentUtilities;
import org.talend.core.model.components.IComponentsFactory;
import org.talend.core.model.general.Project;
import org.talend.core.model.repository.ERepositoryObjectType;
import org.talend.core.runtime.util.SharedStudioInfoProvider;
import org.talend.core.runtime.util.SharedStudioUtils;
import org.talend.core.ui.branding.IBrandingService;
import org.talend.designer.codegen.CodeGeneratorActivator;
import org.talend.designer.codegen.components.ui.IComponentPreferenceConstant;
import org.talend.repository.ProjectManager;
/***/
public class UserComponentsProvider extends AbstractCustomComponentsProvider {
public class UserComponentsProvider extends AbstractCustomComponentsProvider implements SharedStudioInfoProvider{
@Override
protected File getExternalComponentsLocation() {
@@ -147,5 +149,11 @@ public class UserComponentsProvider extends AbstractCustomComponentsProvider {
public String getComponentsBundle() {
return IComponentsFactory.COMPONENTS_LOCATION;
}
public boolean isSupportCurrentMode() {
if (SharedStudioUtils.isSharedStudioMode()) {
return false;
}
return true;
}
}


@@ -18,6 +18,7 @@ import java.util.Map;
import org.eclipse.core.runtime.Platform;
import org.talend.commons.exception.ExceptionHandler;
import org.talend.commons.utils.StringUtils;
import org.talend.designer.core.model.components.ComponentBundleToPath;
/**
* Jet container for a particular component.
@@ -213,8 +214,17 @@ public class JetBean {
if (pluginIdToBundle.containsKey(pluginId)) {
base = pluginIdToBundle.get(pluginId);
} else {
base = Platform.getBundle(pluginId).getEntry("/").toString(); //$NON-NLS-1$
pluginIdToBundle.put(pluginId, base);
if (ComponentBundleToPath.SHARED_STUDIO_CUSTOM_COMPONENT_BUNDLE.equals(pluginId)) {
base = ComponentBundleToPath.getPathFromBundle(pluginId);
if (!base.endsWith("/")) {
base = base + "/";
}
pluginIdToBundle.put(pluginId, base);
} else {
base = Platform.getBundle(pluginId).getEntry("/").toString(); //$NON-NLS-1$
pluginIdToBundle.put(pluginId, base);
}
}
String result = base + relativeUri;
return result;


@@ -136,13 +136,11 @@ public class TalendJETCompiler extends JETCompiler {
// get the plugin name from fileURI
String refPluginName = matcher.group(1);
// retrieve the plugin URI by pluginName.
Bundle refBundle = Platform.getBundle(refPluginName);
if (refBundle != null) {
String realURI = TemplateUtil.getPlatformUrlOfBundle(refPluginName);
String realURI = TemplateUtil.getPlatformUrlOfBundle(refPluginName);
if (realURI != null) {
// replace the old fileURI to new one by pluginURI
String newFileURI = fileURI.replaceFirst(PLUGIN_VAR_PATTERN.pattern(), realURI);
return newFileURI;
}
}
}


@@ -14,6 +14,7 @@ package org.talend.designer.codegen.config;
import org.eclipse.core.runtime.Platform;
import org.osgi.framework.Bundle;
import org.talend.designer.core.model.components.ComponentBundleToPath;
/**
* CodeGenerator Templates Ressources Utils.
@@ -161,10 +162,25 @@ public class TemplateUtil {
* @return
*/
public static String getPlatformUrlOfBundle(String bundleName) {
Bundle bundle = Platform.getBundle(bundleName);
if (bundle == null) {
return null;
}
return "platform:/plugin/" + bundle.getSymbolicName() + "_" + bundle.getVersion().toString() + "/";
if (ComponentBundleToPath.SHARED_STUDIO_CUSTOM_COMPONENT_BUNDLE.equals(bundleName)) {
String basePath = ComponentBundleToPath.getPathFromBundle(bundleName);
if (!basePath.endsWith("/")) {
basePath = basePath + "/";
}
return basePath;
} else {
Bundle bundle = Platform.getBundle(bundleName);
if (bundle == null) {
return null;
}
StringBuilder sb = new StringBuilder();
sb.append("platform:/plugin/");
sb.append(bundle.getSymbolicName());
sb.append("_");
sb.append(bundle.getVersion().toString());
sb.append("/");
return sb.toString();
}
}
}
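The branching that `getPlatformUrlOfBundle` gains in the diff above can be sketched outside OSGi as follows. `SHARED_BUNDLE` and `BUNDLE_VERSIONS` are stand-ins (the real code uses `ComponentBundleToPath.SHARED_STUDIO_CUSTOM_COMPONENT_BUNDLE` and `Platform.getBundle`), so treat this as an illustration of the control flow, not the Studio API.

```java
import java.util.Map;

// Hypothetical sketch of the new branching in TemplateUtil.getPlatformUrlOfBundle:
// the shared-studio custom-component "bundle" resolves to a filesystem base path
// (normalized to end with '/'), while any other bundle resolves to a
// "platform:/plugin/<symbolicName>_<version>/" URL, or null if unknown.
public class BundleUrlSketch {

    // Stand-in for ComponentBundleToPath.SHARED_STUDIO_CUSTOM_COMPONENT_BUNDLE.
    static final String SHARED_BUNDLE = "shared.studio.custom.component.bundle";

    // Stand-in for the OSGi Platform.getBundle(...) lookup.
    static final Map<String, String> BUNDLE_VERSIONS =
            Map.of("org.talend.designer.codegen", "7.3.1");

    static String platformUrlOfBundle(String bundleName, String sharedBasePath) {
        if (SHARED_BUNDLE.equals(bundleName)) {
            // Shared-studio components live on disk, not in an installed bundle.
            return sharedBasePath.endsWith("/") ? sharedBasePath : sharedBasePath + "/";
        }
        String version = BUNDLE_VERSIONS.get(bundleName);
        if (version == null) {
            return null; // bundle not installed
        }
        return "platform:/plugin/" + bundleName + "_" + version + "/";
    }

    public static void main(String[] args) {
        System.out.println(platformUrlOfBundle("org.talend.designer.codegen", "/opt/components"));
        System.out.println(platformUrlOfBundle(SHARED_BUNDLE, "/opt/components"));
    }
}
```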


@@ -47,6 +47,7 @@ import org.talend.core.ui.component.ComponentsFactoryProvider;
import org.talend.designer.codegen.CodeGeneratorActivator;
import org.talend.designer.codegen.config.TemplateUtil;
import org.talend.designer.codegen.i18n.Messages;
import org.talend.designer.core.model.components.ComponentBundleToPath;
/**
* DOC xtan
@@ -256,10 +257,9 @@ public final class JetSkeletonManager {
};
for (TemplateUtil template : CodeGeneratorInternalTemplatesFactoryProvider.getInstance().getTemplates()) {
Bundle b = Platform.getBundle(template.getJetPluginRepository());
URL resourcesUrl = null;
try {
resourcesUrl = FileLocator.toFileURL(FileLocator.find(b, new Path(template.getTemplateRelativeUri()), null));
resourcesUrl = FileLocator.toFileURL(ComponentBundleToPath.findComponentsBundleURL(template.getJetPluginRepository(), new Path(template.getTemplateRelativeUri()), null));
} catch (IOException e) {
ExceptionHandler.process(e);
}


@@ -8,6 +8,7 @@ Require-Bundle: org.eclipse.core.runtime,
org.eclipse.ui,
org.apache.log4j,
org.apache.commons.collections,
org.apache.commons.discovery,
org.apache.commons.logging,
org.apache.commons.beanutils,
org.apache.commons.io,
@@ -25,7 +26,6 @@ Require-Bundle: org.eclipse.core.runtime,
org.talend.repository,
org.talend.core.repository,
org.talend.updates.runtime,
org.apache.axis,
org.eclipse.ui.intro,
org.eclipse.ui.forms,
org.eclipse.jface.text


@@ -9,6 +9,14 @@
id="org.talend.designer.components.exchange.ExchangeComponentsProvider">
</ComponentsProvider>
</extension>
<extension
point="org.talend.core.components_provider">
<ComponentsProvider
class="org.talend.designer.components.exchange.SharedStudioExchangeComponentsProvider"
folderName="exchange"
id="org.talend.designer.components.exchange.SharedStudioExchangeComponentsProvider">
</ComponentsProvider>
</extension>
<extension
point="org.talend.core.runtime.service">
<Service


@@ -28,13 +28,15 @@ import org.talend.core.GlobalServiceRegister;
import org.talend.core.model.components.AbstractComponentsProvider;
import org.talend.core.model.components.ComponentUtilities;
import org.talend.core.model.components.IComponentsFactory;
import org.talend.core.runtime.util.SharedStudioInfoProvider;
import org.talend.core.runtime.util.SharedStudioUtils;
import org.talend.core.ui.branding.IBrandingService;
import org.talend.designer.components.exchange.util.ExchangeUtils;
/**
* DOC hcyi class global comment. Detailled comment
*/
public class ExchangeComponentsProvider extends AbstractComponentsProvider {
public class ExchangeComponentsProvider extends AbstractComponentsProvider implements SharedStudioInfoProvider{
/**
* ExchangeComponentsProvider constructor.
@@ -184,4 +186,10 @@ public class ExchangeComponentsProvider extends AbstractComponentsProvider {
return IComponentsFactory.COMPONENTS_LOCATION;
}
public boolean isSupportCurrentMode() {
if (SharedStudioUtils.isSharedStudioMode()) {
return false;
}
return true;
}
}


@@ -0,0 +1,59 @@
package org.talend.designer.components.exchange;
//============================================================================
//
//Copyright (C) 2006-2019 Talend Inc. - www.talend.com
//
//This source code is available under agreement available at
//%InstallDIR%\features\org.talend.rcp.branding.%PRODUCTNAME%\%PRODUCTNAME%license.txt
//
//You should have received a copy of the agreement
//along with this program; if not, write to Talend SA
//9 rue Pages 92150 Suresnes, France
//
//============================================================================
import java.io.File;
import java.io.IOException;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.ResourceBundle;
import org.eclipse.core.runtime.IPath;
import org.eclipse.core.runtime.Path;
import org.eclipse.core.runtime.Platform;
import org.talend.core.model.components.ComponentUtilities;
import org.talend.core.model.components.IComponentsFactory;
import org.talend.core.runtime.util.ComponentsLocationProvider;
import org.talend.core.runtime.util.SharedStudioUtils;
import org.talend.designer.core.model.components.ComponentBundleToPath;
public class SharedStudioExchangeComponentsProvider extends ExchangeComponentsProvider implements ComponentsLocationProvider{
@Override
public File getInstallationFolder() throws IOException {
File componentFolder = SharedStudioUtils.getSharedStudioComponentsParentFolder();
IPath path = new Path(IComponentsFactory.COMPONENTS_INNER_FOLDER);
path = path.append(IComponentsFactory.EXTERNAL_COMPONENTS_INNER_FOLDER).append(ComponentUtilities.getExtFolder(getFolderName()));
File installationFolder = new File (componentFolder, path.toOSString());
return installationFolder;
}
public String getComponentsBundle() {
return ComponentBundleToPath.SHARED_STUDIO_CUSTOM_COMPONENT_BUNDLE;
}
public boolean isSupportCurrentMode() {
if (SharedStudioUtils.isSharedStudioMode()) {
return true;
}
return false;
}
@Override
public ResourceBundle getResourceBundle(String label) {
URL configFolderUrl = Platform.getConfigurationLocation().getURL();
URLClassLoader urlLoader = new URLClassLoader(new java.net.URL[]{configFolderUrl});
java.util.ResourceBundle bundle = java.util.ResourceBundle.getBundle(label, java.util.Locale.getDefault(), urlLoader);
return bundle;
}
}
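The overridden getResourceBundle above loads localized messages through a URLClassLoader pointed at the Eclipse configuration folder rather than the plugin classpath. A minimal standalone sketch of that pattern, loading a ResourceBundle from an arbitrary directory (the temp folder, class name, and property key are illustrative, not part of the commit):

```java
import java.io.File;
import java.io.FileWriter;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.util.Locale;
import java.util.ResourceBundle;

public class ExternalBundleDemo {
    public static void main(String[] args) throws Exception {
        // Write a properties file into a temp folder that is NOT on the classpath.
        File dir = Files.createTempDirectory("cfg").toFile();
        try (FileWriter w = new FileWriter(new File(dir, "messages.properties"))) {
            w.write("greeting=hello\n");
        }
        // A directory URL (trailing slash) lets ResourceBundle resolve the
        // .properties resource through the custom loader.
        URLClassLoader loader = new URLClassLoader(new URL[] { dir.toURI().toURL() });
        ResourceBundle bundle = ResourceBundle.getBundle("messages", Locale.getDefault(), loader);
        System.out.println(bundle.getString("greeting"));
    }
}
```

Because the loader is constructed per call, this also picks up bundles dropped into the folder after the JVM started, which a classpath bundle would not.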


@@ -1,66 +0,0 @@
// ============================================================================
//
// Copyright (C) 2006-2019 Talend Inc. - www.talend.com
//
// This source code is available under agreement available at
// %InstallDIR%\features\org.talend.rcp.branding.%PRODUCTNAME%\%PRODUCTNAME%license.txt
//
// You should have received a copy of the agreement
// along with this program; if not, write to Talend SA
// 9 rue Pages 92150 Suresnes, France
//
// ============================================================================
package org.talend.designer.components.exchange.proxy;
import org.apache.commons.lang.StringUtils;
/**
*
* DOC hcyi class global comment. Detailled comment
*/
public class DefaultHTTPSTransportClientProperties extends DefaultHTTPTransportClientProperties {
/**
* @see org.apache.axis.components.net.TransportClientProperties#getProxyHost()
*/
@Override
public String getProxyHost() {
return StringUtils.trimToEmpty(System.getProperty("https.proxyHost")); //$NON-NLS-1$
}
/**
* @see org.apache.axis.components.net.TransportClientProperties#getNonProxyHosts()
*/
@Override
public String getNonProxyHosts() {
return StringUtils.trimToEmpty(System.getProperty("https.nonProxyHosts")); //$NON-NLS-1$
}
/**
* @see org.apache.axis.components.net.TransportClientProperties#getPort()
*/
@Override
public String getProxyPort() {
return StringUtils.trimToEmpty(System.getProperty("https.proxyPort")); //$NON-NLS-1$
}
/**
* @see org.apache.axis.components.net.TransportClientProperties#getUser()
*/
@Override
public String getProxyUser() {
return StringUtils.trimToEmpty(System.getProperty("https.proxyUser")); //$NON-NLS-1$
}
/**
* @see org.apache.axis.components.net.TransportClientProperties#getPassword()
*/
@Override
public String getProxyPassword() {
return StringUtils.trimToEmpty(System.getProperty("https.proxyPassword")); //$NON-NLS-1$
}
}


@@ -1,58 +0,0 @@
// ============================================================================
//
// Copyright (C) 2006-2019 Talend Inc. - www.talend.com
//
// This source code is available under agreement available at
// %InstallDIR%\features\org.talend.rcp.branding.%PRODUCTNAME%\%PRODUCTNAME%license.txt
//
// You should have received a copy of the agreement
// along with this program; if not, write to Talend SA
// 9 rue Pages 92150 Suresnes, France
//
// ============================================================================
package org.talend.designer.components.exchange.proxy;
import org.apache.axis.components.net.TransportClientProperties;
import org.apache.commons.lang.StringUtils;
/**
*
* DOC hcyi class global comment. Detailled comment
*/
public class DefaultHTTPTransportClientProperties implements TransportClientProperties {
/**
* @see org.apache.axis.components.net.TransportClientProperties#getProxyHost()
*/
public String getProxyHost() {
return StringUtils.trimToEmpty(System.getProperty("http.proxyHost")); //$NON-NLS-1$
}
/**
* @see org.apache.axis.components.net.TransportClientProperties#getNonProxyHosts()
*/
public String getNonProxyHosts() {
return StringUtils.trimToEmpty(System.getProperty("http.nonProxyHosts")); //$NON-NLS-1$
}
/**
* @see org.apache.axis.components.net.TransportClientProperties#getPort()
*/
public String getProxyPort() {
return StringUtils.trimToEmpty(System.getProperty("http.proxyPort")); //$NON-NLS-1$
}
/**
* @see org.apache.axis.components.net.TransportClientProperties#getProxyUser()
*/
public String getProxyUser() {
return StringUtils.trimToEmpty(System.getProperty("http.proxyUser")); //$NON-NLS-1$
}
/**
* @see org.apache.axis.components.net.TransportClientProperties#getProxyPassword()
*/
public String getProxyPassword() {
return StringUtils.trimToEmpty(System.getProperty("http.proxyPassword")); //$NON-NLS-1$
}
}


@@ -52,6 +52,7 @@ import org.talend.core.download.IDownloadHelper;
import org.talend.core.model.components.ComponentManager;
import org.talend.core.model.components.IComponent;
import org.talend.core.model.components.IComponentsFactory;
import org.talend.core.runtime.util.SharedStudioUtils;
import org.talend.core.ui.component.ComponentPaletteUtilities;
import org.talend.core.ui.component.ComponentsFactoryProvider;
import org.talend.designer.codegen.ICodeGeneratorService;
@@ -312,51 +313,54 @@ public class DownloadComponenentsAction extends Action implements IIntroAction {
protected void afterDownload(IProgressMonitor monitor, ComponentExtension extension, File localZipFile) throws Exception {
if (UpdatesHelper.isComponentUpdateSite(localZipFile)) {
final File workFolder = org.talend.utils.files.FileUtils.createTmpFolder("downloadedComponents", ""); //$NON-NLS-1$ //$NON-NLS-2$
if (!SharedStudioUtils.isSharedStudioMode()) {
final File workFolder = org.talend.utils.files.FileUtils.createTmpFolder("downloadedComponents", ""); //$NON-NLS-1$ //$NON-NLS-2$
try {
FilesUtils.copyFile(localZipFile, new File(workFolder, localZipFile.getName()));
try {
FilesUtils.copyFile(localZipFile, new File(workFolder, localZipFile.getName()));
ComponentsInstallComponent component = LocalComponentInstallHelper.getComponent();
if (component != null) {
try {
component.setComponentFolder(workFolder);
if (component.install()) {
ComponentsInstallComponent component = LocalComponentInstallHelper.getComponent();
if (component != null) {
try {
component.setComponentFolder(workFolder);
if (component.install()) {
if (component.needRelaunch()) {
askReboot();
} else {
MessageDialog.openInformation(DisplayUtils.getDefaultShell(),
Messages.getString("DownloadComponenentsAction.installComponentsTitle"),
component.getInstalledMessages());
if (component.needRelaunch()) {
askReboot();
} else {
MessageDialog.openInformation(DisplayUtils.getDefaultShell(),
Messages.getString("DownloadComponenentsAction.installComponentsTitle"),
component.getInstalledMessages());
}
} else {// install failure
MessageDialog.openWarning(DisplayUtils.getDefaultShell(),
Messages.getString("DownloadComponenentsAction_failureTitle"), //$NON-NLS-1$
Messages.getString("DownloadComponenentsAction_failureMessage", extension.getLabel())); //$NON-NLS-1$
}
} else {// install failure
MessageDialog.openWarning(DisplayUtils.getDefaultShell(),
} finally {
// after install, clear the setting for service.
component.setComponentFolder(null);
}
}
} catch (Exception e) {
// Pop up a dialog to warn the user that the install failed.
Display.getDefault().syncExec(new Runnable() {
@Override
public void run() {
MessageDialog.openError(DisplayUtils.getDefaultShell(false),
Messages.getString("DownloadComponenentsAction_failureTitle"), //$NON-NLS-1$
Messages.getString("DownloadComponenentsAction_failureMessage", extension.getLabel())); //$NON-NLS-1$
}
} finally {
// after install, clear the setting for service.
component.setComponentFolder(null);
}
});
throw e;
} finally {
FilesUtils.deleteFolder(workFolder, true);
}
} catch (Exception e) {
// Pop up a dialog to warn the user that the install failed.
Display.getDefault().syncExec(new Runnable() {
@Override
public void run() {
MessageDialog.openError(DisplayUtils.getDefaultShell(false),
Messages.getString("DownloadComponenentsAction_failureTitle"), //$NON-NLS-1$
Messages.getString("DownloadComponenentsAction_failureMessage", extension.getLabel())); //$NON-NLS-1$
}
});
throw e;
} finally {
FilesUtils.deleteFolder(workFolder, true);
}
monitor.done();
ExchangeManager.getInstance().saveDownloadedExtensionsToFile(extension);
monitor.done();
ExchangeManager.getInstance().saveDownloadedExtensionsToFile(extension);
}
} else {
File installedLocation = ComponentInstaller.unzip(localZipFile.getAbsolutePath(), getComponentsFolder()
.getAbsolutePath());


@@ -37,6 +37,7 @@ import org.eclipse.swt.widgets.Shell;
import org.eclipse.ui.PlatformUI;
import org.talend.commons.ui.runtime.exception.ExceptionHandler;
import org.talend.core.download.DownloadHelper;
import org.talend.core.runtime.util.SharedStudioUtils;
import org.talend.designer.components.exchange.i18n.Messages;
import org.talend.designer.components.exchange.model.Category;
import org.talend.designer.components.exchange.model.VersionRevision;
@@ -105,7 +106,7 @@ public class ImportExchangeDialog extends Dialog {
@Override
protected void okPressed() {
IPath tempPath = new Path(System.getProperty("user.dir")).append("temp"); //$NON-NLS-1$ //$NON-NLS-2$
IPath tempPath = SharedStudioUtils.getTempFolderPath();
File pathFile = tempPath.toFile();
if (downloadproperty.getFileName() == null || downloadproperty.getFileName() == null) {
MessageBox box = new MessageBox(Display.getCurrent().getActiveShell(), SWT.ICON_WARNING | SWT.OK);


@@ -25,11 +25,9 @@ import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import org.apache.axis.components.net.TransportClientProperties;
import org.apache.axis.components.net.TransportClientPropertiesFactory;
import org.apache.commons.beanutils.BeanUtils;
import org.apache.commons.collections.map.MultiValueMap;
import org.apache.commons.discovery.tools.ManagedProperties;
import org.apache.commons.httpclient.HostConfiguration;
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.NameValuePair;
@@ -55,6 +53,7 @@ import org.talend.core.language.ECodeLanguage;
import org.talend.core.language.LanguageManager;
import org.talend.core.model.components.IComponentsFactory;
import org.talend.core.model.general.Project;
import org.talend.core.runtime.util.SharedStudioUtils;
import org.talend.core.ui.component.ComponentPaletteUtilities;
import org.talend.core.ui.component.ComponentsFactoryProvider;
import org.talend.designer.components.exchange.ExchangePlugin;
@@ -150,14 +149,17 @@ public class ExchangeUtils {
public static String sendGetRequest(String urlAddress) throws Exception {
HttpClient httpclient = new HttpClient();
GetMethod getMethod = new GetMethod(urlAddress);
TransportClientProperties tcp = TransportClientPropertiesFactory.create("http");
if (tcp.getProxyHost().length() != 0) {
String proxyUser = ManagedProperties.getProperty("http.proxyUser");
String proxyPassword = ManagedProperties.getProperty("http.proxyPassword");
String proxyHost = ManagedProperties.getProperty("http.proxyHost");
proxyHost = proxyHost != null ? proxyHost : "";
String proxyPort = ManagedProperties.getProperty("http.proxyPort");
if (proxyHost.length() != 0) {
UsernamePasswordCredentials creds = new UsernamePasswordCredentials(
tcp.getProxyUser() != null ? tcp.getProxyUser() : "",
tcp.getProxyPassword() != null ? tcp.getProxyUser() : "");
proxyUser != null ? proxyUser : "", proxyPassword != null ? proxyPassword : "");
httpclient.getState().setProxyCredentials(AuthScope.ANY, creds);
HostConfiguration hcf = new HostConfiguration();
hcf.setProxy(tcp.getProxyHost(), Integer.parseInt(tcp.getProxyPort()));
hcf.setProxy(proxyHost, Integer.parseInt(proxyPort));
httpclient.executeMethod(hcf, getMethod);
} else {
httpclient.executeMethod(getMethod);
@@ -205,14 +207,19 @@ public class ExchangeUtils {
* @return
*/
public static File getComponentFolder(String componentfolder) {
URL url = FileLocator.find(ExchangePlugin.getDefault().getBundle(), new Path(componentfolder), null);
try {
URL fileUrl = FileLocator.toFileURL(url);
return new File(fileUrl.getPath());
} catch (Exception e) {
ExceptionHandler.process(e);
}
return null;
if (SharedStudioUtils.isSharedStudioMode()) {
File componentFolder = SharedStudioUtils.getSharedStudioComponentsExtFolder();
return new File (componentFolder, componentfolder);
} else {
URL url = FileLocator.find(ExchangePlugin.getDefault().getBundle(), new Path(componentfolder), null);
try {
URL fileUrl = FileLocator.toFileURL(url);
return new File(fileUrl.getPath());
} catch (Exception e) {
ExceptionHandler.process(e);
}
return null;
}
}
/**

View File
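The sendGetRequest change above replaces the Axis TransportClientProperties lookup with direct reads of the standard `http.proxy*` system properties, defaulting to an empty string when a property is unset. That null-safe defaulting logic can be sketched in isolation (the helper class is illustrative; only the property names come from the commit):

```java
public class ProxyProps {
    // Null-safe read of a proxy-related system property, trimmed to "" when unset.
    static String prop(String name) {
        String v = System.getProperty(name);
        return v == null ? "" : v.trim();
    }

    public static void main(String[] args) {
        System.setProperty("http.proxyHost", "  proxy.example.com ");
        System.clearProperty("http.proxyPort"); // make sure the unset case is exercised
        String host = prop("http.proxyHost");
        String port = prop("http.proxyPort"); // unset -> ""
        // Only route through the proxy when a host is actually configured.
        boolean useProxy = host.length() != 0;
        System.out.println(host + "|" + port + "|" + useProxy);
    }
}
```

Checking `host.length() != 0` instead of `host != null` is what makes the empty-string default safe: an unset and a blank property both fall through to the direct (non-proxied) request path.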

@@ -11,7 +11,7 @@
<!-- modification 2: compile classpath -->
<path id="compile.classpath">
<pathelement location="../../../../../../tcommon-studio-se/main/plugins/org.talend.libraries.dom4j-jaxen/lib/dom4j-1.6.1.jar" />
<pathelement location="../../../../../../tcommon-studio-se/main/plugins/org.talend.libraries.dom4j-jaxen/lib/dom4j-2.1.3.jar" />
<pathelement location="../../../../../../tcommon-studio-se/main/plugins/org.talend.libraries.apache.common/lib/commons-lang-2.6.jar" />
</path>


@@ -2,7 +2,6 @@
<project name="org.talend.designer.components.libs" default="buildall" basedir=".">
<target name="buildall">
<ant antfile="talend_file_enhanced_20070724/build.xml" target="process" inheritall="no" />
<ant antfile="sugarCRMManagement/build.xml" target="process" inheritall="no" />
<ant antfile="TalendSAX/build.xml" target="process" inheritall="no" />
</target>


@@ -3,11 +3,32 @@
<modelVersion>4.0.0</modelVersion>
<groupId>org.talend.components.lib</groupId>
<artifactId>commons-net-ftps-proxy</artifactId>
<version>3.6.1-talend-20190819</version>
<version>3.6.1-talend-20200902</version>
<name>commons-net-talend</name>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<talend.nexus.url>https://artifacts-oss.talend.com</talend.nexus.url>
<slf4.version>1.7.25</slf4.version>
<lombok.version>1.18.12</lombok.version>
</properties>
<dependencies>
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<version>${lombok.version}</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<version>${slf4.version}</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>commons-net</groupId>
<artifactId>commons-net</artifactId>
@@ -15,10 +36,6 @@
</dependency>
</dependencies>
<properties>
<talend.nexus.url>https://artifacts-oss.talend.com</talend.nexus.url>
</properties>
<distributionManagement>
<snapshotRepository>
<id>talend_nexus_deployment</id>


@@ -11,8 +11,10 @@ import javax.net.ssl.SSLSession;
import javax.net.ssl.SSLSessionContext;
import javax.net.ssl.SSLSocket;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.net.ftp.FTPSClient;
@Slf4j
public class SSLSessionReuseFTPSClient extends FTPSClient {
public SSLSessionReuseFTPSClient(boolean isImplicit, SSLContext context) {
@@ -24,6 +26,12 @@ public class SSLSessionReuseFTPSClient extends FTPSClient {
if (socket instanceof SSLSocket) {
final SSLSession session = ((SSLSocket) _socket_).getSession();
final SSLSessionContext context = session.getSessionContext();
if (context == null) {
// TDI-44654 (may be reproduced with Syncplify server)
log.info("SSL Session Context is null. SSL Session was re-initialized.");
return;
}
try {
final Field sessionHostPortCache = context.getClass().getDeclaredField("sessionHostPortCache");
sessionHostPortCache.setAccessible(true);
@@ -32,10 +40,10 @@ public class SSLSessionReuseFTPSClient extends FTPSClient {
putMethod.setAccessible(true);
InetAddress address = socket.getInetAddress();
int port = socket.getPort();
String key = String.format("%s:%s", address.getHostName(), String.valueOf(port)).toLowerCase(Locale.ROOT);
putMethod.invoke(cache, key, session);
key = String.format("%s:%s", address.getHostAddress(), String.valueOf(port)).toLowerCase(Locale.ROOT);
putMethod.invoke(cache, key, session);
} catch (Exception e) {


@@ -2,9 +2,9 @@
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>org.talend.libraries</groupId>
<groupId>org.talend.components</groupId>
<artifactId>filecopy</artifactId>
<version>2.0.0</version>
<version>2.0.3</version>
<packaging>jar</packaging>
<name>talend-copy</name>
@@ -14,6 +14,7 @@
<talend.nexus.url>https://artifacts-oss.talend.com</talend.nexus.url>
<java.source.version>1.8</java.source.version>
<junit5.version>5.4.2</junit5.version>
<slf4j.version>1.7.28</slf4j.version>
</properties>
<distributionManagement>
@@ -52,7 +53,12 @@
<version>${junit5.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<version>${slf4j.version}</version>
<scope>provided</scope>
</dependency>
</dependencies>
<build>
<plugins>


@@ -15,13 +15,21 @@ package org.talend;
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.nio.file.attribute.FileTime;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
* DOC Administrator class global comment. Detailled comment
*/
public class FileCopy {
static Logger logger = LoggerFactory.getLogger(FileCopy.class);
/** Private constructor, only static methods */
private FileCopy() {
}
@@ -34,16 +42,57 @@ public class FileCopy {
* @param delSrc : true if delete source.
* @throws IOException : if IO pb.
*/
public static void copyFile(String srcFileName, String desFileName, boolean delSrc) throws IOException {
final File source = new File(srcFileName);
final File destination = new File(desFileName);
public static void copyFile(String srcFileName, String desFileName, boolean delSrc, boolean keepModified)
throws IOException {
final Path source = Paths.get(srcFileName);
final Path destination = Paths.get(desFileName);
FileTime lastModifiedTime = null;
try {
lastModifiedTime = Files.getLastModifiedTime(source);
} catch (IOException e) {
logger.warn(e.getLocalizedMessage());
}
if (delSrc) {
// move: more efficient when source and destination are on the same FS, and must replace any existing destination file.
Files.move(source.toPath(), destination.toPath(), StandardCopyOption.REPLACE_EXISTING);
Files.move(source, destination, StandardCopyOption.REPLACE_EXISTING);
} else {
Files.copy(source.toPath(), destination.toPath(), StandardCopyOption.REPLACE_EXISTING);
Files.copy(source, destination, StandardCopyOption.REPLACE_EXISTING);
}
if (keepModified) {
try {
Files.setLastModifiedTime(destination, lastModifiedTime);
} catch (IOException e) {
logger.warn(e.getLocalizedMessage());
}
}
}
public static void copyFile(String srcFileName, String desFileName, boolean delSrc) throws IOException {
copyFile(srcFileName, desFileName, delSrc, true);
}
/**
* Force Copy and Delete files.
*
* @param srcFileName : file name for source file.
* @param desFileName : file name for destination file.
* @throws IOException : if IO pb.
*/
public static void forceCopyAndDelete(String srcFileName, String desFileName, boolean keepModified) throws IOException {
final Path source = Paths.get(srcFileName);
final Path destination = Paths.get(desFileName);
final long lastModifiedTime = new File(srcFileName).lastModified();
Files.copy(source, destination, StandardCopyOption.REPLACE_EXISTING);
Files.delete(source);
if (keepModified) {
destination.toFile().setLastModified(lastModifiedTime);
}
}
public static void forceCopyAndDelete(String srcFileName, String desFileName) throws IOException {
forceCopyAndDelete(srcFileName, desFileName, true);
}
}
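The copyFile change above captures the source file's last-modified time before copying and restores it on the destination when the new keepModified flag is set. A minimal standalone sketch of that pattern with java.nio (the class name and temp files are illustrative, not part of the commit):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.attribute.FileTime;

public class KeepMtimeCopy {
    // Copy src to dst, optionally carrying over the source's last-modified time.
    static void copy(Path src, Path dst, boolean keepModified) throws IOException {
        FileTime mtime = Files.getLastModifiedTime(src); // read before the copy
        Files.copy(src, dst, StandardCopyOption.REPLACE_EXISTING);
        if (keepModified) {
            Files.setLastModifiedTime(dst, mtime); // restore on the destination
        }
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("source", ".txt");
        Files.write(src, "hello".getBytes());
        Files.setLastModifiedTime(src, FileTime.fromMillis(324723894000L));
        Path dst = Files.createTempFile("destination", ".txt");
        copy(src, dst, true);
        System.out.println(Files.getLastModifiedTime(dst).toMillis() == 324723894000L);
    }
}
```

Reading the timestamp before the copy matters for the delSrc/move path in the commit: once Files.move has run, the source no longer exists to query.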


@@ -100,6 +100,44 @@ class FileCopyTest {
Assertions.assertEquals(referenceSize, copy.length(), "Size error");
}
@Test
void testForceCopyWithDelete() throws Exception {
final URL repCopy = Thread.currentThread().getContextClassLoader().getResource("copy");
File file = this.buildFile("fileToDelete.txt", 10L * 1024L);
file.deleteOnExit();
File copy = new File(repCopy.getPath(), "fileToDelete.txt");
long referenceSize = file.length();
if (!copy.exists()) {
copy.createNewFile();
}
copy.deleteOnExit();
FileCopy.forceCopyAndDelete(file.getPath(), copy.getPath());
Assertions.assertFalse(file.exists(), "file was not deleted");
Assertions.assertTrue(copy.exists(), "small file : original file deleted");
Assertions.assertEquals(referenceSize, copy.length(), "Size error");
}
@Test
void testLastModifiedTime() throws Exception {
final URL repCopy = Thread.currentThread().getContextClassLoader().getResource("copy");
File file = this.buildFile("fileLMT.txt", 10L * 1024L);
file.deleteOnExit();
long referenceTime = 324723894L;
file.setLastModified(referenceTime);
File copy = new File(repCopy.getPath(), "fileLMTDestination.txt");
if (copy.exists()) {
copy.delete();
}
copy.deleteOnExit();
FileCopy.copyFile(file.getPath(), copy.getPath(), true);
Assertions.assertEquals(referenceTime, copy.lastModified(), "modified time is not identical");
}
/**
* Generate a new file for testing.
*
@@ -125,4 +163,22 @@ class FileCopyTest {
return generatedFile;
}
@Test
void testKeepLastModifiedTime() throws Exception {
final URL repCopy = Thread.currentThread().getContextClassLoader().getResource("copy");
File file = this.buildFile("fileLMT.txt", 10L * 1024L);
file.deleteOnExit();
long referenceTime = 324723894L;
file.setLastModified(referenceTime);
File copy = new File(repCopy.getPath(), "fileLMTDestination.txt");
if (copy.exists()) {
copy.delete();
}
copy.deleteOnExit();
FileCopy.copyFile(file.getPath(), copy.getPath(), true, true);
Assertions.assertEquals(referenceTime, copy.lastModified(), "modified time is not identical");
}
}


@@ -11,7 +11,7 @@
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<cxf.version>3.1.2</cxf.version>
<cxf.version>3.3.10</cxf.version>
</properties>
<build>


@@ -13,9 +13,9 @@
<optional>true</optional>
</dependency>
<dependency>
<groupId>dom4j</groupId>
<groupId>org.dom4j</groupId>
<artifactId>dom4j</artifactId>
<version>1.6.1</version>
<version>2.1.3</version>
</dependency>
<dependency>
<groupId>commons-lang</groupId>


@@ -2,9 +2,9 @@
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>org.talend.libraries</groupId>
<artifactId>simpleexcel-2.2-20190722</artifactId>
<version>6.0.0</version>
<groupId>org.talend.components</groupId>
<artifactId>simpleexcel</artifactId>
<version>2.4-20200923</version>
<packaging>jar</packaging>
<name>simpleexcel</name>
@@ -13,7 +13,7 @@
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<talend.nexus.url>https://artifacts-oss.talend.com</talend.nexus.url>
<java.source.version>1.6</java.source.version>
<java.source.version>1.8</java.source.version>
</properties>
<distributionManagement>
@@ -43,48 +43,30 @@
<dependency>
<groupId>org.apache.poi</groupId>
<artifactId>poi</artifactId>
<version>4.1.0</version>
<version>4.1.2</version>
</dependency>
<dependency>
<groupId>org.apache.poi</groupId>
<artifactId>poi-scratchpad</artifactId>
<version>4.1.0</version>
<version>4.1.2</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.poi/poi-ooxml -->
<dependency>
<groupId>org.apache.poi</groupId>
<artifactId>poi-ooxml</artifactId>
<version>4.1.0</version>
<version>4.1.2</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.poi/poi-ooxml-schemas -->
<dependency>
<groupId>org.apache.poi</groupId>
<artifactId>poi-ooxml-schemas</artifactId>
<version>4.1.0</version>
<version>4.1.2</version>
</dependency>
<dependency>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
<version>1.2.17</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.geronimo.specs/geronimo-stax-api_1.0_spec -->
<dependency>
<groupId>org.apache.geronimo.specs</groupId>
<artifactId>geronimo-stax-api_1.0_spec</artifactId>
<version>1.0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/dom4j/dom4j -->
<dependency>
<groupId>dom4j</groupId>
<artifactId>dom4j</artifactId>
<version>1.6.1</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.xmlbeans/xmlbeans -->
<dependency>
<groupId>org.apache.xmlbeans</groupId>
<artifactId>xmlbeans</artifactId>
<version>3.1.0</version>
</dependency>
</dependencies>
<build>
<resources>


@@ -1,6 +1,6 @@
// ============================================================================
//
// Copyright (C) 2006-2019 Talend Inc. - www.talend.com
// Copyright (C) 2006-2020 Talend Inc. - www.talend.com
//
// This source code is available under agreement available at
// %InstallDIR%\features\org.talend.rcp.branding.%PRODUCTNAME%\%PRODUCTNAME%license.txt


@@ -1,6 +1,6 @@
// ============================================================================
//
// Copyright (C) 2006-2019 Talend Inc. - www.talend.com
// Copyright (C) 2006-2020 Talend Inc. - www.talend.com
//
// This source code is available under agreement available at
// %InstallDIR%\features\org.talend.rcp.branding.%PRODUCTNAME%\%PRODUCTNAME%license.txt


@@ -1,6 +1,6 @@
// ============================================================================
//
// Copyright (C) 2006-2019 Talend Inc. - www.talend.com
// Copyright (C) 2006-2020 Talend Inc. - www.talend.com
//
// This source code is available under agreement available at
// %InstallDIR%\features\org.talend.rcp.branding.%PRODUCTNAME%\%PRODUCTNAME%license.txt


@@ -1,6 +1,6 @@
// ============================================================================
//
// Copyright (C) 2006-2019 Talend Inc. - www.talend.com
// Copyright (C) 2006-2020 Talend Inc. - www.talend.com
//
// This source code is available under agreement available at
// %InstallDIR%\features\org.talend.rcp.branding.%PRODUCTNAME%\%PRODUCTNAME%license.txt


@@ -1,6 +1,6 @@
// ============================================================================
//
// Copyright (C) 2006-2019 Talend Inc. - www.talend.com
// Copyright (C) 2006-2020 Talend Inc. - www.talend.com
//
// This source code is available under agreement available at
// %InstallDIR%\features\org.talend.rcp.branding.%PRODUCTNAME%\%PRODUCTNAME%license.txt


@@ -1,16 +1,15 @@
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>org.talend.libraries</groupId>
<artifactId>talend-soap</artifactId>
<version>2.1-20190716</version>
<groupId>org.talend.components</groupId>
<artifactId>components-soap</artifactId>
<version>2.3-20200918</version>
<packaging>jar</packaging>
<name>talend-soap</name>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<cxf.version>3.1.1</cxf.version>
<talend.nexus.url>https://artifacts-oss.talend.com</talend.nexus.url>
</properties>
@@ -46,29 +45,24 @@
<systemPath>${java.home}/lib/rt.jar</systemPath>
</dependency>
<dependency>
<groupId>jdom</groupId>
<artifactId>jdom</artifactId>
<version>1.1</version>
<groupId>org.dom4j</groupId>
<artifactId>dom4j</artifactId>
<version>2.1.3</version>
</dependency>
<dependency>
<groupId>com.sun.xml.messaging.saaj</groupId>
<artifactId>saaj-impl</artifactId>
<version>1.3.2</version>
</dependency>
<dependency>
<groupId>javax.activation</groupId>
<artifactId>activation</artifactId>
<version>1.1</version>
<groupId>com.sun.xml.messaging.saaj</groupId>
<artifactId>saaj-impl</artifactId>
<version>1.5.2</version>
</dependency>
<dependency>
<groupId>xerces</groupId>
<artifactId>xercesImpl</artifactId>
<version>2.6.2</version>
<version>2.12.0</version>
</dependency>
<dependency>
<groupId>commons-codec</groupId>
<artifactId>commons-codec</artifactId>
<version>1.9</version>
<version>1.14</version>
</dependency>
</dependencies>
<build>
@@ -108,4 +102,4 @@
</plugin>
</plugins>
</build>
</project>
</project>


@@ -32,8 +32,7 @@ import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import org.apache.commons.codec.binary.Base64;
import org.jdom.input.DOMBuilder;
import org.jdom.output.XMLOutputter;
import org.dom4j.io.DOMReader;
import org.talend.soap.sun.SunNtlmAuthenticationUpdater;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
@@ -45,8 +44,6 @@ public class SOAPUtil {
private static final String vmVendor = System.getProperty("java.vendor.url");
private static final String ibmVmVendor = "http://www.ibm.com/";
private static final String sunVmVendor = "http://java.sun.com/";
private static final String oracleVmVendor = "http://java.oracle.com/";
@@ -140,12 +137,7 @@ public class SOAPUtil {
StreamSource preppedMsgSrc = new StreamSource(stream);
soapPart.setContent(preppedMsgSrc);
// InputStream stream = new FileInputStream(new File("d://soap.txt"));
// StreamSource preppedMsgSrc = new StreamSource(stream);
// soapPart.setContent(preppedMsgSrc);
message.saveChanges();
// Send the message
SOAPMessage reply = connection.call(message, destination);
@@ -226,7 +218,7 @@ public class SOAPUtil {
Node content;
Element headerRootElem = document.createElement("Header");
Iterator childElements = header.getChildElements();
Iterator<javax.xml.soap.Node> childElements = header.getChildElements();
org.w3c.dom.Node domNode = null;
while (childElements.hasNext()) {
domNode = (org.w3c.dom.Node) childElements.next();
@@ -245,12 +237,11 @@ public class SOAPUtil {
return reHeaderMessage;
}
private String Doc2StringWithoutDeclare(Document doc) {
DOMBuilder builder = new DOMBuilder();
org.jdom.Document jdomDoc = builder.build(doc);
XMLOutputter outputter = new XMLOutputter();
return outputter.outputString(jdomDoc.getRootElement());
}
private String Doc2StringWithoutDeclare(Document doc) {
DOMReader reader = new DOMReader();
org.dom4j.Document document = reader.read(doc);
return document.getRootElement().asXML();
}
/**
* invoke soap and return the response document
@@ -363,4 +354,4 @@ public class SOAPUtil {
headers.setHeader("Authorization", "Basic " + encodeUserInfo);
}
}
}


@@ -0,0 +1,66 @@
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>org.talend.components.lib</groupId>
<artifactId>talend-aws</artifactId>
<version>1.0</version>
<packaging>jar</packaging>
<name>talend-aws</name>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<talend.nexus.url>https://artifacts-oss.talend.com</talend.nexus.url>
<java.source.version>1.8</java.source.version>
</properties>
<distributionManagement>
<snapshotRepository>
<id>talend_nexus_deployment</id>
<url>${talend.nexus.url}/nexus/content/repositories/TalendOpenSourceSnapshot/</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
<releases>
<enabled>false</enabled>
</releases>
</snapshotRepository>
<repository>
<id>talend_nexus_deployment</id>
<url>${talend.nexus.url}/nexus/content/repositories/TalendOpenSourceRelease/</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
<releases>
<enabled>true</enabled>
</releases>
</repository>
</distributionManagement>
<dependencies>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk</artifactId>
<version>1.11.848</version>
</dependency>
</dependencies>
<build>
<resources>
<resource>
<directory>src/main/java</directory>
</resource>
</resources>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>2.3.2</version>
<configuration>
<source>${java.source.version}</source>
<target>${java.source.version}</target>
</configuration>
</plugin>
</plugins>
</build>
</project>


@@ -0,0 +1,277 @@
package org.talend.aws;
import static com.amazonaws.event.SDKProgressPublisher.publishProgress;
import java.util.Collection;
import java.util.LinkedList;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.event.ProgressEventType;
import com.amazonaws.event.ProgressListener;
import com.amazonaws.event.ProgressListenerChain;
import com.amazonaws.services.s3.model.LegacyS3ProgressListener;
import com.amazonaws.services.s3.transfer.Transfer;
import com.amazonaws.services.s3.transfer.TransferProgress;
import com.amazonaws.services.s3.transfer.internal.TransferMonitor;
import com.amazonaws.services.s3.transfer.internal.TransferStateChangeListener;
/**
* Abstract transfer implementation.
*/
public abstract class AbstractTransfer implements Transfer {
/** The current state of this transfer. */
protected volatile TransferState state = TransferState.Waiting;
protected TransferMonitor monitor;
/** The progress of this transfer. */
private final TransferProgress transferProgress;
private final String description;
/** Hook for adding/removing more progress listeners. */
protected final ProgressListenerChain listenerChain;
/** Collection of listeners to be notified for changes to the state of this transfer via setState() */
protected final Collection<TransferStateChangeListener> stateChangeListeners = new LinkedList<TransferStateChangeListener>();
AbstractTransfer(String description, TransferProgress transferProgress, ProgressListenerChain progressListenerChain) {
this(description, transferProgress, progressListenerChain, null);
}
AbstractTransfer(String description, TransferProgress transferProgress,
ProgressListenerChain progressListenerChain, TransferStateChangeListener stateChangeListener) {
this.description = description;
this.listenerChain = progressListenerChain;
this.transferProgress = transferProgress;
addStateChangeListener(stateChangeListener);
}
/**
* Returns whether or not the transfer is finished (i.e. completed successfully,
* failed, or was canceled). This method should never block.
*
* @return Returns <code>true</code> if this transfer is finished (i.e. completed successfully,
* failed, or was canceled). Returns <code>false</code> if otherwise.
*/
public final synchronized boolean isDone() {
return (state == TransferState.Failed ||
state == TransferState.Completed ||
state == TransferState.Canceled);
}
/**
* Waits for this transfer to complete. This is a blocking call; the current
* thread is suspended until this transfer completes.
*
* @throws AmazonClientException
* If any errors were encountered in the client while making the
* request or handling the response.
* @throws AmazonServiceException
* If any errors occurred in Amazon S3 while processing the
* request.
* @throws InterruptedException
* If this thread is interrupted while waiting for the transfer
* to complete.
*/
public void waitForCompletion()
throws AmazonClientException, AmazonServiceException, InterruptedException {
try {
Object result = null;
while (!monitor.isDone() || result == null) {
Future<?> f = monitor.getFuture();
result = f.get();
}
} catch (ExecutionException e) {
rethrowExecutionException(e);
}
}
/**
* Waits for this transfer to finish and returns any error that occurred, or
* returns <code>null</code> if no errors occurred.
* This is a blocking call; the current thread
* will be suspended until this transfer either fails or completes
* successfully.
*
* @return Any error that occurred while processing this transfer.
* Otherwise returns <code>null</code> if no errors occurred.
*
* @throws InterruptedException
* If this thread is interrupted while waiting for the transfer
* to complete.
*/
public AmazonClientException waitForException() throws InterruptedException {
try {
/**
* Do not remove the while loop. We need this as the future returned by
* monitor.getFuture() is set two times during the upload and copy operations.
*/
while (!monitor.isDone()) {
monitor.getFuture().get();
}
monitor.getFuture().get();
return null;
} catch (ExecutionException e) {
return unwrapExecutionException(e);
}
}
/**
* Returns a human-readable description of this transfer.
*
* @return A human-readable description of this transfer.
*/
public String getDescription() {
return description;
}
/**
* Returns the current state of this transfer.
*
* @return The current state of this transfer.
*/
public synchronized TransferState getState() {
return state;
}
/**
* Sets the current state of this transfer.
*/
public void setState(TransferState state) {
synchronized (this) {
this.state = state;
}
for ( TransferStateChangeListener listener : stateChangeListeners ) {
listener.transferStateChanged(this, state);
}
}
/**
* Notifies all the registered state change listeners of the state update.
*/
public void notifyStateChangeListeners(TransferState state) {
for ( TransferStateChangeListener listener : stateChangeListeners ) {
listener.transferStateChanged(this, state);
}
}
/**
* Adds the specified progress listener to the list of listeners
* receiving updates about this transfer's progress.
*
* @param listener
* The progress listener to add.
*/
public synchronized void addProgressListener(ProgressListener listener) {
listenerChain.addProgressListener(listener);
}
/**
* Removes the specified progress listener from the list of progress
* listeners receiving updates about this transfer's progress.
*
* @param listener
* The progress listener to remove.
*/
public synchronized void removeProgressListener(ProgressListener listener) {
listenerChain.removeProgressListener(listener);
}
/**
* @deprecated Replaced by {@link #addProgressListener(ProgressListener)}
*/
@Deprecated
public synchronized void addProgressListener(com.amazonaws.services.s3.model.ProgressListener listener) {
listenerChain.addProgressListener(new LegacyS3ProgressListener(listener));
}
/**
* @deprecated Replaced by {@link #removeProgressListener(ProgressListener)}
*/
@Deprecated
public synchronized void removeProgressListener(com.amazonaws.services.s3.model.ProgressListener listener) {
listenerChain.removeProgressListener(new LegacyS3ProgressListener(listener));
}
/**
* Adds the given state change listener to the collection of listeners.
*/
public synchronized void addStateChangeListener(TransferStateChangeListener listener) {
if ( listener != null )
stateChangeListeners.add(listener);
}
/**
* Removes the given state change listener from the collection of listeners.
*/
public synchronized void removeStateChangeListener(TransferStateChangeListener listener) {
if ( listener != null )
stateChangeListeners.remove(listener);
}
/**
* Returns progress information about this transfer.
*
* @return The progress information about this transfer.
*/
public TransferProgress getProgress() {
return transferProgress;
}
/**
* Sets the monitor used to poll for transfer completion.
*/
public void setMonitor(TransferMonitor monitor) {
this.monitor = monitor;
}
public TransferMonitor getMonitor() {
return monitor;
}
protected void fireProgressEvent(final ProgressEventType eventType) {
publishProgress(listenerChain, eventType);
}
/**
* Examines the cause of the specified ExecutionException and either
* rethrows it directly (if it's a type of AmazonClientException) or wraps
* it in an AmazonClientException and rethrows it.
*
* @param e
* The execution exception to examine.
*/
protected void rethrowExecutionException(ExecutionException e) {
throw unwrapExecutionException(e);
}
/**
* Unwraps the root exception that caused the specified ExecutionException
* and returns it. If it was not an instance of AmazonClientException, it is
* wrapped as an AmazonClientException.
*
* @param e
* The ExecutionException to unwrap.
*
* @return The root exception that caused the specified ExecutionException.
*/
protected AmazonClientException unwrapExecutionException(ExecutionException e) {
Throwable t = e;
while (t.getCause() != null && t instanceof ExecutionException) {
t = t.getCause();
}
if (t instanceof AmazonClientException) {
return (AmazonClientException) t;
}
return new AmazonClientException("Unable to complete transfer: " + t.getMessage(), t);
}
}
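The `unwrapExecutionException` helper above peels nested `ExecutionException` wrappers off until it reaches the real cause. A minimal standalone sketch of that loop (the `UnwrapDemo` class name and the demo exceptions are illustrative, not part of the source):

```java
import java.util.concurrent.ExecutionException;

public class UnwrapDemo {
    // Mirrors the unwrapping loop in AbstractTransfer: follow the cause
    // chain while the current throwable is still an ExecutionException.
    static Throwable unwrap(ExecutionException e) {
        Throwable t = e;
        while (t.getCause() != null && t instanceof ExecutionException) {
            t = t.getCause();
        }
        return t;
    }

    public static void main(String[] args) {
        IllegalStateException root = new IllegalStateException("boom");
        ExecutionException nested =
                new ExecutionException(new ExecutionException(root));
        // The doubly wrapped root cause is recovered in one call.
        System.out.println(unwrap(nested) == root); // prints true
    }
}
```

In the class itself the unwrapped throwable is additionally coerced to an `AmazonClientException` (wrapping it when it is some other type), so callers only ever see the SDK's exception hierarchy.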


@@ -0,0 +1,39 @@
package org.talend.aws;
import com.amazonaws.annotation.SdkInternalApi;
import com.amazonaws.services.s3.internal.ServiceUtils;
import com.amazonaws.services.s3.transfer.Transfer;
import java.io.File;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.Future;
/**
* Helper class to merge all the individual part files into a destinationFile.
*/
@SdkInternalApi
public class CompleteMultipartDownload implements Callable<File> {
private final List<Future<File>> partFiles;
private final File destinationFile;
private final DownloadImpl download;
private Integer currentPartNumber;
public CompleteMultipartDownload(List<Future<File>> files, File destinationFile, DownloadImpl download, Integer currentPartNumber) {
this.partFiles = files;
this.destinationFile = destinationFile;
this.download = download;
this.currentPartNumber = currentPartNumber;
}
@Override
public File call() throws Exception {
for (Future<File> file : partFiles) {
ServiceUtils.appendFile(file.get(), destinationFile);
download.updatePersistableTransfer(currentPartNumber++);
}
download.setState(Transfer.TransferState.Completed);
return destinationFile;
}
}
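`CompleteMultipartDownload` simply appends each finished part file to the destination in part-number order. A self-contained sketch of that merge step using plain `java.nio` instead of `ServiceUtils.appendFile` (the `MergeDemo` name and temp files are illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.List;

public class MergeDemo {
    // Append each part file to the destination in order, as the
    // callable does once every part future has completed.
    static void merge(List<Path> parts, Path destination) throws IOException {
        for (Path part : parts) {
            byte[] data = Files.readAllBytes(part);
            Files.write(destination, data,
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("merge");
        Path p1 = Files.write(dir.resolve("part1"), "hello ".getBytes());
        Path p2 = Files.write(dir.resolve("part2"), "world".getBytes());
        Path dst = dir.resolve("object");
        merge(List.of(p1, p2), dst);
        System.out.println(new String(Files.readAllBytes(dst))); // hello world
    }
}
```

The real callable additionally calls `download.updatePersistableTransfer(...)` after each append, so a paused transfer can resume from the last fully merged part.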


@@ -0,0 +1,60 @@
package org.talend.aws;
import java.io.IOException;
import com.amazonaws.services.s3.model.CryptoMode;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.transfer.Transfer;
import com.amazonaws.services.s3.transfer.exception.PauseException;
/**
* Represents an asynchronous download from Amazon S3.
*/
public interface Download extends Transfer {
/**
* Returns the ObjectMetadata for the object being downloaded.
*
* @return The ObjectMetadata for the object being downloaded.
*/
public ObjectMetadata getObjectMetadata();
/**
* The name of the bucket where the object is being downloaded from.
*
* @return The name of the bucket where the object is being downloaded from.
*/
public String getBucketName();
/**
* The key under which this object was stored in Amazon S3.
*
* @return The key under which this object was stored in Amazon S3.
*/
public String getKey();
/**
* Cancels this download.
*
* @throws IOException
*/
public void abort() throws IOException;
/**
* Pause the current download operation and returns the information that can
* be used to resume the download at a later time.
*
* Resuming a download would not perform ETag check as range get is
* performed for downloading the object's remaining contents.
*
* Resuming a download for an object encrypted using
* {@link CryptoMode#StrictAuthenticatedEncryption} would result in
* AmazonClientException as authenticity cannot be guaranteed for a range
* get operation.
*
* @throws PauseException
* If any errors were encountered while trying to pause the
* download.
*/
public PersistableDownload pause() throws PauseException;
}


@@ -0,0 +1,312 @@
package org.talend.aws;
import java.io.File;
import java.io.RandomAccessFile;
import java.net.SocketException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import javax.net.ssl.SSLProtocolException;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import com.amazonaws.AmazonClientException;
import com.amazonaws.SdkClientException;
import com.amazonaws.annotation.SdkInternalApi;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.internal.FileLocks;
import com.amazonaws.services.s3.internal.ServiceUtils;
import com.amazonaws.services.s3.internal.ServiceUtils.RetryableS3DownloadTask;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.services.s3.transfer.Transfer.TransferState;
import com.amazonaws.services.s3.transfer.exception.FileLockException;
import com.amazonaws.util.IOUtils;
@SdkInternalApi
final class DownloadCallable implements Callable<File> {
private static final Log LOG = LogFactory.getLog(DownloadCallable.class);
private final AmazonS3 s3;
private final CountDownLatch latch;
private final GetObjectRequest req;
private final boolean resumeExistingDownload;
private final DownloadImpl download;
private final File dstfile;
private final long origStartingByte;
private final long timeout;
private final ScheduledExecutorService timedExecutor;
/** The thread pool in which parts are downloaded. */
private final ExecutorService executor;
private final List<Future<File>> futureFiles;
private final boolean isDownloadParallel;
private Integer lastFullyMergedPartNumber;
private final boolean resumeOnRetry;
private long expectedFileLength;
DownloadCallable(AmazonS3 s3, CountDownLatch latch,
GetObjectRequest req, boolean resumeExistingDownload,
DownloadImpl download, File dstfile, long origStartingByte,
long expectedFileLength, long timeout,
ScheduledExecutorService timedExecutor,
ExecutorService executor,
Integer lastFullyDownloadedPartNumber, boolean isDownloadParallel, boolean resumeOnRetry)
{
if (s3 == null || latch == null || req == null || dstfile == null || download == null)
throw new IllegalArgumentException();
this.s3 = s3;
this.latch = latch;
this.req = req;
this.resumeExistingDownload = resumeExistingDownload;
this.download = download;
this.dstfile = dstfile;
this.origStartingByte = origStartingByte;
this.expectedFileLength = expectedFileLength;
this.timeout = timeout;
this.timedExecutor = timedExecutor;
this.executor = executor;
this.futureFiles = new ArrayList<Future<File>>();
this.lastFullyMergedPartNumber = lastFullyDownloadedPartNumber;
this.isDownloadParallel = isDownloadParallel;
this.resumeOnRetry = resumeOnRetry;
}
/**
* This method must return a non-null object, or else the existing
* implementation in {@link AbstractTransfer#waitForCompletion()}
* would block forever.
*
* @return the downloaded file
*/
@Override
public File call() throws Exception {
try {
latch.await();
if (isTimeoutEnabled()) {
timedExecutor.schedule(new Runnable() {
public void run() {
try {
if (download.getState() != TransferState.Completed) {
download.abort();
}
} catch(Exception e) {
throw new SdkClientException(
"Unable to abort download after timeout", e);
}
}
}, timeout, TimeUnit.MILLISECONDS);
}
download.setState(TransferState.InProgress);
ServiceUtils.createParentDirectoryIfNecessary(dstfile);
if (isDownloadParallel) {
downloadInParallel(ServiceUtils.getPartCount(req, s3));
} else {
S3Object s3Object = retryableDownloadS3ObjectToFile(dstfile,
new DownloadTaskImpl(s3, download, req));
updateDownloadStatus(s3Object);
}
return dstfile;
} catch (Throwable t) {
// Cancel all the futures
for (Future<File> f : futureFiles) {
f.cancel(true);
}
// Downloads aren't allowed to move from canceled to failed
if (download.getState() != TransferState.Canceled) {
download.setState(TransferState.Failed);
}
if (t instanceof Exception)
throw (Exception) t;
else
throw (Error) t;
}
}
/**
* Takes the result from serial download,
* updates the transfer state and monitor in downloadImpl object
* based on the result.
*/
private void updateDownloadStatus(S3Object result) {
if (result == null) {
download.setState(TransferState.Canceled);
download.setMonitor(new DownloadMonitor(download, null));
} else {
download.setState(TransferState.Completed);
}
}
/**
* Downloads each part of the object into a separate file synchronously and
* combines all the files into a single file.
*/
private void downloadInParallel(int partCount) throws Exception {
if (lastFullyMergedPartNumber == null) {
lastFullyMergedPartNumber = 0;
}
for (int i = lastFullyMergedPartNumber + 1; i <= partCount; i++) {
GetObjectRequest getPartRequest = new GetObjectRequest(req.getBucketName(), req.getKey(),
req.getVersionId()).withUnmodifiedSinceConstraint(req.getUnmodifiedSinceConstraint())
.withModifiedSinceConstraint(req.getModifiedSinceConstraint())
.withResponseHeaders(req.getResponseHeaders()).withSSECustomerKey(req.getSSECustomerKey())
.withGeneralProgressListener(req.getGeneralProgressListener());
getPartRequest.setMatchingETagConstraints(req.getMatchingETagConstraints());
getPartRequest.setNonmatchingETagConstraints(req.getNonmatchingETagConstraints());
getPartRequest.setRequesterPays(req.isRequesterPays());
futureFiles.add(
executor.submit(new DownloadPartCallable(s3, getPartRequest.withPartNumber(i), dstfile)));
}
truncateDestinationFileIfNecessary();
Future<File> future = executor.submit(new CompleteMultipartDownload(futureFiles, dstfile, download, ++lastFullyMergedPartNumber));
((DownloadMonitor) download.getMonitor()).setFuture(future);
}
/**
* If only partial part object is merged into the dstFile(due to pause
* operation), adjust the file length so that the part starts writing from
* the correct position.
*/
private void truncateDestinationFileIfNecessary() {
RandomAccessFile raf = null;
if (!FileLocks.lock(dstfile)) {
throw new FileLockException("Fail to lock " + dstfile);
}
try {
raf = new RandomAccessFile(dstfile, "rw");
if (lastFullyMergedPartNumber == 0) {
raf.setLength(0);
} else {
long lastByte = ServiceUtils.getLastByteInPart(s3, req, lastFullyMergedPartNumber);
if (dstfile.length() < lastByte) {
throw new SdkClientException(
"File " + dstfile.getAbsolutePath() + " has been modified since last pause.");
}
raf.setLength(lastByte + 1);
download.getProgress().updateProgress(lastByte + 1);
}
} catch (Exception e) {
throw new SdkClientException("Unable to append part file to dstfile " + e.getMessage(), e);
} finally {
IOUtils.closeQuietly(raf, LOG);
FileLocks.unlock(dstfile);
}
}
/**
* This method is called only if it is a resumed download.
*
* Adjust the range of the get request, and the expected (ie current) file
* length of the destination file to append to.
*/
private void adjustRequest(GetObjectRequest req) {
long[] range = req.getRange();
long lastByte = range[1];
long totalBytesToDownload = lastByte - this.origStartingByte + 1;
if (dstfile.exists()) {
if (!FileLocks.lock(dstfile)) {
throw new FileLockException("Fail to lock " + dstfile
+ " for range adjustment");
}
try {
expectedFileLength = dstfile.length();
long startingByte = this.origStartingByte + expectedFileLength;
LOG.info("Adjusting request range from " + Arrays.toString(range)
+ " to "
+ Arrays.toString(new long[] { startingByte, lastByte })
+ " for file " + dstfile);
req.setRange(startingByte, lastByte);
totalBytesToDownload = lastByte - startingByte + 1;
} finally {
FileLocks.unlock(dstfile);
}
}
if (totalBytesToDownload < 0) {
throw new IllegalArgumentException(
"Unable to determine the range for download operation. lastByte="
+ lastByte + ", origStartingByte=" + origStartingByte
+ ", expectedFileLength=" + expectedFileLength
+ ", totalBytesToDownload=" + totalBytesToDownload);
}
}
private S3Object retryableDownloadS3ObjectToFile(File file,
RetryableS3DownloadTask retryableS3DownloadTask) {
boolean hasRetried = false;
S3Object s3Object;
for (;;) {
final boolean appendData = resumeExistingDownload || (resumeOnRetry && hasRetried);
if (appendData && hasRetried) {
// Need to adjust the get range or else we risk corrupting the downloaded file
adjustRequest(req);
}
s3Object = retryableS3DownloadTask.getS3ObjectStream();
if (s3Object == null)
return null;
try {
if (testing && resumeExistingDownload && !hasRetried) {
throw new SdkClientException("testing");
}
ServiceUtils.downloadToFile(s3Object, file,
retryableS3DownloadTask.needIntegrityCheck(),
appendData, expectedFileLength);
return s3Object;
} catch (AmazonClientException ace) {
if (!ace.isRetryable())
throw ace;
// Determine whether an immediate retry is needed according to the captured SdkClientException.
// (There are three cases when downloadObjectToFile() throws SdkClientException:
// 1) SocketException or SSLProtocolException when writing to disk (e.g. when user aborts the download)
// 2) Other IOException when writing to disk
// 3) MD5 hashes don't match
// For 1) If SocketException is the result of the client side resetting the connection, this is retried
// Cases 2) and 3) will always be retried
final Throwable cause = ace.getCause();
if ((cause instanceof SocketException && !cause.getMessage().equals("Connection reset"))
|| (cause instanceof SSLProtocolException)) {
throw ace;
} else {
if (hasRetried)
throw ace;
else {
LOG.info("Retry the download of object " + s3Object.getKey() + " (bucket " + s3Object.getBucketName() + ")", ace);
hasRetried = true;
}
}
} finally {
s3Object.getObjectContent().abort();
}
}
}
private boolean isTimeoutEnabled() {
return timeout > 0;
}
private static boolean testing;
/**
* Used for testing purpose only.
*/
static void setTesting(boolean b) {
testing = b;
}
}
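The retry decision in `retryableDownloadS3ObjectToFile` can be isolated as a small predicate: an `SSLProtocolException`, or a `SocketException` whose message is anything other than a client-side `"Connection reset"`, is rethrown immediately; every other retryable cause gets exactly one retry. A sketch of that rule (the `RetryPolicyDemo` name is illustrative):

```java
import java.net.SocketException;
import javax.net.ssl.SSLProtocolException;

public class RetryPolicyDemo {
    // Mirrors the cause check in DownloadCallable: these causes are
    // rethrown without retrying; anything else is retried once.
    static boolean isImmediatelyFatal(Throwable cause) {
        if (cause instanceof SSLProtocolException) {
            return true;
        }
        return cause instanceof SocketException
                && !"Connection reset".equals(cause.getMessage());
    }

    public static void main(String[] args) {
        // Client-side connection reset: retried.
        System.out.println(isImmediatelyFatal(new SocketException("Connection reset"))); // false
        // Any other socket failure: rethrown.
        System.out.println(isImmediatelyFatal(new SocketException("Broken pipe")));      // true
        System.out.println(isImmediatelyFatal(new SSLProtocolException("bad record")));  // true
    }
}
```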


@@ -0,0 +1,202 @@
package org.talend.aws;
import java.io.File;
import java.io.IOException;
import com.amazonaws.annotation.SdkInternalApi;
import com.amazonaws.event.ProgressEventType;
import com.amazonaws.event.ProgressListenerChain;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.services.s3.transfer.TransferProgress;
import com.amazonaws.services.s3.transfer.exception.PauseException;
import com.amazonaws.services.s3.transfer.internal.S3ProgressPublisher;
import com.amazonaws.services.s3.transfer.internal.TransferManagerUtils;
import com.amazonaws.services.s3.transfer.internal.TransferStateChangeListener;
public class DownloadImpl extends AbstractTransfer implements Download {
private S3Object s3Object;
/**
* Information to resume if the download is paused.
*/
private PersistableDownload persistableDownload;
/**
* The last part that has been successfully written into the downloaded file.
*/
private Integer lastFullyDownloadedPartNumber;
private final GetObjectRequest getObjectRequest;
private final File file;
private final ObjectMetadata objectMetadata;
private final ProgressListenerChain progressListenerChain;
@Deprecated
public DownloadImpl(String description, TransferProgress transferProgress,
ProgressListenerChain progressListenerChain, S3Object s3Object, TransferStateChangeListener listener,
GetObjectRequest getObjectRequest, File file) {
this(description, transferProgress, progressListenerChain, s3Object, listener,
getObjectRequest, file, null, false);
}
public DownloadImpl(String description, TransferProgress transferProgress,
ProgressListenerChain progressListenerChain, S3Object s3Object, TransferStateChangeListener listener,
GetObjectRequest getObjectRequest, File file,
ObjectMetadata objectMetadata, boolean isDownloadParallel) {
super(description, transferProgress, progressListenerChain, listener);
this.s3Object = s3Object;
this.objectMetadata = objectMetadata;
this.getObjectRequest = getObjectRequest;
this.file = file;
this.progressListenerChain = progressListenerChain;
this.persistableDownload = captureDownloadState(getObjectRequest, file);
S3ProgressPublisher.publishTransferPersistable(progressListenerChain, persistableDownload);
}
/**
* Returns the ObjectMetadata for the object being downloaded.
*
* @return The ObjectMetadata for the object being downloaded.
*/
public synchronized ObjectMetadata getObjectMetadata() {
if (s3Object != null) {
return s3Object.getObjectMetadata();
}
return objectMetadata;
}
/**
* The name of the bucket where the object is being downloaded from.
*
* @return The name of the bucket where the object is being downloaded from.
*/
public String getBucketName() {
return getObjectRequest.getBucketName();
}
/**
* The key under which this object was stored in Amazon S3.
*
* @return The key under which this object was stored in Amazon S3.
*/
public String getKey() {
return getObjectRequest.getKey();
}
/**
* Only for internal use.
* For parallel downloads, updates the persistableTransfer each time a
* part is successfully merged into download file.
* Then notify the listeners that new persistableTransfer is available.
*/
@SdkInternalApi
public void updatePersistableTransfer(Integer lastFullyDownloadedPartNumber) {
synchronized (this) {
this.lastFullyDownloadedPartNumber = lastFullyDownloadedPartNumber;
}
persistableDownload = captureDownloadState(getObjectRequest, file);
S3ProgressPublisher.publishTransferPersistable(progressListenerChain, persistableDownload);
}
/**
* For parallel downloads, returns the last part number that was
* successfully written into the download file.
* Returns null for serial downloads.
*/
public synchronized Integer getLastFullyDownloadedPartNumber() {
return lastFullyDownloadedPartNumber;
}
/**
* Cancels this download.
*
* @throws IOException
*/
public synchronized void abort() throws IOException {
this.monitor.getFuture().cancel(true);
if ( s3Object != null ) {
s3Object.getObjectContent().abort();
}
setState(TransferState.Canceled);
}
/**
* Cancels this download, but skip notifying the state change listeners.
*
* @throws IOException
*/
public synchronized void abortWithoutNotifyingStateChangeListener() throws IOException {
this.monitor.getFuture().cancel(true);
this.state = TransferState.Canceled;
}
/**
* Set the S3 object to download.
*/
public synchronized void setS3Object(S3Object s3Object) {
this.s3Object = s3Object;
}
/**
* This method is also responsible for firing COMPLETED signal to the
* listeners.
*/
@Override
public void setState(TransferState state) {
super.setState(state);
switch (state) {
case Completed :
fireProgressEvent(ProgressEventType.TRANSFER_COMPLETED_EVENT);
break;
case Canceled:
fireProgressEvent(ProgressEventType.TRANSFER_CANCELED_EVENT);
break;
case Failed:
fireProgressEvent(ProgressEventType.TRANSFER_FAILED_EVENT);
break;
default:
break;
}
}
/**
* Returns the captured state of the download; or null if it should not be
* captured (for security reason).
*/
private PersistableDownload captureDownloadState(
final GetObjectRequest getObjectRequest, final File file) {
if (getObjectRequest.getSSECustomerKey() == null) {
return new PersistableDownload(
getObjectRequest.getBucketName(), getObjectRequest.getKey(),
getObjectRequest.getVersionId(), getObjectRequest.getRange(),
getObjectRequest.getResponseHeaders(), getObjectRequest.isRequesterPays(),
file.getAbsolutePath(), getLastFullyDownloadedPartNumber(),
getObjectMetadata().getLastModified().getTime());
}
return null;
}
/*
* (non-Javadoc)
*
* @see com.amazonaws.services.s3.transfer.Download#pause()
*/
@Override
public PersistableDownload pause() throws PauseException {
boolean forceCancel = true;
TransferState currentState = getState();
this.monitor.getFuture().cancel(true);
if (persistableDownload == null) {
throw new PauseException(TransferManagerUtils.determinePauseStatus(
currentState, forceCancel));
}
return persistableDownload;
}
}


@@ -0,0 +1,30 @@
package org.talend.aws;
import com.amazonaws.services.s3.transfer.internal.TransferMonitor;
import java.util.concurrent.Future;
public class DownloadMonitor implements TransferMonitor {
private Future<?> future;
private final DownloadImpl download;
public DownloadMonitor(DownloadImpl download, Future<?> future) {
this.download = download;
this.future = future;
}
@Override
public synchronized Future<?> getFuture() {
return future;
}
public synchronized void setFuture(Future<?> future) {
this.future = future;
}
@Override
public boolean isDone() {
return download.isDone();
}
}


@@ -0,0 +1,52 @@
package org.talend.aws;
import com.amazonaws.util.StringUtils;
import java.io.File;
import java.util.UUID;
import java.util.concurrent.Callable;
import com.amazonaws.SdkClientException;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.GetObjectRequest;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
/**
* Helper class to get a part from s3,
* write the part data to a temporary file and
* return the temporary file.
*/
public class DownloadPartCallable implements Callable<File> {
private static final Log LOG = LogFactory.getLog(DownloadPartCallable.class);
private static final String TEMP_FILE_MIDDLE_NAME = ".part.";
private final AmazonS3 s3;
private final GetObjectRequest getPartRequest;
private final File destinationFile;
private final String destinationFilePath;
public DownloadPartCallable(AmazonS3 s3, GetObjectRequest getPartRequest, File destinationFile) {
this.s3 = s3;
this.getPartRequest = getPartRequest;
this.destinationFile = destinationFile;
this.destinationFilePath = destinationFile.getAbsolutePath();
}
public File call() throws Exception {
final File partFile = File.createTempFile(
UUID.nameUUIDFromBytes(destinationFile.getName().getBytes(StringUtils.UTF8)).toString(),
TEMP_FILE_MIDDLE_NAME + getPartRequest.getPartNumber().toString(),
new File(destinationFilePath.substring(0, destinationFilePath.lastIndexOf(File.separator))));
try {
partFile.deleteOnExit();
} catch (SecurityException exception) {
LOG.warn("SecurityException denied delete access to file " + partFile.getAbsolutePath());
}
if (s3.getObject(getPartRequest, partFile) == null) {
throw new SdkClientException(
"There is no object in S3 satisfying this request. The getObject method returned null");
}
return partFile;
}
}
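The temp-file name built in `call()` starts with a name-based UUID derived from the destination file's name, so every part of the same download shares a deterministic prefix. A minimal sketch of just that naming step (the `PartFileNameDemo` class and file name are illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.util.UUID;

public class PartFileNameDemo {
    // Same prefix computation as DownloadPartCallable: a type-3 UUID
    // from the destination file name's UTF-8 bytes.
    static String partFilePrefix(String destinationFileName) {
        return UUID.nameUUIDFromBytes(
                destinationFileName.getBytes(StandardCharsets.UTF_8)).toString();
    }

    public static void main(String[] args) {
        // Deterministic: the same destination name always yields the
        // same prefix, so parts of one download group together on disk.
        String a = partFilePrefix("report.csv");
        String b = partFilePrefix("report.csv");
        System.out.println(a.equals(b)); // true
    }
}
```

The full temp name in the callable then appends `".part." + partNumber` and places the file in the destination file's parent directory.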


@@ -0,0 +1,37 @@
package org.talend.aws;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Encryption;
import com.amazonaws.services.s3.internal.ServiceUtils;
import com.amazonaws.services.s3.internal.SkipMd5CheckStrategy;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;
final class DownloadTaskImpl implements
ServiceUtils.RetryableS3DownloadTask
{
private final AmazonS3 s3;
private final DownloadImpl download;
private final GetObjectRequest getObjectRequest;
private final SkipMd5CheckStrategy skipMd5CheckStrategy = SkipMd5CheckStrategy.INSTANCE;
DownloadTaskImpl(AmazonS3 s3, DownloadImpl download,
GetObjectRequest getObjectRequest) {
this.s3 = s3;
this.download = download;
this.getObjectRequest = getObjectRequest;
}
@Override
public S3Object getS3ObjectStream() {
S3Object s3Object = s3.getObject(getObjectRequest);
download.setS3Object(s3Object);
return s3Object;
}
@Override
public boolean needIntegrityCheck() {
// Don't perform the integrity check if the checksum won't match up.
return !(s3 instanceof AmazonS3Encryption) && !skipMd5CheckStrategy.skipClientSideValidationPerRequest(getObjectRequest);
}
}
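
DownloadTaskImpl plugs into ServiceUtils.RetryableS3DownloadTask, whose contract is: fetch the stream, optionally verify integrity, and retry once if the verification fails. A dependency-free sketch of that retry contract (the names and the checksum stand-in here are illustrative, not the SDK's):

```java
import java.util.function.Supplier;

/** Minimal sketch of a retry-once-on-integrity-failure download loop. */
public class RetryableDownloadSketch {

    /**
     * Fetches content via the supplier; when the integrity check is enabled and
     * the checksum does not match, retries exactly once before giving up.
     */
    public static String download(Supplier<String> fetch, boolean needIntegrityCheck,
                                  String expectedChecksum) {
        for (int attempt = 0; attempt < 2; attempt++) {
            String content = fetch.get();
            if (!needIntegrityCheck || checksum(content).equals(expectedChecksum)) {
                return content;
            }
        }
        throw new IllegalStateException("integrity check failed after retry");
    }

    // Stand-in for an MD5 comparison; a real client hashes the response bytes.
    static String checksum(String content) {
        return Integer.toHexString(content.hashCode());
    }
}
```

This mirrors why needIntegrityCheck() returns false for encryption clients: with client-side encryption the downloaded bytes cannot match the stored checksum, so the verify-and-retry step is skipped.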

View File

@@ -0,0 +1,159 @@
package org.talend.aws;
import com.amazonaws.services.s3.model.ResponseHeaderOverrides;
import com.amazonaws.services.s3.transfer.PersistableTransfer;
import com.fasterxml.jackson.annotation.JsonProperty;
/**
* An opaque token that holds some private state and can be used to resume a
* paused download operation.
*/
public final class PersistableDownload extends PersistableTransfer {
static final String TYPE = "download";
@JsonProperty
private final String pauseType = TYPE;
/** The bucket name in Amazon S3 from where the object has to be downloaded. */
@JsonProperty
private final String bucketName;
/** The name of the object in Amazon S3 that has to be downloaded. */
@JsonProperty
private final String key;
/** The version id of the object in Amazon S3 to download. */
@JsonProperty
private final String versionId;
/** Optional member indicating the byte range of data to retrieve */
@JsonProperty
private final long[] range;
/**
* Optional field that overrides headers on the response.
*/
@JsonProperty
private final ResponseHeaderOverrides responseHeaders;
/**
* If enabled, the requester is charged for downloading the data from
* Requester Pays Buckets.
*/
@JsonProperty
private final boolean isRequesterPays;
/**
* File where the downloaded data is written.
*/
@JsonProperty
private final String file;
/**
* The last part that has been successfully written into the downloaded file.
*/
@JsonProperty
private final Integer lastFullyDownloadedPartNumber;
/**
* Last Modified/created time on Amazon S3 for this object.
*/
@JsonProperty
private final long lastModifiedTime;
public PersistableDownload() {
this(null, null, null, null, null, false, null, null, 0L);
}
public PersistableDownload(
@JsonProperty(value = "bucketName") String bucketName,
@JsonProperty(value = "key") String key,
@JsonProperty(value = "versionId") String versionId,
@JsonProperty(value = "range") long[] range,
@JsonProperty(value = "responseHeaders") ResponseHeaderOverrides responseHeaders,
@JsonProperty(value = "isRequesterPays") boolean isRequesterPays,
@JsonProperty(value = "file") String file,
@JsonProperty(value = "lastFullyDownloadedPartNumber") Integer lastFullyDownloadedPartNumber,
@JsonProperty(value = "lastModifiedTime") long lastModifiedTime) {
this.bucketName = bucketName;
this.key = key;
this.versionId = versionId;
this.range = range == null ? null : range.clone();
this.responseHeaders = responseHeaders;
this.isRequesterPays = isRequesterPays;
this.file = file;
this.lastFullyDownloadedPartNumber = lastFullyDownloadedPartNumber;
this.lastModifiedTime = lastModifiedTime;
}
/**
* Returns the name of the bucket.
*/
String getBucketName() {
return bucketName;
}
/**
* Returns the name of the object.
*/
String getKey() {
return key;
}
/**
* Returns the version id of the object.
*/
String getVersionId() {
return versionId;
}
/**
* Returns the byte range of the object to download.
*/
long[] getRange() {
return range == null ? null : range.clone();
}
/**
* Returns the optional response headers.
*/
ResponseHeaderOverrides getResponseHeaders() {
return responseHeaders;
}
/**
* Returns true if RequesterPays is enabled on the Amazon S3 bucket else
* false.
*/
boolean isRequesterPays() {
return isRequesterPays;
}
/**
* Returns the file where the object is to be downloaded.
*/
String getFile() {
return file;
}
String getPauseType() {
return pauseType;
}
/**
* Returns the last part number that was successfully written into the downloaded file.
*/
Integer getLastFullyDownloadedPartNumber() {
return lastFullyDownloadedPartNumber;
}
/**
* Returns the last modified/created time of the object represented by
* the bucketName and key.
*/
Long getlastModifiedTime() {
return lastModifiedTime;
}
}
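
PersistableDownload clones the range array both in the constructor and in getRange(), so callers can never mutate the token's private state. That defensive-copy pattern, isolated into a minimal sketch (PauseToken is an illustrative name, not an SDK type):

```java
/** Immutable pause token; the byte range is defensively copied in and out. */
public class PauseToken {

    private final String key;
    private final long[] range;

    public PauseToken(String key, long[] range) {
        this.key = key;
        // Clone on the way in: later mutation of the caller's array is harmless.
        this.range = range == null ? null : range.clone();
    }

    public String getKey() {
        return key;
    }

    /** Returns a copy: mutating the result never changes the token. */
    public long[] getRange() {
        return range == null ? null : range.clone();
    }
}
```

Without the clones, a resumed download could silently observe a byte range the caller changed after pausing, which is exactly what an opaque resume token must prevent.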

View File

@@ -0,0 +1,17 @@
package org.talend.aws;
import com.amazonaws.event.ProgressEvent;
import com.amazonaws.event.ProgressEventFilter;
import com.amazonaws.event.ProgressEventType;
final class TransferCompletionFilter implements ProgressEventFilter {
@Override
public ProgressEvent filter(ProgressEvent progressEvent) {
// Block COMPLETE events from the low-level GetObject operation,
// but we still want to keep the BytesTransferred
return progressEvent.getEventType() == ProgressEventType.TRANSFER_COMPLETED_EVENT
? null // discard this event
: progressEvent
;
}
}
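
TransferCompletionFilter suppresses only the low-level TRANSFER_COMPLETED_EVENT so the high-level download can fire its own completion signal once everything is written. The same discard-by-returning-null contract, sketched without the AWS types (the enum and Event class here are illustrative):

```java
/** Sketch of a ProgressEventFilter-style hook: return null to drop an event. */
public class CompletionFilterSketch {

    enum EventType { BYTES_TRANSFERRED, TRANSFER_COMPLETED }

    static class Event {
        final EventType type;
        final long bytes;

        Event(EventType type, long bytes) {
            this.type = type;
            this.bytes = bytes;
        }
    }

    /** Drops COMPLETED events; passes everything else through unchanged. */
    public static Event filter(Event e) {
        return e.type == EventType.TRANSFER_COMPLETED ? null : e;
    }
}
```

BytesTransferred events pass through untouched, which is why progress reporting keeps working even though the low-level completion is swallowed.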

View File

@@ -0,0 +1,233 @@
package org.talend.aws;
import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonWebServiceRequest;
import com.amazonaws.event.ProgressListenerChain;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.internal.FileLocks;
import com.amazonaws.services.s3.internal.RequestCopyUtils;
import com.amazonaws.services.s3.internal.ServiceUtils;
import com.amazonaws.services.s3.model.GetObjectMetadataRequest;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.transfer.TransferManagerConfiguration;
import com.amazonaws.services.s3.transfer.TransferProgress;
import com.amazonaws.services.s3.transfer.exception.FileLockException;
import com.amazonaws.services.s3.transfer.internal.S3ProgressListener;
import com.amazonaws.services.s3.transfer.internal.S3ProgressListenerChain;
import com.amazonaws.services.s3.transfer.internal.TransferManagerUtils;
import com.amazonaws.services.s3.transfer.internal.TransferStateChangeListener;
import com.amazonaws.services.s3.transfer.internal.TransferProgressUpdatingListener;
import com.amazonaws.util.VersionInfoUtils;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import java.io.File;
import java.util.Date;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;
public class TransferManager {
private static final Log log = LogFactory.getLog(TransferManager.class);
private final AmazonS3 s3;
private final ExecutorService executorService;
private final TransferManagerConfiguration configuration;
private final boolean shutDownThreadPools;
public TransferManager(AmazonS3 s3) {
this.s3 = s3;
this.executorService = TransferManagerUtils.createDefaultExecutorService();
this.configuration = resolveConfiguration();
this.shutDownThreadPools = true;
}
private TransferManagerConfiguration resolveConfiguration() {
TransferManagerConfiguration configuration = new TransferManagerConfiguration();
configuration.setDisableParallelDownloads(false);
return configuration;
}
public Download download(GetObjectRequest getObjectRequest, File file, S3ProgressListener progressListener,
long timeoutMillis, boolean resumeOnRetry) {
return doDownload(getObjectRequest, file, null, progressListener, ServiceUtils.OVERWRITE_MODE, timeoutMillis, null, 0L,
resumeOnRetry);
}
private Download doDownload(final GetObjectRequest getObjectRequest,
final File file, final TransferStateChangeListener stateListener,
final S3ProgressListener s3progressListener,
final boolean resumeExistingDownload,
final long timeoutMillis,
final Integer lastFullyDownloadedPart,
final long lastModifiedTimeRecordedDuringPause,
final boolean resumeOnRetry)
{
assertParameterNotNull(getObjectRequest,
"A valid GetObjectRequest must be provided to initiate download");
assertParameterNotNull(file,
"A valid file must be provided to download into");
appendSingleObjectUserAgent(getObjectRequest);
String description = "Downloading from " + getObjectRequest.getBucketName() + "/" + getObjectRequest.getKey();
TransferProgress transferProgress = new TransferProgress();
// S3 progress listener to capture the persistable transfer when available
S3ProgressListenerChain listenerChain = new S3ProgressListenerChain(
// The listener for updating transfer progress
new TransferProgressUpdatingListener(transferProgress),
getObjectRequest.getGeneralProgressListener(),
s3progressListener); // Listeners included in the original request
// The listener chain used by the low-level GetObject request.
// This listener chain ignores any COMPLETE event, so that we could
// delay firing the signal until the high-level download fully finishes.
getObjectRequest
.setGeneralProgressListener(new ProgressListenerChain(new TransferCompletionFilter(), listenerChain));
GetObjectMetadataRequest getObjectMetadataRequest = RequestCopyUtils.createGetObjectMetadataRequestFrom(getObjectRequest);
final ObjectMetadata objectMetadata = s3.getObjectMetadata(getObjectMetadataRequest);
// Used to check if the object is modified between pause and resume
long lastModifiedTime = objectMetadata.getLastModified().getTime();
long startingByte = 0;
long lastByte;
long[] range = getObjectRequest.getRange();
if (range != null && range.length == 2) {
startingByte = range[0];
lastByte = range[1];
} else {
lastByte = objectMetadata.getContentLength() - 1;
}
final long origStartingByte = startingByte;
final boolean isDownloadParallel = !configuration.isDisableParallelDownloads()
&& TransferManagerUtils.isDownloadParallelizable(s3, getObjectRequest, ServiceUtils.getPartCount(getObjectRequest, s3));
// We still pass the unfiltered listener chain into DownloadImpl
final DownloadImpl download = new DownloadImpl(description, transferProgress, listenerChain, null,
stateListener, getObjectRequest, file, objectMetadata, isDownloadParallel);
long totalBytesToDownload = lastByte - startingByte + 1;
transferProgress.setTotalBytesToTransfer(totalBytesToDownload);
// Range information is needed for auto retry of downloads so a retry
// request can start at the last downloaded location in the range.
//
// For obvious reasons, setting a Range header only makes sense if the
// object actually has content because it's inclusive, otherwise S3
// responds with 4xx
//
// In addition, we only set the range if the download was *NOT*
// determined to be parallelizable above. One of the conditions for
// parallel downloads is that getRange() returns null so preserve that.
if (totalBytesToDownload > 0 && !isDownloadParallel) {
getObjectRequest.withRange(startingByte, lastByte);
}
long fileLength = -1;
if (resumeExistingDownload) {
if (isS3ObjectModifiedSincePause(lastModifiedTime, lastModifiedTimeRecordedDuringPause)) {
throw new AmazonClientException("The requested object in bucket " + getObjectRequest.getBucketName()
+ " with key " + getObjectRequest.getKey() + " is modified on Amazon S3 since the last pause.");
}
// There's still a chance the object is modified while the request
// is in flight. Set this header so S3 fails the request if this happens.
getObjectRequest.setUnmodifiedSinceConstraint(new Date(lastModifiedTime));
if (!isDownloadParallel) {
if (!FileLocks.lock(file)) {
throw new FileLockException("Failed to lock " + file + " for resume download");
}
try {
if (file.exists()) {
fileLength = file.length();
startingByte = startingByte + fileLength;
getObjectRequest.setRange(startingByte, lastByte);
transferProgress.updateProgress(Math.min(fileLength, totalBytesToDownload));
totalBytesToDownload = lastByte - startingByte + 1;
if (log.isDebugEnabled()) {
log.debug("Resume download: totalBytesToDownload=" + totalBytesToDownload
+ ", origStartingByte=" + origStartingByte + ", startingByte=" + startingByte
+ ", lastByte=" + lastByte + ", numberOfBytesRead=" + fileLength + ", file: "
+ file);
}
}
} finally {
FileLocks.unlock(file);
}
}
}
if (totalBytesToDownload < 0) {
throw new IllegalArgumentException(
"Unable to determine the range for download operation.");
}
final CountDownLatch latch = new CountDownLatch(1);
Future<?> future = executorService.submit(
new DownloadCallable(s3, latch,
getObjectRequest, resumeExistingDownload,
download, file, origStartingByte, fileLength, timeoutMillis, timedThreadPool,
executorService, lastFullyDownloadedPart, isDownloadParallel, resumeOnRetry));
download.setMonitor(new DownloadMonitor(download, future));
latch.countDown();
return download;
}
public void shutdownNow(boolean shutDownS3Client) {
if (shutDownThreadPools) {
executorService.shutdownNow();
timedThreadPool.shutdownNow();
}
if (shutDownS3Client) {
s3.shutdown();
}
}
private void assertParameterNotNull(Object parameterValue, String errorMessage) {
if (parameterValue == null) throw new IllegalArgumentException(errorMessage);
}
public static <X extends AmazonWebServiceRequest> X appendSingleObjectUserAgent(X request) {
request.getRequestClientOptions().appendUserAgent(USER_AGENT);
return request;
}
private static final String USER_AGENT = TransferManager.class.getName() + "/" + VersionInfoUtils.getVersion();
private boolean isS3ObjectModifiedSincePause(final long lastModifiedTimeRecordedDuringResume,
long lastModifiedTimeRecordedDuringPause) {
return lastModifiedTimeRecordedDuringResume != lastModifiedTimeRecordedDuringPause;
}
private final ScheduledExecutorService timedThreadPool = new ScheduledThreadPoolExecutor(1, daemonThreadFactory);
private static final ThreadFactory daemonThreadFactory = new ThreadFactory() {
final AtomicInteger threadCount = new AtomicInteger( 0 );
public Thread newThread(Runnable r) {
int threadNumber = threadCount.incrementAndGet();
Thread thread = new Thread(r);
thread.setDaemon(true);
thread.setName("S3TransferManagerTimedThread-" + threadNumber);
return thread;
}
};
@Override
protected void finalize() throws Throwable {
try {
shutdownThreadPools();
} finally {
super.finalize();
}
}
private void shutdownThreadPools() {
if (shutDownThreadPools) {
executorService.shutdown();
timedThreadPool.shutdown();
}
}
}
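
doDownload derives the byte range from the object length (lastByte is inclusive), then on resume shifts startingByte forward by the partial file's length and recomputes totalBytesToDownload. That arithmetic, isolated into a self-contained sketch:

```java
/** Sketch of the resume-range arithmetic used by doDownload. */
public class ResumeRangeSketch {

    /**
     * Returns {startingByte, lastByte, totalBytesToDownload} for a resumed
     * download where the first fileLength bytes already exist on disk.
     */
    public static long[] resume(long contentLength, long fileLength) {
        long lastByte = contentLength - 1; // HTTP ranges are inclusive
        long startingByte = fileLength;    // skip what is already on disk
        long total = lastByte - startingByte + 1;
        return new long[] { startingByte, lastByte, total };
    }
}
```

The inclusive lastByte is also why the code only sets a Range header when totalBytesToDownload > 0: a zero-length object has no valid inclusive range, and S3 would reject the request.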

View File

@@ -7,21 +7,21 @@
<groupId>org.talend.libraries</groupId>
<artifactId>talend-codegen-utils</artifactId>
<!-- release for revert version of library -->
-<version>0.28.0</version>
+<version>0.30.0</version>
<packaging>jar</packaging>
<properties>
<avro.version>1.8.0</avro.version>
-<components.version>0.25.0-SNAPSHOT</components.version>
-<daikon.version>0.26.0-SNAPSHOT</daikon.version>
+<components.version>0.30.0</components.version>
+<daikon.version>0.31.11</daikon.version>
<hamcrest.version>1.3</hamcrest.version>
<junit.version>4.12</junit.version>
<java-formatter.plugin.version>0.1.0</java-formatter.plugin.version>
<formatter.plugin.version>1.6.0-SNAPSHOT</formatter.plugin.version>
<mockito.version>2.2.15</mockito.version>
<jacoco.plugin.version>0.7.8</jacoco.plugin.version>
-<maven.compiler.source>1.7</maven.compiler.source>
-<maven.compiler.target>1.7</maven.compiler.target>
+<maven.compiler.source>1.8</maven.compiler.source>
+<maven.compiler.target>1.8</maven.compiler.target>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<talend.nexus.url>https://artifacts-oss.talend.com</talend.nexus.url>
</properties>

View File

@@ -24,8 +24,10 @@ import java.time.temporal.ChronoUnit;
import java.util.ArrayList;
import java.util.Date;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TimeZone;
import org.apache.avro.Schema;
@@ -33,9 +35,11 @@ import org.apache.avro.Schema.Field;
import org.apache.avro.SchemaBuilder;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.IndexedRecord;
import org.apache.avro.SchemaParseException;
import org.talend.codegen.DiSchemaConstants;
import org.talend.daikon.avro.AvroUtils;
import org.talend.daikon.avro.LogicalTypeUtils;
import org.talend.daikon.avro.NameUtil;
import org.talend.daikon.avro.SchemaConstants;
/**
@@ -133,6 +137,7 @@ public class IncomingSchemaEnforcer {
}
}
//TODO remove this method, as it is no longer used anywhere in the javajet templates
/**
* Take all of the parameters from the dynamic metadata and adapt it to a field for the runtime Schema.
*
@@ -144,6 +149,62 @@ public class IncomingSchemaEnforcer {
addDynamicField(name, type, null, format, description, isNullable);
}
private Set<String> existNames;
private Map<String, String> unvalidName2ValidName;
private int index = 0;
/**
* Recreates dynamic field from parameters retrieved from DI dynamic metadata
*
* @param name dynamic field name
* @param diType di column type
* @param logicalType dynamic field logical type; could be null
* @param fieldPattern dynamic field date format
* @param description dynamic field description
* @param isNullable defines whether dynamic field may contain <code>null</code> value
* @param isKey defines whether dynamic field is key field
*/
public void addDynamicField(String name, String diType, String logicalType, String fieldPattern, String description,
boolean isNullable, boolean isKey) {
if (!needsInitDynamicColumns())
return;
Schema fieldSchema = diToAvro(diType, logicalType);
if (isNullable) {
fieldSchema = SchemaBuilder.nullable().type(fieldSchema);
}
Schema.Field field;
try {
field = new Schema.Field(name, fieldSchema, description, (Object) null);
} catch (SchemaParseException e) {
//the name contains special characters (e.g. $ and #) that fail the Avro name check;
//note: Unicode names such as Japanese pass the check, even though that is not expected
if (existNames == null) {
existNames = new HashSet<>();
unvalidName2ValidName = new HashMap<>();
}
String validName = NameUtil.correct(name, index++, existNames);
existNames.add(validName);
unvalidName2ValidName.put(name, validName);
field = new Schema.Field(validName, fieldSchema, description, (Object) null);
field.addProp(SchemaConstants.TALEND_COLUMN_DB_COLUMN_NAME, name);
}
// Set pattern for date type
if ("id_Date".equals(diType) && fieldPattern != null) {
field.addProp(SchemaConstants.TALEND_COLUMN_PATTERN, fieldPattern);
}
if (isKey) {
field.addProp(SchemaConstants.TALEND_COLUMN_IS_KEY, "true");
}
dynamicFields.add(field);
}
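
When a dynamic column name fails the Avro check, the code above falls back to NameUtil.correct, stores the original name in TALEND_COLUMN_DB_COLUMN_NAME, and keeps the unvalidName2ValidName map so put(name, value) still resolves. A rough, dependency-free sketch of that rename-and-remember idea (the underscore substitution mirrors the test data, e.g. "address#" becoming "address_"; the real NameUtil.correct may behave differently):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/** Sketch: map invalid field names to valid ones, remembering the originals. */
public class NameCorrector {

    private final Set<String> used = new HashSet<>();
    private final Map<String, String> originalToValid = new HashMap<>();
    private int index = 0;

    /** Replaces non [A-Za-z0-9_] characters with '_', deduplicating collisions. */
    public String correct(String original) {
        String candidate = original.replaceAll("[^A-Za-z0-9_]", "_");
        if (!used.add(candidate)) {
            // Two originals sanitized to the same name: disambiguate with a counter.
            candidate = candidate + "_" + (index++);
            used.add(candidate);
        }
        originalToValid.put(original, candidate);
        return candidate;
    }

    /** Lookup used on put(name, value): falls back to the name itself. */
    public String resolve(String name) {
        return originalToValid.getOrDefault(name, name);
    }
}
```

Keeping the original name alongside the corrected one is the key design choice: downstream writers can still emit the real database column name while the in-memory Avro schema stays valid.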
/**
* Recreates dynamic field from parameters retrieved from DI dynamic metadata
*
@@ -154,21 +215,10 @@ public class IncomingSchemaEnforcer {
* @param description dynamic field description
* @param isNullable defines whether dynamic field may contain <code>null</code> value
*/
+@Deprecated
public void addDynamicField(String name, String diType, String logicalType, String fieldPattern, String description,
boolean isNullable) {
-if (!needsInitDynamicColumns())
-return;
-Schema fieldSchema = diToAvro(diType, logicalType);
-if (isNullable) {
-fieldSchema = SchemaBuilder.nullable().type(fieldSchema);
-}
-Schema.Field field = new Schema.Field(name, fieldSchema, description, (Object) null);
-// Set pattern for date type
-if ("id_Date".equals(diType) && fieldPattern != null) {
-field.addProp(SchemaConstants.TALEND_COLUMN_PATTERN, fieldPattern);
-}
-dynamicFields.add(field);
+addDynamicField(name, diType, logicalType, fieldPattern, description, isNullable, false);
}
public void addIncomingNodeField(String name, String className) {
@@ -250,6 +300,8 @@ public class IncomingSchemaEnforcer {
fieldSchema = AvroUtils._decimal();
} else if ("id_Date".equals(diType)) {
fieldSchema = AvroUtils._date();
} else if ("id_byte[]".equals(diType)) {
fieldSchema = AvroUtils._bytes();
} else {
throw new UnsupportedOperationException("Unrecognized type " + diType);
}
@@ -369,6 +421,9 @@ public class IncomingSchemaEnforcer {
return designSchema;
}
//here we apply special processing for dynamic column names; in fact the same issue
//(Japanese or other special characters used as a label) also affects basic Talend columns,
//so non-dynamic columns may carry special names too, but that case is not handled here: TODO
/**
* Converts DI data value to Avro format and put it into record by field name
*
@@ -376,9 +431,16 @@ public class IncomingSchemaEnforcer {
* @param diValue data value
*/
public void put(String name, Object diValue) {
if (unvalidName2ValidName != null) {
String validName = unvalidName2ValidName.get(name);
if (validName != null) {
name = validName;
}
}
put(columnToFieldIndex.get(name), diValue);
}
//TODO make this private; it is not called anywhere outside the current class
/**
* Converts DI data value to Avro format and put it into record by field index
*

View File

@@ -52,6 +52,8 @@ public class IncomingSchemaEnforcerTest {
*/
private IndexedRecord componentRecord;
private IndexedRecord componentRecordWithSpecialName;
@Rule
public ExpectedException thrown = ExpectedException.none();
@@ -72,9 +74,29 @@ public class IncomingSchemaEnforcerTest {
componentRecord.put(3, true);
componentRecord.put(4, "Main Street");
componentRecord.put(5, "This is a record with six columns.");
Schema componentSchemaWithSpecialName = SchemaBuilder.builder().record("Record").fields() //
.name("id").type().intType().noDefault() //
.name("name").type().stringType().noDefault() //
.name("age").type().intType().noDefault() //
.name("性别").type().booleanType().noDefault() //the original name is not stored because it passes the Avro name check; arguably an Avro bug
.name("address_").prop(SchemaConstants.TALEND_COLUMN_DB_COLUMN_NAME, "address#").type().stringType().noDefault() //
.name("comment_").prop(SchemaConstants.TALEND_COLUMN_DB_COLUMN_NAME, "comment$").type().stringType().noDefault() //
.endRecord();
componentRecordWithSpecialName = new GenericData.Record(componentSchemaWithSpecialName);
componentRecordWithSpecialName.put(0, 1);
componentRecordWithSpecialName.put(1, "User");
componentRecordWithSpecialName.put(2, 100);
componentRecordWithSpecialName.put(3, true);
componentRecordWithSpecialName.put(4, "Main Street");
componentRecordWithSpecialName.put(5, "This is a record with six columns.");
}
private void checkEnforcerWithComponentRecordData(IncomingSchemaEnforcer enforcer) {
checkEnforcerWithComponentRecordData(enforcer, false);
}
private void checkEnforcerWithComponentRecordData(IncomingSchemaEnforcer enforcer, boolean specialName) {
// The enforcer must be ready to receive values.
assertThat(enforcer.needsInitDynamicColumns(), is(false));
@@ -88,15 +110,25 @@ public class IncomingSchemaEnforcerTest {
IndexedRecord adapted = enforcer.createIndexedRecord();
// Ensure that the result is the same as the expected component record.
-assertThat(adapted, is(componentRecord));
+if (specialName) {
+assertThat(adapted, is(componentRecordWithSpecialName));
+} else {
+assertThat(adapted, is(componentRecord));
+}
// Ensure that we create a new instance when we give it another value.
enforcer.put("id", 2);
enforcer.put("name", "User2");
enforcer.put("age", 200);
-enforcer.put("valid", false);
-enforcer.put("address", "2 Main Street");
-enforcer.put("comment", "2 This is a record with six columns.");
+if (specialName) {
+enforcer.put("性别", false);
+enforcer.put("address#", "2 Main Street");
+enforcer.put("comment$", "2 This is a record with six columns.");
+} else {
+enforcer.put("valid", false);
+enforcer.put("address", "2 Main Street");
+enforcer.put("comment", "2 This is a record with six columns.");
+}
IndexedRecord adapted2 = enforcer.createIndexedRecord();
// It should have the same schema, but not be the same instance.
@@ -392,6 +424,39 @@ public class IncomingSchemaEnforcerTest {
checkEnforcerWithComponentRecordData(enforcer);
}
@Test
public void testDynamicColumnWithSpecialName() {
Schema designSchema = SchemaBuilder.builder().record("Record") //
.prop(DiSchemaConstants.TALEND6_DYNAMIC_COLUMN_POSITION, "3") //
.prop(SchemaConstants.INCLUDE_ALL_FIELDS, "true") //
.fields() //
.name("id").type().intType().noDefault() //
.name("name").type().stringType().noDefault() //
.name("age").type().intType().noDefault() //
.endRecord();
IncomingSchemaEnforcer enforcer = new IncomingSchemaEnforcer(designSchema);
// The enforcer isn't usable yet.
assertThat(enforcer.getDesignSchema(), is(designSchema));
assertFalse(enforcer.areDynamicFieldsInitialized());
assertThat(enforcer.getRuntimeSchema(), nullValue());
enforcer.addDynamicField("性别", "id_Boolean", null, null, null, false, false);
enforcer.addDynamicField("address#", "id_String", null, null, null, false, false);
enforcer.addDynamicField("comment$", "id_String", null, null, null, false, false);
assertFalse(enforcer.areDynamicFieldsInitialized());
enforcer.createRuntimeSchema();
assertTrue(enforcer.areDynamicFieldsInitialized());
// Check the run-time schema was created.
assertThat(enforcer.getDesignSchema(), is(designSchema));
assertThat(enforcer.getRuntimeSchema(), not(nullValue()));
// Put values into the enforcer and get them as an IndexedRecord.
checkEnforcerWithComponentRecordData(enforcer, true);
}
@Test
public void testTypeConversion_toDate() {
// The expected schema after enforcement.
@@ -699,6 +764,28 @@ public class IncomingSchemaEnforcerTest {
assertThat(record.get(1), is((Object) new Date(1234567891011L)));
}
/**
* Checks key field setting
*/
@Test
public void testAddDynamicFieldKey() {
Schema expectedRuntimeSchema = SchemaBuilder.builder().record("Record").fields().name("id")
.prop(SchemaConstants.TALEND_COLUMN_IS_KEY, "true").type().intType().noDefault().endRecord();
Schema designSchema = SchemaBuilder.builder().record("Record").prop(SchemaConstants.INCLUDE_ALL_FIELDS, "true")
.prop(DiSchemaConstants.TALEND6_DYNAMIC_COLUMN_POSITION, "0").fields().endRecord();
IncomingSchemaEnforcer enforcer = new IncomingSchemaEnforcer(designSchema);
enforcer.addDynamicField("id", "id_Integer", null, null, null, false, true);
enforcer.createRuntimeSchema();
assertTrue(enforcer.areDynamicFieldsInitialized());
Schema actualRuntimeSchema = enforcer.getRuntimeSchema();
assertEquals(expectedRuntimeSchema, actualRuntimeSchema);
}
/**
* Checks that {@link IncomingSchemaEnforcer#put()} converts a string value to a date according to the pattern specified in the dynamic field
* TODO (iv.gonchar): this is incorrect behavior, because avro record should not contain java.util.Date value. It should store

View File

@@ -4,7 +4,7 @@
<groupId>org.talend</groupId>
<artifactId>talend-httputil</artifactId>
<name>talend-httputil</name>
-<version>1.0.5</version>
+<version>1.0.6</version>
<properties>
<talend.nexus.url>https://artifacts-oss.talend.com</talend.nexus.url>
@@ -20,7 +20,7 @@
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
-<version>2.10.1</version>
+<version>2.11.4</version>
</dependency>
<dependency>

View File

@@ -6,11 +6,11 @@
<groupId>org.talend.components.lib</groupId>
<artifactId>job-audit</artifactId>
-<version>1.2</version>
+<version>1.4</version>
<properties>
<talend.nexus.url>https://artifacts-oss.talend.com</talend.nexus.url>
-<daikon.audit.version>1.16.0</daikon.audit.version>
+<daikon.audit.version>1.16.1</daikon.audit.version>
</properties>
<repositories>

View File

@@ -24,5 +24,11 @@ public interface JobAuditLogger extends EventAuditLogger {
@AuditEvent(category = "flowexecution", message = "connection : {connection_name}, row : {rows}, cost : {duration}", level = LogLevel.INFO)
void flowExecution(Context context);
@AuditEvent(category = "componentparameters", message = "Component {connector_id} parameters", level = LogLevel.INFO)
void componentParameters(Context context);
@AuditEvent(category = "schema", message = "{connection_name} : {schema} from {source_id} to {target_id}", level = LogLevel.INFO)
void schema(Context context);
}

View File

@@ -15,6 +15,11 @@ public class JobContextBuilder {
return new JobContextBuilder(ContextBuilder.create());
}
public JobContextBuilder custom(String key, String value) {
builder.with(key, value);
return this;
}
public JobContextBuilder jobName(String job_name) {
builder.with("job_name", job_name);
return this;
@@ -161,4 +166,14 @@ public class JobContextBuilder {
return this;
}
public JobContextBuilder connectorParameters(String connector_parameters) {
builder.with("connector_parameters", connector_parameters);
return this;
}
public JobContextBuilder schema(String schema) {
builder.with("schema", schema);
return this;
}
}
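
JobContextBuilder is a thin fluent wrapper: every method delegates to builder.with(key, value) and returns this, with custom(key, value) as the escape hatch for keys that have no dedicated method. The pattern in isolation (AuditContextBuilder and the plain Map stand in for the daikon Context types):

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Sketch of the fluent key/value builder pattern behind JobContextBuilder. */
public class AuditContextBuilder {

    private final Map<String, String> values = new LinkedHashMap<>();

    public AuditContextBuilder jobName(String jobName) {
        return custom("job_name", jobName);
    }

    public AuditContextBuilder schema(String schema) {
        return custom("schema", schema);
    }

    /** Escape hatch for keys without a dedicated method, as in the patch above. */
    public AuditContextBuilder custom(String key, String value) {
        values.put(key, value);
        return this;
    }

    public Map<String, String> build() {
        // Defensive copy so the built context is detached from the builder.
        return new LinkedHashMap<>(values);
    }
}
```

Routing the named methods through custom() keeps one insertion point, so adding a field like connector_parameters is a two-line change.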

View File

@@ -10,7 +10,6 @@
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<cxf.version>3.1.1</cxf.version>
<odata.version>4.3.0</odata.version>
<slf4j.version>1.7.12</slf4j.version>
<talend.nexus.url>https://artifacts-oss.talend.com</talend.nexus.url>

View File

@@ -0,0 +1,98 @@
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>org.talend.components</groupId>
<artifactId>talend-parquet</artifactId>
<version>1.0</version>
<properties>
<parquet.version>1.10.1</parquet.version>
<hadoop.version>3.2.2</hadoop.version>
<jodd.version>6.0.1</jodd.version>
<hamcrest.version>1.3</hamcrest.version>
<junit.version>4.13.2</junit.version>
<talend.nexus.url>https://artifacts-oss.talend.com</talend.nexus.url>
</properties>
<distributionManagement>
<snapshotRepository>
<id>talend_nexus_deployment</id>
<url>${talend.nexus.url}/nexus/content/repositories/TalendOpenSourceSnapshot/</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
<releases>
<enabled>false</enabled>
</releases>
</snapshotRepository>
<repository>
<id>talend_nexus_deployment</id>
<url>${talend.nexus.url}/nexus/content/repositories/TalendOpenSourceRelease/</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
<releases>
<enabled>true</enabled>
</releases>
</repository>
</distributionManagement>
<dependencies>
<dependency>
<groupId>org.apache.parquet</groupId>
<artifactId>parquet-hadoop</artifactId>
<version>${parquet.version}</version>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-client</artifactId>
<version>${hadoop.version}</version>
<scope>provided</scope>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.jodd</groupId>
<artifactId>jodd-util</artifactId>
<version>${jodd.version}</version>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>${junit.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.hamcrest</groupId>
<artifactId>hamcrest-library</artifactId>
<version>${hamcrest.version}</version>
<scope>test</scope>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.8.0</version>
<configuration>
<source>1.8</source>
<target>1.8</target>
</configuration>
</plugin>
</plugins>
</build>
</project>

View File

@@ -0,0 +1,141 @@
/*
* Copyright (C) 2006-2021 Talend Inc. - www.talend.com
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
* an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package org.talend.parquet.data;
import org.talend.parquet.data.simple.NanoTime;
import org.apache.parquet.io.api.Binary;
import org.apache.parquet.io.api.RecordConsumer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
abstract public class Group extends GroupValueSource {
private static final Logger LOG = LoggerFactory.getLogger(Group.class);
public void add(String field, int value) {
add(getType().getFieldIndex(field), value);
}
public void add(String field, long value) {
add(getType().getFieldIndex(field), value);
}
public void add(String field, float value) {
add(getType().getFieldIndex(field), value);
}
public void add(String field, double value) {
add(getType().getFieldIndex(field), value);
}
public void add(String field, String value) {
add(getType().getFieldIndex(field), value);
}
public void add(String field, NanoTime value) {
add(getType().getFieldIndex(field), value);
}
public void add(String field, boolean value) {
add(getType().getFieldIndex(field), value);
}
public void add(String field, Binary value) {
add(getType().getFieldIndex(field), value);
}
public void add(String field, Group value) {
add(getType().getFieldIndex(field), value);
}
public Group addGroup(String field) {
if (LOG.isDebugEnabled()) {
LOG.debug("add group {} to {}", field, getType().getName());
}
return addGroup(getType().getFieldIndex(field));
}
@Override
public Group getGroup(String field, int index) {
return getGroup(getType().getFieldIndex(field), index);
}
abstract public void add(int fieldIndex, int value);
abstract public void add(int fieldIndex, long value);
abstract public void add(int fieldIndex, String value);
abstract public void add(int fieldIndex, boolean value);
abstract public void add(int fieldIndex, NanoTime value);
abstract public void add(int fieldIndex, Binary value);
abstract public void add(int fieldIndex, float value);
abstract public void add(int fieldIndex, double value);
abstract public void add(int fieldIndex, Group value);
abstract public Group addGroup(int fieldIndex);
@Override
abstract public Group getGroup(int fieldIndex, int index);
public Group asGroup() {
return this;
}
public Group append(String fieldName, int value) {
add(fieldName, value);
return this;
}
public Group append(String fieldName, float value) {
add(fieldName, value);
return this;
}
public Group append(String fieldName, double value) {
add(fieldName, value);
return this;
}
public Group append(String fieldName, long value) {
add(fieldName, value);
return this;
}
public Group append(String fieldName, NanoTime value) {
add(fieldName, value);
return this;
}
public Group append(String fieldName, String value) {
add(fieldName, Binary.fromString(value));
return this;
}
public Group append(String fieldName, boolean value) {
add(fieldName, value);
return this;
}
public Group append(String fieldName, Binary value) {
add(fieldName, value);
return this;
}
abstract public void writeValue(int field, int index, RecordConsumer recordConsumer);
}


@@ -0,0 +1,19 @@
/*
* Copyright (C) 2006-2021 Talend Inc. - www.talend.com
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
* an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package org.talend.parquet.data;
public abstract class GroupFactory {
public abstract Group newGroup();
}


@@ -0,0 +1,83 @@
/*
* Copyright (C) 2006-2021 Talend Inc. - www.talend.com
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
* an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package org.talend.parquet.data;
import org.apache.parquet.io.api.Binary;
import org.apache.parquet.schema.GroupType;
public abstract class GroupValueSource {
public int getFieldRepetitionCount(String field) {
return getFieldRepetitionCount(getType().getFieldIndex(field));
}
public GroupValueSource getGroup(String field, int index) {
return getGroup(getType().getFieldIndex(field), index);
}
public String getString(String field, int index) {
return getString(getType().getFieldIndex(field), index);
}
public int getInteger(String field, int index) {
return getInteger(getType().getFieldIndex(field), index);
}
public long getLong(String field, int index) {
return getLong(getType().getFieldIndex(field), index);
}
public double getDouble(String field, int index) {
return getDouble(getType().getFieldIndex(field), index);
}
public float getFloat(String field, int index) {
return getFloat(getType().getFieldIndex(field), index);
}
public boolean getBoolean(String field, int index) {
return getBoolean(getType().getFieldIndex(field), index);
}
public Binary getBinary(String field, int index) {
return getBinary(getType().getFieldIndex(field), index);
}
public Binary getInt96(String field, int index) {
return getInt96(getType().getFieldIndex(field), index);
}
abstract public int getFieldRepetitionCount(int fieldIndex);
abstract public GroupValueSource getGroup(int fieldIndex, int index);
abstract public String getString(int fieldIndex, int index);
abstract public Integer getInteger(int fieldIndex, int index);
abstract public Long getLong(int fieldIndex, int index);
abstract public Double getDouble(int fieldIndex, int index);
abstract public Float getFloat(int fieldIndex, int index);
abstract public Boolean getBoolean(int fieldIndex, int index);
abstract public Binary getBinary(int fieldIndex, int index);
abstract public Binary getInt96(int fieldIndex, int index);
abstract public String getValueToString(int fieldIndex, int index);
abstract public GroupType getType();
}


@@ -0,0 +1,56 @@
/*
* Copyright (C) 2006-2021 Talend Inc. - www.talend.com
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
* an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package org.talend.parquet.data;
import org.apache.parquet.io.api.RecordConsumer;
import org.apache.parquet.schema.GroupType;
import org.apache.parquet.schema.Type;
public class GroupWriter {
private final RecordConsumer recordConsumer;
private final GroupType schema;
public GroupWriter(RecordConsumer recordConsumer, GroupType schema) {
this.recordConsumer = recordConsumer;
this.schema = schema;
}
public void write(Group group) {
recordConsumer.startMessage();
writeGroup(group, schema);
recordConsumer.endMessage();
}
private void writeGroup(Group group, GroupType type) {
int fieldCount = type.getFieldCount();
for (int field = 0; field < fieldCount; ++field) {
int valueCount = group.getFieldRepetitionCount(field);
if (valueCount > 0) {
Type fieldType = type.getType(field);
String fieldName = fieldType.getName();
recordConsumer.startField(fieldName, field);
for (int index = 0; index < valueCount; ++index) {
if (fieldType.isPrimitive()) {
group.writeValue(field, index, recordConsumer);
} else {
recordConsumer.startGroup();
writeGroup(group.getGroup(field, index), fieldType.asGroupType());
recordConsumer.endGroup();
}
}
recordConsumer.endField(fieldName, field);
}
}
}
}
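`write()` above drives a depth-first traversal: every repeated value of every populated field is emitted between `startField`/`endField`, and nested groups recurse between `startGroup`/`endGroup`. A minimal, self-contained sketch of that event pattern, using a toy node type and a list that records the call stream in place of a real `RecordConsumer` (all names here are illustrative, not part of the Parquet API):

```java
import java.util.ArrayList;
import java.util.List;

public class TraversalSketch {
    // Toy stand-in for RecordConsumer: just records the event stream.
    static final List<String> events = new ArrayList<>();

    // Toy node: a leaf holds a value; an inner node holds children.
    static class Node {
        final String name;
        final Object value; // non-null only for leaves
        final List<Node> children = new ArrayList<>();
        Node(String name, Object value) { this.name = name; this.value = value; }
    }

    static void writeGroup(Node group) {
        for (Node field : group.children) {
            events.add("startField " + field.name);
            if (field.value != null) {
                events.add("value " + field.value); // primitive leaf
            } else {
                events.add("startGroup");
                writeGroup(field); // recurse into the nested group, as writeGroup does above
                events.add("endGroup");
            }
            events.add("endField " + field.name);
        }
    }

    public static void main(String[] args) {
        Node root = new Node("root", null);
        Node address = new Node("address", null);
        address.children.add(new Node("city", "Paris"));
        root.children.add(new Node("id", 42));
        root.children.add(address);

        events.add("startMessage");
        writeGroup(root);
        events.add("endMessage");
        System.out.println(events);
    }
}
```

The real `GroupWriter` additionally skips fields whose repetition count is zero, which is why optional fields that were never populated produce no `startField`/`endField` pair at all.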


@@ -0,0 +1,45 @@
/*
* Copyright (C) 2006-2021 Talend Inc. - www.talend.com
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
* an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package org.talend.parquet.data.simple;
import org.apache.parquet.io.api.Binary;
import org.apache.parquet.io.api.RecordConsumer;
public class BinaryValue extends Primitive {
private final Binary binary;
public BinaryValue(Binary binary) {
this.binary = binary;
}
@Override
public Binary getBinary() {
return binary;
}
@Override
public String getString() {
return binary.toStringUsingUTF8();
}
@Override
public void writeValue(RecordConsumer recordConsumer) {
recordConsumer.addBinary(binary);
}
@Override
public String toString() {
return getString();
}
}


@@ -0,0 +1,39 @@
/*
* Copyright (C) 2006-2021 Talend Inc. - www.talend.com
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
* an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package org.talend.parquet.data.simple;
import org.apache.parquet.io.api.RecordConsumer;
public class BooleanValue extends Primitive {
private final boolean bool;
public BooleanValue(boolean bool) {
this.bool = bool;
}
@Override
public String toString() {
return String.valueOf(bool);
}
@Override
public boolean getBoolean() {
return bool;
}
@Override
public void writeValue(RecordConsumer recordConsumer) {
recordConsumer.addBoolean(bool);
}
}


@@ -0,0 +1,39 @@
/*
* Copyright (C) 2006-2021 Talend Inc. - www.talend.com
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
* an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package org.talend.parquet.data.simple;
import org.apache.parquet.io.api.RecordConsumer;
public class DoubleValue extends Primitive {
private final double value;
public DoubleValue(double value) {
this.value = value;
}
@Override
public double getDouble() {
return value;
}
@Override
public void writeValue(RecordConsumer recordConsumer) {
recordConsumer.addDouble(value);
}
@Override
public String toString() {
return String.valueOf(value);
}
}


@@ -0,0 +1,39 @@
/*
* Copyright (C) 2006-2021 Talend Inc. - www.talend.com
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
* an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package org.talend.parquet.data.simple;
import org.apache.parquet.io.api.RecordConsumer;
public class FloatValue extends Primitive {
private final float value;
public FloatValue(float value) {
this.value = value;
}
@Override
public float getFloat() {
return value;
}
@Override
public void writeValue(RecordConsumer recordConsumer) {
recordConsumer.addFloat(value);
}
@Override
public String toString() {
return String.valueOf(value);
}
}


@@ -0,0 +1,40 @@
/*
* Copyright (C) 2006-2021 Talend Inc. - www.talend.com
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
* an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package org.talend.parquet.data.simple;
import org.apache.parquet.io.api.Binary;
import org.apache.parquet.io.api.RecordConsumer;
public class Int96Value extends Primitive {
private final Binary value;
public Int96Value(Binary value) {
this.value = value;
}
@Override
public Binary getInt96() {
return value;
}
@Override
public void writeValue(RecordConsumer recordConsumer) {
recordConsumer.addBinary(value);
}
@Override
public String toString() {
return "Int96Value{" + String.valueOf(value) + "}";
}
}


@@ -0,0 +1,39 @@
/*
* Copyright (C) 2006-2021 Talend Inc. - www.talend.com
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
* an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package org.talend.parquet.data.simple;
import org.apache.parquet.io.api.RecordConsumer;
public class IntegerValue extends Primitive {
private final int value;
public IntegerValue(int value) {
this.value = value;
}
@Override
public String toString() {
return String.valueOf(value);
}
@Override
public int getInteger() {
return value;
}
@Override
public void writeValue(RecordConsumer recordConsumer) {
recordConsumer.addInteger(value);
}
}


@@ -0,0 +1,39 @@
/*
* Copyright (C) 2006-2021 Talend Inc. - www.talend.com
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
* an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package org.talend.parquet.data.simple;
import org.apache.parquet.io.api.RecordConsumer;
public class LongValue extends Primitive {
private final long value;
public LongValue(long value) {
this.value = value;
}
@Override
public String toString() {
return String.valueOf(value);
}
@Override
public long getLong() {
return value;
}
@Override
public void writeValue(RecordConsumer recordConsumer) {
recordConsumer.addLong(value);
}
}


@@ -0,0 +1,74 @@
/*
* Copyright (C) 2006-2021 Talend Inc. - www.talend.com
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
* an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package org.talend.parquet.data.simple;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import org.apache.parquet.Preconditions;
import org.apache.parquet.io.api.Binary;
import org.apache.parquet.io.api.RecordConsumer;
public class NanoTime extends Primitive {
private final int julianDay;
private final long timeOfDayNanos;
public static NanoTime fromBinary(Binary bytes) {
Preconditions.checkArgument(bytes.length() == 12, "Must be 12 bytes");
ByteBuffer buf = bytes.toByteBuffer();
buf.order(ByteOrder.LITTLE_ENDIAN);
long timeOfDayNanos = buf.getLong();
int julianDay = buf.getInt();
return new NanoTime(julianDay, timeOfDayNanos);
}
public static NanoTime fromInt96(Int96Value int96) {
// Delegate to fromBinary so the little-endian, nanos-then-julian-day layout written by toBinary is honored;
// reading getInt()/getLong() directly in big-endian order would decode the fields incorrectly.
return fromBinary(int96.getInt96());
}
public NanoTime(int julianDay, long timeOfDayNanos) {
this.julianDay = julianDay;
this.timeOfDayNanos = timeOfDayNanos;
}
public int getJulianDay() {
return julianDay;
}
public long getTimeOfDayNanos() {
return timeOfDayNanos;
}
public Binary toBinary() {
ByteBuffer buf = ByteBuffer.allocate(12);
buf.order(ByteOrder.LITTLE_ENDIAN);
buf.putLong(timeOfDayNanos);
buf.putInt(julianDay);
buf.flip();
return Binary.fromConstantByteBuffer(buf);
}
public Int96Value toInt96() {
return new Int96Value(toBinary());
}
@Override
public void writeValue(RecordConsumer recordConsumer) {
recordConsumer.addBinary(toBinary());
}
@Override
public String toString() {
return "NanoTime{julianDay=" + julianDay + ", timeOfDayNanos=" + timeOfDayNanos + "}";
}
}
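The INT96 timestamp encoding that `toBinary`/`fromBinary` implement packs the time-of-day nanos as a little-endian 8-byte long followed by the Julian day as a little-endian 4-byte int, for 12 bytes total. A stdlib-only sketch of that round trip, independent of the Parquet classes (the sample values are arbitrary; 2440588 happens to be the Julian day of 1970-01-01):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class NanoTimeLayoutDemo {
    public static void main(String[] args) {
        int julianDay = 2440588;
        long timeOfDayNanos = 123_456_789L;

        // Encode: little-endian long (nanos) first, then little-endian int (day),
        // mirroring NanoTime.toBinary.
        ByteBuffer buf = ByteBuffer.allocate(12);
        buf.order(ByteOrder.LITTLE_ENDIAN);
        buf.putLong(timeOfDayNanos);
        buf.putInt(julianDay);
        buf.flip();

        // Decode in the same order and byte order, mirroring NanoTime.fromBinary.
        long decodedNanos = buf.getLong();
        int decodedDay = buf.getInt();
        System.out.println(decodedDay + " " + decodedNanos);
    }
}
```

Because both halves are little-endian, decoding with the buffer's default big-endian order (or reading the int before the long) silently produces garbage timestamps, which is why `fromBinary` sets the order explicitly before reading.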


@@ -0,0 +1,54 @@
/*
* Copyright (C) 2006-2021 Talend Inc. - www.talend.com
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
* an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package org.talend.parquet.data.simple;
import org.apache.parquet.io.api.Binary;
import org.apache.parquet.io.api.RecordConsumer;
public abstract class Primitive {
public String getString() {
throw new UnsupportedOperationException();
}
public int getInteger() {
throw new UnsupportedOperationException();
}
public long getLong() {
throw new UnsupportedOperationException();
}
public boolean getBoolean() {
throw new UnsupportedOperationException();
}
public Binary getBinary() {
throw new UnsupportedOperationException();
}
public Binary getInt96() {
throw new UnsupportedOperationException();
}
public float getFloat() {
throw new UnsupportedOperationException();
}
public double getDouble() {
throw new UnsupportedOperationException();
}
abstract public void writeValue(RecordConsumer recordConsumer);
}


@@ -0,0 +1,274 @@
/*
* Copyright (C) 2006-2021 Talend Inc. - www.talend.com
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
* an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package org.talend.parquet.data.simple;
import java.util.ArrayList;
import java.util.List;
import org.talend.parquet.data.Group;
import org.apache.parquet.io.api.Binary;
import org.apache.parquet.io.api.RecordConsumer;
import org.apache.parquet.schema.GroupType;
import org.apache.parquet.schema.Type;
public class SimpleGroup extends Group {
private final GroupType schema;
private final List<Object>[] data;
@SuppressWarnings("unchecked")
public SimpleGroup(GroupType schema) {
this.schema = schema;
this.data = new List[schema.getFields().size()];
for (int i = 0; i < schema.getFieldCount(); i++) {
this.data[i] = new ArrayList<>();
}
}
@Override
public String toString() {
return toString("");
}
private StringBuilder appendToString(StringBuilder builder, String indent) {
int i = 0;
for (Type field : schema.getFields()) {
String name = field.getName();
List<Object> values = data[i];
++i;
if (values != null && !values.isEmpty()) {
for (Object value : values) {
builder.append(indent).append(name);
if (value == null) {
builder.append(": NULL\n");
} else if (value instanceof Group) {
builder.append('\n');
((SimpleGroup) value).appendToString(builder, indent + " ");
} else {
builder.append(": ").append(value.toString()).append('\n');
}
}
}
}
return builder;
}
public String toString(String indent) {
StringBuilder builder = new StringBuilder();
appendToString(builder, indent);
return builder.toString();
}
@Override
public Group addGroup(int fieldIndex) {
SimpleGroup g = new SimpleGroup(schema.getType(fieldIndex).asGroupType());
add(fieldIndex, g);
return g;
}
@Override
public Group getGroup(int fieldIndex, int index) {
return (Group) getValue(fieldIndex, index);
}
private Object getValue(int fieldIndex, int index) {
List<Object> list;
try {
list = data[fieldIndex];
} catch (IndexOutOfBoundsException e) {
throw new RuntimeException(
"not found " + fieldIndex + "(" + schema.getFieldName(fieldIndex) + ") in group:\n" + this);
}
try {
if(list == null || list.isEmpty()) {
return null;
}
return list.get(index);
} catch (IndexOutOfBoundsException e) {
throw new RuntimeException("not found " + fieldIndex + "(" + schema.getFieldName(fieldIndex)
+ ") element number " + index + " in group:\n" + this);
}
}
private void add(int fieldIndex, Primitive value) {
Type type = schema.getType(fieldIndex);
List<Object> list = data[fieldIndex];
if (!type.isRepetition(Type.Repetition.REPEATED) && !list.isEmpty()) {
throw new IllegalStateException(
"field " + fieldIndex + " (" + type.getName() + ") can not have more than one value: " + list);
}
list.add(value);
}
@Override
public int getFieldRepetitionCount(int fieldIndex) {
List<Object> list = data[fieldIndex];
return list == null ? 0 : list.size();
}
@Override
public String getValueToString(int fieldIndex, int index) {
Object value = getValue(fieldIndex, index);
if(value == null) {
return null;
}
return String.valueOf(value);
}
@Override
public String getString(int fieldIndex, int index) {
Object value = getValue(fieldIndex, index);
if(value == null) {
return null;
}
return ((BinaryValue) value).getString();
}
@Override
public Integer getInteger(int fieldIndex, int index) {
Object value = getValue(fieldIndex, index);
if(value == null) {
return null;
}
return ((IntegerValue)value).getInteger();
}
@Override
public Long getLong(int fieldIndex, int index) {
Object value = getValue(fieldIndex, index);
if(value == null) {
return null;
}
return ((LongValue)value).getLong();
}
@Override
public Double getDouble(int fieldIndex, int index) {
Object value = getValue(fieldIndex, index);
if(value == null) {
return null;
}
return ((DoubleValue)value).getDouble();
}
@Override
public Float getFloat(int fieldIndex, int index) {
Object value = getValue(fieldIndex, index);
if(value == null) {
return null;
}
return ((FloatValue)value).getFloat();
}
@Override
public Boolean getBoolean(int fieldIndex, int index) {
Object value = getValue(fieldIndex, index);
if(value == null) {
return null;
}
return ((BooleanValue) value).getBoolean();
}
@Override
public Binary getBinary(int fieldIndex, int index) {
Object value = getValue(fieldIndex, index);
if(value == null) {
return null;
}
return ((BinaryValue) value).getBinary();
}
public NanoTime getTimeNanos(int fieldIndex, int index) {
Object value = getValue(fieldIndex, index);
if(value == null) {
return null;
}
return NanoTime.fromInt96((Int96Value) value);
}
@Override
public Binary getInt96(int fieldIndex, int index) {
Object value = getValue(fieldIndex, index);
if(value == null) {
return null;
}
return ((Int96Value) value).getInt96();
}
@Override
public void add(int fieldIndex, int value) {
add(fieldIndex, new IntegerValue(value));
}
@Override
public void add(int fieldIndex, long value) {
add(fieldIndex, new LongValue(value));
}
@Override
public void add(int fieldIndex, String value) {
add(fieldIndex, new BinaryValue(Binary.fromString(value)));
}
@Override
public void add(int fieldIndex, NanoTime value) {
add(fieldIndex, value.toInt96());
}
@Override
public void add(int fieldIndex, boolean value) {
add(fieldIndex, new BooleanValue(value));
}
@Override
public void add(int fieldIndex, Binary value) {
switch (getType().getType(fieldIndex).asPrimitiveType().getPrimitiveTypeName()) {
case BINARY:
case FIXED_LEN_BYTE_ARRAY:
add(fieldIndex, new BinaryValue(value));
break;
case INT96:
add(fieldIndex, new Int96Value(value));
break;
default:
// Report the offending field's primitive type; calling asPrimitiveType() on the
// group schema itself would throw a ClassCastException instead of this message.
throw new UnsupportedOperationException(
getType().getType(fieldIndex).asPrimitiveType().getName() + " not supported for Binary");
}
}
@Override
public void add(int fieldIndex, float value) {
add(fieldIndex, new FloatValue(value));
}
@Override
public void add(int fieldIndex, double value) {
add(fieldIndex, new DoubleValue(value));
}
@Override
public void add(int fieldIndex, Group value) {
data[fieldIndex].add(value);
}
@Override
public GroupType getType() {
return schema;
}
@Override
public void writeValue(int field, int index, RecordConsumer recordConsumer) {
((Primitive) getValue(field, index)).writeValue(recordConsumer);
}
}


@@ -0,0 +1,32 @@
/*
* Copyright (C) 2006-2021 Talend Inc. - www.talend.com
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
* an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package org.talend.parquet.data.simple;
import org.talend.parquet.data.Group;
import org.talend.parquet.data.GroupFactory;
import org.apache.parquet.schema.MessageType;
public class SimpleGroupFactory extends GroupFactory {
private final MessageType schema;
public SimpleGroupFactory(MessageType schema) {
this.schema = schema;
}
@Override
public Group newGroup() {
return new SimpleGroup(schema);
}
}


@@ -0,0 +1,51 @@
/*
* Copyright (C) 2006-2021 Talend Inc. - www.talend.com
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
* an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package org.talend.parquet.data.simple.convert;
import org.talend.parquet.data.Group;
import org.talend.parquet.data.simple.SimpleGroupFactory;
import org.apache.parquet.io.api.GroupConverter;
import org.apache.parquet.io.api.RecordMaterializer;
import org.apache.parquet.schema.MessageType;
public class GroupRecordConverter extends RecordMaterializer<Group> {
private final SimpleGroupFactory simpleGroupFactory;
private SimpleGroupConverter root;
public GroupRecordConverter(MessageType schema) {
this.simpleGroupFactory = new SimpleGroupFactory(schema);
this.root = new SimpleGroupConverter(null, 0, schema) {
@Override
public void start() {
this.current = simpleGroupFactory.newGroup();
}
@Override
public void end() {
}
};
}
@Override
public Group getCurrentRecord() {
return root.getCurrentRecord();
}
@Override
public GroupConverter getRootConverter() {
return root;
}
}


@@ -0,0 +1,61 @@
/*
* Copyright (C) 2006-2021 Talend Inc. - www.talend.com
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
* an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package org.talend.parquet.data.simple.convert;
import org.talend.parquet.data.Group;
import org.apache.parquet.io.api.Converter;
import org.apache.parquet.io.api.GroupConverter;
import org.apache.parquet.schema.GroupType;
import org.apache.parquet.schema.Type;
class SimpleGroupConverter extends GroupConverter {
private final SimpleGroupConverter parent;
private final int index;
protected Group current;
private Converter[] converters;
SimpleGroupConverter(SimpleGroupConverter parent, int index, GroupType schema) {
this.parent = parent;
this.index = index;
converters = new Converter[schema.getFieldCount()];
for (int i = 0; i < converters.length; i++) {
final Type type = schema.getType(i);
if (type.isPrimitive()) {
converters[i] = new SimplePrimitiveConverter(this, i);
} else {
converters[i] = new SimpleGroupConverter(this, i, type.asGroupType());
}
}
}
@Override
public void start() {
current = parent.getCurrentRecord().addGroup(index);
}
@Override
public Converter getConverter(int fieldIndex) {
return converters[fieldIndex];
}
@Override
public void end() {
}
public Group getCurrentRecord() {
return current;
}
}


@@ -0,0 +1,88 @@
/*
* Copyright (C) 2006-2021 Talend Inc. - www.talend.com
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
* an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package org.talend.parquet.data.simple.convert;
import org.apache.parquet.io.api.Binary;
import org.apache.parquet.io.api.PrimitiveConverter;
class SimplePrimitiveConverter extends PrimitiveConverter {
private final SimpleGroupConverter parent;
private final int index;
SimplePrimitiveConverter(SimpleGroupConverter parent, int index) {
this.parent = parent;
this.index = index;
}
/**
* {@inheritDoc}
*
* @see org.apache.parquet.io.api.PrimitiveConverter#addBinary(Binary)
*/
@Override
public void addBinary(Binary value) {
parent.getCurrentRecord().add(index, value);
}
/**
* {@inheritDoc}
*
* @see org.apache.parquet.io.api.PrimitiveConverter#addBoolean(boolean)
*/
@Override
public void addBoolean(boolean value) {
parent.getCurrentRecord().add(index, value);
}
/**
* {@inheritDoc}
*
* @see org.apache.parquet.io.api.PrimitiveConverter#addDouble(double)
*/
@Override
public void addDouble(double value) {
parent.getCurrentRecord().add(index, value);
}
/**
* {@inheritDoc}
*
* @see org.apache.parquet.io.api.PrimitiveConverter#addFloat(float)
*/
@Override
public void addFloat(float value) {
parent.getCurrentRecord().add(index, value);
}
/**
* {@inheritDoc}
*
* @see org.apache.parquet.io.api.PrimitiveConverter#addInt(int)
*/
@Override
public void addInt(int value) {
parent.getCurrentRecord().add(index, value);
}
/**
* {@inheritDoc}
*
* @see org.apache.parquet.io.api.PrimitiveConverter#addLong(long)
*/
@Override
public void addLong(long value) {
parent.getCurrentRecord().add(index, value);
}
}


@@ -0,0 +1,40 @@
/*
* Copyright (C) 2006-2021 Talend Inc. - www.talend.com
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
* an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package org.talend.parquet.hadoop;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.parquet.hadoop.api.ReadSupport;
import org.apache.parquet.io.api.RecordMaterializer;
import org.apache.parquet.schema.MessageType;
import org.talend.parquet.data.Group;
import org.talend.parquet.data.simple.convert.GroupRecordConverter;
public class TalendGroupReadSupport extends ReadSupport<Group> {
@Override
public org.apache.parquet.hadoop.api.ReadSupport.ReadContext init(Configuration configuration,
Map<String, String> keyValueMetaData, MessageType fileSchema) {
String partialSchemaString = configuration.get(ReadSupport.PARQUET_READ_SCHEMA);
MessageType requestedProjection = getSchemaForRead(fileSchema, partialSchemaString);
return new ReadContext(requestedProjection);
}
@Override
public RecordMaterializer<Group> prepareForRead(Configuration configuration, Map<String, String> keyValueMetaData,
MessageType fileSchema, org.apache.parquet.hadoop.api.ReadSupport.ReadContext readContext) {
return new GroupRecordConverter(readContext.getRequestedSchema());
}
}


@@ -0,0 +1,81 @@
/*
* Copyright (C) 2006-2021 Talend Inc. - www.talend.com
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
* an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package org.talend.parquet.hadoop;
import static org.apache.parquet.schema.MessageTypeParser.parseMessageType;
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;
import org.apache.hadoop.conf.Configuration;
import org.apache.parquet.hadoop.api.WriteSupport;
import org.apache.parquet.io.api.RecordConsumer;
import org.apache.parquet.schema.MessageType;
import org.talend.parquet.data.Group;
import org.talend.parquet.data.GroupWriter;
public class TalendGroupWriteSupport extends WriteSupport<Group> {
public static final String PARQUET_SCHEMA = "parquet.talend.schema";
public static void setSchema(MessageType schema, Configuration configuration) {
configuration.set(PARQUET_SCHEMA, schema.toString());
}
public static MessageType getSchema(Configuration configuration) {
return parseMessageType(Objects.requireNonNull(configuration.get(PARQUET_SCHEMA), PARQUET_SCHEMA));
}
private MessageType schema;
private GroupWriter groupWriter;
private Map<String, String> extraMetaData;
public TalendGroupWriteSupport() {
this(null, new HashMap<String, String>());
}
TalendGroupWriteSupport(MessageType schema) {
this(schema, new HashMap<String, String>());
}
TalendGroupWriteSupport(MessageType schema, Map<String, String> extraMetaData) {
this.schema = schema;
this.extraMetaData = extraMetaData;
}
@Override
public String getName() {
return "Talend";
}
@Override
public org.apache.parquet.hadoop.api.WriteSupport.WriteContext init(Configuration configuration) {
// if present, prefer the schema passed to the constructor
if (schema == null) {
schema = getSchema(configuration);
}
return new WriteContext(schema, this.extraMetaData);
}
@Override
public void prepareForWrite(RecordConsumer recordConsumer) {
groupWriter = new GroupWriter(recordConsumer, schema);
}
@Override
public void write(Group record) {
groupWriter.write(record);
}
}


@@ -0,0 +1,30 @@
/*
* Copyright (C) 2006-2021 Talend Inc. - www.talend.com
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
* an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package org.talend.parquet.hadoop;
import org.apache.parquet.hadoop.ParquetInputFormat;
import org.talend.parquet.data.Group;
/**
* Example input format to read Parquet files
*
* This Input format uses a rather inefficient data model but works
* independently of higher level abstractions.
*/
public class TalendInputFormat extends ParquetInputFormat<Group> {
public TalendInputFormat() {
super(TalendGroupReadSupport.class);
}
}


@@ -0,0 +1,54 @@
/*
* Copyright (C) 2006-2021 Talend Inc. - www.talend.com
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
* an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package org.talend.parquet.hadoop;
import org.apache.hadoop.mapreduce.Job;
import org.apache.parquet.hadoop.ParquetOutputFormat;
import org.apache.parquet.hadoop.util.ContextUtil;
import org.apache.parquet.schema.MessageType;
import org.talend.parquet.data.Group;
/**
* An example output format
*
* must be provided the schema up front
*
* @see TalendOutputFormat#setSchema(Job, MessageType)
* @see TalendGroupWriteSupport#PARQUET_SCHEMA
*/
public class TalendOutputFormat extends ParquetOutputFormat<Group> {
/**
* set the schema being written to the job conf
*
* @param job a job
* @param schema the schema of the data
*/
public static void setSchema(Job job, MessageType schema) {
TalendGroupWriteSupport.setSchema(schema, ContextUtil.getConfiguration(job));
}
/**
* retrieve the schema from the conf
*
* @param job a job
* @return the schema
*/
public static MessageType getSchema(Job job) {
return TalendGroupWriteSupport.getSchema(ContextUtil.getConfiguration(job));
}
public TalendOutputFormat() {
super(new TalendGroupWriteSupport());
}
}


@@ -0,0 +1,108 @@
/*
* Copyright (C) 2006-2021 Talend Inc. - www.talend.com
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
* an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package org.talend.parquet.hadoop;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.column.ParquetProperties;
import org.apache.parquet.hadoop.ParquetWriter;
import org.apache.parquet.hadoop.api.WriteSupport;
import org.apache.parquet.hadoop.metadata.CompressionCodecName;
import org.apache.parquet.io.OutputFile;
import org.apache.parquet.schema.MessageType;
import org.talend.parquet.data.Group;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
/**
* An example file writer class.
*/
public class TalendParquetWriter extends ParquetWriter<Group> {
/**
* Creates a Builder for configuring ParquetWriter with the example object
*
* @param file the output file to create
* @return a {@link Builder} to create a {@link ParquetWriter}
*/
public static Builder builder(Path file) {
return new Builder(file);
}
/**
* Creates a Builder for configuring ParquetWriter with the example object
*
* @param file the output file to create
* @return a {@link Builder} to create a {@link ParquetWriter}
*/
public static Builder builder(OutputFile file) {
return new Builder(file);
}
/**
* Create a new {@link TalendParquetWriter}.
*
* @param file The file name to write to.
* @param writeSupport The schema to write with.
* @param compressionCodecName Compression code to use, or
* CompressionCodecName.UNCOMPRESSED
* @param blockSize the block size threshold.
* @param pageSize See parquet write up. Blocks are subdivided into
* pages for alignment and other purposes.
* @param enableDictionary Whether to use a dictionary to compress columns.
* @param enableValidation Whether to validate records against the schema.
* @param writerVersion The Parquet format version to write.
* @param conf The Configuration to use.
* @throws IOException if the underlying file cannot be created
*/
TalendParquetWriter(Path file, WriteSupport<Group> writeSupport, CompressionCodecName compressionCodecName,
int blockSize, int pageSize, boolean enableDictionary, boolean enableValidation,
ParquetProperties.WriterVersion writerVersion, Configuration conf) throws IOException {
super(file, writeSupport, compressionCodecName, blockSize, pageSize, pageSize, enableDictionary,
enableValidation, writerVersion, conf);
}
public static class Builder extends ParquetWriter.Builder<Group, Builder> {
private MessageType type = null;
private Map<String, String> extraMetaData = new HashMap<String, String>();
private Builder(Path file) {
super(file);
}
private Builder(OutputFile file) {
super(file);
}
public Builder withType(MessageType type) {
this.type = type;
return this;
}
public Builder withExtraMetaData(Map<String, String> extraMetaData) {
this.extraMetaData = extraMetaData;
return this;
}
@Override
protected Builder self() {
return this;
}
@Override
protected WriteSupport<Group> getWriteSupport(Configuration conf) {
return new TalendGroupWriteSupport(type, extraMetaData);
}
}
}


@@ -0,0 +1,77 @@
package org.talend.parquet.utils;
import java.sql.Timestamp;
import java.time.LocalDateTime;
import java.util.Calendar;
import java.util.TimeZone;
import org.talend.parquet.data.simple.NanoTime;
import jodd.time.JulianDate;
public class NanoTimeUtils {
static final long NANOS_PER_HOUR = java.util.concurrent.TimeUnit.HOURS.toNanos(1);
static final long NANOS_PER_MINUTE = java.util.concurrent.TimeUnit.MINUTES.toNanos(1);
static final long NANOS_PER_SECOND = java.util.concurrent.TimeUnit.SECONDS.toNanos(1);
static final long NANOS_PER_DAY = java.util.concurrent.TimeUnit.DAYS.toNanos(1);
private static final ThreadLocal<Calendar> parquetGMTCalendar = new ThreadLocal<>();
private static final ThreadLocal<Calendar> parquetLocalCalendar = new ThreadLocal<>();
private static Calendar getGMTCalendar() {
// Calendar.getInstance calculates the current-time needlessly, so cache
// an instance.
if (parquetGMTCalendar.get() == null) {
parquetGMTCalendar.set(Calendar.getInstance(TimeZone.getTimeZone("GMT")));
}
return parquetGMTCalendar.get();
}
private static Calendar getLocalCalendar() {
if (parquetLocalCalendar.get() == null) {
parquetLocalCalendar.set(Calendar.getInstance());
}
return parquetLocalCalendar.get();
}
private static Calendar getCalendar(boolean skipConversion) {
Calendar calendar = skipConversion ? getLocalCalendar() : getGMTCalendar();
calendar.clear();
return calendar;
}
public static Timestamp getTimestamp(NanoTime nt, boolean skipConversion) {
int julianDay = nt.getJulianDay();
long nanosOfDay = nt.getTimeOfDayNanos();
long remainder = nanosOfDay;
julianDay += remainder / NANOS_PER_DAY;
remainder %= NANOS_PER_DAY;
if (remainder < 0) {
remainder += NANOS_PER_DAY;
julianDay--;
}
JulianDate jDateTime = new JulianDate((double) julianDay);
LocalDateTime datetime = jDateTime.toLocalDateTime();
Calendar calendar = getCalendar(skipConversion);
calendar.set(Calendar.YEAR, datetime.getYear());
calendar.set(Calendar.MONTH, datetime.getMonthValue() - 1);
calendar.set(Calendar.DAY_OF_MONTH, datetime.getDayOfMonth());
int hour = (int) (remainder / (NANOS_PER_HOUR));
remainder = remainder % (NANOS_PER_HOUR);
int minutes = (int) (remainder / (NANOS_PER_MINUTE));
remainder = remainder % (NANOS_PER_MINUTE);
int seconds = (int) (remainder / (NANOS_PER_SECOND));
long nanos = remainder % NANOS_PER_SECOND;
calendar.set(Calendar.HOUR_OF_DAY, hour);
calendar.set(Calendar.MINUTE, minutes);
calendar.set(Calendar.SECOND, seconds);
Timestamp ts = new Timestamp(calendar.getTimeInMillis());
ts.setNanos((int) nanos);
return ts;
}
}
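The conversion above splits an INT96 `NanoTime` into a Julian day plus nanos-of-day, normalizes negative nanos by borrowing a day, then decomposes the remainder into hour/minute/second. A minimal stdlib-only sketch of the same arithmetic, using `java.time`'s `JulianFields` in place of jodd's `JulianDate` (the class and method names here are illustrative, not part of this library):

```java
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.LocalTime;
import java.time.temporal.JulianFields;
import java.util.concurrent.TimeUnit;

public class JulianNanoSketch {
    static final long NANOS_PER_DAY = TimeUnit.DAYS.toNanos(1);

    // Carry nanos overflow/underflow into the day part (the same normalization
    // getTimestamp performs with its remainder loop), then resolve the Julian
    // day via java.time instead of a hand-rolled calendar.
    public static LocalDateTime toLocalDateTime(long julianDay, long nanosOfDay) {
        long day = julianDay + Math.floorDiv(nanosOfDay, NANOS_PER_DAY);
        long remainder = Math.floorMod(nanosOfDay, NANOS_PER_DAY);
        LocalDate date = LocalDate.EPOCH.with(JulianFields.JULIAN_DAY, day);
        return LocalDateTime.of(date, LocalTime.ofNanoOfDay(remainder));
    }
}
```

Julian day 2440588 corresponds to 1970-01-01, so `toLocalDateTime(2440588L, 0L)` yields the Unix epoch.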


@@ -0,0 +1,225 @@
package org.talend.parquet.utils;
import java.math.BigDecimal;
import java.math.BigInteger;
import java.math.RoundingMode;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import org.apache.parquet.io.api.Binary;
import org.apache.parquet.schema.DecimalMetadata;
import org.apache.parquet.schema.GroupType;
import org.apache.parquet.schema.OriginalType;
import org.apache.parquet.schema.PrimitiveType;
import org.apache.parquet.schema.PrimitiveType.PrimitiveTypeName;
import org.apache.parquet.schema.Type;
import org.apache.parquet.schema.Type.Repetition;
import org.apache.parquet.schema.Types;
import org.apache.parquet.schema.Types.GroupBuilder;
import org.talend.parquet.data.Group;
import org.talend.parquet.data.simple.NanoTime;
public class TalendParquetUtils {
public static final String ARRAY_FIELD_NAME = "array";
public static PrimitiveType createPrimitiveType(String fieldName, boolean nullable, String primitiveType,
String originalTypeName) {
OriginalType originalType = null;
if (originalTypeName != null) {
originalType = OriginalType.valueOf(originalTypeName);
}
return new PrimitiveType((nullable ? Repetition.OPTIONAL : Repetition.REQUIRED),
PrimitiveTypeName.valueOf(primitiveType), fieldName, originalType);
}
public static PrimitiveType createDecimalType(String fieldName, boolean nullable, int precision, int scale) {
DecimalMetadata decimalMetadata = new DecimalMetadata(precision, scale);
return new PrimitiveType((nullable ? Repetition.OPTIONAL : Repetition.REQUIRED),
PrimitiveTypeName.FIXED_LEN_BYTE_ARRAY, 16, fieldName, OriginalType.DECIMAL, decimalMetadata, null);
}
public static Type createGroupElementType(String fieldName, Object element) {
if (element == null) {
return Types.repeated(PrimitiveTypeName.BINARY).as(OriginalType.UTF8).named(fieldName);
}
if (String.class.isInstance(element)) {
return Types.repeated(PrimitiveTypeName.BINARY).as(OriginalType.UTF8).named(fieldName);
} else if (Double.class.isInstance(element)) {
return Types.repeated(PrimitiveTypeName.DOUBLE).named(fieldName);
} else if (Float.class.isInstance(element)) {
return Types.repeated(PrimitiveTypeName.FLOAT).named(fieldName);
} else if (Byte.class.isInstance(element)) {
return Types.repeated(PrimitiveTypeName.INT32).as(OriginalType.INT_8).named(fieldName);
} else if (Short.class.isInstance(element)) {
return Types.repeated(PrimitiveTypeName.INT32).as(OriginalType.INT_16).named(fieldName);
} else if (Integer.class.isInstance(element)) {
return Types.repeated(PrimitiveTypeName.INT32).named(fieldName);
} else if (Long.class.isInstance(element)) {
return Types.repeated(PrimitiveTypeName.INT64).named(fieldName);
} else if (Boolean.class.isInstance(element)) {
return Types.repeated(PrimitiveTypeName.BOOLEAN).named(fieldName);
} else if (Date.class.isInstance(element)) {
return Types.repeated(PrimitiveTypeName.INT64).as(OriginalType.TIMESTAMP_MILLIS).named(fieldName);
} else if (Group.class.isInstance(element)) {
return ((Group) element).getType();
} else {
throw new IllegalArgumentException("Unsupported type: " + element.getClass().getCanonicalName()
+ " for group type field '" + fieldName + "'");
}
}
public static GroupType createGroupType(String fieldName, boolean nullable, Object element) {
GroupBuilder<GroupType> builder = null;
if (nullable) {
builder = Types.optionalGroup();
} else {
builder = Types.requiredGroup();
}
return builder.as(OriginalType.LIST).addField(createGroupElementType("array", element)).named(fieldName);
}
/*
* Here group only support List value with one field
*/
public static List<Object> groupFieldValueToList(Group group) {
if (group == null) {
return null;
}
List<Object> values = new ArrayList<>();
int listSize = group.getFieldRepetitionCount(0);
for (int elementIndex = 0; elementIndex < listSize; elementIndex++) {
Type elementType = group.getType().getType(0);
if (elementType.isPrimitive()) {
PrimitiveType pType = elementType.asPrimitiveType();
switch (pType.getPrimitiveTypeName()) {
case INT64:
if (OriginalType.TIMESTAMP_MILLIS == elementType.getOriginalType()) {
values.add(new Date(group.getLong(0, elementIndex)));
} else {
values.add(group.getLong(0, elementIndex));
}
break;
case INT32:
values.add(group.getInteger(0, elementIndex));
break;
case BOOLEAN:
values.add(group.getBoolean(0, elementIndex));
break;
case INT96:
Binary value = group.getInt96(0, elementIndex);
if (value != null) {
NanoTime nanoTime = NanoTime.fromBinary(value);
values.add(new Date(NanoTimeUtils.getTimestamp(nanoTime, false).getTime()));
} else {
values.add(value);
}
break;
case FLOAT:
values.add(group.getFloat(0, elementIndex));
break;
case DOUBLE:
values.add(group.getDouble(0, elementIndex));
break;
default:
values.add(group.getValueToString(0, elementIndex));
}
} else {
values.add(groupFieldValueToList(group.getGroup(0, elementIndex)));
}
}
return values;
}
public static void writeGroupField(Group nestGroup, List<?> values) {
if (values == null || values.isEmpty()) {
return;
}
// only support one field currently
for (int i = 0; i < values.size(); i++) {
Object element = values.get(i);
if (String.class.isInstance(element)) {
nestGroup.add(0, (String) element);
} else if (Double.class.isInstance(element)) {
nestGroup.add(0, (Double) element);
} else if (Float.class.isInstance(element)) {
nestGroup.add(0, (Float) element);
} else if (Byte.class.isInstance(element)) {
nestGroup.add(0, (Byte) element);
} else if (Short.class.isInstance(element)) {
nestGroup.add(0, (Short) element);
} else if (Integer.class.isInstance(element)) {
nestGroup.add(0, (Integer) element);
} else if (Long.class.isInstance(element)) {
nestGroup.add(0, (Long) element);
} else if (Boolean.class.isInstance(element)) {
nestGroup.add(0, (Boolean) element);
} else if (Date.class.isInstance(element)) {
nestGroup.add(0, ((Date) element).getTime());
} else if (Group.class.isInstance(element)) {
nestGroup.add(0, (Group) element);
} else {
throw new IllegalArgumentException("Unsupported type: " + element.getClass().getCanonicalName()
+ " for group type field '" + nestGroup + "'");
}
}
}
public static BigDecimal binaryToDecimal(Binary value, int precision, int scale) {
/*
* Precision <= 18 checks for the max number of digits for an unscaled long,
* else treat with big integer conversion
*/
if (precision <= 18) {
ByteBuffer buffer = value.toByteBuffer();
byte[] bytes = buffer.array();
int start = buffer.arrayOffset() + buffer.position();
int end = buffer.arrayOffset() + buffer.limit();
long unscaled = 0L;
int i = start;
while (i < end) {
unscaled = (unscaled << 8 | bytes[i] & 0xff);
i++;
}
int bits = 8 * (end - start);
long unscaledNew = (unscaled << (64 - bits)) >> (64 - bits);
// Use the exact (unscaledValue, scale) factory; dividing by Math.pow(10, scale)
// would go through a double and lose precision for large unscaled values.
return BigDecimal.valueOf(unscaledNew, scale);
} else {
return new BigDecimal(new BigInteger(value.getBytes()), scale);
}
}
public static Binary decimalToBinary(BigDecimal decimalValue, int scale) {
// First we need to make sure the BigDecimal matches our schema scale:
decimalValue = decimalValue.setScale(scale, RoundingMode.HALF_UP);
// Next we get the decimal value as one BigInteger (like there was no decimal
// point)
BigInteger unscaledDecimalValue = decimalValue.unscaledValue();
// Finally we serialize the integer
byte[] decimalBytes = unscaledDecimalValue.toByteArray();
byte[] decimalBuffer = new byte[16];
if (decimalBuffer.length >= decimalBytes.length) {
// Because the fixed byte array size is 16 bytes, we pad-left the value's
// bytes; the padding must sign-extend (0xFF for negative values) to keep
// the two's-complement encoding valid
byte paddingByte = (byte) (decimalBytes[0] < 0 ? 0xFF : 0x00);
int offset = decimalBuffer.length - decimalBytes.length;
for (int i = 0; i < offset; i++) {
decimalBuffer[i] = paddingByte;
}
System.arraycopy(decimalBytes, 0, decimalBuffer, offset, decimalBytes.length);
} else {
throw new IllegalArgumentException(String.format("Decimal size: %d was greater than the allowed max: %d",
decimalBytes.length, decimalBuffer.length));
}
return Binary.fromReusedByteArray(decimalBuffer);
}
}
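`binaryToDecimal` and `decimalToBinary` above are the two halves of the FIXED_LEN_BYTE_ARRAY(16) DECIMAL encoding: reading sign-extends a big-endian value of up to 8 bytes with a double shift, and writing left-pads the unscaled two's-complement bytes out to 16 bytes. A stdlib-only sketch of both pieces, with `Binary` replaced by a raw `byte[]` (class and method names are illustrative):

```java
import java.math.BigDecimal;
import java.math.BigInteger;
import java.math.RoundingMode;

public class DecimalBytesSketch {
    // Read the first `len` bytes as a big-endian two's-complement long:
    // shifting the top byte up to bit 63 and arithmetic-shifting back
    // replicates the value's sign bit into the unused upper bits.
    public static long readBigEndian(byte[] bytes, int len) {
        long unscaled = 0L;
        for (int i = 0; i < len; i++) {
            unscaled = (unscaled << 8) | (bytes[i] & 0xffL);
        }
        int bits = 8 * len;
        return (unscaled << (64 - bits)) >> (64 - bits);
    }

    // Serialize a BigDecimal as a sign-extended 16-byte two's-complement array.
    public static byte[] toFixed16(BigDecimal value, int scale) {
        byte[] unscaled = value.setScale(scale, RoundingMode.HALF_UP).unscaledValue().toByteArray();
        if (unscaled.length > 16) {
            throw new IllegalArgumentException("value needs more than 16 bytes");
        }
        byte[] out = new byte[16];
        byte pad = (byte) (unscaled[0] < 0 ? 0xFF : 0x00); // sign extension
        int offset = out.length - unscaled.length;
        for (int i = 0; i < offset; i++) {
            out[i] = pad;
        }
        System.arraycopy(unscaled, 0, out, offset, unscaled.length);
        return out;
    }

    // Rebuild the decimal: BigInteger interprets the whole 16 bytes as
    // two's complement, so sign-extended padding round-trips exactly.
    public static BigDecimal fromFixed16(byte[] bytes, int scale) {
        return new BigDecimal(new BigInteger(bytes), scale);
    }
}
```

The sign-extending pad byte is what keeps negative decimals intact; padding with zeros would turn a negative unscaled value into a large positive one when read back.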


@@ -0,0 +1,86 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.talend.parquet;
import java.io.IOException;
import java.util.concurrent.Callable;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.column.statistics.Statistics;
import org.hamcrest.CoreMatchers;
import org.junit.Assert;
public class TestUtils {
public static void enforceEmptyDir(Configuration conf, Path path) throws IOException {
FileSystem fs = path.getFileSystem(conf);
if (fs.exists(path)) {
if (!fs.delete(path, true)) {
throw new IOException("can not delete path " + path);
}
}
if (!fs.mkdirs(path)) {
throw new IOException("can not create path " + path);
}
}
/**
* A convenience method to avoid a large number of @Test(expected=...) tests
*
* @param message A String message to describe this assertion
* @param expected An Exception class that the Runnable should throw
* @param callable A Callable that is expected to throw the exception
*/
public static void assertThrows(String message, Class<? extends Exception> expected, Callable callable) {
try {
callable.call();
Assert.fail("No exception was thrown (" + message + "), expected: " + expected.getName());
} catch (Exception actual) {
try {
Assert.assertEquals(message, expected, actual.getClass());
} catch (AssertionError e) {
e.addSuppressed(actual);
throw e;
}
}
}
public static void assertStatsValuesEqual(Statistics<?> stats1, Statistics<?> stats2) {
assertStatsValuesEqual(null, stats1, stats2);
}
// To be used to assert that the values (min, max, num-of-nulls) equals. It
// might be used in cases when creating
// Statistics object for the proper Type would require too much work/code
// duplications etc.
public static void assertStatsValuesEqual(String message, Statistics<?> expected, Statistics<?> actual) {
if (expected == actual) {
return;
}
if (expected == null || actual == null) {
Assert.assertEquals(expected, actual);
}
Assert.assertThat(actual, CoreMatchers.instanceOf(expected.getClass()));
Assert.assertArrayEquals(message, expected.getMaxBytes(), actual.getMaxBytes());
Assert.assertArrayEquals(message, expected.getMinBytes(), actual.getMinBytes());
Assert.assertEquals(message, expected.getNumNulls(), actual.getNumNulls());
}
}
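The `assertThrows` helper above wraps the call in a try/catch so one test can check several failure cases without `@Test(expected=...)`. The same pattern, sketched without JUnit using plain `AssertionError`s (illustrative names, not part of this library):

```java
import java.util.concurrent.Callable;

public class AssertThrowsSketch {
    // Run the callable; pass only if it throws exactly the expected exception type.
    public static void assertThrows(Class<? extends Exception> expected, Callable<?> callable) {
        try {
            callable.call();
        } catch (Exception actual) {
            if (!expected.equals(actual.getClass())) {
                throw new AssertionError(
                        "expected " + expected.getName() + " but got " + actual.getClass().getName());
            }
            return; // expected failure observed
        }
        throw new AssertionError("no exception thrown, expected " + expected.getName());
    }
}
```

JUnit 4.13 and JUnit 5 later shipped equivalent `Assert.assertThrows`/`Assertions.assertThrows` methods, which are generally preferable when available.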


@@ -0,0 +1,63 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.talend.parquet.hadoop;
import static org.junit.Assert.assertEquals;
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.parquet.hadoop.api.ReadSupport;
import org.apache.parquet.schema.MessageType;
import org.apache.parquet.schema.MessageTypeParser;
import org.junit.Test;
import org.talend.parquet.hadoop.TalendGroupReadSupport;
public class TalendGroupReadSupportTest {
private String fullSchemaStr = "message example {\n" + "required int32 line;\n" + "optional binary content;\n"
+ "}";
private String partialSchemaStr = "message example {\n" + "required int32 line;\n" + "}";
@Test
public void testInitWithoutSpecifyingRequestSchema() throws Exception {
TalendGroupReadSupport s = new TalendGroupReadSupport();
Configuration configuration = new Configuration();
Map<String, String> keyValueMetaData = new HashMap<String, String>();
MessageType fileSchema = MessageTypeParser.parseMessageType(fullSchemaStr);
ReadSupport.ReadContext context = s.init(configuration, keyValueMetaData, fileSchema);
assertEquals(fileSchema, context.getRequestedSchema());
}
@Test
public void testInitWithPartialSchema() {
TalendGroupReadSupport s = new TalendGroupReadSupport();
Configuration configuration = new Configuration();
Map<String, String> keyValueMetaData = new HashMap<String, String>();
MessageType fileSchema = MessageTypeParser.parseMessageType(fullSchemaStr);
MessageType partialSchema = MessageTypeParser.parseMessageType(partialSchemaStr);
configuration.set(ReadSupport.PARQUET_READ_SCHEMA, partialSchemaStr);
ReadSupport.ReadContext context = s.init(configuration, keyValueMetaData, fileSchema);
assertEquals(partialSchema, context.getRequestedSchema());
}
}


@@ -0,0 +1,169 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.talend.parquet.hadoop;
import static java.util.Arrays.asList;
import static org.apache.parquet.column.Encoding.DELTA_BYTE_ARRAY;
import static org.apache.parquet.column.Encoding.PLAIN;
import static org.apache.parquet.column.Encoding.PLAIN_DICTIONARY;
import static org.apache.parquet.column.Encoding.RLE_DICTIONARY;
import static org.apache.parquet.format.converter.ParquetMetadataConverter.NO_FILTER;
import static org.apache.parquet.hadoop.ParquetFileReader.readFooter;
import static org.apache.parquet.hadoop.metadata.CompressionCodecName.UNCOMPRESSED;
import static org.apache.parquet.schema.MessageTypeParser.parseMessageType;
import static org.apache.parquet.schema.Type.Repetition.REQUIRED;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
import java.io.File;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.Callable;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.column.Encoding;
import org.apache.parquet.column.ParquetProperties;
import org.apache.parquet.column.ParquetProperties.WriterVersion;
import org.apache.parquet.example.data.Group;
import org.apache.parquet.example.data.simple.SimpleGroupFactory;
import org.apache.parquet.hadoop.ParquetReader;
import org.apache.parquet.hadoop.ParquetWriter;
import org.apache.parquet.hadoop.example.ExampleParquetWriter;
import org.apache.parquet.hadoop.example.GroupReadSupport;
import org.apache.parquet.hadoop.example.GroupWriteSupport;
import org.apache.parquet.hadoop.metadata.BlockMetaData;
import org.apache.parquet.hadoop.metadata.ColumnChunkMetaData;
import org.apache.parquet.hadoop.metadata.ParquetMetadata;
import org.apache.parquet.io.api.Binary;
import org.apache.parquet.schema.GroupType;
import org.apache.parquet.schema.InvalidSchemaException;
import org.apache.parquet.schema.MessageType;
import org.apache.parquet.schema.Types;
import org.junit.Assert;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;
import org.talend.parquet.TestUtils;
public class TestParquetWriter {
@Test
public void test() throws Exception {
Configuration conf = new Configuration();
Path root = new Path("target/tests/TestParquetWriter/");
TestUtils.enforceEmptyDir(conf, root);
MessageType schema = parseMessageType(
"message test { "
+ "required binary binary_field; "
+ "required int32 int32_field; "
+ "required int64 int64_field; "
+ "required boolean boolean_field; "
+ "required float float_field; "
+ "required double double_field; "
+ "required fixed_len_byte_array(3) flba_field; "
+ "required int96 int96_field; "
+ "} ");
GroupWriteSupport.setSchema(schema, conf);
SimpleGroupFactory f = new SimpleGroupFactory(schema);
Map<String, Encoding> expected = new HashMap<String, Encoding>();
expected.put("10-" + ParquetProperties.WriterVersion.PARQUET_1_0, PLAIN_DICTIONARY);
expected.put("1000-" + ParquetProperties.WriterVersion.PARQUET_1_0, PLAIN);
expected.put("10-" + ParquetProperties.WriterVersion.PARQUET_2_0, RLE_DICTIONARY);
expected.put("1000-" + ParquetProperties.WriterVersion.PARQUET_2_0, DELTA_BYTE_ARRAY);
for (int modulo : asList(10, 1000)) {
for (WriterVersion version : WriterVersion.values()) {
Path file = new Path(root, version.name() + "_" + modulo);
ParquetWriter<Group> writer = new ParquetWriter<Group>(
file,
new GroupWriteSupport(),
UNCOMPRESSED, 1024, 1024, 512, true, false, version, conf);
for (int i = 0; i < 1000; i++) {
writer.write(
f.newGroup()
.append("binary_field", "test" + (i % modulo))
.append("int32_field", 32)
.append("int64_field", 64L)
.append("boolean_field", true)
.append("float_field", 1.0f)
.append("double_field", 2.0d)
.append("flba_field", "foo")
.append("int96_field", Binary.fromConstantByteArray(new byte[12])));
}
writer.close();
ParquetReader<Group> reader = ParquetReader.builder(new GroupReadSupport(), file).withConf(conf).build();
for (int i = 0; i < 1000; i++) {
Group group = reader.read();
assertEquals("test" + (i % modulo), group.getBinary("binary_field", 0).toStringUsingUTF8());
assertEquals(32, group.getInteger("int32_field", 0));
assertEquals(64L, group.getLong("int64_field", 0));
assertEquals(true, group.getBoolean("boolean_field", 0));
assertEquals(1.0f, group.getFloat("float_field", 0), 0.001);
assertEquals(2.0d, group.getDouble("double_field", 0), 0.001);
assertEquals("foo", group.getBinary("flba_field", 0).toStringUsingUTF8());
assertEquals(Binary.fromConstantByteArray(new byte[12]),
group.getInt96("int96_field",0));
}
reader.close();
ParquetMetadata footer = readFooter(conf, file, NO_FILTER);
for (BlockMetaData blockMetaData : footer.getBlocks()) {
for (ColumnChunkMetaData column : blockMetaData.getColumns()) {
if (column.getPath().toDotString().equals("binary_field")) {
String key = modulo + "-" + version;
Encoding expectedEncoding = expected.get(key);
assertTrue(
key + ":" + column.getEncodings() + " should contain " + expectedEncoding,
column.getEncodings().contains(expectedEncoding));
}
}
}
assertEquals("Object model property should be example",
"example", footer.getFileMetaData().getKeyValueMetaData()
.get(ParquetWriter.OBJECT_MODEL_NAME_PROP));
}
}
}
@Rule
public TemporaryFolder temp = new TemporaryFolder();
@Test
public void testBadWriteSchema() throws IOException {
final File file = temp.newFile("test.parquet");
file.delete();
TestUtils.assertThrows("Should reject a schema with an empty group",
InvalidSchemaException.class, new Callable<Void>() {
@Override
public Void call() throws IOException {
ExampleParquetWriter.builder(new Path(file.toString()))
.withType(Types.buildMessage()
.addField(new GroupType(REQUIRED, "invalid_group"))
.named("invalid_message"))
.build();
return null;
}
});
Assert.assertFalse("Should not create a file when schema is rejected",
file.exists());
}
}
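The `testBadWriteSchema` test above relies on `TestUtils.assertThrows`, whose implementation is not shown in this diff. A minimal stdlib-only sketch of the same pattern, under the assumption that it runs the callable and fails unless the expected exception type is thrown (the class name `AssertThrowsDemo` is hypothetical, not from the diff):

```java
import java.util.concurrent.Callable;

public class AssertThrowsDemo {

    // Run the callable and fail unless it throws an instance of `expected`.
    public static void assertThrows(String message, Class<? extends Throwable> expected, Callable<?> callable) {
        try {
            callable.call();
        } catch (Throwable t) {
            if (expected.isInstance(t)) {
                return; // the expected failure was observed
            }
            throw new AssertionError(message + ": wrong exception type " + t.getClass(), t);
        }
        throw new AssertionError(message + ": no exception thrown");
    }

    public static void main(String[] args) {
        assertThrows("parsing garbage should fail", NumberFormatException.class,
                () -> Integer.parseInt("not a number"));
        System.out.println("ok");
    }
}
```

Catching `Throwable` (not just `Exception`) also covers assertion errors raised inside the callable, which is why the helper rethrows a fresh `AssertionError` with the original attached as the cause.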


@@ -0,0 +1,271 @@
package org.talend.parquet.util;

import static org.apache.parquet.schema.MessageTypeParser.parseMessageType;
import static org.apache.parquet.schema.OriginalType.DECIMAL;
import static org.apache.parquet.schema.PrimitiveType.PrimitiveTypeName.FIXED_LEN_BYTE_ARRAY;
import static org.apache.parquet.schema.Type.Repetition.REQUIRED;

import java.math.BigDecimal;
import java.util.Arrays;
import java.util.List;

import org.apache.parquet.schema.DecimalMetadata;
import org.apache.parquet.schema.GroupType;
import org.apache.parquet.schema.MessageType;
import org.apache.parquet.schema.OriginalType;
import org.apache.parquet.schema.PrimitiveType;
import org.apache.parquet.schema.PrimitiveType.PrimitiveTypeName;
import org.apache.parquet.schema.Type;
import org.hamcrest.MatcherAssert;
import org.hamcrest.Matchers;
import org.junit.Assert;
import org.junit.Test;
import org.talend.parquet.data.Group;
import org.talend.parquet.data.simple.SimpleGroup;
import org.talend.parquet.utils.TalendParquetUtils;

public class TalendParquetUtilsTest {

    @Test
    public void testGetStringList() {
        MessageType schema = parseMessageType("message Schema { " //
                + "  optional int64 field0; " //
                + "  optional group field1 { " //
                + "    repeated binary field2 (UTF8); " //
                + "  } " //
                + "}");
        Group group = new SimpleGroup(schema.getType(1).asGroupType());
        group.add(0, "element 1");
        group.add(0, "element 2");
        group.add(0, "element 3");
        group.add(0, "element 4");

        List<Object> values = TalendParquetUtils.groupFieldValueToList(group);
        MatcherAssert.assertThat("", values, Matchers.contains("element 1", "element 2", "element 3", "element 4"));
    }

    @Test
    public void testGetIntList() {
        MessageType schema = parseMessageType("message Schema { " //
                + "  optional int64 field0; " //
                + "  optional group field1 { " //
                + "    repeated int32 field2; " //
                + "  } " //
                + "}");
        Group group = new SimpleGroup(schema.getType(1).asGroupType());
        group.add(0, 123);
        group.add(0, 345);
        group.add(0, 431);

        List<Object> values = TalendParquetUtils.groupFieldValueToList(group);
        MatcherAssert.assertThat("", values, Matchers.contains(123, 345, 431));
    }

    @SuppressWarnings("unchecked")
    @Test
    public void testNestGroupList() {
        MessageType schema = parseMessageType("message Schema { " //
                + "  optional int64 field0; " //
                + "  optional group field1 { " //
                + "    repeated group field2 { " //
                + "      repeated double field3; " //
                + "    } " //
                + "  } " //
                + "}");
        Group group = new SimpleGroup(schema.getType(1).asGroupType());
        Group nest1 = new SimpleGroup(schema.getType(1).asGroupType().getType(0).asGroupType());
        nest1.add(0, 123.0);
        nest1.add(0, 345.0);
        nest1.add(0, 431.0);
        Group nest2 = new SimpleGroup(schema.getType(1).asGroupType().getType(0).asGroupType());
        nest2.add(0, 2123.0);
        nest2.add(0, 2345.0);
        nest2.add(0, 2431.0);
        group.add(0, nest1);
        group.add(0, nest2);

        List<Object> values = TalendParquetUtils.groupFieldValueToList(group);
        MatcherAssert.assertThat("", (List<Object>) values.get(0), Matchers.contains(123.0, 345.0, 431.0));
        MatcherAssert.assertThat("", (List<Object>) values.get(1), Matchers.contains(2123.0, 2345.0, 2431.0));
    }

    @Test
    public void testNullGroupList() {
        List<Object> values = TalendParquetUtils.groupFieldValueToList(null);
        Assert.assertNull(values);
    }

    @Test
    public void testCreateGroupElementType() {
        Type emptyElement = TalendParquetUtils.createGroupElementType("field0", null);
        Assert.assertEquals(PrimitiveTypeName.BINARY, emptyElement.asPrimitiveType().getPrimitiveTypeName());
        emptyElement = TalendParquetUtils.createGroupElementType("field0", "1");
        Assert.assertEquals(PrimitiveTypeName.BINARY, emptyElement.asPrimitiveType().getPrimitiveTypeName());
        emptyElement = TalendParquetUtils.createGroupElementType("field0", 1.0);
        Assert.assertEquals(PrimitiveTypeName.DOUBLE, emptyElement.asPrimitiveType().getPrimitiveTypeName());
        emptyElement = TalendParquetUtils.createGroupElementType("field0", 1.0f);
        Assert.assertEquals(PrimitiveTypeName.FLOAT, emptyElement.asPrimitiveType().getPrimitiveTypeName());
        emptyElement = TalendParquetUtils.createGroupElementType("field0", 1);
        Assert.assertEquals(PrimitiveTypeName.INT32, emptyElement.asPrimitiveType().getPrimitiveTypeName());
        emptyElement = TalendParquetUtils.createGroupElementType("field0", 1L);
        Assert.assertEquals(PrimitiveTypeName.INT64, emptyElement.asPrimitiveType().getPrimitiveTypeName());
        emptyElement = TalendParquetUtils.createGroupElementType("field0", true);
        Assert.assertEquals(PrimitiveTypeName.BOOLEAN, emptyElement.asPrimitiveType().getPrimitiveTypeName());

        // Nested group
        MessageType schema = parseMessageType("message Schema { " //
                + "  optional group field1 { " //
                + "    repeated group field2 { " //
                + "      repeated double field3; " //
                + "    } " //
                + "  } " //
                + "}");
        Group group = new SimpleGroup(schema.getType(0).asGroupType());
        Group nest1 = new SimpleGroup(schema.getType(0).asGroupType().getType(0).asGroupType());
        nest1.add(0, 123.0);
        nest1.add(0, 345.0);
        nest1.add(0, 431.0);
        Group nest2 = new SimpleGroup(schema.getType(0).asGroupType().getType(0).asGroupType());
        nest2.add(0, 2123.0);
        nest2.add(0, 2345.0);
        nest2.add(0, 2431.0);
        group.add(0, nest1);
        group.add(0, nest2);
        Assert.assertFalse("Should be group type", group.getType().isPrimitive());
        Assert.assertEquals(2, group.getFieldRepetitionCount(0));

        emptyElement = TalendParquetUtils.createGroupElementType("field0", group);
        Assert.assertFalse("Should be group type", emptyElement.isPrimitive());
        Assert.assertEquals(schema.getType(0).asGroupType(), emptyElement);
    }

    @Test
    public void testCreateGroupType() {
        GroupType emptyElement = TalendParquetUtils.createGroupType("field0", true, null);
        Assert.assertEquals(OriginalType.LIST, emptyElement.asGroupType().getOriginalType());
        Assert.assertEquals(OriginalType.UTF8, emptyElement.getType(0).asPrimitiveType().getOriginalType());
        emptyElement = TalendParquetUtils.createGroupType("field0", true, 2);
        Assert.assertEquals(OriginalType.LIST, emptyElement.asGroupType().getOriginalType());
        Assert.assertEquals(PrimitiveTypeName.INT32, emptyElement.getType(0).asPrimitiveType().getPrimitiveTypeName());
        emptyElement = TalendParquetUtils.createGroupType("field0", true, Byte.valueOf("1"));
        Assert.assertEquals(OriginalType.LIST, emptyElement.asGroupType().getOriginalType());
        Assert.assertEquals(OriginalType.INT_8, emptyElement.getType(0).asPrimitiveType().getOriginalType());
        Assert.assertEquals(PrimitiveTypeName.INT32, emptyElement.getType(0).asPrimitiveType().getPrimitiveTypeName());
        emptyElement = TalendParquetUtils.createGroupType("field0", true, Short.valueOf("1"));
        Assert.assertEquals(OriginalType.LIST, emptyElement.asGroupType().getOriginalType());
        Assert.assertEquals(OriginalType.INT_16, emptyElement.getType(0).asPrimitiveType().getOriginalType());
        Assert.assertEquals(PrimitiveTypeName.INT32, emptyElement.getType(0).asPrimitiveType().getPrimitiveTypeName());
    }

    @Test
    public void testWriteGroupField() {
        MessageType schema = parseMessageType("message Schema { " //
                + "  optional group field0 (LIST) { repeated int32 array; } " //
                + "}");
        Group group = new SimpleGroup(schema.getType(0).asGroupType());
        List<?> values = Arrays.asList(1, 2, 3);
        TalendParquetUtils.writeGroupField(group, values);
        Assert.assertEquals(3, group.getFieldRepetitionCount(0));

        schema = parseMessageType("message Schema { " //
                + "  optional group field0 (LIST) { repeated int32 array (INT_8); } " //
                + "}");
        group = new SimpleGroup(schema.getType(0).asGroupType());
        values = Arrays.asList(Byte.valueOf("1"), Byte.valueOf("2"));
        TalendParquetUtils.writeGroupField(group, values);
        Assert.assertEquals(2, group.getFieldRepetitionCount(0));

        schema = parseMessageType("message Schema { " //
                + "  optional group field0 (LIST) { repeated int32 array (INT_16); } " //
                + "}");
        group = new SimpleGroup(schema.getType(0).asGroupType());
        values = Arrays.asList(Short.valueOf("1"));
        TalendParquetUtils.writeGroupField(group, values);
        Assert.assertEquals(1, group.getFieldRepetitionCount(0));

        schema = parseMessageType("message Schema { " //
                + "  optional group field0 (LIST) { repeated int64 array; } " //
                + "}");
        group = new SimpleGroup(schema.getType(0).asGroupType());
        values = Arrays.asList(1L, 2L, 3L);
        TalendParquetUtils.writeGroupField(group, values);
        Assert.assertEquals(3, group.getFieldRepetitionCount(0));

        schema = parseMessageType("message Schema { " //
                + "  optional group field0 (LIST) { repeated double array; } " //
                + "}");
        group = new SimpleGroup(schema.getType(0).asGroupType());
        values = Arrays.asList(1.0, 2.0, 3.0);
        TalendParquetUtils.writeGroupField(group, values);
        Assert.assertEquals(3, group.getFieldRepetitionCount(0));

        schema = parseMessageType("message Schema { " //
                + "  optional group field0 (LIST) { repeated float array; } " //
                + "}");
        group = new SimpleGroup(schema.getType(0).asGroupType());
        values = Arrays.asList(1.0f, 2.0f, 3.0f);
        TalendParquetUtils.writeGroupField(group, values);
        Assert.assertEquals(3, group.getFieldRepetitionCount(0));

        schema = parseMessageType("message Schema { " //
                + "  optional group field0 (LIST) { repeated binary array (UTF8); } " //
                + "}");
        group = new SimpleGroup(schema.getType(0).asGroupType());
        values = Arrays.asList("element 1", "element 2");
        TalendParquetUtils.writeGroupField(group, values);
        Assert.assertEquals(2, group.getFieldRepetitionCount(0));

        schema = parseMessageType("message Schema { " //
                + "  optional group field0 (LIST) { repeated boolean array; } " //
                + "}");
        group = new SimpleGroup(schema.getType(0).asGroupType());
        values = Arrays.asList(true, false);
        TalendParquetUtils.writeGroupField(group, values);
        Assert.assertEquals(2, group.getFieldRepetitionCount(0));
    }

    @Test
    public void testDecimalAnnotation() {
        MessageType schema = new MessageType("DecimalMessage", new PrimitiveType(REQUIRED, FIXED_LEN_BYTE_ARRAY, 16,
                "aDecimal", DECIMAL, new DecimalMetadata(38, 2), null));
        BigDecimal decimalValue = new BigDecimal("1234423199.9999");

        Group group = new SimpleGroup(schema);
        group.append("aDecimal", TalendParquetUtils.decimalToBinary(decimalValue, 5));
        Assert.assertEquals(decimalValue.setScale(5), TalendParquetUtils.binaryToDecimal(group.getBinary(0, 0), 38, 5));

        group = new SimpleGroup(schema);
        group.append("aDecimal", TalendParquetUtils.decimalToBinary(decimalValue, 4));
        Assert.assertEquals(decimalValue, TalendParquetUtils.binaryToDecimal(group.getBinary(0, 0), 38, 4));

        decimalValue = new BigDecimal("1234");
        group = new SimpleGroup(schema);
        group.append("aDecimal", TalendParquetUtils.decimalToBinary(decimalValue, 0));
        Assert.assertEquals(decimalValue, TalendParquetUtils.binaryToDecimal(group.getBinary(0, 0), 10, 0));
    }
}
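`testDecimalAnnotation` round-trips a `BigDecimal` through `TalendParquetUtils.decimalToBinary` / `binaryToDecimal`, whose implementations are not part of this diff. Assuming they follow the standard Parquet DECIMAL encoding for `FIXED_LEN_BYTE_ARRAY` (the unscaled value as a big-endian two's-complement integer, sign-extended to the fixed length of 16 bytes), the round trip can be sketched with the standard library alone (the class name `DecimalBinaryDemo` and its method names are hypothetical, not the repo's API):

```java
import java.math.BigDecimal;
import java.math.BigInteger;
import java.util.Arrays;

public class DecimalBinaryDemo {

    // Encode a BigDecimal's unscaled value as a fixed-length, big-endian,
    // two's-complement byte array, sign-extended to `length` bytes.
    public static byte[] toFixedBytes(BigDecimal value, int length) {
        byte[] raw = value.unscaledValue().toByteArray();
        if (raw.length > length) {
            throw new IllegalArgumentException("value does not fit in " + length + " bytes");
        }
        byte[] out = new byte[length];
        // Sign extension: leading bytes are 0xFF for negative values, 0x00 otherwise.
        byte pad = (byte) (value.signum() < 0 ? 0xFF : 0x00);
        Arrays.fill(out, 0, length - raw.length, pad);
        System.arraycopy(raw, 0, out, length - raw.length, raw.length);
        return out;
    }

    // Decode the fixed-length bytes back into a BigDecimal at the given scale.
    public static BigDecimal fromFixedBytes(byte[] bytes, int scale) {
        return new BigDecimal(new BigInteger(bytes), scale);
    }

    public static void main(String[] args) {
        BigDecimal original = new BigDecimal("1234423199.9999"); // scale 4, as in the test above
        byte[] encoded = toFixedBytes(original, 16);
        BigDecimal decoded = fromFixedBytes(encoded, 4);
        System.out.println(decoded);
    }
}
```

Note that the scale is not stored in the bytes themselves; the reader must supply it from the column's DECIMAL annotation, which is why `binaryToDecimal` in the test takes precision and scale as arguments.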
