1. Introduction to Hierarchy Builder Transformation
Informatica Cloud supports creating a JSON or XML file through Hierarchy Builder transformation in Cloud Mappings. Hierarchy Builder transformation processes relational input from the upstream transformation and provides JSON or XML output to the downstream transformation.
This is the opposite of the Hierarchy Parser transformation discussed in our previous article, which reads JSON and XML files and provides relational output.
2. Hierarchical schema
A Hierarchical Schema defines the structure of the file that a Hierarchy Builder transformation creates from relational input.
Similar to the Hierarchy Parser, a Hierarchical Schema needs to be created and associated with the Hierarchy Builder transformation.
Let us discuss in detail how to create a JSON file using the Hierarchy Builder transformation in Informatica Cloud.
3. Creating a JSON output file using Hierarchy Builder
3.1 Create a Hierarchical Schema
Below is the simple JSON structure in which we want our output file. This JSON structure represents an array of author details.
[{
"AUTHOR_UID": 1,
"FIRST_NAME": "Gian",
"MIDDLE_NAME": "Paulo",
"LAST_NAME": "Faleschini"
}]
Create a Hierarchical Schema from a schema template file that follows the above JSON structure.
3.2 Configuring Source Transformation
Below is the source file prepared to populate the data in the output JSON file.
Contents of author.csv file
"author_uid","first_name","middle_name","last_name"
1,"Fiona",,"Macdonald"
2,"Gian","Paulo","Faleschini"
3,"Laura","K","Egendorf"
Select the author.csv file as the source file in the Source transformation.
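Conceptually, the Hierarchy Builder turns each of these relational rows into one JSON element that follows the hierarchical schema. The short Python sketch below is only an outside-the-tool illustration of that idea; it is not Informatica code, just a way to visualize the row-to-element mapping.

import csv
import json

# Conceptual illustration only (plain Python, not Informatica code): each
# relational row of author.csv becomes one JSON element shaped like the
# hierarchical schema above.
with open("author.csv", newline="") as f:
    for row in csv.DictReader(f):
        element = {
            "AUTHOR_UID": int(row["author_uid"]),
            "FIRST_NAME": row["first_name"],
            "MIDDLE_NAME": row["middle_name"],
            "LAST_NAME": row["last_name"],
        }
        # Drop empty optional attributes, e.g. Fiona Macdonald has no middle name.
        element = {k: v for k, v in element.items() if v != ""}
        print(json.dumps(element))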
3.3 Configuring Hierarchy Builder Transformation
Drag and drop a Hierarchy Builder transformation onto the mapping canvas.
In the Output Settings, select the Hierarchical Schema we have created in the earlier step.
Map the Source transformation to the Hierarchy Builder after this step.
In the Field Mapping tab, map the source Relational Fields to the target Hierarchy Fields as shown below.
3.4 Configuring Target transformation
Once the Hierarchy Builder is set up, pass the data into a target transformation.
In the Target transformation, select the flat file connection and create a dynamic target named HB_Author_op.json.
The final mapping will be as below.
Save and run the mapping. The final output we get will be as below.
Contents of HB_Author_op.json file
[{"AUTHOR_UID":1,"FIRST_NAME":"Fiona","LAST_NAME":"Macdonald"}]
[{"AUTHOR_UID":2,"FIRST_NAME":"Gian","MIDDLE_NAME":"Paulo","LAST_NAME":"Faleschini"}]
[{"AUTHOR_UID":3,"FIRST_NAME":"Laura","MIDDLE_NAME":"K","LAST_NAME":"Egendorf"}]
The output represents multiple arrays of author details that are not connected to each other; each author record is written as an independent array. This is not valid JSON format.
This is not the expected output. The expected output should be a single array holding all the author details as separate elements, as the parsing check after the sample output below illustrates.
Expected output:
[{AUTHOR_1 Details}, {AUTHOR_2 Details}, {AUTHOR_3 Details}]
Output received:
[{AUTHOR_1 Details}]
[{AUTHOR_2 Details}]
[{AUTHOR_3 Details}]
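You can confirm the problem with any JSON parser. Below is a minimal Python sketch, run outside Informatica Cloud and shortened to two authors for brevity, showing that the received output fails to parse as one JSON document while the expected single-array shape parses cleanly.

import json

# Received output: one independent array per line -- not a single valid JSON document.
received = (
    '[{"AUTHOR_UID":1,"FIRST_NAME":"Fiona","LAST_NAME":"Macdonald"}]\n'
    '[{"AUTHOR_UID":2,"FIRST_NAME":"Gian","MIDDLE_NAME":"Paulo","LAST_NAME":"Faleschini"}]\n'
)
try:
    json.loads(received)
except json.JSONDecodeError as e:
    print("Received output is not valid JSON:", e)

# Expected output: a single array holding every author element -- valid JSON.
expected = (
    '[{"AUTHOR_UID":1,"FIRST_NAME":"Fiona","LAST_NAME":"Macdonald"},'
    '{"AUTHOR_UID":2,"FIRST_NAME":"Gian","MIDDLE_NAME":"Paulo","LAST_NAME":"Faleschini"}]'
)
print("Expected output parses into", len(json.loads(expected)), "author elements")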
4. Creating a JSON output file with a single array of multiple elements
In order to get all the author details into a single array, we need to define that in the Hierarchy Builder transformation.
As you might have noticed in the previous example, there is a rootArray element which we did not map.
All our author attributes – AUTHOR_UID, FIRST_NAME, MIDDLE_NAME and LAST_NAME – are tagged under root. That is why all these attributes come out together as a single author element for each record, while the author elements themselves remain separate.
In order to bring all the author elements under a single array, the root element needs to be tagged under rootArray.
To achieve this, we need to introduce a temporary field.
4.1 Configuring Source Transformation
Let us introduce a new source HB_Input1.csv with only one column and one row.
Contents of HB_Input1.csv file.
"Row_Type"
"Author"
The other source is the same author.csv file discussed in the previous example.
We need a common field in the author.csv file to join with the HB_Input1.csv file.
4.2 Configuring Expression Transformation
Pass the data from the author.csv Source to an Expression transformation.
In the Expression transformation, create a field 'Type' and assign it the value 'Author'.
4.3 Configuring Hierarchy Builder Transformation
In the Hierarchy Builder, we need to define a Primary Key–Foreign Key relationship between Source1 and Source2.
In the Relational Fields, click on Key to define the Row_Type field coming from the new source as the Primary key and the Type field coming from the Expression as the Foreign key.
A golden key represents the Primary key. A white key in a golden box represents the Foreign key.
Now map the Source to the rootArray in the Hierarchy Fields. This links all the root elements to the rootArray, as sketched below.
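To see why this key relationship yields a single array, here is a rough Python sketch, purely a conceptual stand-in for what the mapping does, of joining the detail rows (with their constant 'Author' foreign key added by the Expression transformation) back to the single driving row whose Row_Type value acts as the primary key.

import csv
import json

# The driving source: one row whose Row_Type value acts as the Primary Key.
root_rows = [{"Row_Type": "Author"}]

# Detail rows from author.csv, each given the constant Foreign Key value 'Author'
# (the job of the Expression transformation in the mapping).
with open("author.csv", newline="") as f:
    detail_rows = [dict(row, Type="Author") for row in csv.DictReader(f)]

# Because every detail row joins back to the single root row, all author
# elements land inside one rootArray instead of one array per record.
root_array = []
for root in root_rows:
    for row in detail_rows:
        if row["Type"] == root["Row_Type"]:
            element = {
                "AUTHOR_UID": int(row["author_uid"]),
                "FIRST_NAME": row["first_name"],
                "MIDDLE_NAME": row["middle_name"],
                "LAST_NAME": row["last_name"],
            }
            root_array.append({k: v for k, v in element.items() if v})
print(json.dumps(root_array))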
4.4 Configuring Target Transformation
Pass the data into a Target transformation; the rest of the procedure is the same as in the previous example.
The final mapping will be as below.
Save and run the mapping. The output will be as below.
Contents of HB_Author_op.json file
[{"AUTHOR_UID":1,"FIRST_NAME":"Fiona","LAST_NAME":"Macdonald"},{"AUTHOR_UID":2,"FIRST_NAME":"Gian","MIDDLE_NAME":"Paulo","LAST_NAME":"Faleschini"},{"AUTHOR_UID":3,"FIRST_NAME":"Laura","MIDDLE_NAME":"K","LAST_NAME":"Egendorf"}]
Now all the author elements are grouped under a single array. We receive the output in a single row, and it is a valid JSON file.
5. Conclusion
If the data is expected to be an array of values, a Primary Key–Foreign Key relationship needs to be established on the relational fields.
To achieve that, the appropriate joining fields need to be introduced in your source data to establish the relationship.
When there are arrays within your array, pass each array's data separately with the proper primary and foreign keys to join them, as illustrated in the sketch below.
When you read a JSON file with multiple arrays using the Hierarchy Parser, you get each array's output as a separate Output Group. Similarly, if you want to recreate the same JSON file using the Hierarchy Builder, you need to pass each array's data as a separate source with the required joining fields.
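As a purely hypothetical illustration of the nested-array case (the books data, BOOKS element and TITLE column are invented for this sketch and are not part of the example above), nesting a BOOKS array inside each author element amounts to joining a second detail source back to the author rows on AUTHOR_UID.

import json

# Hypothetical second source: one row per book, carrying AUTHOR_UID as the
# foreign key that joins each book back to its author element.
authors = [
    {"AUTHOR_UID": 1, "FIRST_NAME": "Fiona", "LAST_NAME": "Macdonald"},
    {"AUTHOR_UID": 2, "FIRST_NAME": "Gian", "LAST_NAME": "Faleschini"},
]
books = [
    {"AUTHOR_UID": 1, "TITLE": "Book A"},
    {"AUTHOR_UID": 1, "TITLE": "Book B"},
    {"AUTHOR_UID": 2, "TITLE": "Book C"},
]

# Each inner BOOKS array is built by joining the book rows to their parent
# author, mirroring the primary/foreign key mapping set up in Hierarchy Builder.
root_array = []
for author in authors:
    element = dict(author)
    element["BOOKS"] = [
        {"TITLE": b["TITLE"]} for b in books if b["AUTHOR_UID"] == author["AUTHOR_UID"]
    ]
    root_array.append(element)
print(json.dumps(root_array, indent=2))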