It's rare for two data sources to share exactly the same schema. When you integrate vendor data into BriteCore, you need a way to map objects between the two domains and transform the data for integration and reporting purposes.
For example, BriteLines is essentially a product modeler for insurance products. Consider its schema as the visible data model you can view from the BriteLines interface, not the actual tables powering BriteLines in our own database. That schema might include a BriteCore field named vehicle_identification_number.
For the purposes of a VINMaster lookup, you may want to output the vehicle_identification_number field as vin for a specific application. Data maps allow you to persistently specify that a field in BriteLines should be canonically treated as vin for that specific purpose.
Instead of hardcoding the data mapping into your UI, BriteDataMapping lets you make API calls to encode and execute mapping logic over HTTP. The service takes a source object and a mapping schema, runs the source object through the mapping, and produces the output object. It works on lists and deeply nested data structures as well as flat files.
There are three broad steps involved when using BriteDataMapping:
- Map data objects between domains: refer to the VINMaster lookup example above.
- Encode the logic: make POST /mappings/ requests to create a map of your data objects.
- Execute the transformation: make POST /map/ requests, with from and to query parameters, to execute the logic against a given piece of input data.
In short, the service uses a mapping to transform a JSON object produced in one domain into a JSON object for another domain.
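To make the flow concrete, here is a minimal sketch of the two calls using Python's requests library. The base URL, auth header, domain names, and VIN value are hypothetical; the map_key operation used here is described under Transforms below.

import requests

BASE_URL = "https://example.britecore.com/data-mapping"  # hypothetical base URL
HEADERS = {"Authorization": "Bearer <token>"}  # hypothetical auth header

# 1. Encode the logic: store a mapping between the BriteLines domain and the vendor domain.
mapping_body = {
    "data": {
        "type": "mapping",
        "attributes": {
            "from": "britelines",   # placeholder domain names
            "to": "vinmaster",
            "mappings": {
                "vehicle_identification_number": {
                    "operation": "map_key",
                    "operand": "vin",
                }
            },
        },
    }
}
requests.post(BASE_URL + "/mappings/", json=mapping_body, headers=HEADERS)

# 2. Execute the transformation against a piece of input data.
response = requests.post(
    BASE_URL + "/map/",
    params={"from": "britelines", "to": "vinmaster"},
    json={"vehicle_identification_number": "1HGCM82633A004352"},  # hypothetical VIN
    headers=HEADERS,
)
print(response.json())  # expected to contain the key "vin"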
This tutorial will walk you through the steps and considerations to apply when implementing BriteDataMapping for your integrations.
Step 1: Map the data objects between domains
To map the data objects, you need to:
- Retrieve a JSON data object from the vendor.
- Retrieve your BriteCore line output to compare with the vendor data object. You can evaluate the field_answers from the BriteLines interface; alternatively, you can make an API call to retrieve your product definition.
- Compare the two JSON data objects.
Step 2: Use BriteDataMapping to create a map for your data objects (encode the logic)
There are two possible modes to map your data:
- Default mapping mode: Defines a number of operations to apply to the data and returns all of the data. Ideal for basic data structures.
- Template mode: Renders the data to a template. Recommended for most use cases and more complex data structures, such as lists and deeply nested data structures.
To differentiate between the two modes, add a mode parameter (for example, mode=template) to the query string of the map request.
Note: The default mapping mode applies if you don't specify a mode in your map request.
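For example, the two execution requests below differ only in the mode parameter (the domain names come from the examples that follow):

Default mapping mode: POST /map/?from=payments_system&to=accounting_system
Template mode: POST /map/?from=payments_system&to=accounting_system&mode=template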
Example 1: Create a mapping in default mapping mode
Use the default mapping mode only for basic data structures, such as flat files to flat files; for example, if you have data from a payment system and want to store that data in your accounting system.
The payment object looks like:

{"payment": {"first_name": "Abigail"}}
However, the accounting system expects first_name to be in camel case, i.e., firstName. You will need to transform the key first_name to firstName and produce the following:

{"payment": {"firstName": "Abigail"}}
To create a mapping, we make a POST request to /mappings/ with a request body that follows this schema (angle brackets denote placeholders):

{
  "data": {
    "type": "mapping",
    "attributes": {
      "from": "<source domain>",
      "to": "<destination domain>",
      "mappings": {
        "<path expression>": {
          "operation": "<operation>",
          "operand": "<operand>",
          "key": "<key>",
          "scope": "<scope>"
        }
      }
    }
  }
}
Sample request
The actual request to define the mapping would be:

{
  "data": {
    "type": "mapping",
    "attributes": {
      "from": "payment_system",
      "to": "accounting_system",
      "mappings": {
        "first_name": {
          "operation": "map_key",
          "operand": "firstName"
        }
      }
    }
  }
}
Sample response
The response sample for the above payload:

{
  "type": "mapping",
  "attributes": {
    "from": "payment_system",
    "version": "1",
    "to": "accounting_system",
    "mapping": {"first_name": {"operation": "map", "key": "firstName"}}
  },
  "id": "payments_system@accounting_system@version@1",
  "links": {"self": "/mappings/my-payments-system@version@1"}
}
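With the mapping stored, you can execute it over HTTP. A minimal sketch using Python's requests library, assuming a hypothetical base URL for your BriteDataMapping service:

import requests

BASE_URL = "https://example.britecore.com/data-mapping"  # hypothetical base URL

# Execute the default-mode mapping defined above against a payment object.
payment = {"payment": {"first_name": "Abigail"}}
response = requests.post(
    BASE_URL + "/map/",
    params={"from": "payment_system", "to": "accounting_system"},
    json=payment,
)
print(response.json())  # expected: {"payment": {"firstName": "Abigail"}}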
Example 2: Create a mapping in template mode
Template mode applies a Jinja template—specified in the template key of the mapping—to the provided data.
This mode gives you the flexibility to define completely new JSON data structures from the source data.
The rendered template must be a valid Python data structure expressed as a string; it is then evaluated with literal_eval and finally returned as JSON.
POST "/map/?from=payments_system&to=accounting_system&mode=template" data_you_want_to_transform
Example template:

{"template": "{'a_key': {{source.my_key}} }"}
Returned data, with the value of source.my_key substituted:

{"a_key": "<value of source.my_key>"}
Sample template mapping

{
  "data": {
    "type": "mapping",
    "attributes": {
      "from": "<source domain>",
      "to": "<destination domain>",
      "mappings": {
        "template": "<Jinja template string>"
      }
    }
  }
}
The value of template must be a valid Jinja template string.
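Putting the schema and the example template together, here is a hedged sketch of creating a template-mode mapping with Python's requests library; the base URL and domain names are placeholders.

import requests

BASE_URL = "https://example.britecore.com/data-mapping"  # hypothetical base URL

# Create a template-mode mapping whose Jinja template renders source.my_key under a_key.
template_mapping = {
    "data": {
        "type": "mapping",
        "attributes": {
            "from": "payments_system",   # placeholder domain names
            "to": "accounting_system",
            "mappings": {
                "template": "{'a_key': {{source.my_key}} }",
            },
        },
    }
}
requests.post(BASE_URL + "/mappings/", json=template_mapping)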
Jinja Environment details
- Your data is provided to the template as source.
- OrderedDict: The OrderedDict collection is available as a global.
- regex sub: The regex sub function is available as a filter, i.e., sub("<pattern>", "<replacement>").
Sample Jinja details

"""\
{% set typeLabels = OrderedDict([
    ('type1', 'Label for type 1'),
    ('type2', 'Label for type 2'),
    ('type3', 'Label for type 3')
]) %}
{
  "name": "{{source.name}}",
  "things": [
    {% for item in source.things %}
      {% if item.a < 3 %}
        {{item.a}}{% if loop.nextitem is defined %},{% endif %}
      {% endif %}
    {% endfor %}
  ],
  "street": "{{source.address|sub("[^0-9]", "")|trim}}",
  "state": "{{source.state.split('_')[1][0:2].upper()}}",
  "claimAmt": "{{source.claims|sum(attribute="amt")}}",
  "claimTypes": [
    {% for claim in source.claims %}
      {% if claim.type %}
        {{typeLabels[claim.type]}},
      {% endif %}
    {% endfor %}
  ],
}"""
Step 3: Execute your mapping (execute the transformation)
Use the from and to values from the attributes you supplied when creating the mapping. You may also include a version.
POST "/map/?from=payments_system&to=accounting_system&version=1&mode=template" data_you_want_to_transform
The returned data will be a transformed/mapped version of the data provided.
BriteDataMapping details
Versioning
- The id of each mapping consists of the from, to, and version attributes, separated by @. For example: my-payments-system@accounting@version@1
- Each version is a separate mapping.
- Create a new version by creating a new mapping using the same data, but with a different version (see the sketch after this list).
- Once a mapping is created, changing its from, to, or version won't change the id, and the changed values won't be available to use.
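For example, to create version 2 of the payment mapping above, you could POST the same mapping data to /mappings/ with a bumped version attribute. The placement of version inside attributes is an assumption based on the sample response above.

{
  "data": {
    "type": "mapping",
    "attributes": {
      "from": "payment_system",
      "to": "accounting_system",
      "version": "2",
      "mappings": {
        "first_name": {"operation": "map_key", "operand": "firstName"}
      }
    }
  }
}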
Transforms
A mapping consists of key-value pairs; each key-value pair can be thought of as a single unit, called a Transform.
Each Transform has two objects:
- path_expression
- Operation
Path_expression
A path_expression allows you to describe the location of the input data that you're interested in mapping.
Operation
- <operation>: Describes the kind of transformation you can make to the input data.
- <operand>: A value that you provide to the operation.
- <scope>: Another Transform that, after evaluation, is used as the value for the operation instead of the value at the path_expression in the source data.
- <key>: The key at which you want to return this data.
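For example, the following Transform (reused from the recursive-mapping example later in this tutorial) targets the path expression a, applies the prefix operation with the operand my_, and returns the result under the key C:

{"a": {"operation": "prefix", "key": "C", "operand": "my_"}}

Applied to {"a": "oh"}, this would be expected to produce {"C": "my_oh"}.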
Table 1 summarizes:
- Supported operations.
- The type of any required operand.
- The result of the operation on the example data.

Table 1: BriteDataMapping supported operations
Example data: {"some_field": "some_value"}
Operations | Example Mapping | Operand | Operand Types | Result | Notes |
---|---|---|---|---|---|
prefix | { "some_field": { "operation": "prefix", "operand": "Prefix " }} | "Prefix " | string | {"some_field": "Prefix some_value"} | |
from_map | { "some_field": { "operation": "from_dict", "operand": {"some_value": "zero", "another": "one"} }} | {"some_value": "zero", "another": "one"} | dictionary | {"some_field": "zero"} | The value in the original object is used as the selector/key in the operand object |
equals | { "some_field": { "operation": "equals", "operand": "another_value" }} | "another_value" | string, number, array, dictionary | {"some_field": false} | The value in the original object is compared to the operand and the result is returned in place of the original value |
default | { "random_field": { "operation": "default", "operand": "random_value" }} | "random_value" | string, number, array, dictionary | {"some_field": "some_value", "random_field": "random_value"} | If the result of the path expression does not exist, we create it. If the value mapped to the path expression is None, we replace it with the operand |
split | { "some_field": { "operation": "split", "operand": "_" }} | "_" | string | {"some_field": ["some", "value"]} | Splits a string by the operand |
slice | { "some_field": { "operation": "slice", "operand": [None, 4] }} | [None, 4] | string, array | {"some_field": "some"} | Replaces the path expression's value with a sub-selection of the original value, as specified in the operand. [None, 4] means every element from the start up to position 4 |
right_add | { "some_field": { "operation": "right_add", "operand": "addition " }} | "addition " | string, number, array | {"some_field": "addition some_value"} | Evaluates the expression operand + value |
left_add | { "some_field": { "operation": "left_add", "operand": " addition" }} | " addition" | string, number, array | {"some_field": "some_value addition"} | Evaluates the expression value + operand |
lower | { "some_field": { "operation": "lower" }} | | string | {"some_field": "some_value"} | All characters in the string are converted to lower case |
upper | { "some_field": { "operation": "upper" }} | | string | {"some_field": "SOME_VALUE"} | All characters in the string are converted to upper case |
map_key | { "some_field": { "operation": "map_key", "operand": "another_field" }} | "another_field" | string | {"another_field": "some_value"} | Replace the key that matches the path expression with the operand |
Return types
The service first converts the JSON data into Pythonic types and then operates on that data. Therefore, the return types are dependent upon the input data.
Recursive mapping
Using scopes, you can recursively transform the data. This allows you to combine operations as well.
For example, if you want to both prefix a value and convert it to uppercase (upper), you need to use a scope and a special operation called recursive_map.
This will take three Transforms:
- Prefix Transform: {"a": {"operation": "prefix", "key": "C", "operand": "my_"}}
- Upper Transform: {"a": {"operation": "upper", "key": "C", "scope": <Prefix Transform>}}
- Recursive Transform: {"a": {"operation": "recursive_map", "key": "B", "scope": <Upper Transform>}}
Example of recursive mapping

{
  "a": {
    "operation": "recursive_map",
    "key": "B",
    "scope": {
      "a": {
        "operation": "upper",
        "key": "C",
        "scope": {
          "a": {
            "operation": "prefix",
            "key": "C",
            "operand": "my_"
          }
        }
      }
    }
  }
}
Data provided

{"a": "oh"}
Resulting data after mapping

{"B": {"C": "MY_OH"}}
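A minimal sketch of executing the recursive mapping over HTTP, assuming it was created with the placeholder domains domain_a and domain_b and a hypothetical base URL:

import requests

BASE_URL = "https://example.britecore.com/data-mapping"  # hypothetical base URL

# Run the recursive mapping against the sample input; per the example above,
# the expected result is {"B": {"C": "MY_OH"}}.
response = requests.post(
    BASE_URL + "/map/",
    params={"from": "domain_a", "to": "domain_b"},  # placeholder domain names
    json={"a": "oh"},
)
print(response.json())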
Plugins
BriteCore's UI Plugins client has a mapping request tool, BriteCorePluginRequest.makeMappingRequest(from, to, data, mode), which simplifies BriteDataMapping usage from plugins.