Semantics to increase synergies and move towards the Industry 4.0 Digital Thread (2/2)

As we explained in part #1, in MarkLogic we can store PLM objects (Part, Assembly, etc.) and product structure sections as XML or JSON documents. As an operational database, MarkLogic provides all the transactional capabilities required for read/write access to these objects.

Today we will take a deep dive into MarkLogic's semantic and multi-model capabilities.

From documents to semantic triples

A natural product structure...
We mentioned before that product structure management can vary from one solution to another.

As the objective is to create a unified source of truth for all PLM data coming from internal and external (providers, partners) PLM systems, it's also important to be able to manipulate the business concepts through shared knowledge. This is where semantics comes in.


Template Driven Extraction

TDE principles
The first step is to lift the data that currently lives in documents into a semantic layer. MarkLogic provides the ability to create a semantic lens over existing data stored in documents: it's called TDE (Template Driven Extraction).

TDE uses templates written in XML or JSON which describe which data in the documents must be lifted into the triple store (as triples), and how. Templates are written once and apply to a specific scope.

Why does it matter?

The semantic layer maps complex data to unified business concepts that business users can manipulate. In a typical corporate environment, mapping all the data to a semantic layer would require creating billions, sometimes hundreds of billions, of triples. When a semantic layer is part of an operational solution, maintaining and querying such an amount of triples faces major scalability constraints.
By leveraging a multi-model approach, only some of the data has to be lifted into triples (mainly relations/object properties) while the rest can stay in the documents. That document data can still be used to filter the semantic graph by combining document queries and graph queries.
We will illustrate this approach using the PLM data of our scenario.



Semantic layer


In our scenario, we have one template for the business objects, to lift whatever properties we want, and one for the product structure sections, which creates the parent/child semantic relations based on the links stored in the document.

TDE Template


A simple template could look like the sketch below, where for each link the "parent" ID is the subject of a triple, the "child" ID is the object, and the predicate is provider#child.
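For instance, here is a minimal sketch in MarkLogic Server-Side JavaScript that installs such a template; the document shape, template URI and IRIs are illustrative assumptions, not the actual project model:

    'use strict';
    declareUpdate();

    // Hypothetical link template: for each link of a structure section,
    // lift one parent -> child triple (shape and IRIs are assumptions)
    const template = xdmp.toJSON({
      template: {
        context: '/section/links',
        triples: [{
          subject:   { val: "sem:iri(fn:concat('http://example.com/part/', parentId))" },
          predicate: { val: "sem:iri('http://example.com/provider#child')" },
          object:    { val: "sem:iri(fn:concat('http://example.com/part/', childId))" }
        }]
      }
    });

    // Validate, then install; from then on the template applies
    // automatically to every matching document in its scope
    tde.validate([template]);
    tde.templateInsert('/templates/structure-links.json', template);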

With TDE, the triple store is automatically kept up to date with whatever update is performed on the source document (from which the triples are extracted). Note also that MarkLogic keeps the relation between the lifted triples and their source document; this is important for the next section.
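As a quick illustration (assumed URI and document shape), inserting or updating a structure section is all it takes; the lifted triples follow in the same transaction:

    'use strict';
    declareUpdate();

    // Insert (or update) a structure section: its lifted triples are
    // refreshed automatically, there is no separate triple maintenance
    xdmp.documentInsert('/structure/section-42.json', {
      section: {
        links: [
          { parentId: 'P-100', childId: 'P-200' },
          { parentId: 'P-100', childId: 'P-300' }
        ]
      }
    });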

We now have a database loaded with PLM business objects, product structure sections, and their semantic representation in the triple store.

From documents to triples to the effective product structure

Let's recap what we've seen so far:
  • We have documents representing business objects
  • We have documents representing product structure sections
  • We have effectivity converted into queries stored in these documents
  • We have triples lifted from the documents

Everything is now in place to get the actual product structure for a specific product context.

Thanks to the MarkLogic SPARQL API, we can perform a very smart query:
we can run a SPARQL query (SPARQL is the W3C language for querying triple stores) on the product structure, restricted to the sections eligible for a specific context.

How does it work?

We can create a MarkLogic reverse query, the one we discussed in the first part of this post, that selects only the product structure documents positive for a specific product context. This query is given as a parameter to the SPARQL execution function, and MarkLogic applies its magic: the triple store actually queried by SPARQL contains only triples lifted from the documents matched by the reverse query.
So the SPARQL query only "sees" the triples which exist in the selected product context.
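In Server-Side JavaScript, this boils down to wrapping the reverse query in a sem.store and handing it to sem.sparql. A minimal sketch, where the product context shape and the IRIs are assumptions:

    'use strict';

    // The product context we want to resolve the structure for
    // (hypothetical shape)
    const context = xdmp.toJSON({
      productContext: { model: 'M1', effectiveDate: '2023-06-01' }
    });

    // Reverse query: matches only the structure documents whose stored
    // effectivity queries are positive for this context
    const eligible = cts.reverseQuery(context);

    // Restrict the triple store to triples lifted from those documents
    const store = sem.store([], eligible);

    // The SPARQL query only "sees" the selected product context
    const results = sem.sparql(`
      PREFIX provider: <http://example.com/provider#>
      SELECT ?parent ?child
      WHERE { ?parent provider:child ?child }
    `, null, null, store);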


The effort?

Almost nothing: the main logic happens at load time, when we convert effectivity into queries; everything else is managed by MarkLogic. As an example, we recently created a bespoke illustration for a client based on a real product structure export. The overall logic, from source parsing to business object document creation, full effectivity management, triple creation and a bespoke UI, was delivered in less than two days. Of course, it remains a proof of concept, but with all the functional logic implemented.

What about product structure variation between systems?

Now that we have a semantic layer, we can leverage semantic capabilities to derive any product structure. There are several strategies to do this in MarkLogic, but the most agile is probably inference (backward chaining).

Inference to adapt the structure on the fly

We can indeed apply inference rules on the fly to modify the product structure and adapt it to the concepts required by the PLM system that wants to consume the data. For this requirement, the inference rule could, for example, add intermediate concepts to the product structure to match what the client system expects; it could also move effectivity to a level valid for that client PLM.
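For instance, a backward-chaining ruleset could flatten the structure into a transitive descendant relation computed at query time. A minimal sketch, where the ruleset name, its rules and the IRIs are all assumptions:

    'use strict';

    // Hypothetical ruleset stored as /rules/structure.rules in the
    // Schemas database:
    //
    //   PREFIX provider: <http://example.com/provider#>
    //   rule "descendant" CONSTRUCT {
    //     ?ancestor provider:descendant ?child
    //   } {
    //     ?ancestor provider:child ?child
    //   }
    //   rule "descendantTransitive" CONSTRUCT {
    //     ?ancestor provider:descendant ?child
    //   } {
    //     ?ancestor provider:descendant ?mid .
    //     ?mid provider:child ?child
    //   }

    const context = xdmp.toJSON({
      productContext: { model: 'M1', effectiveDate: '2023-06-01' }
    });

    // Wrap the context-filtered store with the ruleset: the derived
    // triples are computed on the fly (backward chaining), not stored
    const inferred = sem.rulesetStore('structure.rules',
                                      sem.store([], cts.reverseQuery(context)));

    const flattened = sem.sparql(`
      PREFIX provider: <http://example.com/provider#>
      SELECT ?ancestor ?part
      WHERE { ?ancestor provider:descendant ?part }
    `, null, null, inferred);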


What are the benefits?

Thanks to this semantic layer, it's now possible to share PLM objects between systems that were not designed to communicate. This is the opportunity to leverage existing designs from one product to another, or to merge designs coming from multiple parties, whatever their respective technologies are.
If you have any questions, don't hesitate to get in touch with me.