
Article · Feb 22, 2023 · 4 min read

Export to JSON - relationships and inheritance

Why I've decided to write this

Once again I had a challenge that cost me some time and a lot of testing to reach the best solution. Now that I've managed to solve it, I'd like to share a little bit of that knowledge.
 

What happened?

In a namespace there were a lot of similar classes, so to keep them simple there was a superclass with common properties. There are also relationships between the classes. I had to export one of them to JSON, but I couldn't change the superclass, or I would break the flow of many other integrations.

What made it all difficult was that my JSON couldn't contain the properties of the superclass. Ouch! I could export them and strip them out one by one, but... what if someone changes the superclass?

And even worse... what happens with the relationships? If we export a relationship, we export another whole object with all of its properties, but I couldn't have those in the JSON either.

 

A light at the end of the tunnel

Luckily, there is always a light at the end of the tunnel, and my light is XData.

The solution is very simple: let's call the class I had to export ClassToExport, the class with the relationship RelatedClass, and the superclass SuperClass.

We'll have:

/// The superclass also extends %JSON.Adaptor, so every subclass
/// can be exported to JSON and can define its own XData mapping.
Class project.SuperClass Extends (%Persistent, %JSON.Adaptor)
{
    Property CommonProperty As %String;
}

Class project.ClassToExport Extends project.SuperClass
{
    Property PropertyToExport As %String;

    Relationship RelationshipToExport As project.RelatedClass [ Cardinality = many, Inverse = RelatedProperty ];
}

Class project.RelatedClass Extends project.SuperClass
{
    Property DontExportThis As %String;

    Property ExportThis As %String;

    Relationship RelatedProperty As project.ClassToExport [ Cardinality = one, Inverse = RelationshipToExport ];
}

 

In ClassToExport, I write the XData block: it needs a name and a <Mapping> tag containing <Property> tags. The <Mapping> tag carries the XML namespace, xmlns="http://intersystems.com/jsonmapping", and the <Property> tags carry the attributes described in %JSON.MappingProperty¹ in the official documentation.

 

The magic trick is that everything that is not specified in the mapping will be ignored. So, if we change ClassToExport to:

Class project.ClassToExport Extends project.SuperClass
{
    Property PropertyToExport As %String;

    Relationship RelationshipToExport As project.RelatedClass [ Cardinality = many, Inverse = RelatedProperty ];

    XData MappingJSON
    {
        <Mapping xmlns="http://intersystems.com/jsonmapping">
            <Property Name="PropertyToExport" FieldName="Property-JSON"/>
            <Property Name="RelationshipToExport" FieldName="RelatedClassJSON"/>
        </Mapping>
    }
}

we'll have in the JSON something like:

{
   "Property-JSON":"value",
   "RelatedClassJSON": [
      {"CommonProperty":"value", "DontExportThis":"value", "ExportThis":"value"},
      {"CommonProperty":"value", "DontExportThis":"value", "ExportThis":"value"}
   ]
}

So the JSON field names of ClassToExport are ready and only the properties we want from it appear in the JSON, but RelatedClass still has work to do.
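
By the way, to produce output like this you call the %JSON.Adaptor export methods, passing the name of the XData mapping. Here is a minimal sketch, assuming an instance of project.ClassToExport is already saved (the ID 1 and variable names are just illustrative):

// Open an existing instance (ID 1 is illustrative)
set obj = ##class(project.ClassToExport).%OpenId(1)

// Export to the current device using the "MappingJSON" mapping
do obj.%JSONExport("MappingJSON")

// Or capture the JSON into a string instead
do obj.%JSONExportToString(.json, "MappingJSON")
write json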

 

Then we change RelatedClass, adding an XData block with the same name to arrange its properties:

Class project.RelatedClass Extends project.SuperClass
{
    Property DontExportThis As %String;

    Property ExportThis As %String;

    Relationship RelatedProperty As project.ClassToExport [ Cardinality = one, Inverse = RelationshipToExport ];

    XData MappingJSON
    {
        <Mapping xmlns="http://intersystems.com/jsonmapping">
            <Property Name="DontExportThis" Include="None"/>
            <Property Name="ExportThis" Include="INOUT"/>
            <Property Name="CommonProperty" Include="INOUT"/>
        </Mapping>
    }
}

 

so we'll have in the JSON something like:

{
   "Property-JSON":"value",
   "RelatedClassJSON": [
      {"CommonProperty":"value", "ExportThis":"value"},
      {"CommonProperty":"value", "ExportThis":"value"}
   ]
}

which is what we want.

It is interesting to observe that for the property "DontExportThis" I specified a tag with Include="None". This is the same as not putting any tag at all for that property.

 

¹ Read also %JSON.PropertyParameters to understand what each parameter does.
 

Thank you for reading and I hope the article was useful!

Feel free to reach out with any questions, or get in touch if you think I can help in some specific case. I'll be happy to help!

Article · Feb 15, 2023 · 11 min read

ChatGPT to create message transformations for you?

A "big" or "small" ask for ChatGPT?


I tried OpenAI GPT's coding model a couple of weeks ago to see whether it could do, for example, some message transformations between healthcare protocols. It surely "can", to a seemingly fair degree.

It has been nearly 3 weeks, which is a long, long time for ChatGPT, so I am wondering how much it has grown up by now, and whether it could do some integration engineering jobs for us. For example, can it create an InterSystems COS DTL transformation to turn an HL7 message into FHIR?

Immediately I got some quick answers, in less than a minute or two.


Test

First I wanted to test that I was talking to the right "guy" behind it.


Question 1:  how to convert HL7 V2.4 messages to FHIR STU3?


ChatGPT:

Converting HL7 V2.4 messages to FHIR STU3 involves several steps, as the two standards have different data models and structures. Here is a high-level overview of the process:

  1. Identify the relevant data elements in the HL7 message that need to be mapped to FHIR resources. This will depend on the specific use case and data being exchanged.
  2. Define the mapping rules for each data element. This involves identifying the equivalent FHIR resource and element that corresponds to the HL7 data element, and any necessary transformations or translations.
  3. Write a program or script that can extract the data from the HL7 message, apply the mapping rules, and create the corresponding FHIR resources. This may involve using a combination of regular expressions, parsing libraries, and FHIR resource creation libraries.
  4. Validate the FHIR resources against the STU3 specification to ensure they conform to the standard and contain all required elements.

Here is an example of how this process might look for converting an HL7 V2.4 observation message to a FHIR STU3 Observation resource:

  1. Identify the relevant data elements in the HL7 message, such as the observation code, value, and units.
  2. Define the mapping rules for each data element, such as mapping the HL7 observation code to a FHIR CodeableConcept and the value and units to a FHIR Quantity.
  3. Write a program or script that can extract the data from the HL7 message, apply the mapping rules, and create a FHIR Observation resource with the appropriate elements.
  4. Validate the FHIR Observation resource against the STU3 specification to ensure it contains all required elements and is valid.

The specific implementation details will depend on the programming language and libraries used, as well as the specific HL7 message and FHIR resource being mapped. It's recommended to consult the HL7 and FHIR specifications and seek expert advice if needed.


Well, it not only sounds about right - it actually answers more comprehensively than most humans would. It has my attention now, so let's jump into it.
 


Question 2: Can you create a sample transformation in Python to convert HL7 to FHIR  


ChatGPT:

Sure! Here's a sample Python code snippet that demonstrates how to convert an HL7 message into a FHIR STU3 Observation resource:

from hl7apy import core
from fhir.resources.observation import Observation
# Parse the HL7 message
hl7_message = 'MSH|^~\&|SENDING_APP|SENDING_FACILITY|RECEIVING_APP|RECEIVING_FACILITY|20220527120512||ORU^R01|MSGID12345|P|2.4\r' \
              'OBX|1|NM|GLU^Glucose||100|mg/dL||N|||F|||20220527120512\r'
hl7_parsed = core.parse_message(hl7_message)

# Map HL7 elements to FHIR elements
observation = Observation()

observation.code = {
    "coding": [{
        "system": "http://loinc.org",
        "code": hl7_parsed.obx[0].observation_id.value,
        "display": hl7_parsed.obx[0].observation_id.value
    }]
}

observation.valueQuantity = {
    "value": hl7_parsed.obx[0].value.value,
    "unit": hl7_parsed.obx[0].units.value,
    "system": "http://unitsofmeasure.org",
    "code": hl7_parsed.obx[0].units.value
}

# Serialize the FHIR resource as JSON
observation_json = observation.json()

# Print the serialized FHIR resource
print(observation_json)



This code uses the hl7apy library to parse the HL7 message and extract the relevant data elements, and the fhir.resources.observation class from the fhir.resources module to create a FHIR Observation resource.

In this example, the code maps the HL7 observation ID to the FHIR code element, and the HL7 value and units to the FHIR valueQuantity element. The FHIR resource is then serialized as JSON using the json() method.

Note that this is a simple example and you may need to map more data elements depending on your specific use case. Additionally, you will need to install the hl7apy and fhir.resources libraries to run this code.


 

It's not a bad answer at all - it gives decent boilerplate code. Let's give it a slightly more specific challenge now:


Question 3: Can you give a sample transformation from HL7 V2 to FHIR STU3 in InterSystems COS DTL


ChatGPT:


Certainly! Here's an example of how to transform an HL7 V2 message to a FHIR STU3 Observation resource in InterSystems COS DTL (Data Transformation Language):

Class DTL.HL7toFHIR Extends %RegisteredObject
{

ClassMethod TransformHL7ToObservation(hl7Message As %String) As %String [ CodeMode = expression, DDLAllowed = 0 ]
{
    set src = ##class(EnsLib.HL7.Message).%New()
    set src.DocType = "2.4"
    set src.Parse(hl7Message)

    set obs = ##class(FHIR.Resource.Observation).%New()
    set obs.subject = { "reference": "Patient/"_src.MSH.GetSendingApplication() }
    set obs.status = "final"
    set loincCode = src.GetSegment("OBX", 1).GetField(3).GetField(1)
    set obs.code = { 
        "coding": [ 
            {
                "system": "http://loinc.org", 
                "code": loincCode, 
                "display": loincCode 
            }
        ]
    }

    set value = src.GetSegment("OBX", 1).GetField(5).GetField(1)
    set units = src.GetSegment("OBX", 1).GetField(6).GetField(1)
    set obs.valueQuantity = { 
        "value": value, 
        "unit": units, 
        "system": "http://unitsofmeasure.org", 
        "code": units 
    }

    quit obs.%ToJSON()
}

}

This code defines a class method called TransformHL7ToObservation that takes an HL7 V2 message as input and returns a FHIR STU3 Observation resource as a JSON string.

The method first creates a new EnsLib.HL7.Message object and uses it to parse the input HL7 message. It then creates a new FHIR.Resource.Observation object and sets the subject and status elements.

Next, the code extracts the LOINC code, value, and units from the HL7 message and maps them to the corresponding elements in the FHIR resource.

Finally, the method serializes the FHIR resource as JSON and returns it.

To use this class method, you can call it from another COS method or from an external system using InterSystems' REST API. Here's an example of how to call the method from another COS method:

set hl7Message = "MSH|^~\&|SENDING_APP|SENDING_FACILITY|RECEIVING_APP|RECEIVING_FACILITY|20220527120512||ORU^R01|MSGID12345|P|2.4\rOBX|1|NM|GLU^Glucose||100|mg/dL||N|||F|||20220527120512\r"
set fhirObservation = ##class(DTL.HL7toFHIR).TransformHL7ToObservation(hl7Message)
write fhirObservation

Note that you will need to define the EnsLib.HL7.Message and FHIR.Resource.Observation classes in your InterSystems environment and have the appropriate permissions to run COS code.


 

Well, that's probably the most effective yet comprehensive answer we could ever have expected within about 25 seconds. Right?

(Please note: the above COS sample isn't perfectly accurate. It's only a boilerplate structure and needs more fine-tuning for real-world implementations. As @Tete Zhang kindly pointed out, the Ensemble-level HL7 message class uses ImportFromString() instead of Parse(), the DocType should be more specific, etc. It also apparently created a new FHIR class hierarchy on its own from the COS ground up, instead of using HealthShare's existing built-in FHIR STU3 plumbing; its "thought process" seems to be following the Python structure of its previous answer. It's not going to achieve an Ensemble or HealthShare certification for now, and it has a lot to learn through more specialist interactions, but we might pardon it for a first attempt made in less than 2 minutes, by today's standards.)
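For reference, here is a minimal corrected sketch of just the parsing step, assuming an Ensemble/HealthShare environment with the standard HL7 2.4 schema available (the message content and the field path are illustrative):

// Build an EnsLib.HL7.Message from raw text with ImportFromString()
// (not Parse(), which doesn't exist on the class)
set raw = "MSH|^~\&|SENDING_APP|SENDING_FACILITY|RECEIVING_APP|RECEIVING_FACILITY|20220527120512||ORU^R01|MSGID12345|P|2.4"_$char(13)_"OBX|1|NM|GLU^Glucose||100|mg/dL||N|||F|||20220527120512"
set src = ##class(EnsLib.HL7.Message).ImportFromString(raw, .tSC)
if $system.Status.IsError(tSC) { do $system.Status.DisplayError(tSC) quit }

// DocType takes a schema category plus a message structure, not just "2.4"
set src.DocType = "2.4:ORU_R01"

// Fields are then read with GetValueAt() and a virtual property path
write src.GetValueAt("MSH:9.1"), !   // message type, e.g. "ORU"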
  

Now that we are here, we need to be fair to the other widely used options I have been enjoying too. So I carried on with a few more questions, with slightly more faith:


Question 4: Can you give a sample transformation from HL7 V2 to FHIR STU3 in Orion Rhapsody


ChatGPT:

Certainly! Here's an example of how to transform an HL7 V2 message to a FHIR STU3 Observation resource in Orion Rhapsody:

  1. First, create a new Rhapsody channel and add an HL7 Connector component to receive the input HL7 V2 message.
  2. Add a new Data Transformation component to the channel and configure it to use a custom script.
  3. In the Data Transformation component, add the following script to transform the HL7 V2 message to a FHIR STU3 Observation resource:
function transform(msg) {

    var hl7 = Packages.ca.uhn.hl7v2.model.v24.message.ORU_R01(msg.getRawData());

    var observation = new Packages.org.hl7.fhir.dstu3.model.Observation();
    observation.setStatus(Packages.org.hl7.fhir.dstu3.model.Observation.ObservationStatus.FINAL);

    var codeableConcept = new Packages.org.hl7.fhir.dstu3.model.CodeableConcept();
    var coding = new Packages.org.hl7.fhir.dstu3.model.Coding();
    coding.setSystem("http://loinc.org");
    coding.setCode(hl7.getRESPONSE().getORDER_OBSERVATION()[0].getOBR().getUniversalServiceIdentifier().getIdentifier().getValue());
    coding.setDisplay(hl7.getRESPONSE().getORDER_OBSERVATION()[0].getOBR().getUniversalServiceIdentifier().getText().getValue());
    codeableConcept.addCoding(coding);
    observation.setCode(codeableConcept);

    var quantity = new Packages.org.hl7.fhir.dstu3.model.Quantity();
    quantity.setValue(hl7.getRESPONSE().getORDER_OBSERVATION()[0].getOBSERVATION()[0].getOBX().getObservationValue()[0].getValue().getValue());
    quantity.setUnit(hl7.getRESPONSE().getORDER_OBSERVATION()[0].getOBSERVATION()[0].getOBX().getUnits().getText().getValue());
    observation.setValue(quantity);

    var reference = new Packages.org.hl7.fhir.dstu3.model.Reference();
    reference.setReference("Patient/" + hl7.getPATIENT_RESULT().getPATIENT().getPid3_PatientIdentifierList()[0].getID().getValue());
    observation.setSubject(reference);

    return JSON.stringify(observation);
}
  4. Save the script and test the transformation by sending an HL7 V2 message to the channel. The transformed FHIR STU3 Observation resource should be output by the Data Transformation component in JSON format.

This script uses the HAPI FHIR and HAPI HL7 v2 libraries, which are included with Orion Rhapsody. It starts by parsing the input HL7 V2 message using the HAPI HL7 library.

Next, the script creates a new FHIR STU3 Observation resource and sets its status, code, value, and subject elements based on the corresponding values in the HL7 message.

Finally, the script serializes the FHIR resource as JSON and returns it.

Note that this is just a simple example and may not work for all HL7 V2 message types or FHIR resources. You may need to modify the script to handle different message types or map additional fields to the FHIR resource.


 


Question 5: Can you create a sample transformation converting HL7 V2.4 to FHIR R4 in the Mirth integration engine


ChatGPT:

Certainly! Here's an example of how to transform an HL7 V2.4 message to a FHIR R4 Observation resource in Mirth:

  1. First, create a new Mirth channel and add an HL7v2 Listener source connector to receive the input HL7 V2 message.
  2. Add a new JavaScript Writer destination connector to the channel and configure it to use a custom script.
  3. In the JavaScript Writer connector, add the following script to transform the HL7 V2 message to a FHIR R4 Observation resource:
var hl7msg = msg['HL7v2 Message'].toString(); // get the HL7 V2 message
var hl7data = Packages.ca.uhn.hl7v2.parser.Parser.parse(hl7msg); // parse the message with the HAPI HL7 parser
// extract values from the HL7 message and create the FHIR resource
var obs = new Packages.org.hl7.fhir.r4.model.Observation();
obs.setStatus(Packages.org.hl7.fhir.r4.model.Observation.ObservationStatus.FINAL);

var code = new Packages.org.hl7.fhir.r4.model.CodeableConcept();
code.addCoding()
  .setSystem("http://loinc.org")
  .setCode(hl7data.get("OBR-4-1").getValue())
  .setDisplay(hl7data.get("OBR-4-2").getValue());
obs.setCode(code);

var value = new Packages.org.hl7.fhir.r4.model.Quantity();
value.setValue(hl7data.get("OBX-5-1").getValue());
value.setUnit(hl7data.get("OBX-6").getValue());
obs.setValue(value);

var subject = new Packages.org.hl7.fhir.r4.model.Reference();
subject.setReference("Patient/" + hl7data.get("PID-3-1").getValue());
obs.setSubject(subject);

// output the FHIR resource as JSON
var json = org.hl7.fhir.r4.formats.JsonParser().setPrettyPrint(true).encodeResourceToString(obs);
return json;
  4. Save the script and test the transformation by sending an HL7 V2 message to the channel. The transformed FHIR R4 Observation resource should be output by the JavaScript Writer connector in JSON format.

This script uses the HAPI FHIR and HAPI HL7 v2 libraries, which are included with Mirth. It starts by extracting the HL7 V2 message from the Mirth message object and parsing it with the HAPI HL7 parser.

Next, the script creates a new FHIR R4 Observation resource and sets its status, code, value, and subject elements based on the corresponding values in the HL7 message.

Finally, the script serializes the FHIR resource as JSON and returns it.

Note that this is just a simple example and may not work for all HL7 V2 message types or FHIR resources. You may need to modify the script to handle different message types or map additional fields to the FHIR resource.



Conclusions?


I don't have any conclusions or next steps any more, since even my imagination can't catch up with its leaps. It looks as if there were only limited options for me now: either I spend time picking out its imperfect coding styles, or I re-think what will be left on my plate in another few weeks. I need to think more carefully than ever about the potential of this thing. :)

Joking aside, while I am enjoying posting on this forum (thanks to the hosts), another thought that comes to mind is that this "could" actually be an important enabler for some competitive niche player to leap forward into the mass market, right? It used to take years to become really comfortable with some coding languages and scripts, for various reasons, but now the landscape is moving. With ChatGPT, not only are well-composed documentation, instructions and samples on offer, but it might also be able to automatically manufacture the engineering tools of your choice over the coming months or years. It seems able to level the playing field in "languages", so that eventually the non-functional side of features, such as performance and other service qualities, will stand out more.

Article · Feb 13, 2023 · 4 min read

When to use Columnar Storage

With InterSystems IRIS 2022.2, we introduced Columnar Storage as a new option for persisting your IRIS SQL tables that can boost your analytical queries by an order of magnitude. The capability is marked as experimental in 2022.2 and 2022.3, but will "graduate" to a fully supported production capability in the upcoming 2023.1 release. 

The product documentation and this introductory video already describe the differences between row storage, still the default on IRIS and used throughout our customer base, and columnar table storage, and provide high-level guidance on choosing the appropriate storage layout for your use case. In this article, we'll elaborate on this subject and share some recommendations based on industry-practice modelling principles, internal testing, and feedback from Early Access Program participants.

Generally, our guidance on choosing an appropriate table layout for your IRIS SQL schema is as follows:

  1. If you’re deploying an application that leverages IRIS SQL or Objects, such as an EHR, ERP or transaction processing application, there is no need to change its current row storage layout to a columnar one. Most SQL queries issued for end user applications or programmatic transactions only retrieve or update a limited number of rows, and result rows usually correspond to table rows, with very limited use of aggregate functions. In such cases, the benefits offered by columnar storage and vectorized query processing don’t apply.  
  2. If such an application also embeds operational analytics, consider adding columnar indices if the corresponding analytical queries’ current performance is not satisfactory. This includes, for example, dashboards showing the current inventory or basic financial reporting on live data. Look for numeric fields used in aggregations (e.g. quantities, currencies) or high-cardinality fields used in range conditions (e.g. timestamps). A good indicator for such opportunities is current use of bitmap indices to speed up the filtering of large numbers of rows, usually on low-cardinality fields (e.g. categorical or ordinal fields). There is no need to replace these bitmap indices; the additional columnar indices work well in conjunction with them and are meant to avoid excessive reads from the master map or regular index maps (single gref per row).  
  3. If your IRIS SQL tables contain less than a million rows, there is no need to consider columnar storage. We prefer not to pin ourselves to specific numbers, but the benefits of vectorized query processing are unlikely to make a difference in these low ranges.  
  4. If you’re deploying an IRIS SQL schema for Data Warehouse, Business Intelligence, or similar analytical use cases, consider changing it to default to columnar storage. Star schemas, snowflake schemas or other denormalized table structures as well as broad use of bitmap indices and batch ingestion are good indicators for these use cases. Analytical queries that will benefit most from columnar storage are those that scan large numbers of rows and aggregate values across them. When defining a “columnar table”, IRIS will transparently resort to a row layout for columns in that table that aren’t a good fit for columnar storage, such as streams, long strings or serial fields. IRIS SQL fully supports such mixed table layouts and will use vectorized query processing for eligible parts of the query plan. The added value of bitmap indices on columnar tables is limited, so they can be left out. (The SQL sketch after this list shows both a columnar table and a columnar index.)
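
To make recommendations 2 and 4 concrete, here is a minimal SQL sketch; the table and index names are illustrative placeholders, and the syntax follows the columnar storage options introduced in IRIS 2022.2:

-- Recommendation 4: a table that defaults to columnar storage,
-- suited to scans and aggregations over many rows
CREATE TABLE Demo.SalesFact (
    TxnDate  DATE,
    Region   VARCHAR(50),
    Quantity INTEGER,
    Amount   NUMERIC(12,2)
) WITH STORAGETYPE = COLUMNAR

-- Recommendation 2: keep the table row-organized, but add a columnar
-- index on a numeric field that is heavily aggregated
CREATE TABLE Demo.Orders (
    OrderDate DATE,
    Status    VARCHAR(20),
    Total     NUMERIC(12,2)
)

CREATE COLUMNAR INDEX TotalIdx ON Demo.Orders (Total)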

Mileage will vary based on both environmental and data-related parameters. Therefore, we highly recommend customers test the different layouts in a representative setup. Columnar indices are easy to add to a regular row-organized table and will quickly yield a realistic perspective on query performance benefits. This, along with the flexibility of mixed table layouts, is a key differentiator of InterSystems IRIS that helps customers achieve an order-of-magnitude performance improvement.

We intend to make these recommendations more concrete as we get more real-world experience on the full production release. Obviously, we can provide more concrete advice based on customers’ actual schema and workload through the Early Access Program and POC engagements, and look forward to feedback from customers and community members. Columnar Storage is part of the InterSystems IRIS Advanced Server license and also enabled in the Community Edition of InterSystems IRIS and IRIS for Health. For a fully scripted demo environment, please refer to this GitHub repository.

Article · Feb 12, 2023 · 3 min read

Enabling IRIS Interoperability Source Control with InterSystems Package Manager and git-source-control

Hi Developers!

As you know, InterSystems IRIS Interoperability solutions contain different elements, such as productions, business rules, business processes, data transformations, and record mappers, and sometimes we create and modify these elements with UI tools. And of course we need a handy and robust way to source-control the changes made with those UI tools.

For a long time this required either manual work (export the class, element, global, etc.) or a cumbersome settings procedure, so the time saved by source-control UI automation competed with the time lost setting up and maintaining those settings.

Now the problem doesn't exist any more, thanks to two approaches: package-first development and the IPM package git-source-control by @Timothy Leavitt.


The details are below!

Disclaimer: this relates to a client-side approach to development, where the elements of the Interoperability production are files in the repository.

So, this article will not be long at all, as the solution is fantastically simple.

I suppose you develop with docker, and once you build the dev environment docker image with IRIS, you load the solution as an IPM module. This is called "package first" development, and there is a related video and article. The basic idea is that the dev-environment docker image with IRIS gets the solution loaded as a package, just as it would be deployed on a client's server.

To make a package-first dev environment for your solution, add a module.xml to the repository, describe all the elements of the solution in it, and call the zpm "load" command at the build phase of the docker image.
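
For orientation, a minimal module.xml could look like the sketch below; the module name, version, and package are illustrative placeholders, and a real one (like the template's module.xml linked next) would list all the elements of the solution:

<?xml version="1.0" encoding="UTF-8"?>
<Export generator="Cache" version="25">
  <Document name="my-interop-solution.ZPM">
    <Module>
      <Name>my-interop-solution</Name>
      <Version>0.1.0</Version>
      <Packaging>module</Packaging>
      <SourcesRoot>src</SourcesRoot>
      <!-- the package with the production and its elements -->
      <Resource Name="MyCompany.Interop.PKG"/>
    </Module>
  </Document>
</Export>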

I can demonstrate the idea with the example template, IRIS Interoperability template, and its module.xml. Here is how the package is loaded during docker build:

zpm "load /home/irisowner/irisdev/ -v":1:1

(See the source.)

Now see the following two lines, placed before the package is loaded, which set up source control. Because of them, source control starts working automatically for ALL the interoperability elements in the package, and they are exported into the proper folders in the proper format:

zpm "install git-source-control"
do ##class(%Studio.SourceControl.Interface).SourceControlClassSet("SourceControl.Git.Extension")

(See the source.)

How is it possible?

Recently the git-source-control app added support for IPM packages that are loaded in dev mode. It reads the folder to export to, and the structure of the sources, from module.xml. @Timothy Leavitt can provide more details.

If we check the list of IPM modules in the terminal after the environment is built, we can see that the loaded module is indeed in dev mode:

USER>zpm
=============================================================================
|| Welcome to the Package Manager Shell (ZPM).                             ||
|| Enter q/quit to exit the shell. Enter ?/help to view available commands ||
=============================================================================
zpm:USER>list
git-source-control      2.1.0
interoperability-sample 0.1.2 (DeveloperMode)
sslclient               1.0.1
zpm:USER>

Let's try it!

I cloned this repository, opened it in VSCode, and built the image. Below I test the Interoperability UI and source control: I make a change in the UI and it immediately appears in the sources and diffs:

It works! That's it! 

As a conclusion, here is what you need to get source control for the Interoperability UI elements in your project:

1. Add two lines to iris.script while building the docker image:

zpm "install git-source-control"
do ##class(%Studio.SourceControl.Interface).SourceControlClassSet("SourceControl.Git.Extension")

And load your solution as a module after that, e.g. like this:

zpm "load /home/irisowner/irisdev/ -v":1:1

2. Or you can start a new project by creating a repository from the Interoperability template.

Thanks for reading! Comments and feedback are welcome!
