
Question
· Jun 26, 2023

Returning a DICOM Worklist using a non-DICOM external data source

Hi everyone.

Has anyone here had any luck with receiving a DICOM C-FIND-RQ and returning a worklist using data gathered from a non-DICOM source (for example, an external SQL query) and would be able to share how they achieved this?

I started to look at the demo, however I'm tripping over the logic for building the response. I think I just need to act on the initial message by executing the call to the external data source and then looping through the result set; however, the demo's structure seems to suggest that I need to loop through the result set outside of the process (using a "context variable") so that a DICOM C-CANCEL-RQ can interrupt the loop.

2 comments
Article
· Jun 19, 2023 · 8 min read

Open AI integration with IRIS

 

As you all know, the world of artificial intelligence is already here, and everyone wants to use it to their benefit.

There are many platforms that offer artificial intelligence services for free, by subscription, or privately. However, the one that stands out because of the amount of "noise" it made in the world of computing is Open AI, mainly thanks to its most renowned services: ChatGPT and DALL-E.

What is Open AI?

Open AI is a non-profit AI research laboratory launched in 2015 by Sam Altman, Ilya Sutskever, Greg Brockman, Wojciech Zaremba, Elon Musk, John Schulman and Andrej Karpathy with the aim of promoting and developing friendly artificial intelligence that would benefit humanity as a whole.

Since its foundation, the company has released some fascinating products that, if used for good purposes, can be really powerful tools. Yet, like any other new technology, they also pose the threat of being used to commit crimes or do harm.

I decided to test the ChatGPT service and asked it what the definition of artificial intelligence was. The answer I received was an accumulation of notions found on the Internet, summarized in the way a human would respond.

In short, an AI can only reply using the information used to train it. Employing its internal algorithms and the data fed to it during the training, it could compose articles, poems, or even pieces of computer code.

Artificial intelligence is going to impact the industry considerably and ultimately revolutionize everything… Perhaps the expectations of how artificial intelligence will affect our future are being overstated, so we should simply start using it correctly, for the common good.

We are tired of hearing that this new technology will change everything and that ChatGPT is the tool that will turn our world upside down, just like its brother GPT-4 did. Neither will these tools leave people without jobs, nor are they going to rule the world (like Skynet). What we are trying to analyze here is the trend. We start by looking at where we were before to understand what we have achieved so far and thus anticipate where we will find ourselves in the future.

In 2020, psychologist and cognitive scientist Gary Marcus published an article analyzing how GPT-2 worked. He conducted a methodical study of its operation, revealing that this type of tool actually failed to understand what it was writing or what instructions it received.

“Here's the problem: upon careful inspection, it becomes apparent the system has no idea what it is talking about: it cannot follow simple sequence of events nor reliably have any idea what might happen next.”

Follow the link below to see the entire article: https://thegradient.pub/gpt2-and-the-nature-of-intelligence/

You can clearly witness the evolution here! GPT-3 (2020) had to be trained with enough inputs indicating what you wanted to achieve, whereas the current GPT-4 version can use natural language, making it possible to give those inputs in an easier way. Now it "seems" to understand us and to know what it is talking about.

Now when we use the same example designed by Gary Marcus in 2020 for GPT-2, we get the result as expected:
 

OpenAI can currently provide us with a set of tools that have evolved amazingly fast and, if combined properly, will make it much easier for us to obtain more efficient results than in the past.

What products does OpenAI offer?

I am going to talk about the two best-known ones: DALL-E and ChatGPT. However, they also have other services, such as Whisper, which transcribes audio into text and can even translate it into a different language, or Embeddings, which allows us to measure the relatedness of text strings for searches, recommendations, clustering, etc.

What do I need to use these services?

You will have to create an OpenAI account, which is very easy to do, and at that point you will be all set to use their services directly through their website.

Chat: https://chat.openai.com

DALL-E: https://labs.openai.com

We want to integrate these services from IRIS, so we will use the OpenAI API to access them. First, we must create an account and provide a payment method in order to use the API. The cost is relatively small and depends on the use you want to give it: the more tokens you consume, the more you pay 😉

What is a token?

Tokens are the way the models understand and process text. A token can be a whole word or just a fragment of characters. For example, the word "hamburger" is divided into the tokens "ham," "bur," and "ger," while a short common word like "pear" is a single token. Many tokens begin with a blank space, for example " hello" and " bye".

Is it complicated to use the API?

Not at all. Follow the steps below and you will not have any problems:

Step 1: Create an API Key

Select the option "View API Key" in your user menu

 

Step 2: Create a new secret key

Press the button at the bottom of the API Keys section

 

VERY IMPORTANT: Once the secret key is created, it cannot be recovered later, so remember to save it in a secure place.

Step 3: Define the name of your organization

Defining your organization is not mandatory but recommended. It is part of the header of the API calls. You should also copy your organization's ID for later use.

 You can modify it as many times as you want.

Step 4: Prepare the API call using the Secret Key and the Organization ID

As part of the API call, you must use a Bearer token authentication header and indicate the Secret Key.

The Secret Key should also be indicated as a header parameter, together with the Organization ID:

Header parameter       Value
api-key                sk-MyPersonalSecretKey12345
OpenAI-Organization    org-123ABCFakeId

This would be an example of an invocation:

POST https://api.openai.com/v1/images/generations
header 'Authorization: Bearer sk-MyPersonalSecretKey12345'
header 'api-key: sk-MyPersonalSecretKey12345'
header 'OpenAI-Organization: org-123ABCFakeId'
header 'Content-Type: application/json'
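
If you prefer to test the call directly from ObjectScript before using any framework, a minimal sketch with %Net.HttpRequest could look like the following (the SSL configuration name and the request body are placeholders you must adapt to your instance):

Set request = ##class(%Net.HttpRequest).%New()
Set request.Server = "api.openai.com"
Set request.Https = 1
Set request.SSLConfiguration = "MySSLConfig" // placeholder: any client SSL configuration defined in your instance
Set request.ContentType = "application/json"
Do request.SetHeader("Authorization", "Bearer sk-MyPersonalSecretKey12345")
Do request.SetHeader("api-key", "sk-MyPersonalSecretKey12345")
Do request.SetHeader("OpenAI-Organization", "org-123ABCFakeId")
// Example body for the image generation endpoint
Do request.EntityBody.Write("{""prompt"": ""Two cats with a hat reading a comic"", ""n"": 2}")
Set status = request.Post("/v1/images/generations")
If $System.Status.IsOK(status) {
    Write request.HttpResponse.Data.Read()
}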

This configuration is common for all endpoints, so let's see how some of its best-known methods work.

Models

Endpoint: GET https://api.openai.com/v1/models

You can retrieve all the models that OpenAI has made available for use. Each of those models has different characteristics. The latest, most up-to-date model for use in Chat is "gpt-4". Bear in mind that all model IDs are in lowercase.

If the model name is not provided, it will return all existing models.

You can see its features and where you can use it on the OpenAI documentation page https://platform.openai.com/docs/models/overview

Chat

Endpoint: POST https://api.openai.com/v1/chat/completions

It allows you to create a conversation with the indicated model from a prompt. You can indicate the maximum number of tokens you want to use and when the conversation should stop.

The input parameters will be as follows:

  • model: Required. This is the ID of the model to use. You can use the ListModels API to see all of the available models or check the model overview for their descriptions.
  • messages: Required. Contains the messages of the conversation; you can build a dialogue by indicating whether each message comes from the user or the assistant.
    • role: The role of the author of the message.
    • content: The content of the message.
  • temperature: Optional. The sampling temperature, with a value between 0 and 2. Higher values make the result more random, while lower values make the answer more focused and deterministic. If it is not defined, the default value is 1.
  • stop: Optional. Sequences where the API stops generating more tokens. If "none" is indicated, the tokens will be generated without such a limit.
  • max_tokens: Optional. The maximum number of tokens to generate in the completion, limited by the maximum number of tokens allowed by the model.

Check out the link below for the documentation describing this method: https://platform.openai.com/docs/api-reference/chat/create
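
For reference, a minimal request body for this endpoint could look like the following (the values are only illustrative):

{
    "model": "gpt-4",
    "messages": [
        {"role": "user", "content": "What is the definition of artificial intelligence?"}
    ],
    "temperature": 0.7,
    "max_tokens": 256
}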

Image

Endpoint: POST https://api.openai.com/v1/images/generations

It allows you to create an image as described by the prompt parameter. In addition, we can define the size of the images and the way we want the result to be returned, either as a link or as Base64 content.

The input parameters would be as mentioned below:

  • prompt: Required. The text describing the image we want to generate.
  • n: Optional. The number of images to generate; the value must be between 1 and 10. If it is not indicated, the default value is 1.
  • size: Optional. The size of the generated images. The value must be "256x256", "512x512", or "1024x1024". If it is not indicated, the default value is "1024x1024".
  • response_format: Optional. The format in which the generated images are returned. The value must be "url" or "b64_json". If it is not indicated, the default value is "url".

Check out the link below for the documentation describing this method: https://platform.openai.com/docs/api-reference/images/create
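
For reference, the request body for this endpoint could look like the following (reusing the prompt from the example further below):

{
    "prompt": "Two cats with a hat reading a comic",
    "n": 2,
    "size": "512x512",
    "response_format": "url"
}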

What does iris-openai offer?

Link: https://openexchange.intersystems.com/package/iris-openai

This framework is designed to use request and response messages with the properties needed to connect to OpenAI and call methods such as Chat, Models, and Images.

You can configure your production in a way that will allow you to use the messaging classes to call a Business Operation that connects to the OpenAI API.

Remember that you must configure the production to indicate the values of the Secret Key and Organization ID as stated above.

 

If you want to create an image, you need to create an instance of the class "St.OpenAi.Msg.Images.ImagesRequest" and populate the options needed to generate a new picture.

Example:

// Build the image-generation request message
Set image = ##class(St.OpenAi.Msg.Images.ImagesRequest).%New()
Set image.Operation = "generations"
Set image.Prompt = "Two cats with a hat reading a comic"
Set image.NumOfImages = 2

When finished, call the Business Operation "St.OpenAi.BO.Api.Connect".
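
A minimal sketch of that call, assuming it is made from a business process or service that is already part of the production (the variable names are only illustrative):

Set status = ..SendRequestSync("St.OpenAi.BO.Api.Connect", image, .response)
If $System.Status.IsOK(status) {
    // response now contains the message returned by the Business Operation
    Write response.%ClassName(1)
}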

 

Note: In this case, it will retrieve the links of the two created images.

{
    "created": 1683482604,
    "data": [
        {
            "url": "https://link_of_image_01.png”
        },
        {
            "url": "https://link_of_image_02.png”
        }
    ]
}

If we have indicated that we want Base64 content instead of a link, it will retrieve the following message:

{
    "created": 1683482604,
    "data": [
        {
            "b64_json": "iVBORw0KGgoAAAANSUhEUgAABAAAAAQACAIAAADwf7zUAAAAaGVYSWZNTQAqAAAACAACknwAAgAAACkAAAAmkoYAAgAAABgAAABQAAAAAE9wZW5BSS0tZjM5NTgwMmMzNTZiZjNkMDFjMzczOGM2OTAxYWJiNzUAAE1hZGUgd2l0aCBPcGVuQUkgREFMTC1FAAjcEcwAAQAASURBVHgBAAuE9HsBs74g/wHtAAL7AAP6AP8E+/z/BQYAAQH++vz+CQcH+fn+AgMBAwQAAPr++///AwD+BgYGAAIC/fz9//3+AAL7AwEF/wL+9/j9DQ0O/vz/+ff0CQUJAQQF/f/89fj4BwcD/wEAAfv//f4BAQQDAQH9AgIA/f3+AAABAgAA/wH8Af/9AQMGAQIBAvv+/////v/+/wEA/wEAAgMA//sCBAYCAQ”
        },
        {
            "b64_json": "D99vf7BwcI/v0A/vz9/wH8CQcI+vz8AQL9/vv+CAcF+wH/AwMA9/f8BwUEAwEB9fT+BAcKBAIB//7//gX5//v8/P7+DgkO+fr6/wD8AP8B/wAC/f4CAwD+/wT+Av79BwcE/Pz7+/sBAAD+AAQE//8BAP79AgIE///+AQABAv8BAwYA+vkB/v7/AwQE//7+/Pr6BAYCBgkE/f0B/Pr6AQP+BAED/gMC/fr+AwEC/v/+//7+CQcH+fz5BAYB9vf9BgQD+/n+BwYK/wD////9/gD5AwIDAAQE+/j6BAUD//rwAC/fr6+wYEBAQAA/4B//v6+/8AAAUDB/L49woGAQMDCfr7+wMCAQMHBPvy+AQJBQD+/wEEAfr3+gIGBgP/Af3++gUFAvz9//4A/wP/AQIGBPz+/QD7/wEDAgkGCPX29wMCAP4FBwX/+23"
        }
    ]
}

What's next?

After the release of this article, iris-openai will be extended to support the Whisper methods and image modification.

A further article will explain how to use these methods, how to include our own images, and how to make transcriptions of audio content.

6 comments
Question
· Jun 16, 2023

How to publish 2 SMP on a server

Hi,
I have 2 servers with IRIS instances on them:

srv1
irisinstance1 port 51773/52773
irisinstance2 port 51774/52774

srv2
irisinstance3 port 51773/52773
irisinstance4 port 51774/52774

 

Both of them have apps published on an external Apache on port 443, and I would like to publish irisinstance1 and irisinstance2 on port 443 of srv2.

Something like https://srv2/mgmt1/csp/sys/UtilHome.csp and similar to mgmt2.

I've tried with ProxyPass without luck.

How can I do that? Is there a guide?

 

Thanks!

1 comment
Article
· Jun 16, 2023 · 10 min read

Creating a REST service in IRIS

One of the most common needs of our clients is the creation of REST services that allow access to the information present in IRIS / HealthConnect. The advantage of these REST services is that they allow the development of custom user interfaces with the most current technologies while taking advantage of the reliability and performance of IRIS in the back end.

In today's article we are going to create, step by step, a web service that will allow us both to store data in our database and to query it later. In addition, we are going to do it by configuring a production that lets us monitor the flow of messages we receive at all times and thus verify that everything operates correctly.

Before starting, note that all the code is available on Open Exchange so you can replicate it as many times as you need; the project is configured with Docker, using IRIS Community, so you don't have to do anything more than deploy it.

Alright, let's get started.

Environment preparation

Before starting with the configuration of the REST service, we must prepare our development environment: we need to know what information we are going to receive and what we are going to do with it. For this example, we have decided to receive personal data in the following JSON format:

{
    "PersonId": 1,
    "Name": "Irene",
    "LastName": "Dukas",
    "Sex": "Female",
    "Dob": "01/04/1975"
}

As one of the objectives is to store the information we receive, we are going to create an ObjectScript class that allows us to record it in IRIS. As you can see, the data is quite simple, so the class is not complicated:

Class WSTEST.Object.Person Extends %Persistent
{

/// ID of the person
Property PersonId As %Integer;
/// Name of the person
Property Name As %String;
/// Lastname of the person
Property LastName As %String;
/// Sex of the person
Property Sex As %String;
/// DOB of the person
Property Dob As %String;
Index PersonIDX On PersonId [ PrimaryKey ];
}

Perfect, we already have our class defined and we can start working with it.
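
As a quick sanity check (not part of the project flow, just a sketch you can run from a terminal in the WSTEST namespace):

// Create and save a person, then read it back through the primary-key index
Set person = ##class(WSTEST.Object.Person).%New()
Set person.PersonId = 1, person.Name = "Irene", person.LastName = "Dukas"
Set person.Sex = "Female", person.Dob = "01/04/1975"
Write person.%Save(),!
// PersonIDXOpen() is generated automatically for the PersonIDX primary-key index
Set samePerson = ##class(WSTEST.Object.Person).PersonIDXOpen(1)
Write samePerson.Name,!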

Creation of our endpoint

Now that we have defined the data class we are going to work with, it is time to create the ObjectScript class that will act as the endpoint called from our front end. Let's go through the example class in our project step by step:

Class WSTEST.Endpoint Extends %CSP.REST
{

Parameter HandleCorsRequest = 0;
XData UrlMap [ XMLNamespace = "https://www.intersystems.com/urlmap" ]
{
<Routes>
	<Route Url="/testGet/:pid" Method="GET" Call="TestGet" />
	<Route Url="/testPost" Method="POST" Call="TestPost" />
</Routes>
}

ClassMethod OnHandleCorsRequest(url As %String) As %Status
{
	set url = %request.GetCgiEnv("HTTP_REFERER")
    set origin = $p(url,"/",1,3) // origin = "http(s)://origin.com:port"
    // here you can check specific origins
    // otherwise, it will allow all origins (useful while developing only)
	do %response.SetHeader("Access-Control-Allow-Credentials","true")
	do %response.SetHeader("Access-Control-Allow-Methods","GET,POST,PUT,DELETE,OPTIONS")
	do %response.SetHeader("Access-Control-Allow-Origin",origin)
	do %response.SetHeader("Access-Control-Allow-Headers","Access-Control-Allow-Origin, Origin, X-Requested-With, Content-Type, Accept, Authorization, Cache-Control")
	quit $$$OK
}
// Class method to retrieve the data of a person filtered by PersonId
ClassMethod TestGet(pid As %Integer) As %Status
{
    Try {
        Do ##class(%REST.Impl).%SetContentType("application/json")
        If '##class(%REST.Impl).%CheckAccepts("application/json") Do ##class(%REST.Impl).%ReportRESTError(..#HTTP406NOTACCEPTABLE,$$$ERROR($$$RESTBadAccepts)) Quit
        // Creation of BS instance
        set status = ##class(Ens.Director).CreateBusinessService("WSTEST.BS.PersonSearchBS", .instance)

        // Invocation of BS with pid parameter
        set status = instance.OnProcessInput(pid, .response)
       	if $ISOBJECT(response) {
            // Sending person data to client in JSON format
        	Do ##class(%REST.Impl).%WriteResponse(response.%JSONExport())
		}
        
    } Catch (ex) {
        Do ##class(%REST.Impl).%SetStatusCode("400")
        Do ##class(%REST.Impl).%WriteResponse(ex.DisplayString())
        return {"errormessage": "Client error"}
    }
    Quit $$$OK
}
// Class method to receive person data to persist in our database
ClassMethod TestPost() As %Status
{
    Try {
        Do ##class(%REST.Impl).%SetContentType("application/json")
        If '##class(%REST.Impl).%CheckAccepts("application/json") Do ##class(%REST.Impl).%ReportRESTError(..#HTTP406NOTACCEPTABLE,$$$ERROR($$$RESTBadAccepts)) Quit
        // Reading the body of the http call with the person data
        set bodyJson = %request.Content.Read()
        
        // Creation of BS instance
        set status = ##class(Ens.Director).CreateBusinessService("WSTEST.BS.PersonSaveBS", .instance)
       	#dim response as %DynamicObject
        // Invocation of BS with person data
        set status = instance.OnProcessInput(bodyJson, .response)
        
        if $ISOBJECT(response) {
            // Returning to the client the person object in JSON format after saving it
            Do ##class(%REST.Impl).%WriteResponse(response.%JSONExport())
	    }
        
    } Catch (ex) {
        Do ##class(%REST.Impl).%SetStatusCode("400")
        Do ##class(%REST.Impl).%WriteResponse(ex.DisplayString())
        return {"errormessage": "Client error"}
    }
    Quit $$$OK
}

}

Don't worry if it seems unintelligible to you; let's look at the most relevant parts of our class:

Class declaration:

Class WSTEST.Endpoint Extends %CSP.REST

As you can see, our WSTEST.Endpoint class extends %CSP.REST; this is necessary to be able to use the class as an endpoint.

Routes definition:

XData UrlMap [ XMLNamespace = "https://www.intersystems.com/urlmap" ]
{
<Routes>
	<Route Url="/testGet/:pid" Method="GET" Call="TestGet" />
	<Route Url="/testPost" Method="POST" Call="TestPost" />
</Routes>
}

In this code snippet we are declaring the routes that can be called from our front-end.

As you can see, we have two declared routes. The first one is a GET call in which we are sent the pid parameter that we will use to search for people by their identifier; it is handled by the ClassMethod TestGet. The second one is a POST call in which we are sent the information of the person that we have to record in our database; it is processed by the ClassMethod TestPost.

Let's take a look at both methods:

Retrieving data of a person:

ClassMethod TestGet(pid As %Integer) As %Status
{
    Try {
        Do ##class(%REST.Impl).%SetContentType("application/json")
        If '##class(%REST.Impl).%CheckAccepts("application/json") Do ##class(%REST.Impl).%ReportRESTError(..#HTTP406NOTACCEPTABLE,$$$ERROR($$$RESTBadAccepts)) Quit
        // Creation of BS instance
        set status = ##class(Ens.Director).CreateBusinessService("WSTEST.BS.PersonSearchBS", .instance)

        // Invocation of BS with pid parameter
        set status = instance.OnProcessInput(pid, .response)
       	if $ISOBJECT(response) {
            // Sending person data to client in JSON format
        	Do ##class(%REST.Impl).%WriteResponse(response.%JSONExport())
		}
        
    } Catch (ex) {
        Do ##class(%REST.Impl).%SetStatusCode("400")
        Do ##class(%REST.Impl).%WriteResponse(ex.DisplayString())
        return {"errormessage": "Client error"}
    }
    Quit $$$OK
}

In this method you can see how we have declared the reception of the pid attribute that we will use in the subsequent search. Although we could have done the search directly from this class, we have decided to do it within a production in order to be able to trace each of the operations. That is why we create an instance of the Business Service WSTEST.BS.PersonSearchBS and then call its OnProcessInput method with the received pid. The response we receive will be of the WSTEST.Object.PersonSearchResponse type, which we will transform into JSON before sending it to the requester.
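
The Business Service itself is not listed in this article (it lives in the Open Exchange project), but as a rough sketch its OnProcessInput could look something like this; the target name WSTEST.BO.PersonSearchBO and the PersonId property of the request message are assumptions, so check the project for the real implementation:

Class WSTEST.BS.PersonSearchBS Extends Ens.BusinessService
{

Method OnProcessInput(pInput As %Integer, Output pOutput As WSTEST.Object.PersonSearchResponse) As %Status
{
    // Wrap the received pid in a request message and forward it to the Business Operation
    Set request = ##class(WSTEST.Object.PersonSearchRequest).%New()
    Set request.PersonId = pInput
    Quit ..SendRequestSync("WSTEST.BO.PersonSearchBO", request, .pOutput)
}

}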

Storing a person's data:

ClassMethod TestPost() As %Status
{
    Try {
        Do ##class(%REST.Impl).%SetContentType("application/json")
        If '##class(%REST.Impl).%CheckAccepts("application/json") Do ##class(%REST.Impl).%ReportRESTError(..#HTTP406NOTACCEPTABLE,$$$ERROR($$$RESTBadAccepts)) Quit
        // Reading the body of the http call with the person data
        set bodyJson = %request.Content.Read()
        
        // Creation of BS instance
        set status = ##class(Ens.Director).CreateBusinessService("WSTEST.BS.PersonSaveBS", .instance)
       	#dim response as %DynamicObject
        // Invocation of BS with person data
        set status = instance.OnProcessInput(bodyJson, .response)
        
        if $ISOBJECT(response) {
            // Returning to the client the person object in JSON format after saving it
            Do ##class(%REST.Impl).%WriteResponse(response.%JSONExport())
	    }
        
    } Catch (ex) {
        Do ##class(%REST.Impl).%SetStatusCode("400")
        Do ##class(%REST.Impl).%WriteResponse(ex.DisplayString())
        return {"errormessage": "Client error"}
    }
    Quit $$$OK
}

As in the previous case, we could have saved our person object directly from this class, but we have decided to do it from a Business Operation that is called from the Business Service WSTEST.BS.PersonSaveBS.

As you can see in the code, we retrieve the information sent by the client in the POST call by reading the stream present in %request.Content. The string obtained is what we pass to the Business Service.
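
As an illustration of the kind of work that ends up being done further down the line (this helper is not part of the project, just a hypothetical sketch), the JSON string could be turned into a persisted WSTEST.Object.Person like this:

ClassMethod SavePersonFromJson(bodyJson As %String) As %Status
{
    // Parse the JSON payload received in the body of the POST call
    Set data = ##class(%DynamicAbstractObject).%FromJSON(bodyJson)
    // Populate and persist the WSTEST.Object.Person instance
    Set person = ##class(WSTEST.Object.Person).%New()
    Set person.PersonId = data.PersonId
    Set person.Name = data.Name
    Set person.LastName = data.LastName
    Set person.Sex = data.Sex
    Set person.Dob = data.Dob
    Quit person.%Save()
}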

 

Publishing our Endpoint

To keep this article from going on forever, we are going to skip the explanation of the production; you can review the code directly in the Open Exchange project. The production that we have configured is as follows:

We have 2 declared Business Services, one to receive search requests and another to receive requests to store the person's data. Each of them invokes its corresponding Business Operation.

Perfect, let's review what we have configured in our project:

  • 1 endpoint class that will receive client requests (WSTest.Endpoint)
  • 2 Business Services that will be called from our endpoint class (WSTest.BS.PersonSaveBS and WSTest.BS.PersonSearchBS).
  • 2 Business Operations in charge of carrying out the search and recording of the data (WSTest.BS.PersonSaveBS and WSTest.BS.PersonSearchBS)
  • 4 classes to send and receive data within the production that extend Ens.Request and Ens.Response (WSTest.Object.PersonSearchRequest, WSTest.Object.PersonSaveRequest, WSTest.Object.PersonSearchResponse, and WSTest.Object.PersonSaveResponse).

We only have one last step left to put our web service into operation: its publication. To do this, we will access the Management Portal option System Administration -> Security -> Applications -> Web Applications.

We will see a list of all the Web Applications configured in our instance:

Let's create our web application:

Let's go over the points we need to configure:

  • Name: We will define the route that we are going to use to invoke our service; for our example it will be /csp/rest/wstest.
  • Namespace: the namespace in which the web service will work; in this case we have created WSTEST, in which we have configured our production and our classes.
  • Enable Application: enables the web application so that it can receive calls.
  • Enable - REST: By selecting REST we indicate that this web application is configured to receive REST calls; when selecting it we must define the Dispatch Class, which will be our endpoint class WSTEST.Endpoint.
  • Allowed Authentication Methods: configuration of the authentication of the user who makes the call to the web service. In this case we have chosen Password, so in Postman we will configure Basic Authorization with the username and password. We also have the option of using JWT Authentication, which is quite useful since it does not expose the username and password in the REST calls; if you are interested in delving into JWT you can consult this article.

Once we finish configuring our web application, we can run a couple of tests by opening Postman and importing the WSTest.postman_collection.json file included in the project.

Testing our Endpoint

With everything configured in our project and the production started, we can run a couple of tests against our web service. We have configured superuser as the requesting user, so we won't have problems saving and retrieving data. If you use a different user, make sure that it either has the necessary roles assigned or that you assign them in the Application Roles tab of the web application definition.

Let's start by recording someone in our database:
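
For reference, the request sent from Postman looks roughly like this (host and port depend on your installation; the body is the JSON shown at the beginning of the article):

POST http://localhost:52773/csp/rest/wstest/testPost
header 'Authorization: Basic <superuser credentials>'
header 'Content-Type: application/json'

{
    "PersonId": 1,
    "Name": "Irene",
    "LastName": "Dukas",
    "Sex": "Female",
    "Dob": "01/04/1975"
}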

We have received a 200, so it seems that everything went well; let's check the message in production:

Everything has gone well: the message with the JSON has been correctly received by the BS and successfully recorded by the BO.

Now let's try to recover the data of our beloved Alexios Kommenos:
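
Again roughly, this is a GET against the other route, passing the PersonId as the last segment of the URL (the value 1 is only illustrative):

GET http://localhost:52773/csp/rest/wstest/testGet/1
header 'Authorization: Basic <superuser credentials>'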

Bingo! There is Alexios with all his information. Let's check the message in production:

Perfect, everything has worked as it should. As you have seen, it is really easy to create a REST web service in IRIS. You can use this project as a base for your future developments, and if you have any questions, don't hesitate to leave a comment.

Question
· Jun 15, 2023

Newbie question on calculating the size of X12 messages

How do I determine the size of an X12 message within the Rule Editor? I should add this is for HealthConnect or Ensemble.

The FullSize property always shows 122, and the FullSizeGet method isn't available within the Rule Editor.

Our vendor can't handle messages over 500,000 characters, so I need to write those to another queue for later processing and to keep the interface into their system from crashing.

Thanks,

2 comments