Question
· May 19, 2024

Pooling exception: Unable to find original pool for connection and IRIS Security Error

I created a .NET API that connects to an IRIS database to perform some tasks.
"InterSystems.Data.IRISClient.dll" (Native API) is used in the .NET API to connect to IRIS.
I used the following code in .NET to open and close the IRIS connection.

Open Connection
---------------------------
using InterSystems.Data.IRISClient;
using InterSystems.Data.IRISClient.ADO;

IRISConnection iRISConnect = new IRISConnection();
iRISConnect.ConnectionString = "Server=xxxx.com; Port=1972; Namespace=aaa; Password=yyyy; User ID=xxxxxx;";
iRISConnect.Open();                               // open the ADO.NET connection first
IRISCommand command = new IRISCommand();
command.Connection = iRISConnect;                 // attach the command to the open connection
IRIS NativeAPI = IRIS.CreateIRIS(iRISConnect);    // Native API object created over the open connection

Connection Close
--------------------------
command.Dispose();
iRISConnect.Close();
IRISPoolManager.RemoveAllIdleConnections();       // drops all idle pooled connections on every call

This code works for a single API call, but when I call the .NET API multiple times I get the following errors. Has anyone faced the same issue, or is there a solution for it?

1. "Pooling exception: Unable to find original pool for connection"
    {"severityLevel":"Error","outerId":"0","message":"Pooling exception: Unable to find original pool for connection","type":"InterSystems.Data.IRISClient.IRISException","id":"5416676","parsedStack":[{"assembly":"InterSystems.Data.IRISClient, Version=4.5.1.0, Culture=neutral, PublicKeyToken=ad350a26c4a4447c","method":"InterSystems.Data.IRISClient.IRISPoolManager.ReleaseConnection","level":0,"line":0},{"assembly":"InterSystems.Data.IRISClient, Version=4.5.1.0, Culture=neutral, PublicKeyToken=ad350a26c4a4447c","method":"InterSystems.Data.IRISClient.IRISADOConnection.Close","level":1,"line":0},{"assembly":"GHIS.Claims.Domain, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null","method":"GHIS.Claims.Domain.Common.DBConnect.ClearPool"
2. "IRIS Security Error"
   {"severityLevel":"Error","outerId":"0","message":"IRIS Security Error","type":"InterSystems.Data.IRISClient.IRISException","id":"60962899","parsedStack":[{"assembly":"InterSystems.Data.IRISClient, Version=4.5.1.0, Culture=neutral, PublicKeyToken=ad350a26c4a4447c","method":"InterSystems.Data.IRISClient.IRISADOConnection.GetServerError","level":0,"line":0},{"assembly":"InterSystems.Data.IRISClient, Version=4.5.1.0, Culture=neutral, PublicKeyToken=ad350a26c4a4447c","method":"InterSystems.Data.IRISClient.IRISADOConnection.processError","level":1,"line":0},{"assembly":"InterSystems.Data.IRISClient, Version=4.5.1.0, Culture=neutral, PublicKeyToken=ad350a26c4a4447c","method":"InterSystems.Data.IRISClient.InStream.readHeader","level":2,"line":0},{"assembly":"InterSystems.Data.IRISClient, Version=4.5.1.0, Culture=neutral, PublicKeyToken=ad350a26c4a4447c","method":"InterSystems.Data.IRISClient.IRISADOConnection.Login","level":3,"line":0},{"assembly":"InterSystems.Data.IRISClient, Version=4.5.1.0, Culture=neutral, PublicKeyToken=ad350a26c4a4447c","method":"InterSystems.Data.IRISClient.IRISPool.CreateNewPooledConnection"

Article
· May 19, 2024 · 4 min read

How to separate source code and data into different databases

📜 Santa Tecla, verse 8: "Stretch out your mouse over the screen, and the sea of data will open a path before you!!"

Hello community! First of all, apologies if the blasphemy has offended anyone 😔

Have you ever thought it would be interesting to keep the source code separate from the data in your database? Perhaps you would like to back up your code without having to copy gigabytes of your customers' data.

Below I explain the steps to split that sea of source code and data in a namespace into two separate databases.

For this example I will start from a new namespace that I am going to create specifically for it.

First, let's create two new databases:

Open the Management Portal and go to the Local Databases section:

Click the Create New Database button:

Give it a name and specify the folder where it will be stored (I used the prefix "Tutorial" because I plan to call the namespace Tutorial):

Now we can choose the size we want to assign to it and whether we want journaling enabled for it:

Select Create New Resource:

Give the new resource a name, add a description and, if you consider it appropriate, grant it public access permissions:

Next, follow the same steps for the other database, the one for the data:

Select Create New Resource for it as well:

Our two databases are now created:

Now let's create the new namespace and assign it the two databases we have just created.

Go to the Namespaces section:

Click the Create New Namespace button:

Give it a name, choose the database for the data and the database for the source code, and then click Save.

And voilà, our shiny new namespace is now created, with two separate databases: one for the data and one for the source code.
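If you want to double-check the result from code, here is a small embedded Python sketch that asks IRIS where a global and a routine would be stored for the new namespace. It assumes the %SYS.Namespace class methods GetGlobalDest and GetRoutineDest behave as described (returning "system^directory" strings), so treat the exact method names and arguments as an assumption rather than a recipe.

# Quick sanity check from embedded Python (for example via: do ##class(%SYS.Python).Shell()).
# Assumption: %SYS.Namespace exposes GetGlobalDest()/GetRoutineDest() as used below.
import iris

ns = "TUTORIAL"  # the namespace created above

# Where would data (globals) for this namespace be written?
print("Globals  ->", iris.cls("%SYS.Namespace").GetGlobalDest(ns, "MyApp.Data"))

# And where would source code (routines) be read from?
print("Routines ->", iris.cls("%SYS.Namespace").GetRoutineDest(ns, "MyApp.Routine"))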

This can also be very useful in situations like the following. Imagine you have a database holding a huge amount of data that is shared by every member of the development team.

Those team members may carry their laptops from place to place, and you might not want those laptops to hold a local copy of the database's data, whether for security reasons or because you would otherwise have to create test data on every one of them.

In that case it could be a good idea to keep the source code database local and the data in a remote database. What? How do you connect a remote database? Easy!

Connecting to a remote database. **The remote server must be configured as an ECP data server beforehand (I explain how at the end of the tutorial).

Go to the remote servers section to configure it (if it is not configured already):

Click the Data Servers button:

Add the new server:

Fill in the information:

By default it will be created as disabled; click the Change Status button to enable it:

Now go to the Remote Databases section:

Click the Create Remote Database button:

Select the server that hosts the remote database and choose that database from the drop-down list:

Next, let's create a new "hybrid" namespace, with the source code on the local machine and the data on a remote server.

Go to the Namespaces section and click the Create New Namespace button, but this time, in the database selection for globals, choose the Remote Database option and select the remote database we have just created:

And there we have it: our amazing hybrid namespace is configured!

**Configuring a server as an ECP data server (so it can serve remote databases; this requires a paid license):

Set the maximum number of data servers, SSL, and so on, then click the Save and Activate buttons:

Now our data really can be kept separate, within the same server or even on different servers.

Here is a video showing how to create the databases and how to create and configure the namespace:

I hope this tutorial makes the long trek through the desert 🌴🐪 to earn the CTO's / CIO's forgiveness, all the way to the promised retirement, a little more bearable.

See you in the next post! Let me know whether you found this article interesting; all your comments and questions are always welcome. 🙌

Article
· May 18, 2024 · 5 min read

Imagine the Future of Medical Triage: MediCoPilot

Current triage systems often rely on the experience of admitting physicians. This can lead to delays in care for some patients, especially when faced with inexperienced residents or non-critical symptoms. Additionally, it can result in unnecessary hospital admissions, straining resources and increasing healthcare costs.

We focused our project on pregnant women and conducted a survey with friends of ours who work at a large hospital in São Paulo, Brazil, specifically in the area of monitoring and caring for pregnant women.

We discovered that a significant problem occurs when an inexperienced doctor, still a resident, admits a patient with symptoms that are not considered risky, occupying a bed that could be allocated to a patient with more severe symptoms or at risk.

We believe that AI-powered triage systems can objectively analyze patient data, ensuring faster and more efficient care for all. This empowers residents and novice physicians by providing guidance and reducing the burden of initial assessment. Additionally, by optimizing bed allocation, we can promote better resource utilization and a reduction in financial waste.

Interoperability

Inspired by Evgeny Shvarov's post Making your own Chat with Open AI ChatGPT in Telegram (highly recommended reading!), we embarked on this project leveraging a similar approach built on InterSystems interoperability.

The first component we created was a BusinessService utilizing the Telegram Adapter by Nikolay Solovyev. This BusinessService acts as a central hub for communication with the Telegram bot. When it receives a message from a patient, it routes it through a dedicated Rule (Message.Route).

The Message.Route rule identifies whether the conversation with the bot is just starting, via the /start command, and in that case returns a welcome message.

In the future, at this stage, we plan to check whether the patient is already registered and to connect this step with FHIR (Fast Healthcare Interoperability Resources), a standardized healthcare data format.

This feature is planned for a future version and not for this contest.

The Message.Route Rule also differentiates between text and voice messages. For text messages, the Rule seamlessly routes the information to the MediCopilot.Process component. This component, which we'll explore in detail later, is responsible for analyzing the patient's text-based symptoms.

Speech to Text

To speed up the service, or for cases where the patient might not be able to type out their symptoms, the best approach is to send a voice message. If a voice message is received, it must be converted to text before being sent to MediCopilot.Process.

For this, we used the IRIS Open-AI adapter created by Kurro Lopez, also mentioned in @Evgeny Shvarov's post; however, we adopted a specific method within the adapter: Audio Transcription.

This method accepts an audio file (in formats like mp3, ogg, or wav) encoded in base64 format and returns the transcribed text.

While the Audio Transcription method offered an efficient solution, we encountered a slight challenge.

The Telegram API returns only the voice file ID, so the file itself still has to be fetched; the natural solution would have been to extend the Telegram.OutboundAdapter class to do so.

Determined to bridge this gap, we explored alternative approaches. The Telegram Adapter's private API property made direct modifications challenging. However, after the contest, I plan to submit a Pull Request to the Telegram Adapter to formally introduce voice file retrieval functionality.

In the meantime, my idea was to use embedded Python to overcome this obstacle. Reading the Telegram API documentation, I found that, given the voice file ID, I first needed to obtain the file's path by querying the getFile endpoint, and then query another endpoint to download the file itself.

To manage this process efficiently, we created a dedicated Business Operation named "VoiceFile.BusinessOperation." This operation includes two methods written in ePython:

  • GetVoiceFilePath: This method queries the "getFile" endpoint using the voice file ID, retrieving the path where the audio file resides.
  • GetVoiceEncodedData: This method uses the retrieved path to download the audio file from the Telegram server and then encodes the downloaded file in base64, the format expected by the OpenAI Audio Transcription method (a sketch of both steps follows below).
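To make those two steps concrete, here is a minimal standalone Python sketch that uses the public Telegram Bot API (the getFile endpoint, then the file download URL) together with the requests library. The function names mirror the Business Operation methods, but the bodies are illustrative assumptions rather than the project's actual ePython code, and BOT_TOKEN is a placeholder.

# Rough standalone sketch of the two ePython steps described above (not the project's code).
import base64
import requests

BOT_TOKEN = "<your-bot-token>"  # placeholder

def get_voice_file_path(file_id: str) -> str:
    # Ask the Telegram Bot API where the voice file lives on its servers.
    resp = requests.get(
        f"https://api.telegram.org/bot{BOT_TOKEN}/getFile",
        params={"file_id": file_id},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["result"]["file_path"]

def get_voice_encoded_data(file_path: str) -> str:
    # Download the audio file and return it as base64 text,
    # the format expected by the OpenAI Audio Transcription method.
    resp = requests.get(
        f"https://api.telegram.org/file/bot{BOT_TOKEN}/{file_path}",
        timeout=30,
    )
    resp.raise_for_status()
    return base64.b64encode(resp.content).decode("ascii")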

With the file in base64, it can be sent to the OpenAI.Operation to get the transcription. I created a Business Process called TelegramVoice.Process to make these calls synchronously and, after receiving the transcription, send the text to MediCopilot.Process.

MediCopilot Adapter

With the transcription of the audio file or the text message, it's time to send all this to the core of our project, the engine that analyzes patient-reported symptoms.

@José Pereira wrote an incredible article explaining how this mechanism works in detail; if you haven't read it yet, I strongly recommend it, as it is well worth your time and will give you a better perspective on our project.

I won't delve into this part of the project, but we created an adapter to execute this Python class that utilizes LangChain and LangChain-Iris by Dmitry Maslennikov. This powerful combination allows us to perform Vector Search within InterSystems IRIS, a high-performance database platform.

Based on the research we conducted with our friends working in the medical field, we identified several common symptoms in pregnant women. We categorized these into non-risk symptoms, risk symptoms, and potential diseases or conditions that may require hospitalization.

To train our system in identifying potential risks, we employed OpenAI technology. By simulating patient conversations, we were able to build a synthetic database that effectively mirrored real-world scenarios. However, our ultimate goal is to transition to a database built upon real medical data, including anonymized anamneses (patient histories) and diagnoses.

Thus, we send the patient's report received via Telegram and return a possible diagnosis. The idea is to add another rule to identify the severity of the risk and either refer the patient or alert a doctor.
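As a rough illustration of the idea (not our adapter's actual code), the sketch below stores a few synthetic symptom descriptions in IRIS through langchain-iris and looks up the closest matches for a patient's report. It assumes the IRISVector interface of Dmitry Maslennikov's langchain-iris package and the langchain-openai embeddings, with a placeholder connection string, so treat the exact class and parameter names as assumptions.

# Illustrative vector-search sketch, assuming the langchain-iris IRISVector API.
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings
from langchain_iris import IRISVector

# A tiny stand-in for the synthetic triage knowledge base described above.
docs = [
    Document(page_content="Mild nausea in the first trimester", metadata={"risk": "non-risk"}),
    Document(page_content="Severe headache with blurred vision and swelling", metadata={"risk": "risk"}),
    Document(page_content="Vaginal bleeding with abdominal pain", metadata={"risk": "risk"}),
]

db = IRISVector.from_documents(
    documents=docs,
    embedding=OpenAIEmbeddings(),
    collection_name="triage_symptoms",
    connection_string="iris://_SYSTEM:SYS@localhost:1972/USER",  # placeholder
)

# The patient's report arrives from MediCopilot.Process; return the closest risk categories.
report = "I have a strong headache and my vision is blurry"
for doc, score in db.similarity_search_with_score(report, k=2):
    print(doc.metadata["risk"], score, doc.page_content)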

Conclusion

Our project aims to reduce hospitalization time and financial waste in hospitals while improving the speed of patient care. By focusing on pregnant women and leveraging interoperability, we developed a system that can perform preliminary triage and assist patients efficiently. Using components like the Telegram Adapter, IRIS Open-AI adapter, and MediCopilot Adapter, we created a seamless workflow from receiving a patient's message to delivering a potential diagnosis.

We would love for you to vote for our tool, but even more importantly, we would like to hear from you about how we can improve it. What features could be added? Your comments are welcome, and we are happy to respond.

Article
· May 18, 2024 · 2 min read

Wall-M: Perform semantic queries on your email inbox and get accurate responses along with source citations

Introduction

With the rise of generative AI, we believe users should be able to access unstructured data in a much simpler fashion. Most people have more emails than they can keep track of. In investment and trading, for example, professionals rely on quick decisions that leverage as much information as possible. Similarly, senior employees in a startup dealing with many teams and disciplines may find it difficult to organize all the emails they receive. These common problems can be solved with GenAI to make people's lives easier and more organized. The possibility of hallucinations in GenAI models can be scary, and that's where RAG + hybrid search comes in to save the day. This is what inspired us to build WALL-M (Work Assistant LL-M). At HackUPC 2024, we developed WALL-M as part of the InterSystems vector search challenge. It is a retrieval-augmented generation (RAG) platform designed for accurate question answering over emails with minimal hallucinations. This solution addresses the challenge of managing numerous long emails, especially in fast-paced fields like investment and trading, in startups with multiple teams and disciplines, or simply for individuals looking to manage a full inbox.

What it does

You can load the emails from your inbox and choose to filter by date and senders to define the context for the LLM. Then, within that context, you can make specific queries related to the chosen emails. Example 1: trading ideas based on selected bank reports or investment research reports. Example 2: an employee at a company or startup can ask for a list of action items based on the work emails they received over the last week.


After this, if you have any further questions, we also added a segment to chat with Wall-M based on the context selected by the initial query. This ensures that follow-up questions still receive responses that do not hallucinate and that include the source emails used to generate them. A rough sketch of the underlying retrieve-then-answer flow is shown below.
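For illustration only, here is a tiny in-memory Python sketch of that retrieve-then-answer flow: filter the emails, embed them, pick the closest matches for the query, and answer strictly from those emails with numbered citations. This is not WALL-M's actual code (the real system uses InterSystems vector search, LangChain and LlamaIndex); it assumes the openai>=1.0 Python client and numpy, and the example data, model names and prompt are placeholders.

# Simplified in-memory stand-in for the WALL-M flow (not the real pipeline).
from datetime import date
import numpy as np
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

emails = [
    {"sender": "research@bank.com", "date": date(2024, 5, 13), "body": "We upgrade ACME to buy on strong cloud revenue."},
    {"sender": "pm@startup.io", "date": date(2024, 5, 15), "body": "Please prepare the demo environment before Friday's review."},
]

def embed(texts):
    # Turn a list of strings into embedding vectors.
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

# 1) Filter by date/sender to define the context, then embed the remaining emails.
context = [e for e in emails if e["date"] >= date(2024, 5, 10)]
doc_vecs = embed([e["body"] for e in context])

# 2) Retrieve the most relevant emails for the question by cosine similarity.
question = "What are my action items for this week?"
q_vec = embed([question])[0]
sims = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
top = [context[i] for i in np.argsort(sims)[::-1][:2]]

# 3) Answer strictly from the retrieved emails and cite them as numbered sources.
sources = "\n".join(f"[{i + 1}] {e['sender']} ({e['date']}): {e['body']}" for i, e in enumerate(top))
answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Answer only from the provided emails and cite sources as [n]."},
        {"role": "user", "content": f"Emails:\n{sources}\n\nQuestion: {question}"},
    ],
)
print(answer.choices[0].message.content)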

How we built it

Frontend: Taipy

Backend: InterSystems Database, SQL

RAG + Vector Search: InterSystems Software, ChatGPT

Tools: LangChain, LlamaIndex

 

Challenges we ran into

  • Learning to work with the Python full-stack framework "TaiPy"
  • Prompt optimization to avoid hallucinations
  • Using LangChain to get a specific template that includes citations pointing to the source of the response/claim
  • Incompatibilities between different tools we wanted to use

What's next for Wall-M

  • Use the proof of concept for the specified use cases and evaluate its performance using benchmarks to validate the product's credibility
  • Improved integration into commonly used email applications such as Outlook and Gmail, with personalized uses, to improve the utility of WALL-M

Try it out 

Our Github repository : https://github.com/lars-quaedvlieg/WALL-M

Announcement
· May 18, 2024

[Video] IRIS AI Studio: A detailed Functionality & Code Walkthrough

Hi Community,

This is a detailed, candid walkthrough of the IRIS AI Studio platform. I think out loud while trying different examples, some of which fail to deliver the expected results, which I believe is precisely why such a platform is needed: to explore different models, configurations, and limitations. This will be helpful if you're interested in building 'Chat with PDF' or data recommendation systems using IRIS DB and LLM models.
