Article
· 14 hr ago · 2 min read

Archiving my OEX packages

Over the last 9 years, I have published more than 90 packages on OEX.
And over all that time, conditions and environments changed.
In the beginning,

  • there was no Docker
  • there was no PM/ZPM
  • there was no Embedded Python, no AI
  • Caché, Ensemble, CSP, ZEN, ... dominated

As time passed, product versions and external languages changed as well.
Adjusting a few packages was not a problem at first,
and it was a matter of support quality for your “consumers”.

At the current volume, I see no way to keep this goal for all my packages.
And based on the quality checks, I have the impression this is not only my problem.
Recent changes caused enough problems with version updates alone.

So I proposed the idea of a DEPRECATED label for OEX packages.

It would be fair to signal to other OEX users that there is no intention to do any maintenance on a package.

Also, as a kind of warning if it is used.

In addition, a DEPRECATED package should be free
to be adopted and reworked, and eventually fixed,
by some other member of the community.

There was almost no echo.
So in fact, my only option was to unpublish them. About 90 packages in total.
A lot of well-working code that could serve as an example for beginners or a source of tricks became unavailable.
So I changed my mind and marked the packages that might still be useful:

 no maintenance or updates

As a temporary solution, I am also using GitHub's archive feature to prevent accidental changes.

So I hope most of these 90 packages will reappear over time.
And with them, all the related comments and articles.
I see it as a kind of museum, showing how easy or complicated the environment was back then.
And without ignoring past achievements.

Announcement
· 14 hr ago

InterSystems Full Stack Contest 2026

Hi developers,

We're pleased to announce the first InterSystems online programming contest of the year:

🏆 InterSystems Full Stack Contest 🏆

Duration: February 2 - March 1, 2026

Prize pool: $12,000


The topic

Develop a full stack solution using InterSystems IRIS, InterSystems IRIS for Health, or IRIS Cloud Service as the backend. By full stack we mean a frontend web or mobile application that inserts, updates, or deletes data in InterSystems IRIS via a REST API, the Native API, ODBC/JDBC, or Embedded Python.

General requirements:

  1. An application or library must be fully functional. It must not be an import or a direct interface to a library that already exists in another language (except for C++, where quite a lot of work really is needed to build an interface for IRIS). It must not be a copy-paste of an existing application or library.
  2. Accepted applications: new applications on Open Exchange, or existing ones with a significant improvement. Our team will review all applications before approving them for the contest.
  3. The application must work on IRIS Community Edition or IRIS for Health Community Edition. Both can be downloaded as host versions (Mac, Windows) from the evaluation site, or used as containers pulled from InterSystems Container Registry or Community Containers: intersystemsdc/iris-community:latest or intersystemsdc/irishealth-community:latest.
  4. The application must be open source and published on GitHub or GitLab.
  5. The application's README file must be in English, contain the installation steps, and include a demo video and/or a description of how the application works.
  6. Only 3 entries per developer are allowed.

Note: our experts will have the final say on whether an application is approved for the contest, based on the criteria of complexity and usefulness. Their decision is final and not subject to appeal.

Prizes

  1. Experts Nomination - a specially selected jury will determine the winners:

🥇 1st place: $5,000
🥈 2nd place: $2,500
🥉 3rd place: $1,000
🏅 4th place: $500
🏅 5th place: $300
🌟 6th-10th places: $100

  2. Community Winners - the applications that receive the most votes overall:

🥇 1st place - $1,000
🥈 2nd place - $600
🥉 3rd place - $300
🏅 4th place - $200
🏅 5th place - $100

❗ If several participants score the same number of votes, they are all considered winners, and the cash prize is split among them.
❗ Cash prizes are awarded only to those who can verify their identity. If in doubt, the organizers will get in touch and ask for additional information about the participant(s).

Who can participate?

Any Developer Community member, except InterSystems employees (ISC contractors may participate). Create an account!

Developers can team up to create a collaborative application. 2 to 5 developers per team are allowed.

Don't forget to highlight your team members in your application's README - DC user profiles.

Important dates:

🛠 Application development and registration phase:

  • February 2, 2026 (00:00 EST): Contest begins.
  • February 22, 2026 (23:59 EST): Application submission deadline.

✅ Voting period:

  • February 23, 2026 (00:00 EST): Voting begins.
  • March 1, 2026 (23:59 EST): Voting ends.

Note: developers can improve their applications throughout the registration and voting period.

Helpful resources:

✓ Example applications:

✓ Templates we recommend to start from:

✓ For beginners with IRIS:

✓ For beginners with ObjectScript Package Manager (IPM):

✓ How to submit your application to the contest:

Need help?

Join the contest channel on the InterSystems Discord server or chat with us in the comments of this post.

We're waiting for YOUR project - join our coding marathon to win!


By participating in this contest, you agree to the competition terms laid out here. Please read them carefully before proceeding.

Article
· 17 hr ago · 5 min read

What Are Custom Mailer Boxes and How Do They Work?

Custom mailer boxes have become a popular packaging solution for businesses that ship products directly to customers. These boxes are designed to protect items during transit while offering a neat and organized presentation. Unlike generic shipping cartons, mailer boxes are often customized in size, structure, and material to match specific product needs. Their growing use in e-commerce and retail shows how packaging has evolved beyond simple protection.

In simple terms, custom mailer boxes are folding cartons made to fit products snugly and ship them safely without requiring additional outer packaging. They are commonly used for lightweight to medium-weight items and are shipped flat before being assembled. Because of their smart design and ease of use, these boxes are now a standard choice for brands looking for both protection and efficiency in shipping.


Understanding Custom Mailer Boxes

Custom mailer boxes are usually made from corrugated cardboard or kraft material. They are designed to be self-locking, meaning no tape or glue is required to assemble them. Once folded, the box holds its shape securely, making it suitable for shipping through courier and postal services.

These boxes are widely used by online stores, subscription services, and small businesses. Their structure allows products to stay in place, reducing movement and the risk of damage. Since they can be tailored to exact dimensions, businesses avoid using oversized boxes, which helps lower shipping costs and material waste.

Another important aspect is consistency. When products are shipped in the same type of packaging every time, handling becomes easier for both sellers and logistics providers.


How Custom Mailer Boxes Work in Shipping

The working mechanism of custom mailer boxes is simple but effective. First, the box is manufactured according to the product’s size and weight requirements. Once delivered to the business, the boxes are stored flat, saving warehouse space.

During packing, the box is folded along pre-scored lines. The locking tabs and flaps interlock to form a rigid structure. The product is placed inside, often with minimal additional padding if needed. After closing the lid, the box is ready for labeling and shipping.

Because of their sturdy construction, these boxes can withstand stacking, handling, and transportation pressures. This makes them ideal for shipping products directly to customers without needing a second outer box.


Key Features of Custom Mailer Boxes

Custom mailer boxes offer several practical features that make them suitable for modern shipping needs.

Main Features Include:

  • Self-locking design for quick assembly
  • Custom sizing to reduce empty space
  • Durable materials for product protection
  • Lightweight structure to manage shipping costs
  • Easy stacking and storage before use

These features help businesses streamline their packaging process while ensuring that products reach customers in good condition.


Materials Used in Custom Mailer Boxes

The choice of material plays a major role in how mailer boxes function. Corrugated cardboard is the most commonly used material because it provides strength without adding excessive weight. It consists of a fluted layer between two linerboards, offering cushioning and durability.

Kraft paper is another popular option, especially for businesses looking for a natural and simple appearance. It is strong, recyclable, and suitable for a wide range of products. Depending on the shipping needs, different flute sizes can be selected to provide varying levels of protection.

Material selection ensures that the box can handle pressure, vibration, and temperature changes during transit.


Custom Mailer Boxes vs Standard Shipping Boxes

Understanding the difference between custom mailer boxes and standard shipping boxes helps explain how mailer boxes work more efficiently for certain products.

Feature        | Custom Mailer Boxes    | Standard Shipping Boxes
Design         | Self-locking, foldable | Requires tape or glue
Size fit       | Product-specific       | Often oversized
Storage        | Ships and stores flat  | Takes more space
Assembly time  | Quick and simple       | Time-consuming
Shipping use   | Single-box shipping    | Often needs inner packaging

This comparison shows why many businesses prefer mailer boxes for direct-to-customer deliveries.


Why Businesses Use Custom Mailer Boxes

One of the main reasons businesses use custom mailer boxes is efficiency. These boxes simplify the packing process and reduce the need for extra materials like bubble wrap or filler. When a box fits the product well, it minimizes movement and lowers the chance of damage.

Another reason is consistency in shipping. Using the same box size and structure helps businesses standardize operations. This leads to faster packing times and fewer errors during order fulfillment.

Mailer boxes also help manage shipping costs. Their lightweight nature and compact size reduce dimensional weight charges, which are common in courier pricing models.


Role of Custom Mailer Boxes in E-commerce

E-commerce relies heavily on packaging that can handle frequent shipping. Custom mailer boxes are designed to meet this demand. They are strong enough for long-distance transport and simple enough for quick order processing.

For subscription-based businesses, these boxes are especially useful. Products are shipped regularly, and having a reliable packaging solution ensures consistency across shipments. Customers also find these boxes easy to open and dispose of, which improves overall satisfaction.

As online shopping continues to grow, the role of mailer boxes in daily shipping operations becomes even more important.


Environmental Considerations

Many custom mailer boxes are made from recyclable materials, making them a more responsible packaging choice. Because they are designed to fit products closely, they reduce material waste and unnecessary fillers.

Using right-sized packaging also helps lower carbon emissions during transportation. Smaller and lighter boxes mean more efficient shipping, which benefits both businesses and the environment.

This practical approach aligns well with modern packaging trends focused on sustainability and efficiency.


Conclusion

Custom mailer boxes are a smart and functional packaging solution designed to protect products and simplify shipping. Their self-locking structure, durable materials, and custom sizing allow businesses to ship items securely without added complexity. By understanding how these boxes work, businesses can make better packaging decisions that support efficient operations and reliable deliveries.

As shipping needs continue to evolve, custom mailer boxes remain a dependable choice for businesses seeking practical, well-designed packaging solutions.

Summary
· 17 hr ago

[Weekly Digest] Developer Community posts from 1/19 to 1/25

1/19 ~ 1/25 · Week at a Glance · InterSystems Developer Community
Article
· 18 hr ago · 14 min read

IRIS Agents: Building Agents on IRIS!

 

Ever since I started using IRIS, I have wondered if we could create agents on IRIS. It seemed obvious: we have an Interoperability GUI that can trace messages, and we have an underlying object database that can store SQL tables, vectors, and even Base64 images. We currently have a Python SDK that allows us to interface with the platform using Python, but it is not particularly optimized for developing agentic workflows. This was my attempt to create a Python SDK that can leverage several parts of IRIS to support the development of agentic systems.

First, I set out to define the functional requirements:

  • Developers should code primarily in Python
  • Developers should not have to set configuration settings on Management Portal
  • Developers should not need to code in ObjectScript

Luckily, the existing Python SDK does allow quite a bit of interfacing with the IRIS data platform. Let's explore how we can leverage it to manage context, register tools, observe messages, and build agents.

Here's how I envision the SDK to be used:

from iris.agents import Agent, Production, Chat, Prompt
from iris.agents.tools import Calendar, Weather
from pydantic import BaseModel

class Response(BaseModel):
	text: str  
	reasoning: str  
	
weather_conversation = Chat('WeatherDiscussion')
molly_system = Prompt('MollySystemPrompt').build(scale='Fahrenheit')
alex_system = Prompt(name='AlexSystemPrompt',
					text='You are an excellent assistant')

molly = Agent(
	name='Molly',
	description='General Assistant Agent',
	system_prompt=molly_system,
	model='gpt-5',
	response_format=Response)

alex = Agent(
	name='Alex',
	description='General Assistant Agent',
	system_prompt=alex_system,
	model='gpt-5',
	tools=[Calendar, Weather],
	response_format=Response)


prod = Production(name='AgentSpace', agents=[molly, alex]).start()

molly("What's the weather in Boston today?",
	  chat=weather_conversation)

Let's start by defining the structure of an agent in the IRIS context. Every agent in IRIS is construed as a Business Process with its own Business Service. Every tool is construed as a Business Operation. Some tools come out of the box, such as one to query the IRIS database using SQL (also used for vector search) and one to call an LLM. The underlying database is used to store knowledge bases, prompts, agent configurations, production specs, user information, and logged information such as agent reasoning.

Before we dive into the Agents themselves, let's look at how Messages are handled. I converted each Pydantic BaseModel (useful for structured outputs) into an ObjectScript Message class stored in the "Agents" namespace. If the developer defines a new BaseModel with an existing name, its structure overrides the previous one. These Message classes are converted back into Pydantic BaseModels in the LLM Business Operation when it makes the call, using the appropriate libraries via Embedded Python.

class Message:
    def __init__(self, name, model:BaseModel):
        self.name = name
        self.build_message(model)
    
    def build_message(self, model:BaseModel):
        model_json = model.model_json_schema()

        cls_name = f'Agents.Message.{self.name}'
        cls_text = f'''Class {cls_name} Extends (Ens.Request, %JSON.Adaptor)

        {{

        '''

        for prop_name, prop_attribs in model_json['properties'].items():
            cls_text += f'''Property {prop_name} As {r'%Double' if prop_attribs['type'] == 'number' else r'%String'}; \n'''
        cls_text += '}'
        irispy = get_connection(True)

        stream = irispy.classMethodObject('%Stream.GlobalCharacter', '%New')
        stream.invoke('Write', cls_text)
        stream.invoke('Rewind')

        errorlog = iris.IRISReference(None)
        loadedlist = iris.IRISReference(None)

        sc = irispy.classMethodValue(
            '%SYSTEM.OBJ', 'LoadStream',
            stream,
            'ck',
            errorlog,
            loadedlist,
            0,
            '',
            f'{cls_name}.cls',
            'UTF-8'
        )

        if sc != 1:
            raise RuntimeError(irispy.classMethodValue("%SYSTEM.Status", "GetErrorText", sc))

        return 'Successful'
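The schema-to-ObjectScript mapping at the heart of build_message can be exercised on its own. In this sketch, a plain dict stands in for the output of model_json_schema(), so it runs without IRIS or Pydantic; the helper name objectscript_properties is mine:

```python
def objectscript_properties(model_json: dict) -> str:
    """Render ObjectScript Property lines from a Pydantic-style JSON schema.

    Mirrors the mapping in build_message: JSON type "number" becomes %Double,
    everything else falls back to %String.
    """
    lines = []
    for prop_name, attribs in model_json["properties"].items():
        os_type = "%Double" if attribs.get("type") == "number" else "%String"
        lines.append(f"Property {prop_name} As {os_type};")
    return "\n".join(lines)

# A schema like the one model_json_schema() would emit for
# class Response(BaseModel): text: str; confidence: float
schema = {
    "properties": {
        "text": {"type": "string"},
        "confidence": {"type": "number"},
    }
}
```

Anything that is not a JSON number, including nested objects and arrays, collapses to %String under this mapping, which is a deliberate simplification at this stage.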

Once Messages are taken care of, here's how I created my Agent class:

class Agent:
    def __init__(self,
                 name: str,
                 description: str | None = None,
                 system_prompt: Prompt | None = None,
                 model: str | None = None,
                 tools: list[Tool] | None = None,
                 response_format: BaseModel | None = None,
                 chat: Chat | None = None,
                 override: bool = True):
        conn = get_connection()
        cur = conn.cursor()
        response_format = Message(response_format.__name__, response_format)

        sql = '''SELECT TABLE_NAME
                    FROM INFORMATION_SCHEMA.Tables
                    WHERE TABLE_TYPE='BASE TABLE'
                    AND TABLE_SCHEMA='SQLUser' '''
        if 'Agent' not in pd.read_sql_query(sql, conn)['TABLE_NAME'].to_list():
            sql = '''CREATE TABLE Agent (
                        agent_name VARCHAR(200) NOT NULL PRIMARY KEY,
                        description VARCHAR(4000),
                        system_prompt_id VARCHAR(200),
                        model VARCHAR(200),
                        tools VARCHAR(4000),
                        response_format VARCHAR(4000),
                        chat_id VARCHAR(200)
                        )'''
            cur.execute(sql)
            conn.commit()

        # 2) Check if agent exists
        sql = f"SELECT * FROM Agent WHERE agent_name = '{name}'"
        agent_df = pd.read_sql_query(sql, conn)

        if agent_df is not None and len(agent_df) > 0:
            row = agent_df.iloc[0]

            if not override:
                self.name = row['agent_name']
                self.description = row['description']
                self.system_prompt = Prompt(row['system_prompt_id']) if row['system_prompt_id'] else None
                self.model = row['model']
                self.tools = row['tools']
                self.response_format = row['response_format']
                self.chat_id = row['chat_id']
                return
            sp_id = system_prompt.name if system_prompt else row['system_prompt_id']
            chat_id = chat.id if chat else row['chat_id']

            sql = f'''UPDATE Agent SET
                        description = '{description}',
                        system_prompt_id = '{sp_id}' ,
                        model = '{model}',
                        tools = '{str(tools)}',
                        response_format = '{response_format.name if response_format else None}',
                        chat_id = '{chat_id}'
                        WHERE agent_name = '{name}' '''
            cur.execute(sql)
            conn.commit()

            self.name = name
            self.description = description
            self.system_prompt = Prompt(sp_id) if sp_id else None
            self.model = model
            self.tools = tools
            self.response_format = response_format
            self.chat = chat
            return
        # 3) Agent does not exist → create or error
        if any(x is None for x in (description, model, response_format)):
            raise KeyError("Missing required fields to create a new agent.")

        sp_id = system_prompt.name if system_prompt else None
        chat_id = chat.id if chat else None
        sql = f'''INSERT INTO Agent
                    (agent_name, description, system_prompt_id, model, tools, response_format, chat_id)
                    VALUES
                    ('{name}', '{description}', '{sp_id}', '{model}', '{str(tools)}', '{response_format.name if response_format else None}', '{chat_id}')'''
        cur.execute(sql)
        conn.commit()

        self.name = name
        self.description = description
        self.system_prompt = Prompt(sp_id) if sp_id else None
        self.model = model
        self.tools = tools
        self.response_format = response_format
        self.chat = chat

    def __repr__(self) -> str:
        return f"Agent(name={self.name!r}, model={self.model!r}, system_prompt={getattr(self.system_prompt,'name',None)!r})"
    def __call__(self, chat:Chat|None=None) -> str:
        # TODO: API call to agent's business service
        pass

When an Agent is initialized for the first time, most of its parameters are required. Once an agent has been defined, it can be fetched with a simple Agent("Name") call, with its specs loaded from the database, or overridden by providing different specs.

For Prompts, I created a versioning system where prompts can be identified by their names (similar to Agents and Messages), but subsequent changes are versioned and stored, with the latest version being fetched when called. The prompt can also be "built" at runtime, which might allow users to inject details into a prompt template depending on the use case. All Prompts are persisted in tables.
 

class Prompt:
    def __init__(self, name:str, text:str|None=None, iris_args:dict[str,str]|None=None):
        conn = get_connection()
        cur = conn.cursor()

        sql = '''SELECT TABLE_SCHEMA, TABLE_NAME from INFORMATION_SCHEMA.Tables WHERE TABLE_TYPE = 'BASE TABLE' AND TABLE_SCHEMA = 'SQLUser' '''
        if 'Prompt' not in pd.read_sql_query(sql, conn)['TABLE_NAME'].to_list():

            sql = '''CREATE TABLE Prompt (
                prompt_id    VARCHAR(200) NOT NULL,
                prompt_text   VARCHAR(200) NOT NULL,
                version INT NOT NULL,
                PRIMARY KEY (prompt_id, version))'''
            cur.execute(sql)
            conn.commit()

        sql = f'''SELECT * FROM Prompt WHERE prompt_id = '{name}' ORDER BY version DESC LIMIT 1'''
        prompt_df = pd.read_sql_query(sql, conn)

        last_text = None
        version = 0
        if prompt_df is not None and len(prompt_df) > 0:
            name, last_text, version = prompt_df.iloc[0].tolist()
        self.name = name
        self.text = last_text
        self.version = version

        if not last_text and not text:
            raise KeyError(f'No prompt text found for \'{name}\', and no \'text\' was provided.')
        
        if text:
            sql = f'''INSERT INTO Prompt (prompt_id, prompt_text, version) VALUES ('{name}', '{text}', {version + 1})'''
            cur.execute(sql)
            conn.commit()
            self.text = text
            self.version += 1
    def __repr__(self) -> str:
        return f'Prompt(name={self.name!r}, version={self.version}, text={self.text!r})'
    def __str__(self) -> str:
        return self.text or ''
    def build(self, **vars) -> str:
        import string
        vars_req = {var for _, var, _, _ in string.Formatter().parse(self.text) if var}
        missing = vars_req - vars.keys()
        if missing:
            raise KeyError(f'Missing variables {sorted(missing)} for the selected prompt')
        return self.text.format(**vars)

Finally, the Production itself. The production creates the production configuration as well as the dispatch class needed to pass the REST calls to the correct Business Service (depending on which agent is being invoked).

class Production:
    def __init__(self, 
                 name: str,
                 agents: list[Agent], 
                 tools: list[Tool] | None = None):
        self.name = name
        self.agents = agents
        self.tools = tools
        self.build_production()
        self.create_dispatch()

    def create_class(self, name, text):
        irispy = get_connection(True)

        stream = irispy.classMethodObject('%Stream.GlobalCharacter', '%New')
        stream.invoke('Write', text)
        stream.invoke('Rewind')

        errorlog = iris.IRISReference(None)
        loadedlist = iris.IRISReference(None)

        sc = irispy.classMethodValue(
            '%SYSTEM.OBJ', 'LoadStream',
            stream,
            'ck',
            errorlog,
            loadedlist,
            0, 
            '',
            f'{name}.cls',
            'UTF-8'
        )

        if sc != 1:
            raise RuntimeError(irispy.classMethodValue("%SYSTEM.Status", "GetErrorText", sc))
        
    def create_gateway(self, name:str):
        cls_text = f'''Class Agents.Gateway.{name}Service Extends Ens.BusinessService
            {{
            Method OnProcessInput(pInput As Agents.Message.Request, pOutput As Agents.Message.Response) As %Status
            {{
                set sc = ..SendRequestSync("{name}", pInput, .pResponse)
                set pOutput = pResponse.%ConstructClone(0)
                Quit sc
            }}
            ClassMethod OnGetConnections(Output pArray As %String, pItem As Ens.Config.Item)
            {{
                Do ##super(.pArray, pItem)
                Set pArray("{name}") = ""
            }}
            }}
            '''
        self.create_class(f'Agents.Gateway.{name}Service', cls_text)

    def create_process(self, name:str, response_format:str):
        cls_text = f'''Class Agents.Process.{name} Extends Ens.BusinessProcessBPL
            {{
            
            ClassMethod BuildChatJSON(pText As %String) As %String
            {{
                Set arr = ##class(%DynamicArray).%New()
                Set obj = ##class(%DynamicObject).%New()
                Do obj.%Set("role","user")
                Do obj.%Set("content", pText)
                Do arr.%Push(obj)
                Quit arr.%ToJSON()
            }}
            
            /// BPL Definition
            XData BPL [ XMLNamespace = "http://www.intersystems.com/bpl" ]
            {{
            <process language='objectscript' request='Agents.Message.Request' response='Agents.Message.{response_format}'>
            <context>
            <property name='LLMResponse' type='Agents.Message.LLMResponse' instantiate='0' />
            <property name='ChatJSON' type='%String' instantiate='0' />
            </context>

            <sequence>
            <switch>
            <case name='LLM' condition='1'>
            <assign property="context.ChatJSON"
                action="set"
                languageOverride="objectscript"
                value="##class(Agents.Process.{name}).BuildChatJSON(request.Message)" />


            <call name='CallLLM' target='LLM' async='0'>
            <request type='Agents.Message.LLMRequest' >
            <assign property="callrequest.responseType" value="&quot;Agents.Message.{response_format}&quot;" action="set" />
            <assign property="callrequest.chat" value="context.ChatJSON" action="set" />
            </request>
            <response type='Agents.Message.LLMResponse' >
            <assign property="context.LLMResponse" value="callresponse" action="set"/>
            </response>
            </call>

            <assign property="response.Message" value="context.LLMResponse.message" action="set"/>
            </case>

            <default>
            <assign property="response.Message" value="&quot;Hello&quot;" action="set"/>
            </default>
            </switch>
            </sequence>
            </process>
            }}

            }}'''
        self.create_class(f'Agents.Process.{name}', cls_text)



    def build_production(self):
        prod_xml = f'''<Production Name="{self.name}" LogGeneralTraceEvents="false">
            <Description></Description>
            <ActorPoolSize>1</ActorPoolSize>
            '''
        for agent in self.agents:

            self.create_gateway(agent.name)

            self.create_process(agent.name, agent.response_format.name)

            prod_xml += f'<Item Name="{agent.name}Gateway" ClassName="Agents.Gateway.{agent.name}Service" PoolSize="1" Enabled="true"/>\n' + \
                f'<Item Name="{agent.name}" ClassName="Agents.Process.{agent.name}" PoolSize="1" Enabled="true"/>\n'
        prod_xml += '<Item Name="LLM" ClassName="Agents.Operation.LLM" PoolSize="1" Enabled="true"/>\n</Production>'
        cls_text = f"""Class {self.name} Extends Ens.Production
        {{
        XData ProductionDefinition
        {{
        {prod_xml}
        }}
        }}
        """
        self.create_class(self.name, cls_text)

    def start(self):
        # Stop existing Production
        irispy = get_connection(True)
        sc = irispy.classMethodValue("Ens.Director", "StopProduction", 10, 1)
        if sc != 1:
            print(irispy.classMethodValue("%SYSTEM.Status","GetErrorText", sc))
        

        irispy = get_connection(True)
        sc = irispy.classMethodValue("Ens.Director", "StartProduction", self.name)
        if sc != 1:
            raise RuntimeError(irispy.classMethodValue("%SYSTEM.Status", "GetErrorText", sc))

        print("Created/compiled/started:", self.name)

    def create_dispatch(self):
        cls_text = r'''
        Class Agents.REST.Dispatch Extends %CSP.REST
        {

        XData UrlMap
        {
        <Routes>
            <Route Url="/:agentName" Method="POST" Call="Agent" Cors="false" />
        </Routes>
        }

        /// POST /csp/agents/{agentName}
        ClassMethod Agent(agentName As %String) As %Status
        {
            Set %response.ContentType="application/json"

            Set body = %request.Content.Read()
            If body = "" {
                Do %response.SetStatus(400)
                Quit $$$OK
            }

            Set req = ##class(Agents.Message.Request).%New()
            Do req.%JSONImport(body)

            Set itemName = agentName _ "Gateway"
            Set sc = ##class(Ens.Director).SendRequestSync(itemName, .req, .resp)

            If sc '= 1 {
                Do %response.SetStatus(500)
                Quit $$$OK
            }

            Do %response.SetStatus(200)
            Do %response.Write(resp.%ToJSON())
            Quit $$$OK
        }

        }
        '''
        self.create_class('Agents.REST.Dispatch', cls_text)

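The XML string assembly inside build_production can also be checked in isolation. This stand-alone helper (production_xml is my name) reproduces just that part for a list of agent names:

```python
def production_xml(name: str, agent_names: list[str]) -> str:
    """Assemble the Production XData body: one Gateway + one Process item
    per agent, plus the shared LLM Business Operation."""
    xml = (f'<Production Name="{name}" LogGeneralTraceEvents="false">\n'
           '<Description></Description>\n'
           '<ActorPoolSize>1</ActorPoolSize>\n')
    for a in agent_names:
        xml += (f'<Item Name="{a}Gateway" ClassName="Agents.Gateway.{a}Service" '
                'PoolSize="1" Enabled="true"/>\n'
                f'<Item Name="{a}" ClassName="Agents.Process.{a}" '
                'PoolSize="1" Enabled="true"/>\n')
    xml += ('<Item Name="LLM" ClassName="Agents.Operation.LLM" '
            'PoolSize="1" Enabled="true"/>\n</Production>')
    return xml
```

For the Molly/Alex example at the top of the article, production_xml("AgentSpace", ["Molly", "Alex"]) yields five Item entries: two gateways, two processes, and the shared LLM operation.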
The work here is far from over: I am currently exploring ways to send requests to the system through the dispatch class without needing to operate the Management Portal (currently it seems the web server is blocking my requests before they reach the dispatch class). Once that is fixed, we need a few more elements to make this super useful:

NL2SQL Tool: This tool profiles a table, including creating descriptions, vectors, and so on. I have already created the algorithm to do this, but I intend to make it into a tool that can be called directly from Python to profile new tables, which the LLM can then leverage to create SQL statements.

SQL Business Operation: This tool would query the database and return the information. This would also be used by a higher level Vector Search and Index SDK that would query the database using SQL statements.

Passthrough: For Vector Search and NL2SQL profiles, a passthrough process would exist to serve the information to appropriate business services without involving agents.

Chat: Chat would exist as a table containing messages alongside chat_ids. A call to an Agent can be parameterized with a chat_id to dynamically query the database and construct the past conversation before making the LLM call. If no Chat is provided, the agentic flow remains standalone.
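The chat-reconstruction step described above could look roughly like this. The row shape (seq, role, content) and the helper name are my assumptions; rows stands in for the result of a SELECT against the planned Chat table filtered by chat_id:

```python
def build_conversation(rows, new_message):
    """Turn stored chat rows into an LLM-style message list, ordered by
    sequence number, then append the incoming user message.

    Each row is a (seq, role, content) tuple.
    """
    messages = [{"role": role, "content": content}
                for _, role, content in sorted(rows)]
    messages.append({"role": "user", "content": new_message})
    return messages

# Rows as they might come back from the hypothetical Chat table
rows = [
    (2, "assistant", "It is sunny in Boston."),
    (1, "user", "What's the weather in Boston?"),
]
```

Sorting by seq makes the reconstruction independent of row order in the result set, so the LLM always sees the conversation in the order it happened.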

 

Note on why IRIS is uniquely positioned to help agentic application development

A typical agentic application development flow is messy: MCP tooling, context retrieval from a database, a vector DB, and observability (reasoning logged to inform prompt optimization) spread across a separate database and platforms like Langfuse, which itself uses multiple databases under the hood. IRIS offers a single platform to develop agents end to end, observe messages and traces in the Management Portal, and enable developers in ways few (if any) platforms can. I hope to publish this project on Open Exchange once it is finalized and packaged appropriately.

I hope you have enjoyed reading this article. If you have any questions, I'm always happy to discuss ideas, especially those that can change the world for the better!
