Question
· Jan 24, 2026

Sustainable Medical Scrubs: Eco-Conscious Choices Shaping the Future of Healthcare Apparel

As sustainability becomes a global priority, the healthcare industry is increasingly reevaluating its environmental impact—including the apparel worn by medical professionals every day. Eco-friendly premium medical scrubs are emerging as a responsible alternative to traditional uniforms, offering durability, comfort, and performance while reducing harm to the environment. This shift reflects a broader commitment to sustainability without compromising professional standards or functionality.

Why Sustainability Matters in Healthcare Apparel

Healthcare facilities consume significant resources daily, from energy and water to disposable supplies and textiles. Medical scrubs, which are frequently washed and replaced, contribute to textile waste and carbon emissions. Sustainable scrubs address these concerns by focusing on responsible materials, ethical manufacturing, and longer product lifecycles.

By choosing eco-friendly scrubs, hospitals and clinics can align their apparel choices with environmental responsibility while reinforcing a positive public image. For healthcare professionals, wearing sustainable uniforms also reflects personal values and awareness of global environmental challenges.

Eco-Friendly Fabrics Used in Medical Scrubs

The foundation of sustainable scrubs lies in fabric selection. Modern eco-friendly medical scrubs are made using innovative materials designed to minimize environmental impact.

Organic cotton is a popular choice, grown without harmful pesticides or synthetic fertilizers. It offers natural breathability and softness while reducing soil and water pollution. However, organic cotton is often blended with other fibers to improve durability and performance.

Recycled polyester is another key material in sustainable scrubs. Made from post-consumer plastic waste such as bottles, recycled polyester reduces landfill waste and energy consumption compared to virgin polyester production. It provides strength, color retention, and easy-care properties ideal for healthcare environments.

Bamboo and TENCEL™ fibers are also gaining popularity. These materials are derived from renewable resources and processed using closed-loop systems that limit chemical waste. They offer excellent moisture-wicking, breathability, and antimicrobial properties, making them well-suited for long medical shifts.

Performance Without Compromise

A common misconception about eco-friendly scrubs is that sustainability comes at the expense of performance. In reality, modern sustainable scrubs are engineered to meet the same rigorous standards as traditional medical apparel.

Eco-conscious scrubs often feature stretch blends, moisture-wicking technology, and wrinkle resistance, ensuring comfort and mobility throughout demanding shifts. Many sustainable fabrics are designed to withstand frequent laundering while maintaining shape, color, and softness, reducing the need for frequent replacements.

Longer-lasting scrubs contribute directly to sustainability by minimizing textile waste and lowering overall consumption.

Ethical Manufacturing and Responsible Production

Sustainability extends beyond materials to how scrubs are produced. Many eco-friendly scrub manufacturers prioritize ethical labor practices, reduced water usage, and lower carbon emissions throughout the supply chain.

Responsible production includes minimizing fabric waste, using non-toxic dyes, and optimizing manufacturing efficiency. For hospitals and clinics, partnering with suppliers committed to ethical standards strengthens corporate responsibility initiatives and supports transparent sourcing.

Benefits for Hospitals and Clinics

Adopting eco-friendly medical scrubs offers practical and reputational advantages for healthcare institutions. Sustainable uniforms support environmental goals, enhance brand credibility, and demonstrate leadership in responsible healthcare practices.

From an operational perspective, durable eco-friendly scrubs reduce long-term costs by extending garment lifespan. Standardizing sustainable apparel also simplifies procurement while reinforcing a consistent, professional appearance across departments.

Supporting Healthcare Professionals

For healthcare professionals, eco-friendly scrubs offer comfort with purpose. Soft, breathable fabrics reduce skin irritation, while stretch and moisture control enhance performance during long shifts. Knowing their uniforms are environmentally responsible can also contribute to job satisfaction and workplace pride.

The Future of Sustainable Healthcare Apparel

As environmental awareness continues to grow, sustainable medical scrubs are becoming a standard rather than a niche option. Advances in textile innovation and manufacturing practices will further improve performance, affordability, and availability.

Question
· Jan 23, 2026

X509 Certificate - how to save it?

Given a string beginning with "-----BEGIN CERTIFICATE-----" and ending with "-----END CERTIFICATE-----", how do I save the string as an X.509 certificate in ObjectScript?

Article
· Jan 23, 2026 · 4 min read

Circumventing Default SQL Upper-Case Collation Behavior post IRIS 2024.2.0

We were testing upgrades on our internal systems recently and were initially surprised to see a change in some instances of SQL behavior. On further investigation, the changes make a lot of sense and are easy to adapt to. But, to hopefully save you some time and provide additional information in case you too find this curious or interesting, we thought we would share our understanding of the changes and how we worked around them to preserve the previous behavior where desired.

Starting in IRIS version 2024.2.0, slight tweaks were made to SQL UNION and SELECT DISTINCT behavior to consistently enforce the implicit DISTINCT function. The upgrade note is available in the Upgrade Impact Checklist under "DP-429949: Apply collation for UNION".

Background

Previously, for UNION actions the field values were taken in their given state, and the column was assumed to use EXACT collation. This happened regardless of the fields' collation or the namespace default settings. This meant that the casing provided was the casing returned. In the new behavior, the default field collation for that column, which is typically SQLUPPER, will be applied to each value.

A similar change is seen with SELECT DISTINCT calls. Previously, a SELECT DISTINCT call returned the provided casing. In the new behavior, the default field collation for that column, which is typically SQLUPPER, will be applied to each value.

What to do if I see this come up?

If you would prefer to retain the previous provided casing behavior, like we needed to in a few distinct areas, there are a number of ways to enforce this.

(1) Add %EXACT to your fields.

Adding %EXACT to either UNION or SELECT DISTINCT queries will preserve the original behavior. %EXACT can be added to any columns for which you want to prevent the default SQLUPPER behavior.

See below for examples.

Returned results from a simple SELECT DISTINCT query prior to IRIS 2024.2.0:

SELECT DISTINCT FirstName FROM DNames
FirstName
David
Dimitri
Daniel
Darren
Dany

 

Versus the returned results with IRIS versions 2024.2.0 or later:

FirstName
DAVID
DIMITRI
DANIEL
DARREN
DANY

 

Instead, in IRIS 2024.2.0 or newer, use SELECT DISTINCT with %EXACT:
NOTE: Because %EXACT makes the returned column an expression, you will need to add "AS <column name>" to retain the column name in the result.

SELECT DISTINCT %EXACT FirstName as FirstName FROM DNames
FirstName
David
Dimitri
Daniel
Darren
Dany

 

The same %EXACT principle applied to a UNION query:

SELECT %EXACT FirstName as AllDNames FROM DNames
UNION
SELECT %EXACT LastName FROM DNames
AllDNames
David
Dimitri
Daniel
DeStefanis
De Lorenzo

 

(2) Change from using UNION to UNION ALL.

UNION ALL returns the results without applying the default SQLUPPER collation.

It is important to note that UNION and UNION ALL have different behavior. UNION filters out duplicate rows in cases where all columns in the results are the same, whereas UNION ALL does not. If your dataset has no duplicate rows (for example, if each row has a unique %GUID or other identifier), then UNION ALL will return the same set of rows as UNION.

However, if your data does contain duplicates that need to be filtered out, UNION ALL is not a valid replacement.

See below for examples.

Returned results from a simple UNION query prior to 2024.2.0:

SELECT FirstName as AllDNames FROM DNames
UNION
SELECT LastName FROM DNames
AllDNames
David
Dimitri
Daniel
DeStefanis
De Lorenzo

 

Versus the returned results with IRIS versions 2024.2.0 or later:

AllDNames
DAVID
DIMITRI
DANIEL
DESTEFANIS
DE LORENZO

 

Instead, in IRIS 2024.2.0 or newer, use UNION ALL:

SELECT FirstName as AllDNames FROM DNames
UNION ALL
SELECT LastName FROM DNames
AllDNames
David
Dimitri
Daniel
DeStefanis
De Lorenzo

 

(3) Override the default SQLUPPER collation behavior namespace-wide.

From the documentation, "Each namespace has a current string collation setting. This string collation is defined for the data type in %Library.String. The default is SQLUPPER. This default can be changed." The documentation provides instructions for how to easily change this default.

Please note that because this change is namespace-wide, you should be careful to test thoroughly for unintended effects when making the change.
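
As a rough sketch of what that change can look like (this reflects my reading of the documentation, so please verify the exact call against the documentation for your IRIS version; existing classes and indexes may need to be recompiled and rebuilt to pick up the new default):

// Run in the target namespace; changes the namespace default string collation.
// "EXACT" is shown here because it preserves the provided casing; substitute the collation you need.
DO SetEnvironment^%apiOBJ("collation","%Library.String","EXACT")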

Article
· Jan 23, 2026 · 3 min read

Lean Integration via OS Delegation

Senior engineering is defined not by the volume of code produced, but by the strategic avoidance of it. In complex integration environments, the tendency to utilize general-purpose libraries for every niche requirement introduces unnecessary overhead. True architectural maturity requires a commitment to "minimalist tooling"—prioritizing resilient, battle-tested system utilities over custom logic. This assessment examines our PGP encryption/decryption pipeline to demonstrate how shifting from application-level libraries to OS-native delegation enhances system durability.

Current State: The Heavyweight Implementation

Our current MPHP.HS.PGPUtil class represents a high-friction design. While functional, the existing InterSystems IRIS Business Process is burdened by a significant dependency footprint. By bridging into Embedded Python to utilize the pgpy library, we have introduced a "weighty" stack that necessitates the Python runtime, third-party library management, and specific cryptographic binaries.

The technical core currently forces the application to manage internal cryptographic states, manual file I/O, and memory heap allocation:

Python Implementation:

ClassMethod DecryptPGP(EncryptedFilePath As %String, TargetFilePath As %String, PrivateKeyPath As %String, Passphrase As %String) As %String [ Language = python ] {
    import pgpy
    try:
        # Read the encrypted payload into memory
        with open(EncryptedFilePath, "rb") as enc_file:
            encrypted_content = enc_file.read()
        # Load the private key and parse the encrypted PGP message
        pgp_key, _ = pgpy.PGPKey.from_file(PrivateKeyPath)
        encrypted_message = pgpy.PGPMessage.from_blob(encrypted_content)
        # Unlock the key with the passphrase and decrypt the message
        with pgp_key.unlock(Passphrase) as unlocked_key:
            decrypted_message = unlocked_key.decrypt(encrypted_message)
        # Write the decrypted content to the target file
        with open(TargetFilePath, "wb") as dec_file:
            dec_file.write(decrypted_message.message)
        return 1
    except Exception as e:
        return f"ERROR: {str(e)}"
}

 

Critical Friction Points

* Maintenance Debt: Every third-party library increases the security attack surface, requiring constant auditing and patching.

* Resource Inefficiency: Loading file data into the application’s memory heap for processing is less efficient than stream-based OS utilities.

* Tool Oversizing: We are utilizing a general-purpose language to solve a narrow, standardized cryptographic task.

Proposed State: Native Utility Delegation

To align with modern programming guidelines, we must move from "writing" to "delegating." In this context, the operating system provides a high-performance, battle-tested solution: GnuPG (GPG).

By replacing custom Python methods with direct OS-level calls, we eliminate the external dependency footprint entirely. Native utilities are optimized to stream data directly from the disk, significantly improving memory efficiency and reducing our codebase to a pure orchestrator.

Command Line Implementation:

ClassMethod DecryptPGP(EncryptedFilePath As %String, TargetFilePath As %String, PrivateKeyPath As %String, Passphrase As %String) As %String {
    // Build the command to delegate to the OS-native GPG utility.
    // Note: gpg --decrypt uses keys already imported into the GPG keyring, so the private key
    // referenced by PrivateKeyPath is assumed to have been imported beforehand (e.g. gpg --import).
    Set cmd = "gpg --batch --yes --passphrase " _ Passphrase _ " --output " _ TargetFilePath _ " --decrypt " _ EncryptedFilePath
    // Execute at the system level; $zf(-1) returns the OS exit code
    Set exitCode = $zf(-1, cmd)
    // Return 1 for success (exit code 0), otherwise an error string containing the exit code
    Return $Select(exitCode = 0: 1, 1: "ERROR: GPG exit code " _ exitCode)
}
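
For reference, a minimal usage sketch of the method above (the file paths and passphrase are placeholders, and the private key referenced by PrivateKeyPath is assumed to already be imported into the GPG keyring of the OS user running IRIS):

Set result = ##class(MPHP.HS.PGPUtil).DecryptPGP("/data/in/report.pgp", "/data/out/report.txt", "/keys/private.asc", "myPassphrase")
If result '= 1 { Write "Decryption failed: ", result, ! }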

 

Architectural Conclusion

Shifting to an OS-delegation model reinforces two vital pillars of our design philosophy:

* Suitability: Purpose-built binaries outperform general-purpose libraries for standardized tasks.

* Orchestration: Our code should serve as a thin management layer for existing utilities rather than a re-invention of established tools.

The result is a leaner, more secure, and highly maintainable enterprise data pipeline, while freeing developer talent to focus on new technical problems.

Article
· Jan 23, 2026 · 6 min read

Connecting IRIS to Apache Kafka securely with SSL and SASL

1-) Introduction

This article reflects my recent experience connecting an IRIS Business Operation to a secure Kafka server, using SSL to encrypt the communication and SASL (Simple Authentication and Security Layer) authentication with SCRAM-SHA-512 password hashing.

 

2-) Background Information

Kafka implementations can be made extremely secure by using encryption and ACLs (Access Control Lists) to control access to topics and other resources in the cluster.

2.1-) SSL Encryption

The first part, encryption, can be easily implemented by configuring SSL on the Kafka Listeners.

The instructions and details can be found at this link:  https://kafka.apache.org/41/security/encryption-and-authentication-using-ssl/

From the link, it is important to note that Kafka can implement client authentication through mutual authentication of the SSL tunnel, and that this is the default behavior of the Kafka Listener.

However, to implement SASL (Simple Authentication and Security Layer) Authentication with SCRAM-SHA-512 (or SCRAM-SHA-256) hashing, the SSL Client Authentication must be disabled on the Listener.

To disable the Client Authentication on the SSL Listener, it is necessary to change the Kafka Server configuration:

ssl.client.auth=requested

ssl.endpoint.identification.algorithm=

The default value for ssl.client.auth is required, which means that a client connecting to Kafka has to present a digital certificate to authenticate. Although extremely secure, SSL authentication of the clients connecting to Kafka requires the overhead of issuing digital certificates for, and managing the expiration of, each individual client that is going to connect. Most customers prefer not to deal with this overhead and instead configure Kafka to use SASL.

The parameter ssl.endpoint.identification.algorithm must be set to an empty string.

2.2-) SASL Authentication

To enable SASL the required information can be found at this link: https://kafka.apache.org/41/security/authentication-using-sasl/

On my test environment I configured a Listener with the SASL_SSL security protocol on Kafka, using the SCRAM-SHA-512 mechanism for password hashing.

listeners=SASL_SSL://:9093

advertised.listeners=SASL_SSL://<kafka-hostname>:9093

sasl.enabled.mechanisms=SCRAM-SHA-512

On my test environment, with three Kafka Servers on the cluster, this configuration needs to be present on all servers.
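
Depending on how your brokers authenticate to each other, each broker may also need JAAS credentials for the SCRAM mechanism on that listener. As a rough sketch based on the Kafka SASL documentation (the username and password are placeholders and must correspond to SCRAM credentials created on the cluster):

listener.name.sasl_ssl.scram-sha-512.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="broker" \
  password="<passwd>";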

2.3-) Authorizer and ACLs

To enable ACLs an Authorizer needs to be configured on each Broker and Controller of the Kafka cluster.

The relevant information can be found at this link: https://kafka.apache.org/41/security/authorization-and-acls/

My tests were executed on a cluster in KRaft mode, so I had to add the following properties to the configuration files for the Kafka Brokers and Controller:

authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer

allow.everyone.if.no.acl.found=true

super.users=User:broker;User:client

The authorizer.class.name points to the default Authorizer that is shipped with Kafka. 

The allow.everyone.if.no.acl.found=true setting is a permissive fallback that allows any authenticated user to access a resource that has no specific ACL attached to it. The default value of this parameter is false, which will deny access to resources without a specific ACL.

The super.users=User:broker;User:client setting is a list of Principals (aka Users) that are considered super users and are allowed to access any resource.
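
For reference, individual ACLs are managed with the kafka-acls.sh CLI. A rough example that grants a user read and write access to a topic (the principal, topic, and config file are placeholders for your environment):

$ kafka-acls.sh --bootstrap-server <Server URI> \
  --command-config <Config File for Connection> \
  --add --allow-principal User:client \
  --operation Read --operation Write \
  --topic <TOPIC>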

2.4-) Create SCRAM Credentials

It is also important to create the SASL credentials on the Kafka Brokers for each user that is going to authenticate on the Cluster, by using the Kafka CLI:

$ kafka-configs.sh --bootstrap-server <Server URI> \
  --alter --add-config 'SCRAM-SHA-512=[password=<passwd>]' \
  --entity-type users --entity-name <USER> \
  --command-config <Config File for Connection>

It is important to create the credentials with the same level of encryption that is configured on the Listener. In my tests, I configured the Listener with SCRAM-SHA-512, but it is possible to use SCRAM-SHA-256 instead.

You can verify that the SCRAM Credentials are created on the cluster by running this Kafka CLI:

$ kafka-configs.sh --bootstrap-server <Server URI> \
  --describe \
  --entity-type users \
  --command-config <Config File for Connection>

Each User (aka Principal) with associated SASL credentials will be listed in the output, like the one below:

SCRAM credential configs for user-principal '<USER>' are SCRAM-SHA-512=iterations=4096
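
The <Config File for Connection> passed via --command-config in the commands above is a standard Kafka client properties file. A minimal sketch for a SASL_SSL cluster (the truststore path, username, and password are placeholders for your environment):

security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="client" \
  password="<passwd>";
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=<passwd>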

At this point, we have done all the configuration required on the Kafka side.

3-) Configuring IRIS

The tests were executed using an IRIS version soon to be GA, 2025.1.3 (Build 457U), which includes some enhancements necessary to use advanced Kafka security options like SASL SCRAM-SHA-512 or SCRAM-SHA-256.

To test this configuration, I created a simple Business Operation, in a Production in IRIS.

3.1-) Create a Production

Create a new Interoperability Production in your namespace; if necessary, create a namespace that is capable of running Interoperability Productions.

Create a new Production, and select the options to enable Testing and Trace Events.

3.2-) Create a Simple Business Operation

Create a new Business Operation using the out-of-the-box class EnsLib.Kafka.Operation. I named mine “Kafka Producer”; select the “Enable Now” option.

On the Business Operation Settings page, make the necessary changes in the Kafka Settings section.

The settings below reflect my test environment; make the necessary adjustments for your environment:

  • On the Servers setting, change the broker hostname and port number to point to a Kafka Listener that has the SASL_SSL security protocol enabled;
  • Credentials points to the Interoperability Credentials entry holding the user ID and password created on the Kafka broker for a user with SCRAM credentials (see details below);
  • SecurityProtocol has to be set to SASL_SSL;
  • SASLMechanism is set to SCRAM-SHA-512, or the mechanism that is configured in your environment;
  • TrustStoreLocation and KeyStoreLocation are the files containing the SSL TrustStore and KeyStore that are going to be used to create the SSL tunnel. Those files are created on the Kafka server and should be copied over in JKS or P12 format;
  • TrustStoreCredentials, KeyStoreCredentials, and KeyCredentials point to Interoperability Credentials entries that hold the passwords to access the TrustStore, KeyStore, and Key files (see below).

 

3.3-) Create the Credentials

On the Credentials page:

Create two credentials:

The first credential is used to authenticate on the Kafka broker with the SCRAM credentials created in step 2.4 above.

The second credential holds the password used to access the TrustStore, KeyStore, and Key files.

In my case all files have the same password, so just one credential opens all files.

3.4-) Sending a Test Message

Now we can test the configuration settings and send a Test Message, using the Test function of the Business Operation.

If everything is configured properly you should see the Test Results instantly. If the Test Results progress bar takes some time to progress, this is an indication that some misconfiguration has occurred, and you need to double-check your settings.
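
If you prefer to send a test message from a terminal session instead of the Test page, a rough sketch along these lines should work once Testing is enabled on the Production (I am assuming the EnsLib.Kafka.Message property names topic and value here; check the class reference for your IRIS version):

// Run in the namespace of the Production; routes one record to the "Kafka Producer" operation
Set msg = ##class(EnsLib.Kafka.Message).%New()
Set msg.topic = "<TOPIC>"
Set msg.value = "Hello from IRIS"
Set sc = ##class(EnsLib.Testing.Service).SendTestRequest("Kafka Producer", msg, .resp, .sessionId)
If 'sc Do $System.Status.DisplayError(sc)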

Clicking on the Visual Trace shows the results:

 

And if you have a Kafka UI available you can see the message on the Topic, or you can view the messages using the Kafka CLI:

$ kafka-console-consumer.sh \
  --bootstrap-server <Server URI> \
  --topic <TOPIC> --from-beginning \
  --consumer.config <consumer.config.properties>

The above CLI shows all messages on the Topic <TOPIC> and runs until it is interrupted with Ctrl-C.

This concludes these quick tests.

Best of luck!
