Last week saw the launch of the InterSystems IRIS Data Platform in sunny California.
For the engaging eXPerience Labs (XP-Labs) training sessions, my first customer and favourite department (Learning Services) was working hard behind the scenes, assisting and supporting us all.
Before the event, Learning Services set up the most complicated part of public cloud :) "credentials-for-free" for a smooth and fast experience for all our customers at the summit. They did extensive testing beforehand so that we could all spin up cloud infrastructures and test the new features of the InterSystems IRIS data platform without glitches.
The reason why they were so agile, nimble & fast in setting up all those complex environments is that they used technologies we provided straight out of our development furnace.
OK, I'll be honest, our Online Education Manager, Douglas Foster and his team have worked hard too and deserve a special mention. :-)
Last week, at our Global Summit 2017, we had nine XP-Labs over three days. More than 180 people had the opportunity to test-drive new products & features.
The labs were repeated each day of the summit, and customers could follow the training courses with a BYOD approach, as everything ran inside a browser (and still does in the online training courses that will be provided at https://learning.intersystems.com/).
Here is the list of the XP-Labs given, with some facts:
1) Build your own cloud
Cloud is about taking advantage of on-demand resources and the scalability, flexibility, and agility they offer. The XP-Lab focused on the process of quickly defining and creating a multi-node infrastructure on GCP. Using InterSystems Cloud Manager, students provisioned a multi-node infrastructure with a dynamically configured InterSystems IRIS data platform cluster that they could test by running a few commands. They also had the opportunity to unprovision it all with one single command, without having to click through a time-consuming web portal.
I think it is important to understand that each student was actually creating their own virtual private cloud (VPC), with their own dedicated resources and their own dedicated InterSystems IRIS instances. Every student was independent of the others, with their own cloud solution; there was no sharing of resources.
Numbers: we had more than a dozen students per session. Each student had their own VPC with three compute nodes. With the largest class of 15 people, we ended up with 15 individual clusters: a total of 45 compute nodes provisioned during the class, with 45 InterSystems IRIS instances running and configured in small shard clusters. That made a total of 225 storage volumes: following our best practices, each node gets default volumes for the sharded DB, the journal (JRN) and write image journal (WIJ) files, and the Durable %SYS feature (more on this in a later post), plus the default boot OS volume.
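To give a sense of how little definition this takes: ICM deployments are driven by two small JSON files, defaults.json (provider, region, machine type, credentials) and definitions.json (node roles and counts). A sketch of a definitions.json for a three-node shard cluster like each student's might look as follows (the role names and counts here are illustrative, not the lab's actual files):

```json
[
    { "Role": "DM", "Count": "1" },
    { "Role": "DS", "Count": "2" }
]
```

With those files in place, `icm provision` carves out the infrastructure and `icm run` deploys the InterSystems IRIS containers onto it; `icm unprovision` is the single tear-down command mentioned above.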
2) Hands-On with Spark
Apache Spark is an open-source cluster-computing framework that is gaining popularity for analytics, particularly predictive analytics and machine learning. In this XP-Lab students used InterSystems' connector for Apache Spark to analyze data that was spread over a multi-node sharded architecture of the new InterSystems IRIS data platform.
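To give a flavour of the exercise: the sketch below is not the lab's actual code and is not runnable as-is (it assumes a live Spark cluster, the InterSystems JDBC driver on Spark's classpath, and placeholder host, table and credentials), but it shows the shape of pulling IRIS data into a Spark DataFrame via Spark's generic JDBC data source:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("iris-spark-demo").getOrCreate()

# Placeholder endpoint and table -- substitute your own instance details.
df = (spark.read.format("jdbc")
      .option("url", "jdbc:IRIS://iris-host:51773/USER")  # InterSystems JDBC URL
      .option("dbtable", "Demo.Patients")                  # hypothetical table
      .option("user", "student")
      .option("password", "secret")
      .load())

# A simple aggregation executed across the cluster.
df.groupBy("Gender").count().show()
```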
Numbers: 42 Spark clusters were pre-provisioned by one person (thank you again, Douglas). Each cluster consisted of 3 compute nodes, for a total of 126 node instances and 630 storage volumes holding 6.3TB of storage.
The InterSystems person who pre-provisioned the clusters ran multiple InterSystems Cloud Manager instances in parallel to prepare all 42 clusters. The same Cloud Manager tool was also used to reset the InterSystems IRIS containers between sessions (stop/start/drop table) and, at the end of the summit, to unprovision and destroy all clusters so as to avoid unnecessary charges.
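The parallel pre-provisioning pattern is plain shell scripting: launch one provisioner per cluster in the background, then wait for them all. In this sketch, provision_cluster is a stub (it just echoes) standing in for the real, credentialed Cloud Manager invocation:

```shell
#!/bin/sh
# provision_cluster is a stand-in for the real "icm provision" run
# inside a Cloud Manager container; here it only echoes its argument.
provision_cluster() {
    echo "provisioning cluster $1"
}

# Fire off all 42 provisioners in parallel, then wait for completion.
for i in $(seq 1 42); do
    provision_cluster "$i" &
done
wait
echo "all 42 clusters provisioned"
```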
3) RESTful FHIR & Messaging in Health Connect
Students used Health Connect messaging and FHIR data models to transform and search for clinical data. Various transformations were applied to various messages.
Numbers: two paired containers per student were used for this class: one container provided the web-based Eclipse Orion editor, and the other ran the actual Health Connect instance. The containers ran over six different nodes managed by Docker Swarm as the orchestrator.
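As a sketch of how such paired containers can be declared for Swarm (the image names and ports below are placeholders, not our published images), a Compose-format stack file does the job:

```yaml
version: "3"
services:
  editor:
    image: example/orion-editor:latest    # placeholder: web-based Eclipse Orion
    ports:
      - "8081:8081"
  healthconnect:
    image: example/healthconnect:latest   # placeholder: Health Connect instance
    ports:
      - "57772:57772"
```

A file like this can be deployed onto the Swarm with `docker stack deploy -c stack.yml student01`, one stack per student.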
So how did our team achieve all the above? How were they able to run all those training labs on the Google Cloud Platform? Did you know there was a backup plan (you never know in the cloud) to run on AWS? And did you know we could just as easily run on Microsoft Azure? How could all those infrastructures and instances be provisioned and configured so quickly, within practical lab sessions of no more than 20 minutes? Furthermore, how can we quickly and efficiently remove hundreds or thousands of resources without wasting hours clicking on web portals?
As you must have gathered by now, our Online Education team used the new InterSystems Cloud Manager to define, create, provision, deploy and unprovision the cloud infrastructures and services running on top of it.
Secondly, everything customers saw, touched & experienced ran in containers. What else these days? :-)
InterSystems Cloud Manager is a public, private and on-premises cloud tool that allows you to provision the infrastructure, then configure and run InterSystems IRIS data platform instances on top of it.
Out of the box, Cloud Manager supports the top three public IaaS providers:
- Google Cloud Platform (GCP),
- Amazon Web Services (AWS) and
- Microsoft Azure,
but it can also assist you with a private and/or on-premises solution, as it supports
- the VMware vSphere API and
- pre-existing server nodes (either virtual or physical).
When I said "out of the box" above, I did not lie :)
InterSystems Cloud Manager comes packaged in a container so that you do not have to install anything and don't have to configure any software or set any variable in your environment. You just run the container, and you're ready to provision your cloud. Don't forget your credentials, though ;-)
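Concretely, "just run the container" looks something like this (the image name and tag are illustrative placeholders, not an official repository path):

```shell
# Illustrative only: substitute the actual Cloud Manager image you obtain
# from InterSystems. Mount the directory holding defaults.json/definitions.json.
docker run -it --rm \
    -v "$PWD":/work -w /work \
    example/icm:2018.1 /bin/bash
# Inside the container's shell, the icm command is ready to use.
```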
The InterSystems Cloud Manager, although still in its MVP (minimum viable product) infancy, has already proven itself. It allows us to quickly run on and test various IaaS providers, provision a solution on-premises, or just carve out a cloud infrastructure according to our definition.
I like to define it as a "batteries included but swappable" solution.
If you already have an installation and configuration solution developed with configuration management (CM) tools (Ansible, Puppet, Chef, Salt or others), and perhaps you want to test an alternative cloud provider, Cloud Manager allows you to create just the cloud infrastructure, while you keep building your systems with your CM tool. Just be careful of the unavoidable configuration drift over time.
On the other hand, if you want to start embracing a more DevOps type approach, appreciate the difference between the build phase and the run phase of your artefact, become more agile, support multiple deliveries and possibly deployments per day, you can use InterSystems' containers together with the Cloud Manager.
The tool can provision and configure both the new InterSystems IRIS data platform sharded cluster and traditional architectures (ECP application servers + data server, with or without InterSystems Mirroring).
At the summit, we also had several technical sessions on Docker containers and two on the Cloud Manager tool itself. All sessions registered a full house, and I heard that many other sessions were packed too. I was particularly impressed with the Docker container introductory session on Sunday afternoon, where I counted 75 people; I don't think we could have fitted anybody else in the room. I thought people would have gone to the swimming pool :) Instead, we had a clear sign that our customers like innovation and are keen to learn.
Below is a picture depicting how our Learning Services department allowed us to test-drive the Cloud Manager at the XP-Lab. They ran a container based on the InterSystems Cloud Manager and added an NGINX web server so that we could connect to it over HTTP. The web server delivers a simple single page that loads a browser-based editor (Eclipse Orion); at the bottom of the screen, the student is connected directly to the shell of the same container (GoTTY over a WebSocket) so that they can run the provisioning and deployment commands. This training container, with all these goodies :) runs on a cloud -of course- and, thanks to the pre-installed InterSystems Cloud Manager, students can provision and deploy a cluster solution on any cloud (just provide credentials).
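For the curious, the shell-in-a-browser wiring is standard NGINX WebSocket proxying. A minimal sketch (the path and port are my assumptions; GoTTY listens on port 8080 by default) could be:

```nginx
# Forward /shell/ to the local GoTTY process, upgrading to a WebSocket.
location /shell/ {
    proxy_pass http://127.0.0.1:8080/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```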
To learn more about InterSystems Cloud Manager:
- an introductory video: https://learning.intersystems.com/course/view.php?id=756
- the Global Summit session: https://learning.intersystems.com/mod/page/view.php?id=2864
And for InterSystems & containers:
here are some of the sessions from GS2017