
Question
· Dec 15, 2024

Caché evaluation version

As you can see, my version of Caché is no longer in the product list.

Quite a while ago I asked some questions about an even older version. That installation had 5 databases; at the time I only installed the main one on version 2010. I am now trying to migrate the other four and I am running into the following problem:

         - When I mount the databases, they are mounted read-only, with no option to change this either from the System Management Portal or with ^DATABASE

I already described myself back then as a dinosaur for these times; my conclusion at this point is that, since this is an evaluation license, it does not allow mounting more than one database read/write.

Is my conclusion correct? If not, could someone point me to where the problem lies?

Thanks

Question
· Dec 15, 2024

Is it possible to feed a question to the D.C. A.I. via URL parameter?

I want to provide a shortcut to the D.C. A.I. from a hotkey that will automatically populate the question field here: https://community.intersystems.com/ask-dc-ai.

Is there a way to construct this URL such that it will populate the "Ask a Programming Question" field (and better yet, execute the query)?

Thanks!

Article
· Dec 14, 2024 · 6 min read

Rivian GeoLocation Plotting with IRIS Cloud Document and Databricks


 

Plotting the gnssLocation data from my Rivian R1S across Michigan with InterSystems Cloud Document and Databricks

If you've been looking for a use case for a document database, I came to the realization that my favorite dead-simple one is the ability to query a pile of JSON right alongside my other data with SQL, without really doing much. That is the dream realized by the powerful multi-model InterSystems Data Platform, shown here in a simple notebook that visualizes the geolocation data my Rivian R1S is emitting for DeezWatts (A Rivian Data Adventure).

So here is the two-step approach: ingestion to, and visualization from, InterSystems Cloud Document, using the JDBC document driver.

InterSystems Cloud Document Deployment

For starters, I fired up a small Cloud Document deployment on the InterSystems Cloud Services Portal, with an enabled listener.

I downloaded the SSL certificate and snagged the JDBC driver and the accompanying document driver as well.

Ingestion

For ingestion, I wanted to get a grip on how to lift a JSON document from the file system and persist it as a collection in the document database over the listener; for this I wrote a standalone Java app. This was more of a utility, as the fun all happened in the notebook once the data was up there.

RivianDocDB.java

The above is quite close to Java trash, but it worked; we can see the collection in the collection browser in the deployment.

Databricks

Now, this takes a little bit of Databricks setup, but it is well worth it to be able to work with PySpark for the fun part.

I added the two InterSystems drivers to the cluster and put the certificate in the import_cloudsql_certficiate.sh cluster init script so it gets added to the keystore.

For completeness, the cluster is running Databricks Runtime 16, Spark 3.5.0, and Scala 2.12.

Visualization

So we should be set to run a PySpark job and plot where my whip has been in the subset of data I'll drag in.

We are using geopandas and geodatasets for a straightforward approach to plotting.

import geopandas as gpd
import geodatasets
from shapely.geometry import Polygon

Now, this takes a little getting used to, but here is the query to InterSystems Cloud Document using the JSON path syntax and JSON_TABLE.

dbtablequery = f"""(SELECT TOP 1000 lat, longitude
    FROM JSON_TABLE(deezwatts2 FORMAT COLLECTION, '$'
        COLUMNS (
            lat VARCHAR(20) path '$.whip2.data.vehicleState.gnssLocation.latitude',
            longitude VARCHAR(20) path '$.whip2.data.vehicleState.gnssLocation.longitude'
        ))) AS temp_table;"""

 

I did manage to find a site that made it dead simple to create the JSON path: jsonpath.com.
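
For reference, the shape those paths assume looks roughly like this (a hypothetical document; the values are made up, but the structure is inferred from the paths in the query above):

{
  "whip2": {
    "data": {
      "vehicleState": {
        "gnssLocation": {
          "latitude": "42.3314",
          "longitude": "-83.0458"
        }
      }
    }
  }
}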

Next we set up the connection to the IRIS Cloud Document deployment and read the query into a dataframe.

# Read data from InterSystems Document Database via query above
df = (spark.read.format("jdbc") \
  .option("url", "jdbc:IRIS://k8s-05868f04-a88b7ecb-5c5e41660d-404345a22ba1370c.elb.us-east-1.amazonaws.com:443/USER") \
  .option("jars", "/Volumes/cloudsql/iris/irisvolume/intersystems-document-1.0.1.jar") \
  .option("driver", "com.intersystems.jdbc.IRISDriver") \
  .option("dbtable", dbtablequery) \
  .option("sql", "SELECT * FROM temp_table;") \
  .option("user", "SQLAdmin") \
  .option("password", "REDACTED") \
  .option("connection security level","10") \
  .option("sslConnection","true") \
  .load())
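
Before plotting, a quick sanity check on what came back never hurts (not part of the original flow, just standard PySpark):

# Confirm the two columns from the JSON_TABLE projection arrived intact
df.printSchema()
df.show(5, truncate=False)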


Next we grab an available map from geodatasets; the sdoh one is great for generic use across the United States.
 

# sdoh map is fantastic with bounding boxes
michigan = gpd.read_file(geodatasets.get_path("geoda.us_sdoh"))

# Pull the Spark result down to pandas once, then build the GeoDataFrame
pdf = df.toPandas()
gdf = gpd.GeoDataFrame(
    pdf,
    geometry=gpd.points_from_xy(pdf['longitude'].astype(float), pdf['lat'].astype(float)),
    crs=michigan.crs  # "EPSG:4326"
)
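
If the points look off, checking the extent of the GeoDataFrame is a cheap diagnostic (an optional aside, not in the original notebook):

# (minx, miny, maxx, maxy) of the telemetry points, in degrees
print(gdf.total_bounds)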

Now the cool part: we want to zoom in on the area that contains the geolocation points where the R1S has driven, and for that we need a bounding box for the state of Michigan.

For this I used a really slick tool from Keene to draw the geofence bounding box, and it gives me the coordinates array!

Now that we have the coordinates array for the bounding box, we need to slap it into a Polygon object.

polygon = Polygon([
    (-87.286377, 45.9664245),
    (-81.6503906, 45.8134865),
    (-82.3864746, 42.1063737),
    (-84.7814941, 41.3520721),
    (-87.253418, 42.5045029),
    (-87.5610352, 45.8823607)
])
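
The polygon is used below to clip the basemap; if you also want to drop any stray telemetry points that fall outside the fence, geopandas can clip the points the same way (a sketch, not part of the original notebook):

# Keep only the points inside the Michigan bounding polygon
gdf_clipped = gdf.clip(polygon)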

 

Now, let's plot the trail of the Rivian R1S! This will be for about 10,000 records (I used a TOP statement in the query above to limit the results).
 

ax = michigan.clip(polygon).plot(color="lightblue", alpha=0.5, linewidth=0.8, edgecolor='gray')
ax.axis('off')
ax.annotate("Data: Rivian R1S Telemetry Data via InterSystems Document Database",
            xy=(0.01, .085), xycoords='figure fraction', fontsize=14, color='#555555')

gdf.plot(ax=ax, color="red", markersize=1.50, alpha=0.5)
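
To keep the rendered map around outside the notebook, the underlying matplotlib figure can be saved (the filename here is hypothetical):

# Persist the plot; ax.figure is the matplotlib Figure behind the axes
ax.figure.savefig("rivian_r1s_michigan.png", dpi=150, bbox_inches="tight")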

 

And there we have it... Detroit, Traverse City, Silver Lake Sand Dunes, Holland, Mullet Lake, Interlochen... Pure Michigan, Rivian style.


 

Article
· Dec 14, 2024 · 4 min read

Chapter 50: File Input/Output

This page describes working with sequential files on the IRIS data platform.

Important: In most cases you can use the APIs provided by the %Library.File class and will not need the details on this page. See Using %Library.File.

Introduction

All operating systems treat disk I/O files as sequential files. Windows systems treat printers as sequential file I/O devices (unless the printer is connected through a serial communications port). UNIX® systems treat printers as terminal I/O devices. For more details on printers, see Printers.

This section discusses how IRIS handles sequential files. It provides an introduction to sequential file I/O and descriptions of the related commands.

Article
· Dec 13, 2024 · 3 min read

Chapter 49: Terminal Input/Output - Escape Sequence Programming

Escape Sequence Programming

The ANSI standard for escape sequences makes programming intelligent terminals practical. The escape character and all characters that follow it in a string are not displayed on the screen, but they do update $X and $Y. Use WRITE * statements to send escape sequences to the terminal, and keep $X and $Y current by setting them directly.
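
For example, a minimal ObjectScript sketch (assuming an ANSI/VT100-compatible terminal) that clears the screen and homes the cursor with WRITE *, then resets $X and $Y to match:

 WRITE *27,*91,*50,*74   ; ESC [ 2 J : clear the screen
 WRITE *27,*91,*72       ; ESC [ H : move the cursor home
 SET $X=0,$Y=0           ; keep $X and $Y in sync with the actual cursor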

The ANSI standard establishes a standard syntax for escape sequences. The effect of a specific escape sequence depends on the type of terminal in use.

Look in $ZB for an incoming escape sequence after each READ. IRIS places ANSI-standard escape sequences, and any other escape sequences that follow the ANSI form, into $ZB. IRIS recognizes two forms of escape sequences:
