Hey everyone,

Let he who is without sin cast the first... line of AI-generated code! 😜

So, I've stumbled upon a fun fact: it turns out that if you whisper the magic word "PICK" to Google Gemini, it'll cough up some code snippets. Forget I said D3; that was so last week, and Gemini refers to some other arbitrary platform.

Does anyone else have some hilarious or surprisingly useful AI-assisted coding stories to share?

Happy Friday, may your bugs be shallow and easily squashed! 🍻

Stefano



------------------------------
Stefano Gallotta
Managing Member
Simply Red Open Systems
Milnerton ZA
------------------------------

Hey Stefano, 

Have you tried asking the AI the best way to ask it questions?



------------------------------
Stuart Boydell
AU
------------------------------

Hi @Stuart Boydell

No, but I have watched a few ChatGPT YouTube videos, which gave me a heads-up.

 



------------------------------
Stefano Gallotta
Managing Member
Simply Red Open Systems
Milnerton ZA
------------------------------

Dear all,

After following this interesting discussion about AI and PICK/MultiValue code, I would like to share an approach that I have developed which goes far beyond simply generating BASIC code with ChatGPT or Gemini. I want to clarify that this approach is specifically designed for analyzing information from D3 databases using natural language.

The MCP-Pick Revolution: Connecting Legacy Systems with Conversational AI

I have developed a bidirectional bridge between Rocket D3 systems and advanced AI models such as Claude and Gemini, using the Model Context Protocol (MCP). This technology allows our MultiValue systems to speak directly with AI agents through natural language, without the need for migration or rewriting.

How does my solution work?

  1. Structured data exposure: I use standard AQL commands along with mvstoolkit to expose PICK files as HTTP endpoints that return structured JSON (see the sketch after this list):

     LIST INVOICE NAMEPROD NAMEBUSNIESS CODE CUSTOMER CODEPROD PRODQUANTITY PRODTOTAL DATEINVOICE

  2. Orchestration with n8n: I implement automated workflows in n8n that invoke these endpoints and send the data to the MCP server through Server-Sent Events (SSE).

  3. Bidirectional communication: The AI agent (Claude/Gemini) processes this data and can respond to complex queries such as "What is the total billing by customer in April?" with contextualized analysis.
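
As a rough sketch of the MCP side (the endpoint URL, port, and tool name here are illustrative assumptions, not the exact names in my implementation), the bridge can be quite small in Python, using the official MCP Python SDK and assuming the AQL LIST output above is already served as JSON over HTTP:

    import requests                            # pip install requests
    from mcp.server.fastmcp import FastMCP     # pip install mcp

    # Hypothetical mvstoolkit endpoint serving the AQL LIST output as JSON.
    INVOICE_ENDPOINT = "http://d3-host:8181/api/invoice"

    mcp = FastMCP("pick-d3")

    @mcp.tool()
    def list_invoices() -> list[dict]:
        """Return invoice records (NAMEPROD, NAMEBUSNIESS, CODE, CUSTOMER,
        CODEPROD, PRODQUANTITY, PRODTOTAL, DATEINVOICE) from Pick D3."""
        resp = requests.get(INVOICE_ENDPOINT, timeout=30)
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        # SSE transport so n8n or Claude Desktop can reach the server over HTTP.
        mcp.run(transport="sse")

Once a server like this is registered with the AI client, a question such as "What is the total billing by customer in April?" makes the model call list_invoices and analyze the JSON it gets back.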

 

Tangible Results

The system I have implemented allows:

  • Natural language queries: directly ask about data stored in Pick D3
  • Automated financial analysis: calculations, monetary formats, and comparative analyses without additional programming
  • Modern interface generation: complete dashboards in React/Tailwind generated automatically

 

Instant Analysis of MultiValue Data

The most revolutionary aspect of my approach is the ability to analyze Pick D3 data instantly and in any conceivable way. In my example, I work with a single billing file and a few fields (NAMEPROD, NAMEBUSNIESS, CODE, CUSTOMER, CODEPROD, PRODQUANTITY, PRODTOTAL), but this is just the tip of the iceberg.

The beauty of the system is that I don't need to predefine reports, dashboards, or specific analyses; I simply ask for what I need to know in natural language. "Show me customer purchasing behavior by region," "Analyze the seasonality of our sales by product," or "Identify potential fraud patterns in the last 1000 transactions": all of this is generated in a matter of minutes, without a single additional line of code. It just takes imagination, and the AI handles the rest.

The most impactful example: with a single prompt ("Use MCP PICK to build an interactive financial dashboard in React + Tailwind CSS"), I obtained a complete website with graphs, tables, and risk analysis based directly on my MultiValue data.

Key Difference from Code Generation

Unlike asking AI to generate PICK code (with the limitations mentioned by several of you), my approach:

  • Leverages the best of both worlds: the stability and robustness of our legacy systems with the flexibility and capabilities of modern AIs
  • Does not require PICK programming knowledge to obtain modern interfaces and analyses
  • Allows real-time updating of dashboards and visualizations

 

This approach mitigates many risks, as the AI does not generate the code that executes critical operations; it only interprets, visualizes, and analyzes data already processed by proven systems.

You can see an example of the generated dashboard here: https://claude.ai/public/artifacts/5d41270f-e6af-4d30-8b08-ca79cc3c4992

 

I am available to answer questions about this implementation and share more technical details.

In my opinion, this is the true potential of AI. I hope this implementation is useful and that you can apply it in your own applications.

I've attached a video showing how I interact in real time with Claude Desktop and the Pick D3 MCP to generate a portal in less than 5 minutes without writing a single line of code.

Best regards,

Fausto Paredes

https://faustoparedesia.com/



------------------------------
Fausto Paredes
GENERAL MANAGER
Admindysad Cia. Ltda.
Quito EC
------------------------------

Hi Fausto,

Very interesting post. I was dabbling in MCP and Claude Desktop myself recently, but just for experimental reasons and with Python. I find it fascinating that it is able to use agents to basically do tasks for you. We also have MVIS, so reading your use case was very cool to see.

On the security side of things, though, I was wondering: by asking an LLM like Gemini, ChatGPT, or Claude questions about the JSON files you are producing from MVIS, wouldn't you be exposing that data to them? That has been one of the obstacles to us moving forward. To use our data, we would need the hardware infrastructure to host a local AI model, and local models aren't as powerful as the big LLMs.

On the one hand, we see all the great things these foundational LLMs can do, and we want them to analyze our data and do all these things for us. But at the same time, by doing so, we are essentially handing over our data, which may then be used to further train their models. Then we hear news like how, allegedly, DeepSeek used OpenAI's training data to help train its own model, which it then released as open source for everyone to use and fine-tune. And at some point, it was found that DeepSeek had a vulnerability in a database it was using that potentially exposed its data to anyone who was looking for it at the time. In instances like that, hackers could scrape the data and get hold of anything in there to analyze and use for their own purposes.

https://www.darkreading.com/cyberattacks-data-breaches/deepseek-breach-opens-floodgates-dark-web

I think right now is kind of the Wild Wild West of AI innovation. Both good guys and bad guys are trying to get an edge where they can, and unfortunately, sometimes corners get cut. I don't think enough is being done in terms of AI security yet, so there are a lot of potential pitfalls we all need to keep an eye out for. Here are a couple of other examples:

Scrutinize the MCP servers you use:

https://invariantlabs.ai/blog/mcp-security-notification-tool-poisoning-attacks

The hacker who stole Disney's data posted an AI image-generation tool on GitHub that a Disney employee used. The tool had malicious components that allowed the hacker to harvest credentials and then gain access to Disney's data:

https://www.securityweek.com/man-admits-hacking-disney-and-leaking-data-disguised-as-hacktivist/

In conclusion, as we all continue to strive forth and chart unknown AI territory, we still need to protect ourselves along the way. That is, until the Matrix enslaves us all... or the Terminators... or a zombie apocalypse... or, for the hopeful ones, a Star Trek future! :)



------------------------------
Alex Liu
Systems Engineer
Coppersmith Logistics
El Segundo CA US
------------------------------


Hi Alex,


Thank you for your thoughtful message and for raising such important concerns, especially around data security in today's fast-moving AI landscape. I fully agree: we're navigating a kind of Wild West, where opportunity and risk evolve at the same speed.

Regarding your concern about using LLMs like Claude, ChatGPT, or Gemini to analyze JSON data extracted from MVIS/PICK, it's important to clarify that these platforms do not use API-submitted data to train their models, as long as you're operating under enterprise agreements or opt-out settings.

Official references:
https://openai.com/enterprise-privacy/
https://www.anthropic.com/legal/privacy

That said, for organizations with high privacy standards, even this assurance may not be sufficient. That's why we advocate for a hybrid approach, which balances AI power with full data control.

Hybrid Architecture: Intelligence with Control
This model allows sensitive data to be handled locally, while external models are used for non-critical tasks.

Key Components:

  1. Windsurf / Cursor (Local Environment):
    Local tools that provide a secure interface for interacting with language models entirely within your machine or private network. Both act as local orchestration hubs, integrating smoothly with databases, agents, MCP, and automation tools like n8n.
  2. Ollama (Local Model Runner):
    A high-performance engine for running open-source models such as:
  • Mistral
  • DeepSeek
  • Gemma
    These run locally with zero external communication, ensuring full data privacy.
  3. n8n (Orchestrator):
    A workflow engine that dynamically decides whether a query should be resolved locally or sent to a cloud LLM, based on pre-defined data sensitivity rules (see the sketch below).
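
To make point 3 concrete, the routing decision can be sketched in a few lines of Python. The sensitivity rule and model choice below are assumptions for illustration; the local call uses Ollama's standard REST API, and the cloud call is left as a placeholder:

    import requests

    SENSITIVE_TERMS = {"customer", "invoice", "salary"}   # assumed sensitivity rule

    def is_sensitive(query: str) -> bool:
        return any(term in query.lower() for term in SENSITIVE_TERMS)

    def ask_local(query: str) -> str:
        # Ollama's local REST API; pull the model first (ollama pull mistral).
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": "mistral", "prompt": query, "stream": False},
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["response"]

    def ask_cloud(query: str) -> str:
        raise NotImplementedError("call your preferred cloud LLM here")

    def route(query: str) -> str:
        # Sensitive queries never leave the network; the rest may use a cloud LLM.
        return ask_local(query) if is_sensitive(query) else ask_cloud(query)

In practice the rules live in n8n rather than in code, but the principle is the same: classify first, then choose the execution path.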

About the links you shared:
The cases you referenced, like Disney and DeepSeek, highlight why performance alone isn't enough. Security must be embedded in the design:

  • MCP endpoints must be encrypted and authenticated (a minimal sketch follows this list).
  • AI tools must go through internal audits or run inside isolated sandboxes.
  • Apply a Zero Trust model across every integration layer.
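
For the first point, even a simple shared-secret check in front of the endpoint raises the bar considerably. Here is a minimal Python sketch using FastAPI (the header handling and the D3 helper are illustrative placeholders, and in production you would serve this behind TLS):

    import os
    from fastapi import FastAPI, Header, HTTPException

    app = FastAPI()
    API_TOKEN = os.environ["PICK_API_TOKEN"]   # never hard-code the secret

    def fetch_invoices_from_d3() -> list[dict]:
        # Placeholder for the real mvstoolkit/D3 data access.
        return []

    @app.get("/api/invoice")
    def list_invoices(authorization: str = Header(default="")):
        # Reject any request without the expected bearer token.
        if authorization != f"Bearer {API_TOKEN}":
            raise HTTPException(status_code=401, detail="unauthorized")
        return fetch_invoices_from_d3()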

Final Thoughts:
We can enjoy the best of AI without sacrificing data control. The key is to build with intent: segment risk, determine what stays local, and define what can be analyzed externally. Like you said, this might feel like the Wild West, but with strong foundations we can build a modern city, not a ghost town.

Great to exchange ideas with you.



------------------------------
Fausto Paredes
GENERAL MANAGER
Admindysad Cia. Ltda.
Quito EC
------------------------------

Now this is a thread I'll enjoy following.

Has anyone had any success with using AI to generate MultiValue code?

Could this help bridge the skills gap, with newbies coming in and companies looking at future-proofing their systems?



------------------------------
Elkie Holland
MD / IT Recruiter
Prospectus It Recruitment
SHEPPERTON GB
------------------------------

Here's my fun fact: I'm in the process of suing ChatGPT's parent company and several others for stealing my work.
There's ethical AI out there. How about we stop promoting the worst of them, and stop encouraging others with tips and tricks based on stealing from people in this community.



------------------------------
Charles Barouch
Programming Manager
Sunrise Credit Services
Farmingdale NY US
------------------------------