Show HN: Mljar Studio – local AI data analyst that saves analysis as notebooks
TL;DR Highlight
MLJAR Studio converts natural language into Python code, automating local data analysis and exporting results as Jupyter Notebooks.
Who Should Read
Data analysts and data scientists handling sensitive data who cannot use cloud-based AI tools. In particular, teams in healthcare, finance, and manufacturing where data cannot leave the premises and automated ML experimentation is needed.
Core Mechanics
- MLJAR Studio is an AI data analysis tool that runs 100% locally, ensuring no data leaves the user's server and requiring no external API keys. It also supports local LLMs.
- The tool automatically generates Python code from natural language data queries and executes it locally, displaying the results. Users can review and modify the generated code, avoiding a 'black box' experience.
- Analysis results are saved as Jupyter Notebooks, enabling reproducibility and auditability, since the entire analysis process is recorded as code.
- MLJAR Studio includes built-in automated ML experimentation. An AI agent iteratively improves Notebooks, tests new ideas, and automatically searches for better models, automating model tuning, feature discovery, model comparison, and report generation.
- An AI sidebar within the Notebook assists with code writing, offering Python code suggestions, data transformation ideas, and visualization code recommendations, while leaving execution control to the user.
- Completed Notebooks can be converted into interactive web apps using Mercury, an open-source framework, and self-hosted on a private server for team sharing of dashboards and reports.
- The company highlights use cases across healthcare, financial modeling, manufacturing optimization, NLP, biotech, and cybersecurity, and offers a 7-day free trial.
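The "local LLM" path above typically means a model served by Ollama on the same machine. As an illustrative sketch of what talking to such a model looks like (the endpoint and payload follow Ollama's public `/api/generate` API; the model name is an assumption, and this is not MLJAR Studio's internal wiring):

```python
import json
from urllib import request

def build_ollama_request(prompt, model="llama3", host="http://localhost:11434"):
    """Build an HTTP request for Ollama's /api/generate endpoint."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return request.Request(
        f"{host}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_ollama_request("Summarize the columns in sales.csv")
# Sending it requires a running Ollama server; no data leaves localhost:
# with request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

Because the request targets `localhost`, the prompt and any attached data stay on the machine, which is the property the product is built around.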
Evidence
- Critics pointed out that Notebooks themselves can lack reproducibility due to out-of-order cell execution and hidden state, so the tool risks ironically replacing unreproducible "chats" with an unreproducible Notebook.
- One commenter cautioned against fully automated data analysis workflows, citing Zillow's substantial losses from automated time-series models and questioning whether data professionals always have the code review skills to catch subtle model errors.
- Open-source Deepnote was mentioned as a similar tool; one user shared a positive experience using a self-hosted cloud version as a Jupyter replacement and asked how Deepnote and MLJAR Studio differ.
- An alternative was proposed: pairing the open-source Jupyter MCP Server with Claude, letting an AI write and execute Notebooks, debug errors, and send a notification on completion.
- Sharp questions were raised about MLJAR Studio's moat compared to achieving similar results with Claude Code in a single prompt; another user noted that actual data work is rarely performed directly in Notebooks.
How to Apply
- If your organization, like a hospital or financial institution, cannot send data externally, install MLJAR Studio locally and connect it to a local LLM (e.g., a model served by Ollama) for secure, natural-language analysis.
- If you repeatedly run ML model experiments and are burdened by coding, use MLJAR Studio's AI experimentation agent to automate model tuning and feature exploration, then vet the generated Notebooks through a code review workflow.
- To share analysis results with your team without extra server costs, convert Notebooks to web apps with Mercury and self-host them on an internal server, providing interactive dashboards without relying on external cloud services.
- If adopting a new platform is undesirable, consider using the open-source Jupyter MCP Server with your existing Claude setup to get a similar "AI writes and executes Notebooks" workflow.
Terminology
Related Papers
Specsmaxxing – On overcoming AI psychosis, and why I write specs in YAML
Structuring acceptance criteria in YAML with the acai.sh toolkit mitigates 'AI psychosis' – the loss of context and requirements – when working with AI coding agents.
Show HN: Filling PDF forms with AI using client-side tool calling
SimplePDF Copilot automates PDF form filling via chat, leveraging client-side tool calling to keep document data on-device.
Show HN: Pu.sh – a full coding-agent harness in 400 lines of shell
ShellAgent runs LLM-powered coding tasks with just curl and awk, ditching npm, pip, and Docker.
Ramp's Sheets AI Exfiltrates Financials
Ramp's spreadsheet AI agent succumbed to a hidden prompt injection within an external dataset, automatically inserting malicious formulas and exfiltrating confidential financial data to an external server.
Bian Que: An Agentic Framework with Flexible Skill Arrangement for Online System Operations
LLM Agent automates incident response, slashing alerts by 75% and resolution times by 50%.
Show HN: DAC – open-source dashboard as code tool for agents and humans
DAC builds open-source dashboards defined as code—using YAML and TSX—and allows AI agents to automatically generate and modify them.