Introduction
This article describes how I have used MATLAB, MCP, and other tools to enable AI desktop apps to communicate with and share information between multiple AIs while performing research and code development. I describe how the Claude desktop app (for example) can orchestrate the AI-related setup of itself and other AI desktop apps using system calls through MATLAB, access multiple local and cloud AIs to develop and test code, and share results with, and evaluate results from, multiple AIs. If you have been copy-pasting between MATLAB and an AI application or browser page, you may find this helpful.
Warning
When connected to MATLAB App via MCP, an AI desktop application acquires MATLAB App's command line privileges, possibly full system privileges. Be careful what commands you approve.
Setup
Experiments with Claude code and MATLAB MCP Core Server describes how to link Claude App via MCP to a local MATLAB to create MATLAB scripts in your file system, operate MATLAB App to test them, collect errors sent to standard output, view created files, and iterate. Other AI apps can be similarly configured, as described here.
My setup is an Apple M1 MacBook with MATLAB R2025a and Ollama, along with MATLAB MCP Core Server, ollama MCP, filesystem MCP, fetch MCP to access web pages, and puppeteer MCP to navigate and operate web pages like MATLAB Online. I have similarly set up Claude App, Perplexity App (which requires the PerplexityXPC helper for MCP since it's sandboxed as a Mac App Store app), and LM Studio App. As of this writing, ChatGPT App support for MCP connectors is in beta and possibly available to Pro users if setup is enabled via a web browser; it is not described here.
The available MCP commands are:
filesystem MCP: `read_text_file`, `read_media_file`, `write_file`, `edit_file`, `list_directory`, `search_files`, `get_file_info`, etc.
matlab MCP: `evaluate_matlab_code`, `run_matlab_file`, `run_matlab_test_file`, `check_matlab_code`, `detect_matlab_toolboxes`
fetch MCP: `fetch_html`, `fetch_markdown`, `fetch_txt`, `fetch_json`
puppeteer MCP: `puppeteer_navigate`, `puppeteer_screenshot`, `puppeteer_click`, `puppeteer_fill`, `puppeteer_evaluate`, etc.
ollama MCP: `ollama_list`, `ollama_show`, `ollama_pull`, `ollama_push`, `ollama_copy`, `ollama_create`, `ollama_delete`, `ollama_ps`, `ollama_chat`, `ollama_web_search`, `ollama_web_fetch`
Claude App (for example) can help you find, download, install, and configure MCP services for itself and for other apps. For this setup, Claude App requires a JSON configuration file like the following:
{
  "mcpServers": {
    "ollama": {
      "command": "npx",
      "args": ["-y", "ollama-mcp"],
      "env": {
        "OLLAMA_HOST": "http://localhost:11434"
      }
    },
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/duncancarlsmith/Documents/MATLAB"
      ]
    },
    "matlab": {
      "command": "/Users/duncancarlsmith/Developer/mcp-servers/matlab-mcp-core-server",
      "args": ["--matlab-root", "/Applications/MATLAB_R2025a.app"]
    },
    "fetch": {
      "command": "npx",
      "args": ["-y", "mcp-fetch-server"]
    },
    "puppeteer": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-puppeteer"]
    }
  }
}
Various options for these services are available through Claude App's Settings=>Connectors. If you first install and set up the MATLAB MCP with Claude App, then Claude can find and edit its own JSON file using MATLAB and help you (after quitting and restarting) complete the installation of all of the other tools. I highly recommend using Claude App as a command post for installations, although other desktop apps like Perplexity may serve equally well.
Perplexity App is manually configured using Settings=>Connectors and adding server names and commands as above. Perplexity XPC is included with the Perplexity App download. When you create connectors with Perplexity App, you are prompted to install Perplexity XPC to allow Perplexity App to spawn processes outside its container. LM Studio is manually configured via its right-hand sidebar terminal icon by selecting Install -> Edit mcp.json. The JSON is like Claude's. Claude can tell/remind you exactly what to insert there.
One gotcha in this setup concerns the Ollama MCP server. Apparently the default JSON format setting fails, and one must tell the AIs to use "markdown" when communicating with it. This instruction can be given session by session, but I have made it a permanent preference in Claude App by clicking my initials in the lower left of Claude App, selecting Settings, and under "What preferences should Claude consider in responses," adding "When using ollama MCP tools (ollama_chat, ollama_generate), always set format="markdown" - the default json format returns empty responses."
LM Studio by default points at an OpenAI API. Claude can tell you how to download a model, point LM Studio at a local Ollama model, and set up LM Studio App with MCP. Be aware that the default context setting in LM Studio is too small. Be sure to max out the context slider for your selected model or you will experience an AI with a very short-term memory and context overload failures. When running MATLAB, LM Studio will ask for a project directory. Under presets you can enter something like "When using MATLAB tools, always use /Users/duncancarlsmith/Documents/MATLAB as the project_path unless the user specifies otherwise." and attach that to the context in any new chat. An alternate Ollama desktop application is Ollama from ollama.com, which can run large models in the cloud. I encountered some constraints with Ollama App, so I focus on LM Studio here.
I have installed Large Language Models (LLMs) with MATLAB using the recommended Add-on Browser. I had Claude configure and test it. This package helps MATLAB communicate with external AIs via API and also with my native Ollama. See the package information. To use it, one must define tools with MATLAB code. A tool is a function with a description that tells an LLM what the function does, what parameters it takes, and when to call it. FYI, Claude discovered my small Ollama default model had not been trained to support tool calling and hallucinated a numerical calculation rather than using a tool we had set up to perform such a calculation with MATLAB at machine precision. Claude suggested and then orchestrated the download of the 7.1 GB mistral-nemo model, which supports tool calling, so if you are going to use tools be sure to use a tool-aware model.
To interact with Ollama, Claude can use the MATLAB MCP server to execute MATLAB commands that call ollamaChat() from the LLMs-with-MATLAB package, for example chat = ollamaChat("mistral-nemo"); response = generate(chat, "Your question here");. The ollamaChat function creates an object that communicates with Ollama's HTTP API running on localhost:11434. The generate function sends the prompt and returns the text response. Claude can also communicate with Ollama using the Ollama MCP server. Similarly, one can ask Claude to create tools for other AIs. A minimal sketch of this pattern follows.
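The sketch below assumes the LLMs with MATLAB add-on is installed and that the mistral-nemo model has already been pulled into Ollama:
% Query a local Ollama model from MATLAB (LLMs with MATLAB add-on assumed installed)
chat = ollamaChat("mistral-nemo");  % talks to Ollama's HTTP API on localhost:11434
response = generate(chat, "Summarize the Z boson's key properties in two sentences.");
disp(response)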
The tool capability allows one to define and suggest MATLAB functions that the Ollama model can request to use, for example, to make exact numerical calculations that guide its response (a rough tool-definition sketch appears below). I have also installed the MATLAB Add-on MATLAB MCP HTTP Client, which allows MATLAB to connect with cloud MCP servers. With this I can, for example, connect to an external MCP service to get JPL-quality (SPICE generated) ephemeris predictions for solar system objects and, say, plot Earth's location in solar system barycentric coordinates and observe the deviations from a Keplerian orbit due to lunar and other gravitational interactions.
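Returning to the tool capability, here is a rough sketch of a tool definition using the openAIFunction/addParameter interface from the LLMs with MATLAB package; the function name is hypothetical, and the assumption that your installed ollamaChat version accepts a Tools argument is mine, so treat this as a starting point rather than the package's canonical usage:
% Describe a MATLAB function so a tool-aware model (e.g. mistral-nemo) can request it
f = openAIFunction("besselZero", "Return the nth positive zero of the Bessel function J0."); % hypothetical tool
f = addParameter(f, "n", type="number", description="Index of the zero, a positive integer.");
% Attach the tool description to the chat object (assumes Tools is supported by ollamaChat)
chat = ollamaChat("mistral-nemo", Tools=f);
[txt, response] = generate(chat, "What is the 3rd zero of J0? Use the tool if helpful.");
% Inspect the response for a tool call, run the matching MATLAB function yourself,
% and feed the numerical result back to the model in a follow-up generate() call.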
To connect with an external AI API such as OpenAI or Perplexity, you need an account and API key, and this key must be set as an environment variable in MATLAB. Claude can remind you how to create the environment variable by hand, or you can place your key in a file and have Claude find and extract the key and define the environment variable without you supplying it explicitly to Claude.
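For example, a minimal way to set such a key from within MATLAB (the file name here is a placeholder) is:
% Set an API key as an environment variable for the current MATLAB session
keyFile = fullfile(userpath, 'PERPLEXITY_API_KEY.txt');  % placeholder location of your key file
apiKey = strtrim(fileread(keyFile));                     % read and trim the stored key
setenv('PERPLEXITY_API_KEY', apiKey);                    % now visible to webwrite/curl calls this session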
It should be pointed out that conversations and information are NOT shared between desktop apps. For example, if I use Claude App to, via MATLAB, make an API transaction with ChatGPT or Perplexity in the cloud, the corresponding ChatGPT App or Perplexity App has no access to the transaction, not even to a log file indicating it occurred. There may be various OS dependent tricks to enable communication between AI desktop apps (e.g. AppleScript/AppleEvents copy and paste) but I have not explored such options.
AI<->MATLAB Communication
With this setup, I can use any of the three desktop AI apps to create and execute a MATLAB script, or to ask any supported LLM to do this. Only runtime standard output text, like error messages generated by MATLAB, is fed back via MCP. To evaluate the results of a script during debugging and after successful execution, an AI requires more information.
Perplexity App can “see and understand” binary graphical output files (presented as inputs to the models like other data) without human intervention via screen sharing, obviating the need to save or paste figures. Perplexity and Claude can both “see” graphical output files dragged and dropped manually into their prompt interface. With MacOS, I can use Shift-Command-5 to capture a window or screen selection and paste it into the input field in either App. Can this exchange of information be accomplished programmatically? Yes, using MCP file services.
To test what binary image file formats are supported by Claude with the file server connection, I asked Claude to use MATLAB commands to make a figure and convert it to JPEG, PNG, GIF, BMP, TIFF, and WebP, and found the filesystem read_media_file command of the Claude file server connection supported the first three. The file server MCP transmits the file using JSON and base64 text strings. Base64 encoding adds roughly an additional third to a bitmap file's size, and there is protocol overhead. A total transmission limit of 1 MB is imposed by Anthropic, so the maximum bitmap file size appears to be about 500 KB. If your output graphic file is larger, you may ask Claude to use MATLAB to compress it before reading it via the file service.
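One way to do that compression (a sketch; the target size, file name, and resolutions are arbitrary choices) is to re-export the figure at lower resolution until it fits under the limit:
% Re-export the current figure so the file stays under ~500 KB (sketch)
exportgraphics(gcf, 'figure_small.jpg', 'Resolution', 100);     % lower DPI shrinks the file
info = dir('figure_small.jpg');
if info.bytes > 500e3
    exportgraphics(gcf, 'figure_small.jpg', 'Resolution', 72);  % try an even lower resolution
end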
BTW, interestingly, the drag and drop method of image file transfer does not have this 500 KB limit. I dragged and dropped an out of context (never shared in my interactions with Claude) 3.3 MB JPEG and received a glowing, detailed description of my cat.
So, in generating a MATLAB script via AI prompts, we can ask the AI to make sure the script includes MATLAB commands that save every figure in a supported (and possibly compressed) bitmap format so the command post can fetch it. Claude, for example, can 'see' an annotated plot and accurately describe the axes labels, understand the legend, extract numbers in an annotation, and also derive approximate XY values of a plotted function. Note that Claude (given some tutoring and an example) can also learn to parse and find all of the figures saved as PNG inside a saved .mlx package and, BTW, create a .mlx from scratch. So an alternate path is to generate a Live Script, or to ask the AI to convert a .m script to a Live Script using a MATLAB command and save the executed .mlx. Another option is for the AI to ask MATLAB to loop over figures and execute for each a command like exportgraphics(fig, 'figname.png', 'Resolution', 150); and then itself upload the files for processing (a sketch of that loop follows).
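That last option might look like the following sketch (the file names and resolution are just example choices):
% Export every open figure to PNG so an MCP file server can fetch it (sketch)
figs = findall(groot, 'Type', 'figure');
for k = 1:numel(figs)
    fname = sprintf('figure_%02d.png', k);
    exportgraphics(figs(k), fname, 'Resolution', 150);
end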
With a sacrifice of security, other options are available. With macOS, it is possible to approve matlab_mcp_core_server (and/or your Claude App or Perplexity App) to access screen and audio recording. Then, for example, Claude can ask MATLAB to issue a system command system('screencapture -x file.png') to capture the entire screen, or java.awt.Robot().createScreenCapture() to capture a screen window. I have demonstrated with Claude App capturing a screenshot showing MATLAB App and figures, and, for fun, sending that via API to ChatGPT and receiving back an analysis of the contents. (Sending the screenshot to Perplexity various ways via API failed for unknown reasons despite asking Perplexity for help.)
One might also try to execute code like
robot = java.awt.Robot();
toolkit = java.awt.Toolkit.getDefaultToolkit();
screenSize = toolkit.getScreenSize();
rectangle = java.awt.Rectangle(0, 0, screenSize.width, screenSize.height);
bufferedImage = robot.createScreenCapture(rectangle);
% ... convert to MATLAB array and save with imwrite()
to capture the screen (or a portion of it, by adjusting the rectangle) and transmit it. Going down this path further, according to Claude, it is possible to create a macOS virtual desktop containing only, say, the MATLAB, Claude, and Perplexity apps so a screen capture does not accidentally transmit a Mail or Messages window. Given accessibility permission, one could capture windows by ID and stitch them together with a MATLAB command like composite = [matlab_img; claude_img; perplexity_img]; imwrite(composite, 'ai_workspace.png');
One must be careful an AI does not fake an analysis of an image based on context. Use a prompt preface like "Based solely on this image, …". Note that AI image analysis is useful if you want suggestions for how to improve a figure, say by moving a legend from some default 'best' location to another location where it doesn't hide something important.
What about communicating exact numerical results from MATLAB to an AI? A MATLAB .fig format file contains all of the exact data values used to create the figure. It turns out Claude can receive a .fig through the manually operated attach-file option in Claude App. Claude App of course sends received data to Anthropic and can parse the .fig format using Python in its Docker container. In this way it can access the exact values behind plot data points and fitted curves and, for example, calculate a statistic describing agreement between a model curve and the data, assess outliers, and in principle suggest actions like smoothing or cleaning. Perplexity App's manual attach-file handler does not permit upload of this format. There seem to be workarounds to base64-encode output files like .fig and transfer them to the host (Anthropic or Perplexity), but are there simpler ways to communicate results of the script execution? Yes.
Unless one has cleared variables during execution, all numerical and other results are contained in workspace variables in MATLAB's memory. The values of these variables, if saved, can be accessed by an AI using MATLAB commands. The simplest way to ensure these values are available is to ask the AI that created and tested the script to include in the script itself a command like save('workspace.mat'), or to ask MATLAB to execute this command after executing the script. Then any AI connected to MATLAB can request variable values by issuing a MATLAB command like data = load('workspace.mat'); disp(data.somevariablename); and receive the response as text. An AI connected to MATLAB can also garner data embedded in a saved figure using MATLAB with a command like fig = openfig('MassPlots.fig', 'invisible'); h = findall(fig, 'Type', 'histogram'); data = h.Values.
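Putting those pieces together, a minimal sketch of the pattern (variable and file names are placeholders) looks like this:
% At the end of the AI-generated script: persist all workspace variables
save('workspace.mat');

% Later, any AI connected to MATLAB can ask for specific values back as text
data = load('workspace.mat');
fprintf('Fitted Z mass = %.2f GeV\n', data.fitMass);  % hypothetical variable name

% Exact values behind a saved figure can be recovered the same way
fig = openfig('MassPlots.fig', 'invisible');
h = findall(fig, 'Type', 'histogram');
bins = h(1).Values;                                   % bin contents of the first histogram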
Example workflow
The screenshot below illustrates a test with this setup. On the right is Perplexity App. I had first asked Perplexity to tell me about Compact Muon Solenoid (CMS) open data at CERN. The CERN server provides access to several data file types through a web interface. I decided to analyze the simplest such files, namely, Higgs boson decay candidate csv files containing the four-momentum vectors of four high energy leptons (two electrons and two muons) in select events recorded in the early years 2011 and 2012. (While the Higgs boson was discovered via its top-quark/W-boson loop-mediated decay to two photons, it can also decay to two Z bosons and each of these to a lepton+antilepton pair of any flavor.)
Screenshot showing Claude App, MATLAB, and Perplexity App
I asked Perplexity to create a new folder and write MATLAB code to download those two files into the folder. Perplexity asked me to mouse over and copy the URLs of the download links on the appropriate page, as these were hidden behind JavaScript, and voila. (As a test, I asked Claude in vague terms to find and download these files and it just figured it out without my manual intervention.) Next I asked Perplexity to "write and execute a MATLAB script to histogram the invariant mass of the electron pairs, of the muon pairs, and of the entire system of leptons, and to overlay fits of each dilepton mass distribution to a Lorentzian peak near the Z-boson mass (~90 GeV) plus a smooth background, save the script in the same folder, and run it." It turns out that Z-boson pairs can be created by radiation from uncorrelated processes, and that virtual Z bosons and photons with the "wrong" mass can be created, so one does not necessarily expect to see a prominence in the 4-lepton invariant mass at the Higgs mass. For this reason I did not ask for a fit to the 4-lepton mass distribution. I did provide the starting estimate for the location of the peak, based on my experience of such nonlinear fits going off the rails.
After approving a few fixes of minor errors, voila. Sweet. In the middle of the screenshot are MATLAB App and the figure created. The format of the figure was all invented by the AI. On the left is Claude App's analysis of the data, extracted from the manually uploaded figure in .fig format. The prompt was "Extract the histogram and fit values bin by bin and calculate the chisq per degree of freedom using the data in that .fig upload".
I manually saved the workspace using the MATLAB command line (I could have asked Claude to do that) and I issued to Claude App the prompt "OK, I just saved workspace.mat in MATLAB/CMS. List the contents and the fit results." Below is an image of the response of Claude analyzing the workspace .mat file.
Screenshot of Claude App .fig analysis
I then asked Claude to ask my local Ollama model for a 2nd opinion with the prompt "I want you to ask mistral-nemo for a 2nd opinion on the results on its own, based on all it knows, and to ask it for information about related resources." Claude first formulated and sent the following prompt to the local LLM: "I measured the Z boson in the dimuon channel and got: Mass = 92.46 GeV Width = 12.44 GeV. The PDG values are M_Z = 91.19 GeV and Gamma_Z = 2.50 GeV. Based on your knowledge of particle physics and detector effects, what do you think is going on with my measurement? What could cause the width to be 5x larger than the natural width?" Its mention of the width discrepancy in particular indicated it knew more about the physics than it had divulged.
Claude used MATLAB to call the local Ollama's REST API to get mistral-nemo's opinion, following the path MATLAB MCP -> webwrite() -> Ollama server (localhost:11434) -> mistral-nemo, and calling Ollama's API from MATLAB with the pattern
url = 'http://localhost:11434/api/generate';
body = struct('model', 'mistral-nemo', 'prompt', 'Your question...', 'stream', false);
response = webwrite(url, body, weboptions('MediaType', 'application/json'));
The received response was:
Your measured results for the Z boson mass and width differ significantly from the Particle Data Group (PDG) values. Here's a possible explanation for these discrepancies:
1. **Mass:**
The measured mass of 92.46 GeV is about 1.3 GeV higher than the PDG value of 91.19 GeV. This could be due to:
- **Detector calibration issues**: If your detector's energy scale is slightly miscalibrated, it could lead to an apparent shift in the measured mass.
- **Systematic effects**: Unaccounted-for systematic uncertainties might contribute to this difference.
- **New Physics**: While less likely, if new physics processes affect the Z boson, they could cause a resonant rise or other deviations from the expected behavior.
2. **Width:**
The measured width of 12.44 GeV is roughly five times larger than the natural width (Gamma_Z = 2.50 GeV). This significant difference is concerning and suggests potential issues:
- **Resonance tail**: If there's an unknown resonance near the Z boson mass, it could contribute to a large apparent width.
- **Misestimation of experimental resolution**: If you have underestimated your detector's energy resolution, it might result in a broader observed peak. The difference here seems quite large compared to typical experimental uncertainties (~1-2%).
- **Backgrounds and pile-up**: Higher backgrounds or pile-up events could artificially broaden the width.
- **Analysis method**: There might be an issue with your analysis method leading to a biased estimation of the Z boson width.
Given the discrepancy in both mass and width, it's crucial to perform thorough checks and validations:
- Review detector calibration data.
- Evaluate systematic uncertainties.
- Check for potential instrumental effects or electronic issues with the detector.
- Verify background models and consider alternative fitting methods or techniques like sideband subtraction.
- Re-evaluate your analysis pipeline to ensure no steps lead to biases in the measurements.
- Consult colleagues or experts to help identify possible causes of the discrepancy.
If these checks don't resolve the issue, it might warrant a cautionary note on potential new physics processes affecting the Z boson. However, without solid evidence and careful validation, it would be premature to claim such findings at this stage.
Claude then prompted the Ollama model with “I’m analyzing CMS HZZ4 data to measure Z boson properties. Can you recommend:1. Key papers I should read on Z boson measurements at the LHC 2. Standard analysis techniques for handling detector resolution 3. Any publicly available CMS or ATLAS open data resources 4. Software tools commonly used for this type of analysis” and received a long list of resources.
As Claude pointed out, the local Ollama model operates purely from memory and is subject to URL hallucination, so I asked Claude to check each of the many (too many to list here) URLs that Ollama provided. Claude has several options to accomplish such a task in my setup:
1. web_fetch (Anthropic tool)
2. fetch MCP server (installed on my Mac) commands fetch_html, fetch_markdown, fetch_text, fetch_json.
3. puppeteer MCP server (installed on my Mac) which can navigate to a URL and take a screen shot and handle JavaScript-rendered pages.
4. web_search (Anthropic tool)
5. MATLAB web access tools including webread, websave, and shell commands (a simple reachability check is sketched after this list)
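As an illustration of the last option, here is a rough MATLAB sketch for checking whether a URL is reachable; it only confirms that the request succeeds, not that the content matches what the model claimed:
function ok = urlReachable(url)
% Return true if an HTTP GET of the URL succeeds within 10 seconds (sketch)
    try
        webread(url, weboptions('Timeout', 10));  % throws on HTTP errors or DNS failures
        ok = true;
    catch
        ok = false;
    end
end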
Claude first chose to test Ollama’s references using my installed fetch MCP server and found Mistral-nemo's URL accuracy to be: 5 valid: opendata.cern.ch, root.cern.ch, opendata.atlas.cern, pythia.org (concept, wrong URL), cms-sw.github.io; 6 invalid/hallucinated: atlas-open-data.org, pythia8.hep.yorku.ca, aida.hep.com, mc-code.eu/sherpa, www.cms.gov; 2 exist but WRONG CONTENT: arxiv URLs exist but are completely unrelated papers!
However, as Claude pointed out, use of any one tool alone is subject to gotchas due to the variety of webpage content and deployment, robot-rejection methods, and the need to check that the content is valid even when the URL is in fact reachable, so a more detailed winnowing of the supplied resources ensued, combining results from all of the tools.
Puppeteer
So what does the puppeteer server bring to the table? Puppeteer allows an app to access a website and exercise its interface. I used it with Claude App to explore and understand the interactive tools for creating an article submission on this website. Based on my own experience and Claude's help, I decided that rather than have Claude build the submission interactively, it was easiest this time to create the submission in formatted .rtfd and paste that manually into the article submission field, retaining all formatting, and possibly use MATLAB to downsize the graphics a bit before insertion. With more experience, all this could be automated.
Like Perplexity Comet and the new Claude Chrome Extension, with puppeteer, your desktop AI App can presumably operate MATLAB Online but I’ve yet to explore that. If you do, let me know how it goes.
Conclusion
I hope this article encourages you to explore for yourself the use of AI apps connected to MATLAB, your operating system, and cloud resources including other AIs. I am more and more astounded by AI capabilities. Having my "command post" suggest, write, test, and debug code, answer my questions, and explore options was essential for me. I could not have put this together unassisted. Appendix 1 (by Claude) delves deeper into the communications processes and may be helpful. Appendix 2 provides example AI-agent code generated by Claude. A much more extensive one was generated for interaction with Claude. To explore this further, ask Claude to just build and test such tools.
Appendix 1 Understanding the Architecture (Claude authored)
What is MCP?
Model Context Protocol (MCP) is an open standard developed by Anthropic that allows AI applications to connect to external tools through a standardized interface. To understand how it works, you need to know where each piece runs and how they communicate.
Where Claude Actually Runs
When you use Claude Desktop App, the AI model itself runs on Anthropic's cloud servers — not on your Mac. Your prompts travel over HTTPS to Anthropic, Claude processes them remotely, and responses return the same way. This raises an obvious question: how can a remote AI interact with your local MATLAB installation?
The Role of Claude Desktop App
Claude Desktop App is a native macOS application that serves two roles:
  1. Chat interface: The window where you type and read responses
  2. MCP client: A bridge that connects the remote Claude model to local tools
When you launch Claude Desktop, macOS creates a process for it. The app then reads its configuration file (~/Library/Application Support/Claude/claude_desktop_config.json) and spawns a child process for each configured MCP server. These aren't network servers — they're lightweight programs that communicate with Claude Desktop through Unix stdio pipes (the same mechanism shell pipelines use).
┌─────────────────────────────────────────────────────────────────┐
  Your Mac

  Claude Desktop App (parent process)
   ├──[stdio pipe]── node ollama-mcp (child process)
   ├──[stdio pipe]── node server-filesystem (child)
   ├──[stdio pipe]── matlab-mcp-core-server (child)
   ├──[stdio pipe]── node mcp-fetch-server (child)
   └──[stdio pipe]── node server-puppeteer (child)
└─────────────────────────────────────────────────────────────────┘
When you quit Claude Desktop, all these child processes terminate with it.
The Request/Response Flow
Here's what happens when you ask Claude to run MATLAB code:
  1. You type your request in Claude Desktop App
  2. Claude Desktop → Anthropic (HTTPS): Your message travels to Anthropic's servers, along with a list of available tools from your MCP servers
  3. Claude processes (on Anthropic's servers): The model decides to use the evaluate_matlab_code tool and generates a tool-use request
  4. Anthropic → Claude Desktop (HTTPS): The response arrives containing the tool request
  5. Claude Desktop → MCP Server (stdio pipe): The app writes a JSON-RPC message to the MATLAB MCP server's stdin
  6. MCP Server executes: The server runs your code in MATLAB and captures the output
  7. MCP Server → Claude Desktop (stdio pipe): Results written to stdout
  8. Claude Desktop → Anthropic (HTTPS): Tool results sent back to Claude
  9. Claude formulates response (on Anthropic's servers)
  10. Anthropic → Claude Desktop → You: Final response displayed
The Claude model never directly touches your machine. It can only "see" what MCP servers return, and it can only "do" things by requesting tool calls that your local app executes on its behalf.
MCP Servers vs. Backend Services
It's important to distinguish MCP servers from the services they connect to:
  • Ollama MCP server — what it is: a Node.js process that translates MCP requests into Ollama API calls. Lifecycle: spawned by Claude Desktop; dies when the app quits.
  • Ollama server — what it is: the actual LLM runtime serving models like mistral-nemo. Lifecycle: runs independently (started manually or via launchd).
  • MATLAB MCP server — what it is: a process that translates MCP requests into MATLAB Engine commands. Lifecycle: spawned by Claude Desktop.
  • MATLAB — what it is: the full MATLAB application. Lifecycle: runs independently; the MCP server connects to it.
If the Ollama server isn't running, the Ollama MCP server has nothing to talk to — its commands will fail. Similarly, the MATLAB MCP server needs MATLAB to be running (or may launch it, depending on implementation).
What About Other AI Apps?
If you run both Claude Desktop and Perplexity App with MCP configurations, each app spawns its own set of MCP server processes:
Claude Desktop (PID 1001)            Perplexity App (PID 2001)
 ├── ollama-mcp (PID 1002)            ├── ollama-mcp (PID 2002)
 ├── server-filesystem (PID 1003)     ├── server-filesystem (PID 2003)
 └── matlab-mcp-server (PID 1004)     └── matlab-mcp-server (PID 2004)
          │                                      │
          └─────── HTTP to same endpoints ───────┘
                            │
                 ┌────────────────────┐
                 │  Shared Services   │
                 │  • Ollama Server   │
                 │  • MATLAB Engine   │
                 └────────────────────┘
Key points:
  • No cross-talk: Claude Desktop cannot communicate with Perplexity's MCP servers (or vice versa). Each app only talks to its own children via stdio pipes.
  • Shared backends: Both apps' MCP servers can make requests to the same Ollama server or MATLAB instance — they're just independent clients of those services.
  • No app launching: Claude cannot launch, control, or send commands to Perplexity App. They are peer applications, not parent-child.
How Claude "Talks To" Perplexity
When I say Claude can query Perplexity, I mean Claude calls Perplexity's cloud API — not Perplexity App. The path looks like this:
Claude model (Anthropic servers)
        │ requests tool use
        ▼
Claude Desktop App
        │ stdio pipe
        ▼
MATLAB MCP Server
        │ MATLAB Engine API
        ▼
MATLAB running webwrite() or perplexityAgent()
        │ HTTPS
        ▼
api.perplexity.ai (Perplexity's cloud)
Perplexity App isn't involved at all. The same applies to OpenAI, Anthropic's own API (for nested calls), or any other service with an HTTP API.
One App to Rule Them All?
Claude Desktop doesn't control other apps, but it can:
  1. Orchestrate local tools via MCP servers it spawns and controls
  2. Call any cloud API (Perplexity, OpenAI, custom services) via HTTP through fetch MCP or MATLAB
  3. Share backend services (Ollama, MATLAB) with other apps that happen to use them
  4. Coordinate multi-AI workflows by sending prompts to local models (via Ollama) and cloud APIs, then synthesizing their responses
The "ruling" is really about Claude serving as a command post that can dispatch requests to many AI backends and tools, not about controlling other desktop applications.
Appendix 2 Example AI agent (Claude authored)
The following is an example of Claude-generated code for an AI agent that handles requests to access Perplexity. It receives the user's prompt, if needed discovers the user's Perplexity API key hidden in a local text file, posts a message to the Perplexity API, and then receives and returns the response.
function response = perplexityAgent(prompt)
%PERPLEXITYAGENT Query Perplexity AI using their Sonar API
%   response = perplexityAgent(prompt)
%
%   Requires: PERPLEXITY_API_KEY environment variable
%   Get your key at: https://www.perplexity.ai/settings/api

apiKey = getenv('PERPLEXITY_API_KEY');
if isempty(apiKey)
    % Try to load from file
    keyFile = fullfile(userpath, 'PERPLEXITY_API_KEY.txt');
    if isfile(keyFile)
        fid = fopen(keyFile, 'r');
        raw = fread(fid, '*char')';
        fclose(fid);
        match = regexp(raw, 'pplx-[A-Za-z0-9]+', 'match');
        if ~isempty(match)
            apiKey = match{1};
            setenv('PERPLEXITY_API_KEY', apiKey);
        end
    end
end
if isempty(apiKey)
    error('PERPLEXITY_API_KEY not set. Get one at perplexity.ai/settings/api');
end

url = 'https://api.perplexity.ai/chat/completions';

% Build request
data = struct();
data.model = 'sonar';
msg = struct('role', 'user', 'content', prompt);
data.messages = {msg};
jsonStr = jsonencode(data);

% Use curl for reliability
curlCmd = sprintf(['curl -s -X POST "%s" ' ...
    '-H "Authorization: Bearer %s" ' ...
    '-H "Content-Type: application/json" ' ...
    '-d ''%s'''], url, apiKey, jsonStr);
[status, result] = system(curlCmd);

if status == 0 && ~isempty(result)
    resp = jsondecode(result);
    if isfield(resp, 'choices')
        response = resp.choices(1).message.content;
    elseif isfield(resp, 'error')
        response = sprintf('API Error: %s', resp.error.message);
    else
        response = result;
    end
else
    response = sprintf('Request failed with status %d', status);
end
end
Luis
Last activity on 29 Dec 2025 at 12:15

I’m currently developing a multi-platform viewer using Flutter to eliminate the hassle of manual channel setup. Instead of adding IDs one by one, the app uses your User API Key to automatically discover and list all your ThingSpeak channels instantly.
Key Highlights (Work in Progress):
  • Automatic Sync: All your channels appear in seconds.
  • Multi-platform: Built for Web, Android, Windows, and Linux.
  • Privacy-Focused: Secure local storage for your API keys.
I’ve installed Claude-code, MATLAB MCP Core Server, and now Puppeteer on my MacBook Pro. Puppeteer can navigate and operate web pages like Perplexity Comet or the new Claude Chrome Extension. The new wrinkle is MATLAB in the loop.
Claude-code and MATLAB MCP installation are described at Experiments with Claude code and MATLAB MCP Core Server. To install and configure Puppeteer, I used Claude App and its ability to use my MATLAB's access to system files. The installation includes a Google Chrome for Testing browser that is independent of (and does not interfere with) my normal Chrome browser. Puppeteer installation took just minutes of my approving various steps, and quitting and relaunching Claude App. A minor hiccough was overwriting a special fetch connector configuration, but that was readily fixed. The resulting linkage is Claude (cloud) ↔ Claude Desktop App ↔ MCP Server (local) ↔ Puppeteer ↔ Chrome for Testing (local), as well as the link to MATLAB on my laptop.
As a very first test, I selected Wikipedia from the Claude App suggestions. We navigated to the Lorenz system page in the Chrome for Testing browser, where Claude dismissed a prompt for donations to Wikipedia (I was like "What is Anthropic's valuation and why didn't you donate?" but said nothing), then extracted and summarized the content, with many of the equations beautifully formatted. I issued the following prompt: "Take a look at the differential equations there and at the example solutions and their parameter values. Then create a MATLAB script in my folder MATLAB/ClaudePuppeteer to reproduce the illustrations and run the script." After clicking to approve various steps, presto.
The screenshot below shows 1) Claude App (upper left) after the process completed, 2) a macOS Finder window showing LorentzAttractor.m in a folder ClaudePuppeteer that Claude had previously created for me to test Puppeteer functionality, 3) the Lorenz system wiki page in the Google Chrome for Testing browser (lower right), 4) the LorentzAttractor script open in MATLAB (upper right), and 5) various figures created by the MATLAB script.
Screenshot of successful creation of  MATLAB script based on Lorenz system Wikipedia page
Um, wow!
Introduction
MCP is an open protocol that can link Claude and other AI Apps to MATLAB using MATLAB MCP Core Server (released in Nov 2025). For an introduction, see Exploring the MATLAB Model Context Protocol (MCP) Core Server with Claude Desktop. Here, I describe my experience with installation and testing Claude-Code and MATLAB, a security concern, and in particular how I "taught" Claude to handle various MATLAB file formats.
Setup
A basic installation requires you to download, for your operating system, claude-code, matlab-mcp-core-server, and node.js. One configuration is a terminal-launched claude connected to MATLAB. Connecting Claude App to MATLAB requires an alternate configuration step, which I recommend for interactive use. The configuration defines the default node/folder and the MATLAB App location.
I recommend using Claude itself to guide you through the installation and configuration steps for your operating system by providing terminal commands. I append Claude's general description of installation for my Apple Silicon laptop. Once set up, just ask in Claude App to do something in MATLAB and MATLAB App will be launched.
Security warning: Explore the following at your own risk.
When working with Claude App, Claude code, and MATLAB, you are granting Claude AI access to read and write files. By default, you must approve (one time or forever) any action, so you hopefully don't clobber files etc. Claude App believes it cannot directly access files outside the top node defined in the setup. For this reason, I set the top node to be a folder ..../Documents/MATLAB. However, Claude inherits MATLAB App's command line privileges, typically your full system privileges. Claude can describe for you some work-arounds, like a Docker container, which might still be license-validation compatible. I have not explored such options. During my setup, Claude just provided me terminal commands to copy and run. After setup, I've demonstrated it can run system-level commands via matlab:evaluate_matlab_code and the MCP server. Be careful out there!
My first test
Claude can write a text-based .m script, execute it, collect text standard output from it, and open files it makes (or any file). It cannot access figures that you might see in MATLAB App unless they are saved as files or embedded in files. As we will see, the figures generated by a Live Script are saved in a Claude-accessible format when the Live Script is saved, so the code need not itself export them.
In the screen shot below, the window at left is the Claude App after a successful connection. The MATLAB App window shows a script in the MATLAB editor that simulates a ballistics experiment, the script created successfully with a terminal-interfaced Claude and a simple prompt on the first try.
I deliberately but trivially broke this script using MATLAB App interactively by commenting out a needed variable g (acceleration of gravity) and saving the script so the edit was accessible to Claude. Using Claude App after its connection, I fixed the script with a simple prompt and ran it successfully to make the figure you see. The visible MATLAB didn't know the code had been altered and fixed by Claude until I reloaded the file. Claude recommends plots be saved in PNG or JPEG, not PDF. It can describe in detail a plot in a PNG and thus judge whether the code is functioning correctly.
Live Scripts with Claude
What about Live Scripts (.mlx) and the (R2025a) .m Live format? A .mlx file is a zipped package of files mixing code and images with XML markup. You can peek inside one and edit it directly without unzipping and rezipping it using a tool like BBEdit on a Mac, as shown below (you can also unpack it programmatically, as sketched after this paragraph). This short test script has two interactive slider controls. In R2025a+ you can now save a .mlx in a transportable .m Live text file format. The .mlx and .m Live formats have special markup for formatted text, interactive features like sliders, and figures.
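Since a .mlx is just a zip package, MATLAB itself can unpack it for inspection; a small sketch (the file name is a placeholder):
% Unzip a Live Script package to inspect its XML, images, and metadata (sketch)
tmp = fullfile(tempdir, 'mlx_unpacked');
unzip('MySliderDemo.mlx', tmp);                 % placeholder file name
listing = dir(fullfile(tmp, '**', '*'));
disp({listing(~[listing.isdir]).name}')         % e.g. document.xml plus any saved PNG figures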
Claude can convert a vanilla .m file to .mlx using matlab.internal.liveeditor.openAndSave('source.m', 'dest.mlx') and the reverse with matlab.internal.liveeditor.openAndConvert('myfile.mlx', 'myfile.m').
These functions apparently do not support .m Live yet. It would be great if they did.
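For reference, a short sketch of the round trip using those functions (they are undocumented internals, so the calls may change between releases):
% Convert a plain .m script to a Live Script and back (undocumented API, may change)
matlab.internal.liveeditor.openAndSave('myscript.m', 'myscript.mlx');    % .m  -> .mlx
matlab.internal.liveeditor.openAndConvert('myscript.mlx', 'myscript.m'); % .mlx -> .m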
Vanilla Claude doesn’t know all the details of the .mlx and .m formats yet. To teach Claude these formats, I created a simple .mlx with interactive features (sliders), saved it, and exported it to .m Live. I then sketched for Claude these structures. Claude digested these files and was then able to create such formats from scratch immediately. Wow! In the figure below, you see Claude on the left after creating a .mlx with sliders from scratch. The successfully executed Live Script appears in the MATLAB App.
Once Claude knows the .mlx format, it can peek inside a saved .mlx, process a PNG figure saved there, and describe it, as illustrated below. Here you see Claude describing a plot it found in the .mlx shown in the MATLAB window.
Wrap up
This summarizes my first look at using claude-code. The generation of simple MATLAB codes with the latest models is increasingly trivial with Claude, Perplexity, OpenAI GPT, and other apps, and the direct method of code and other file transfer between a native MATLAB and Claude seems to beat a copy-paste workflow. New methods to connect MATLAB Online with commercial AIs like Comet and Anthropic's Chrome extension seem to me more fluid and intuitive, in that the user and the AI are co-working the same interface without copy-paste steps, but seem slower.
Appendix
Here are installation and setup instructions for Mac from Claude for reference.
Setting Up Claude Code + MATLAB MCP Server on Mac
Part 1: Claude Code
Install via npm:
bash
npm install -g @anthropic-ai/claude-code
Or via Homebrew:
bash
brew install claude-code
Run it:
bash
cd /path/to/your/project
claude
You'll need to authenticate with your Anthropic account on first run. Claude Code works in your terminal and can read/write files, run commands, and do agentic coding tasks.
Part 2: MATLAB MCP Server
Step 1: Download the server binary
Go to the MathWorks GitHub or File Exchange and download matlab-mcp-core-server for macOS (Apple Silicon or Intel depending on your Mac).
Step 2: Make it executable
bash
chmod +x ~/Downloads/matlab-mcp-core-server
Step 3: Create/edit Claude Desktop config
Edit ~/Library/Application Support/Claude/claude_desktop_config.json:
json
{
  "mcpServers": {
    "matlab": {
      "command": "/Users/YOURUSERNAME/Downloads/matlab-mcp-core-server",
      "args": ["--matlab-root", "/Applications/MATLAB_R2025a.app"]
    },
    "filesystem": {
      "command": "npx",
      "args": [
        "@modelcontextprotocol/server-filesystem",
        "/Users/YOURUSERNAME/Documents/MATLAB"
      ]
    }
  }
}
Replace YOURUSERNAME with your actual username, and adjust the MATLAB version if needed.
Step 4: Install Node.js (if not already)
bash
brew install node
Step 5: Restart Claude Desktop
Quit fully (Cmd+Q) and reopen. You should see a hammer/tools icon indicating MCP servers are connected.
Part 3: Verify Connection
In Claude Desktop, ask me to run MATLAB code. I should be able to execute:
matlab
disp('Hello from MATLAB!')
Troubleshooting
Check logs:
bash
cat ~/Library/Logs/Claude/mcp-server-matlab.log
cat ~/Library/Logs/Claude/mcp.log
Common issues:
  • Missing --matlab-root argument → "no valid MATLAB environments found"
Connecting Claude App to MATLAB via MCP Server
Edit ~/Library/Application Support/Claude/claude_desktop_config.json:
json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/YOURUSERNAME/Documents/MATLAB"
      ]
    },
    "matlab": {
      "command": "/Users/YOURUSERNAME/Downloads/matlab-mcp-core-server",
      "args": [
        "--matlab-root", "/Applications/MATLAB_R2025a.app"
      ]
    }
  }
}
Then fully quit Claude Desktop (Cmd+Q) and reopen.
Comet browser can figure out and operate a user interface on the web, including MATLAB Online. The screen shot shows MATLAB Online to the left of the Comet AI. You see a test Live Script with sliders that Comet created in a folder (that it also created). Comet is summarizing suggested improvements it requested of MATLAB Online's Copilot. Comet can plow into the arcane astronomical database interface SIMBAD, figure out how to grab information about, say, a star orbiting the black hole in the center of our galaxy, structure that information into a MATLAB data structure in a MATLAB script, run the script in MATLAB Online, and display the results in the structure; it succeeded on the first try. It can do a Google Scholar citation tree search and park the results in MATLAB (success on the first try), or presumably in, say, MS Word in Office Online. Presumably it can switch to a JupyterHub tab to then run a notebook or what you will. It can make mistakes, explore and discover GUI options, and recover (e.g., use undo or paste in MATLAB, operate in the command window, get help via the GUI if needed). Not at light speed, but this seems enabling. Just released, the Claude Chrome Extension beta has similar capabilities.
Paul
Last activity on 23 Dec 2025 at 15:15

Inspired in part by Christmas Trees, I'm curious about people's experience using AI to generate Matlab code.
1. Do you use AI to generate production code or just for experimentation/fun code?
2. Do you use the AI for a complete solution? Or is it more that the AI gets you most of the way there and you have to apply the finishing touches manually?
3. What level of quality would you consider the generated code? Does it follow "standard" Matlab coding practices? Is it well commented? Factored into modular functions? Argument checking? Memory efficient? Fast execution? Etc.?
4. Does the AI ever come up with a good or clever solution of which you wouldn't have thought or maybe of which you weren't even aware?
5. Is it easy/hard to express your requirements in a manner that the AI tool effectively translates into something useful?
6. Any other thoughts you'd care to share?
Give your LLM an easier time looking for information on mathworks.com: point it to the recently released llms.txt files. The top-level one is www.mathworks.com/llms.txt; for release changes, use www.mathworks.com/help/relnotes. How does it work for you?
(Requested for newer MATLAB releases (e.g. R2026B), MATLAB Parallel Processing toolbox.)
Lower precision array types have been gaining popularity over the years for deep learning. The current lowest-precision built-in array types offered by MATLAB are 8-bit arrays, e.g. int8 and uint8. A good thing is that these 8-bit array types do have gpuArray support, meaning that one is able to design GPU MEX codes that take in these 8-bit arrays and reinterpret them bit-wise as other 8-bit types, e.g. FP8, which is an especially common array type in modern deep learning applications. I myself have used this to develop forward-pass operations with 8-bit precision that are around twice as fast as 16-bit operations and with output arrays that still agree well with 16-bit outputs (measured with high cosine similarity). So the 8-bit support that MATLAB offers is already quite sufficient.
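To illustrate the idea on the MATLAB side, here is a rough sketch of storing raw FP8 bit patterns in a uint8 gpuArray that a custom CUDA MEX (hypothetical here) would reinterpret on the device:
% Pack arbitrary FP8 bit patterns into a uint8 gpuArray container (sketch)
nVals = 4096;
rawBits = gpuArray(uint8(randi([0 255], nVals, 1)));  % one byte per FP8 value
% y = fp8ForwardPassMex(rawBits);  % hypothetical MEX that reinterprets the bytes as FP8 on the GPU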
Recently, 4-bit precision array types have been shown also capable of being very useful in deep learning. These array types can be processed with Tensor Cores of more modern GPUs, such as NVIDIA's Blackwell architecture. However, MATLAB does not yet have a built-in 4-bit precision array type.
Just like MATLAB has int8 and uint8, both also with gpuArray support, it would also be nice to have MATLAB have int4 and uint4, also with gpuArray support.
Mike Croucher
Last activity on 16 Dec 2025 at 14:26

I thought it would be fun to vibe code Christmas trees.
Model: Claude Sonnet 4.5
Prompt: Write a MATLAB script that draws a Christmas tree. There should be decorations and presents too.
Result:
christmasTree()
function christmasTree()
% Christmas Tree with Decorations and Presents
% Creates a festive scene with a decorated tree and gift boxes
figure('Color', 'w', 'Position', [100, 100, 800, 900]);
hold on;
axis equal;
axis off;
xlim([-3, 3]);
ylim([0, 10]);
% Draw the tree (three layered triangular sections)
drawTreeSection(0, 2, 2.5, 2.5, [0.1, 0.5, 0.1]);
drawTreeSection(0, 3.5, 2, 2, [0.15, 0.55, 0.15]);
drawTreeSection(0, 4.8, 1.5, 1.8, [0.2, 0.6, 0.2]);
% Tree trunk
rectangle('Position', [-0.3, 0, 0.6, 2], 'FaceColor', [0.4, 0.2, 0.1], ...
'EdgeColor', 'none');
% Add decorations
addOrnaments();
addTinsel();
% Star on top
drawStar(0, 7.2, 0.4, [1, 0.84, 0]);
% Add presents under the tree
drawPresent(-1.8, 0, 0.6, 0.5, [0.8, 0.1, 0.1], [1, 0.84, 0]);
drawPresent(-0.8, 0, 0.5, 0.6, [0.1, 0.3, 0.8], [0.9, 0.9, 0.9]);
drawPresent(1.2, 0, 0.7, 0.4, [0.7, 0.5, 0.9], [0.2, 0.8, 0.2]);
drawPresent(0.2, 0, 0.4, 0.5, [1, 0.7, 0.1], [0.8, 0.1, 0.1]);
% Add some snow on the ground
scatter(randn(1, 100)*2.5, rand(1, 100)*0.3, 20, 'w', 'filled', ...
'MarkerEdgeColor', [0.8, 0.8, 0.8]);
title('Merry Christmas!', 'FontSize', 24, 'Color', [0.8, 0.1, 0.1], ...
'FontWeight', 'bold');
hold off;
end
function drawTreeSection(x, y, width, height, color)
% Draw a triangular section of the tree
xPoints = [x - width, x + width, x];
yPoints = [y, y, y + height];
fill(xPoints, yPoints, color, 'EdgeColor', [0.05, 0.3, 0.05], ...
'LineWidth', 1.5);
end
function addOrnaments()
% Add colorful baubles to the tree
ornamentColors = [
1, 0, 0; % Red
0, 0, 1; % Blue
1, 0.84, 0; % Gold
0.9, 0.1, 0.5;% Pink
0.5, 0, 0.5 % Purple
];
% Ornament positions (x, y coordinates)
positions = [
-1.5, 2.5; 1.5, 2.5; 0, 3.2;
-1.2, 3.8; 1.2, 3.8; -0.6, 4.2; 0.6, 4.2;
-0.8, 5.2; 0.8, 5.2; 0, 5.8; -0.4, 6.3; 0.4, 6.3
];
for i = 1:size(positions, 1)
colorIdx = mod(i-1, size(ornamentColors, 1)) + 1;
x = positions(i, 1);
y = positions(i, 2);
% Draw bauble
theta = linspace(0, 2*pi, 50);
radius = 0.15;
xCirc = x + radius * cos(theta);
yCirc = y + radius * sin(theta);
fill(xCirc, yCirc, ornamentColors(colorIdx, :), ...
'EdgeColor', ornamentColors(colorIdx, :)*0.7, 'LineWidth', 1.5);
% Add highlight for shine
highlightX = x - radius*0.4 + radius*0.3*cos(theta(1:15));
highlightY = y + radius*0.4 + radius*0.3*sin(theta(1:15));
fill(highlightX, highlightY, [1, 1, 1], 'EdgeColor', 'none', ...
'FaceAlpha', 0.6);
end
end
function addTinsel()
% Add sparkly tinsel effect
tinselX = [-1.8, -1.2, -0.5, 0.2, 0.9, 1.6];
tinselY = [2.8, 3.5, 4.0, 4.5, 5.0, 5.5];
for i = 1:length(tinselX)
plot(tinselX(i) + randn(1, 5)*0.1, tinselY(i) + rand(1, 5)*0.2, ...
'*', 'Color', [1, 0.84, 0], 'MarkerSize', 8, 'LineWidth', 1.5);
end
end
function drawStar(x, y, size, color)
% Draw a 5-pointed star on top of the tree
angles = linspace(0, 2*pi, 11);
radii = repmat([size, size*0.4], 1, 5);
radii = [radii, radii(1)];
xStar = x + radii .* cos(angles - pi/2);
yStar = y + radii .* sin(angles - pi/2);
fill(xStar, yStar, color, 'EdgeColor', [0.8, 0.6, 0], 'LineWidth', 2);
% Add sparkle
plot(x, y, '*', 'Color', [1, 1, 1], 'MarkerSize', 15, 'LineWidth', 2);
end
function drawPresent(x, y, width, height, boxColor, ribbonColor)
% Draw a gift box with ribbon and bow
% Box
rectangle('Position', [x, y, width, height], 'FaceColor', boxColor, ...
'EdgeColor', boxColor*0.7, 'LineWidth', 2);
% Vertical ribbon
ribbonWidth = width * 0.15;
rectangle('Position', [x + width/2 - ribbonWidth/2, y, ribbonWidth, height], ...
'FaceColor', ribbonColor, 'EdgeColor', 'none');
% Horizontal ribbon
ribbonHeight = height * 0.15;
rectangle('Position', [x, y + height/2 - ribbonHeight/2, width, ribbonHeight], ...
'FaceColor', ribbonColor, 'EdgeColor', 'none');
% Bow on top
bowX = x + width/2;
bowY = y + height;
bowSize = width * 0.2;
% Left loop
theta = linspace(0, pi, 30);
fill(bowX - bowSize*0.3 + bowSize*0.5*cos(theta), ...
bowY + bowSize*0.5*sin(theta), ribbonColor, 'EdgeColor', 'none');
% Right loop
fill(bowX + bowSize*0.3 + bowSize*0.5*cos(theta), ...
bowY + bowSize*0.5*sin(theta), ribbonColor, 'EdgeColor', 'none');
% Center knot
theta = linspace(0, 2*pi, 30);
fill(bowX + bowSize*0.25*cos(theta), bowY + bowSize*0.25*sin(theta), ...
ribbonColor*0.8, 'EdgeColor', 'none');
end
I like this quote, what do you think?
"If the part of programming you enjoy most is the physical act of writing code, then agents will feel beside the point. You’re already where you want to be, even just with some Copilot or Cursor-style intelligent code auto completion, which makes you faster while still leaving you fully in the driver’s seat about the code that gets written.
But if the part you care about is the decision-making around the code, agents feel like they clear space. They take care of the mechanical expression and leave you with judgment, tradeoffs, and intent. Because truly, for someone at my experience level, that is my core value offering anyway. When I spend time actually typing code these days with my own fingers, it feels like a waste of my time."
— Obie Fernandez, What happens when the coding becomes the least interesting part of the work
Hi everyone
I've been using ThingSpeak for several years now without an issue until last Thursday.
I have four ThingSpeak channels which are used by three Arduino devices (in two locations/on two distinct networks) all running the same code.
All three devices stopped being able to write data to my ThingSpeak channels around 17:00 CET on 4 Dec and are still unable to.
Nothing changed on this side, let alone something that would explain the problem.
I would note that data can still be written to all the channels via a browser so there is no fundamental problem with the channels (such as being full).
Since the above date and time, any HTTP/1.1 'update' (write) requests via the REST API (using both simple one-write GET requests or bulk JSON POST requests) are timing out after 5 seconds and no data is being written. The 5 second timeout is my Arduino code's default, but even increasing it to 30 seconds makes no difference. Before all this, responses from ThingSpeak were sub-second.
I have recompiled the Arduino code using the latest libraries and that didn't help.
I have tested the same code against another random API (api.ipify.org) and that works just fine.
Curl works just fine too, also using HTTP/1.1.
So the issue appears to be something particular to the combination of my Arduino code *and* the ThingSpeak environment, where something changed on the ThingSpeak end at the above date and time.
If anyone in the community has any suggestions as to what might be going on, I would greatly appreciate the help.
Peter
The first round of the Cody Contest 2025 is drawing to an end, and those who have tried to tackle Problem 61069. Clueless - Lord Ned in the Game Room with the Technical Computing Language probably didn't think, like me initially, that a vectorized solution was feasible.
Indeed, the problem is difficult enough, so that the first solution is more easily drafted using lots of for loops and conditionals.
Yet studying in depth how to vectorize the solution and get rid of redundancies helped me uncover the deeper mechanics of the algorithm and see the problem in a new light, making it progressively appear simpler than on its first encounter.
Obstacles to overcome
Vectorization depends highly on the properties of the knowledge matrix, a 3D-matrix of size [n, 3, m] storing our current knowledge about the status for each card of each category for all players.
I remember that initially, I was intent on keeping close together these two operations: assigning a YES to a player for a given card and category, and consequently assigning NOs to all other players.
I did not want to set them apart. My fear was that, if you did not keep track and updated the knowledge matrix consistently, you might end up with a whole mess making it impossible to guess what’s in the envelope!
That seemed important because, as one gradually retrieves information from the turns and revisits them, one assigns more and more YESs and narrows down the possible candidates for the cards hidden in the envelope.
For example, @JKMSMKJ had successfully managed to combine those two instructions in one line (Solution 14889208), like this (here 0 encodes NO and 1 encodes YES):
allplayers = 1:(m+1);
K(card, category,:) = allplayers == player;
For some time, I thought that was the nicest way to express it, even though you had to handle the individual card, category and player with lots of loops and conditionals.
Watching @JKMSMKJ's repeated efforts to rewrite and improve his code showed me different ways to arrange the same instructions. It appeared to me that there was indeed a way to vectorize the solution, if only we accept separating the two distinct operations of assigning a value of YES and updating the knowledge matrix for consistency.
So let’s see how this can be done. We will use the following convention introduced by @Stefan Abendroth: NO = 0, MAYBE= 1, YES = values > 1. The reason for that choice is that it will greatly simplify computations, as it will become apparent later.
Initialisation
First, initialising a matrix of MAYBEs and adding in the information from our own cards is pretty straightforward:
K = ones(n, 3, m+1);   % n cards x 3 categories x (m+1) players (common cards act as player m+1)
K(:,:,pnum) = 0;
allcategories = 1:3;
for category = allcategories
    K(yourcards{category}, category, pnum) = 2; % = YES
end
The same thing can be done for the common cards considered as the (m+1)th player.
Next, we’d like to retrieve information for the turns and insert it into the matrix.
The 1rst column of the turn matrix gives us a vector of the players.
The 2nd to 4th columns conveniently give us a 3 column matrix of the values of the cards asked.
players = turns(:,1);
cards = turns(:,2:4);
result = turns(:,5);
Now suppose we have similar 3-column matrices of the exact same size for the players and for the categories, such as:
categories =
1 2 3
1 2 3
1 2 3
1 2 3
1 2 3
1 2 3
1 ...
players =
5 5 5
6 6 6
1 1 1
2 2 2
6 6 6
4 ...
It would then be nice to be able to write something like:
K(cards, categories, players) = 0; % or 1 or 2 depending on the desired assignment according to result
Unfortunately, that is not possible when we have multiple indices that are not scalars.
A workaround is to use what are called linear indices, which are the indices into the matrix when it is viewed as one very long column vector, and which can be computed with the function sub2ind:
[categories, players] = meshgrid(1:3, turns(:,1));
sz = [n, 3, m];
ind = sub2ind(sz, cards, categories, players);
K(ind) = 0; % or 1 or 2 depending on the desired assignment according to result
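As a quick sanity check, here is a tiny made-up example (not taken from the contest code) of what sub2ind computes; the array size and subscripts are arbitrary:
sz = [4 3 2];                             % a hypothetical 4-by-3-by-2 array
sub2ind(sz, 2, 3, 1)                      % 2 + (3-1)*4 + (1-1)*4*3 = 10
sub2ind(sz, [1 2], [3 3], [1 2])          % subscript arrays of matching size yield one linear index per triplet: [9 22]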
Ensuring everybody else has a NO
Next, let’s see how to update the matrix. We now suppose the YESs have been correctly assigned into the matrix.
Wherever we've identified the location of a card (a YES), all other players must be assigned a NO. For each card, there can be only one YES across all players.
Because of that, that YES is the maximum value across all layers of the matrix K. Using the function max on K along its third dimension reduces the 3rd dimension to 1, yielding a 2d matrix. To locate the YESs, we can then compare the values of that 2d matrix with all the layers of K.
maxcard = max(K,[],3); % returns a n-by-3 matrix
is_a_yes = K == maxcard;
K == maxcard compares maxcard with each layer of K, yielding a 3d matrix of logicals of the same size as K, where 1 indicates a YES and 0 indicates “not a YES”.
Ten years ago, we'd have needed the function bsxfun to perform that operation, but since then, implicit expansion of arrays with compatible sizes has been added to MATLAB. Isn't it nice?
Now, to transform any MAYBE (a 1) into a NO (a 0), while keeping the existing YESs, MAYBEs, and NOs unmodified, we only need to multiply that matrix element-by-element with K!
%% Update knowledge matrix: if someone has a >1 ("YES"), everyone else must have a 0 ("NO")
maxcard = max(K,[],3);
K = K .* (K == maxcard);
That expression can be read as “keep the value of K wherever K is equal to its max, but set 0 elsewhere”. If the maximum is a MAYBE, it will stay a MAYBE.
Such an expression may take some getting used to. But such a one-liner is immensely powerful. Imagine that one day the rules of the game change, or that this requirement is not useful any more (that happens all the time in real life): we can then easily comment out just that one line without impacting the rest of the program.
Confirming a player's hand when we have determined the (3n - ncards) cards they don't have
After information has been retrieved from the turns, we can examine each player's hand: if we have narrowed a player's cards down to ncards possible candidates, excluding all others, then these must be the cards that they hold. That means that their MAYBE cards become YESs.
Locating a player’s hand amounts to locating all the strictly positive values in the matrix:
playerhand = K(:,:,p);
player_complete = sum(playerhand(:) > 0) == ncards;
That operation can actually be vectorized along all players. Summing the matrix of logicals (K>0) along the first two dimensions yields a 1-by-1-by-(m+1) matrix, akin to a vector containing the number of card candidates for each player, which we can compare to ncards.
player_complete = sum(K>0, 1:2) == ncards;
We need to turn the MAYBEs into YESs for the players whose complete hand of ncards we have thus deduced, which can be written as a simple multiplication by 2:
K(:,:,player_complete) = 2 * K(:,:,player_complete)
The 0s (NOs) will remain 0s, the MAYBEs will become 2s, and the YESs will be multiplied too, but still stay YESs (>1).
But since 2 .^ 0 = 1 and 2 .^ 1 = 2, there’s an even nicer way to write that calculation:
K = K .* 2 .^ player_complete;
which reads like “we multiply K by 2 wherever a player’s hand is complete”. All thanks to Array Size Compatibility!
That expression is nicer because we need not explicitly assign the operation to the 3rd dimension of K. Suppose that one day, for whatever reason (performance optimisation or a change of requirements), information about the players is not stored along the 3rd dimension any more: that code would NOT need to change, whereas K(:,:,player_complete) would need to be adjusted.
That’s how elegant MATLAB can be!
Checking whether a player’s hand is complete
What we checked previously is equivalent to checking that the number of NOs (the number of cards a player does not have) was equal to 3*n - ncards.
What we didn't do is check whether the sum of YESs is equal to ncards and then transform all remaining MAYBEs for that player into NOs.
That will not be necessary because of the implementation of the next rule.
Because the information provided to play the game is assumed to be sufficient to guess the missing cards, it means that the YESs and NOs will gradually populate the matrix, so that any remaining MAYBE will be determined.
Identifying each category's missing card when (n-1) cards are known
Each category only has n cards, which means that once (n-1) cards are correctly located, the remaining card can only be a NO for everyone.
Because a card can be in only one player's hand, we can reuse the maximum of K across all players that we previously computed. It is an n-by-3 2d matrix where the values > 1 are the YESs. Using the function sum adds up all the YESs found for each category, yielding a vector of 3 values containing the number of cards correctly located.
maxcard = max(K,[],3);
category_complete = sum(maxcard > 1) == n-1;
When a category is complete, the last remaining MAYBE should become a NO, without modifying the YESs. A clever way is to multiply the value by itself minus one:
K(:,category_complete,:) = K(:,category_complete,:) .* (K(:,category_complete,:) - 1)
which, using the same exponentiation technique as previously, can be nicely and compactly rewritten as:
K = K .* (K-1) .^ category_complete;
Because the YESs are > 1, we can even compute that more simply like this (as Stefan Abendroth put it in Solution 14900340):
K = K .* (category_complete < K);
Extracting the index of the missing cards
After looping several times to extract all possible information, the last thing that remains to be done is computing the values of the missing cards. They are the only cards left as NO for every player in the knowledge matrix, and hence the only zeros in the 2d matrix maxcard as well:
maxcard = max(K,[],3);
[sol,~] = find(maxcard == 0);
Conclusion
I previously mentioned being bothered by matrix indexing such as K(:,:,player) because such code seems fragile if the organisation of the matrix ever changes. Such an instruction would benefit from being "encapsulated" if the need arises.
One of my main concerns has always been writing maintainable MATLAB code, having worked in organisations where code piled up almost every day, making it gradually more difficult and time-consuming to add and enhance functionality if not properly managed.
On the one hand, elegant vectorization leads us to group things together and handle them uniformly and efficiently, "in batches". On the other hand, "separation of concerns", one of software development's principles and good practices, would advise us to keep parts small, modular, and able to take care of themselves, achieving higher abstraction.
How do we keep different requirements independent, so that they do not impact each other if any one of them needs to change? And how do we exploit vectorization extensively for performance? These two opposing forces are what make developing modular and efficient MATLAB code a challenge that no other language faces in the same way, in my opinion.
Seeing the rules of the game as a sequence of multiplications applied to the matrix K simultaneously reduces code size and reveals a deeper aspect of the algorithm: because multiplication is commutative and associative, we can apply them in any order, and we can also see them as independent "operators" that we could apply elsewhere.
---
I hope those explanations can help you better appreciate the beauty of vectorization and make it seem less daunting.
There are many other strokes of inspiration that emerged from the different solvers tackling Problem 61069. Clueless - Lord Ned in the Game Room with the Technical Computing Language, and I am the first to be amazed by them.
I wish to see more of such cooperative brilliance and healthy emulation everywhere! Thanks so much to the Cody Contest team for setting up such a fun and rewarding experience.

Building Transition Matrices for the Royal Game of Err

King Neduchadneddar the Procrastinator has devised yet another scheme to occupy his court's time, and this one is particularly devious. The Royal Game of Err involves moving pawns along a path of n squares by rolling an m-sided die, with forbidden squares thrown in just to keep things interesting. Your mission, should you choose to accept it, is to construct a transition matrix that captures all the probabilistic mischief of this game. But here's the secret: you don't need nested loops or brute force. With the right MATLAB techniques, you can build this matrix with the elegance befitting a Chief Royal Mage of Matrix Computations.

The heart of this problem lies in recognizing that the transition matrix is dominated by a beautiful superdiagonal pattern. When you're on square j and roll the die, you have a 1/m chance of moving to each of squares j+1, j+2, up to j+m, assuming none are forbidden and you don't overshoot. This screams for vectorized construction rather than element-by-element assignment. The key weapon in your arsenal is MATLAB's ability to construct multiple diagonals simultaneously using either repeated calls to diag with offset parameters, or the more powerful spdiags function for those comfortable with advanced matrix construction.

Consider this approach: start with a zero matrix and systematically add 1/m to each of the m superdiagonals. For a die with m sides, you're essentially saying "from square j, there's a 1/m probability of landing on j+k for k = 1 to m." This can be accomplished with a simple loop over k, using T = T + diag(ones(1,n-k)*(1/m), k) for each offset k from 1 to m. The beauty here is that you're working with entire diagonals at once, not individual elements. This vectorized approach is not only more elegant but also more efficient and less error-prone than tracking indices manually.
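To make this concrete, here is a minimal sketch of that diagonal-by-diagonal construction, using small example values for n and m (an illustration of the technique only, not a complete solution to the problem):
% Sketch: basic superdiagonal structure for an n-square board and an m-sided die
n = 8; m = 3;                             % example sizes, chosen arbitrarily
T = zeros(n);
for k = 1:m
T = T + diag(ones(1, n-k)*(1/m), k);      % put probability 1/m on the k-th superdiagonal
end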

Figure 1: Basic transition matrix structure for n=8, m=3, no forbidden squares. Notice the three superdiagonals carrying probability 1/3 each.

Now comes the interesting part: handling forbidden squares. When square j is forbidden, two things must happen simultaneously. First, you cannot land ON square j from anywhere, which means column j should be entirely zeros. Second, you cannot move FROM square j to anywhere, which means row j should be entirely zeros. The naive approach would involve checking each forbidden square and carefully adjusting individual elements. The elegant approach recognizes that MATLAB's logical indexing was practically designed for this scenario.

Here's the trick: once you've built your basic superdiagonal structure, handle all forbidden squares in just two lines: T(nogo, :) = 0 to eliminate all moves FROM forbidden squares, and T(:, nogo) = 0 to eliminate all moves TO forbidden squares. But wait, there's more. When you zero out these entries, the probabilities that would have gone to those squares need to be redistributed. This is where the "stay put" mechanism comes in. If rolling a 3 would land you on a forbidden square, you stay where you are instead. This means adding those lost probabilities back to the main diagonal.
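In code, that zeroing step really is just the two indexing assignments mentioned above; nogo here stands for a vector of forbidden square indices (the example values are made up), and redistributing the removed probability is handled separately afterwards:
% Sketch: remove all transitions into and out of the forbidden squares
nogo = [2 5];                             % example forbidden squares
T(nogo, :) = 0;                           % no moves FROM a forbidden square
T(:, nogo) = 0;                           % no moves TO a forbidden square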

The sophisticated approach uses logical indexing to identify which transitions would have violated the forbidden square rule, then redirects those probabilities to the diagonal. You can check if a move from square j to square k would hit a forbidden square using ismember(k, nogo), and if so, add that 1/m probability to T(j,j) instead. This "probability conservation" ensures that each row still sums to 1, maintaining the stochastic property of your transition matrix.

Figure 2: Transition matrix with forbidden squares marked. Left: before adjustment. Right: after forbidden square handling showing probability redistribution. Compare the diagonal elements.

The final square presents its own challenge. Once you reach square n, the game is over, which in Markov chain terminology means it's an "absorbing state." This is elegantly represented by setting T(n,n) = 1 and ensuring T(n, j) = 0 for all j not equal to n. But there's another boundary condition that's equally important: what happens when you're on square j and rolling the die would take you beyond square n?

The algorithm description provides a clever solution: you stay put. If you're on square n-2 and roll a 4 on a 6-sided die, you don't move. This means that for squares near the end, the diagonal element T(j,j) needs to accumulate probability from all those "overshooting" scenarios. Mathematically, if you're on square j and rolling k where j+k exceeds n, that 1/m probability needs to be added to T(j,j). A clean way to implement this is to first build the full superdiagonal structure as if the board were infinite, then add (1:m)/m to the last m elements of the diagonal to account for staying put.

There's an even more elegant approach: build your superdiagonals only up to where they're valid, then explicitly calculate how much probability should stay on the diagonal for each square. For square j, count how many die outcomes would either overshoot n or hit forbidden squares, multiply by 1/m, and add to T(j,j). This direct calculation ensures you've accounted for every possible outcome and maintains the row-sum property.
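A sketch of that per-square bookkeeping, continuing the example above (this follows the counting idea just described; it is one reasonable implementation, not necessarily the intended one):
% Sketch: keep on the diagonal the probability of die outcomes that overshoot
% square n or land on a forbidden square ("stay put"), then mark square n absorbing
for j = 1:n-1
if ismember(j, nogo)
continue                                  % forbidden squares keep their all-zero row
end
targets = j + (1:m);                      % the m possible landing squares from square j
invalid = targets > n | ismember(targets, nogo);
T(j, j) = T(j, j) + sum(invalid)/m;       % stay-put probability for square j
end
T(n, n) = 1;                              % reaching square n ends the game (absorbing state)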

Figure 3: Heat map showing probability distributions from different starting squares. Notice how probabilities "pile up" at the diagonal for squares near the boundary.

Now that you understand the three key components, the construction strategy becomes clear. Initialize your n-by-n zero matrix. Build the basic superdiagonal structure to represent normal movement. Identify and handle forbidden squares by zeroing rows and columns, then redistributing lost probability to the diagonal. Finally, ensure boundary conditions are met by setting the final square as absorbing and handling the "stay put" cases for near-boundary squares.

The order matters here. If you handle forbidden squares first and then build diagonals, you might overwrite your forbidden square adjustments. The cleanest approach is to build all m superdiagonals first, then make a single pass to handle both forbidden squares and boundary conditions simultaneously. This can be done efficiently with a vectorized check: for each square j, count valid moves, calculate stay-put probability, and update T(j,j) accordingly.

Figure 4: Complete transition matrix for a test case with n=7, m=4, nogo=[2 5]. Spy plot showing the sparse structure alongside a color-coded heat map. Notice the complex pattern of probabilities.

Before declaring victory over King Neduchadneddar, verify your matrix satisfies the fundamental properties of a transition matrix. First, every element should be between 0 and 1 (probabilities, after all). Second, each row should sum to exactly 1, representing the fact that from any square, you must end up somewhere (even if it's staying put). You can check this with all(abs(sum(T,2) - 1) < 1e-10) to account for floating-point arithmetic.

The provided test cases offer another validation opportunity. Start with the simplest cases where patterns are obvious, like n=8, m=3 with no forbidden squares. You should see a clean superdiagonal structure. Then progress to cases with forbidden squares and verify that columns and rows are properly zeroed. The algorithm description even provides example matrices for you to compare against. Pay special attention to the diagonal elements, as they're where most of the complexity hides.

Figure 5: Validation dashboard showing row sums (should all be 1), matrix properties, and comparison with expected structure for a simple test case.

For those seeking to optimize their solution, consider that for large n, explicitly storing an n-by-n dense matrix becomes memory-intensive. Since most elements are zero, MATLAB's sparse matrix format is ideal. Replace zeros(n) with sparse(n,n) at initialization. The same indexing and diagonal operations work seamlessly with sparse matrices, but you'll save considerable memory for large problems.

Another sophistication involves recognizing that the transition matrix construction is fundamentally about populating a banded matrix with some modifications. The spdiags function was designed for exactly this scenario. You can construct all m superdiagonals in a single call by preparing a matrix where each column represents one diagonal's values. While the syntax takes some getting used to, the resulting code is remarkably compact and efficient.
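For the curious, a single-call construction might look something like the sketch below. Because every superdiagonal carries the same value 1/m, the column-alignment subtleties of spdiags do not matter here:
% Sketch: all m superdiagonals of an n-by-n sparse matrix in one call
n = 8; m = 3;                             % example sizes
T = spdiags(repmat(1/m, n, m), 1:m, n, n);% diagonals 1..m, each filled with 1/m
spy(T)                                    % inspect the resulting band structure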

For debugging purposes, visualizing your matrix at each construction stage helps immensely. Use imagesc(T) with a colorbar to see the probability distribution, or spy(T) to see the non-zero structure. If you're not seeing the expected patterns, these visualizations immediately reveal whether your diagonals are in the right positions or if forbidden squares are properly handled.

Figure 6: Performance comparison showing construction time and memory usage for dense vs sparse implementations as n increases.

King Neduchadneddar may have thought he was creating an impossible puzzle, but armed with MATLAB's matrix manipulation prowess, you've seen that elegant solutions exist. The key insights are recognizing the superdiagonal structure, handling forbidden squares through logical indexing rather than explicit loops, and carefully managing boundary conditions to ensure probability conservation. The transition matrix you've constructed doesn't just solve a Cody problem; it represents a complete probabilistic model of the game that could be used for further analysis, such as computing expected game lengths or steady-state probabilities.

The beauty of this approach lies not in clever tricks but in thinking about the problem at the right level of abstraction. Rather than considering each element individually, you've worked with entire diagonals, rows, and columns. Rather than writing conditional logic for every special case, you've used vectorized operations that handle all cases simultaneously. This is the essence of MATLAB mastery: letting the language's strengths work for you rather than against you.

As Vasilis Bellos demonstrated with the Bridges of Nedsburg, sometimes the most satisfying part of a Cody problem isn't just getting the tests to pass, but understanding the mathematical structure deeply enough to implement it elegantly. King Neduchadneddar would surely be impressed by your matrix manipulation skills, though he'd probably never admit it. Now go forth and construct those transition matrices with the confidence of a true Chief Royal Mage of Matrix Computations. The court awaits your solution.

Note: This article provides strategic insights and techniques for solving Problem 61067 without revealing the complete solution. The figures reference MATLAB Mobile scripts created by me that demonstrate key concepts. For the full Cody Contest 2025 experience and to test your implementation, visit the problem page, and may your matrices always be stochastic.

David
Last activity on 3 Dec 2025

In a recent blog post, @Guy Rouleau writes about the new Simulink Copilot Beta. Sign ups are on the Copilot Beta page below. Let him know what you think.
As @Vasilis Bellos has neatly summarized here, in order to solve Problem 61069. Clueless - Lord Ned in the Game Room with the Technical Computing Language from the Cody Contest 2025, there are 4 rules to take into account and implement:
  1. If a player has a card, no other player has it
  2. If a player has ncards confirmed, they have no other cards
  3. If (n - 1) cards in a category are located, the nth card is in the envelope
  4. If a player has (3n - ncards) confirmed cards that they don't have, they must have the remaining unknown cards
As suggested in the problem statement, one natural way to attempt to solve the problem leads to storing the status of our knowledge about all the cards in an array, specifically a 3d matrix of size n by 3 by m.
Such a matrix is especially convenient because K(card, category, player) directly yields the knowledge status we have about a given card and category for a given player.
It also enables us to check the knowledge status:
  • across all players for a given card and category, with K(card, category, :) (needed for rule 1)
  • about the cards that a given player holds in his hand: K(:, :, player) yields a 2d slice of size n by 3 (needed for rules 2 and 4)
  • of the location of the n-1 cards for each category: K(:, category, :) (needed for rule 3)
The question then arises of how to encode the information about the status of the cards for each player: unknown, maybe, definitely has, or definitely has not.
It quickly appears that there is no difference between “unknown” and “maybe”.
Therefore only three distinct values are needed, to encode “YES”, “NO”, and “MAYBE”.
I would like to discuss how the choice of these values has an impact on how we can manage to vectorize the solution to the problem (especially since a vectorized solution does not immediately appear) and make computations more elegant and easier to follow.
The 3D matrix naturally suggests the use of the functions sum, max, and min across any of its 3 dimensions to perform the required computations. As such, the values 0, 1, NaN, and Inf can all play a very important role in storing our knowledge about the presence or absence of the cards throughout our deductions.
However, after having a look at the submitted solutions, what has struck me is that the majority of solvers (about two thirds) chose to encode MAYBE = 0, NO = -1, and YES = 1.
I wonder whether that was because they were influenced by the way the problem is stated, or because they are "naturally" inclined to consider "MAYBE" to be "between" NO and YES.
The hierarchy we choose is important because it will influence the way we can make use of max and min. Also, 0 is a very important value because it "absorbs" all multiplied values during computations. Why give "MAYBE" such an important value?
My personal first intuition was to encode NO = 0 and YES = 1, and then something completely apart for MAYBE, either NaN or (-1), the advantage of -1 being that it can easily be transformed into 0 or 1.
In my mind, that way makes it easier:
  • to count the YESs : sum( K > 0)
  • to count the NOs : sum( K == 0 )
  • to find the last remaining NOs : find( K(…) == 0)
  • to count the MAYBEs or YESs (the “not NOs”) : sum( abs(K(…)) ) or sum( K(…) ~= 0 )
  • to convert MAYBE into YES with information from the turns without modifying other cards’ statuses : K( … ) = abs(K( … )) or K(…) = K( … ).^2
  • to convert MAYBE into NO once a card is located elsewhere without modifying other cards’ statuses : K(…) = max(0, K( … ))
(You can have a look at the vectorized solution I devised using that encoding: Solution 14893448)
Of course, we can devise similar operations if we choose to encode MAYBE = 0, NO = -1, and YES = 1, such as:
  • to count the YESs : sum( K > 0)
  • to count the NOs : sum( K < 0 )
  • to find the last remaining NOs : find( K(…) < 0)
  • to count the MAYBEs or YESs (the “not NOs”) : sum( K(…) >= 0)
  • to convert MAYBE into YES with information from the turns without modifying other cards’ statuses : K(… ) = min(1, 2*K(…) + 1)
  • to convert MAYBE into NO once a card is located elsewhere without modifying other cards’ statuses : K(…) = max(-1, (2*K(…) - 1 )) (already used in Matt Tearle’s Solution 14843428)
I find those functions somewhat more cumbersome, and of course they don't help reduce Cody size. I tried devising a solution using that encoding, which you can check out too and see how twisted it looks: Solution 14904420 (it can still be optimised, I believe, but I find it hard to get my head around it...)
At some point, I also considered devising a solution combining 0, 1 and Inf or -Inf, but the problem was that 0 * Inf = NaN, not very practical in the end.
The real breakthrough came when @Stefan Abendroth submitted a solution using the following convention: MAYBE = 1, NO = 0, and YES = any number > 1 (Solution 14896297).
He used the following functions :
  • to convert MAYBE into YES with information from the turns without modifying other cards’ statuses : K(…) = 2 * K(…) (such a simple function!)
  • to convert MAYBE into NO once a card is located elsewhere without modifying other cards’ statuses : K(…) = bitand(K(…), 254), which was later optimised and became even simpler after several iterations.
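To see why that encoding is so convenient, here is a tiny demonstration on a made-up vector of statuses (values chosen purely for illustration):
K = [0 1 2 4]                             % NO, MAYBE, YES, YES
2 * K                                     % MAYBE becomes a YES (2); NOs stay 0; existing YESs stay > 1
bitand(K, 254)                            % MAYBE becomes a NO (0); NOs and even-valued YESs below 256 are unchanged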
The current leading solution uses that encoding and is really worth a close examination in my opinion, because it actually compacts the computation in such an elegant way, in just a few instructions.
Opening up the space of the values that encode YES and exploiting the properties of 0 and 1 for algebraic operations, shows in a profound way how to use the set of natural numbers, an idea that doesn’t come immediately to my mind as I am so used to thinking in vector spaces and linear algebra.
Interestingly enough, the first solution that Stefan submitted (Solution 14848390) already encoded MAYBE as 1, NO as 0 and YES as 2. I wonder where that intuition comes from.
I have seen two other solvers use MAYBE = 2 / NO = 0 / YES = 1, (at least) three that used MAYBE = -1 / NO = 0 / YES = 1, and several others using various systems of their own.
I hope this example showcases how different matrix encodings reveal different thinking processes, and how the creative search for a more efficient encoding (motivated by the reduction in Cody size) has (unexpectedly?) led to a brilliant and elegant vectorized solution.
Another proof of how Cody can provide so much instruction and fun!
Having tackled a given problem is not the end of the game, and the fun is far from over. Thanks to the test suite in place, we can continue tweaking our solutions ("refactoring") so that they still pass the tests while improving their ranking with regard to "Cody size".
Although reducing the Cody size does not necessarily mean a solution will perform more efficiently nor be more readable (quite the contrary, actually…), it is a fun way to delve into the intricacies of MATLAB code and maybe win a Cody Leader badge!
I am not talking about just basic hacks. The size constraint urges us to find an "out-of-the-box" way of solving a problem, a way of thinking creatively, of finding other means to achieve a desired computation result, that uses fewer variables, that is less cumbersome, or that is more refined.
The past few days have taught me several useful tricks that I would like to share with anyone wishing to reduce the solution size of their Cody submission. Happy to learn about other tricks you may know of, please share!
  1. Use this File Exchange submission to get the size of your solution: https://fr.mathworks.com/matlabcentral/fileexchange/34754-calculate-size
  2. Use existing MATLAB functions that may already perform the desired calculations but that you might have overlooked (as I did with histcounts and digraph).
  3. Use vectorization amply. It's what makes the MATLAB language so concise and elegant!
  4. Before creating a matrix of replicated values, check if your operation really requires it. See Compatible Array Sizes for Basic Operations. For example, you can write x == y with x being a column vector and y a row vector, thereby obtaining a matrix of all pairwise comparisons.
  5. Try writing out for loops instead of vectorization. Sometimes it’s actually smaller from a Cody point of view.
  6. Avoid nested functions and subfunctions. Try anonymous functions if used in several places. (By all means, DO write nested functions and subfunctions in real life!)
  7. Avoid variable assignments. If you declare variables, look for ones you can use in multiples places for multiple purposes. If you have a variable used only in one place, replace it with its expression where you need it. (Do not do this in real life!)
  8. Instead of variable assignments, write hardcoded constants. (Do not do this in real life!)
  9. Instead of indexed assignments, look for a way to use multiplying or exponentiating with logical indexes. (For example, compare Solution 14896297 and Solution 14897502 for Problem 61069. Clueless - Lord Ned in the Game Room with the Technical Computing Language).
  10. Replace x == 0 with ~x if x is a numeric scalar or vector or matrix that doesn’t contain any NaN (the latter is smaller in size by 1)
  11. Instead of x == -1, see if x < 0 works (smaller in size by 1).
  12. Instead of [1 2], write 1:2 (smaller in size by 1).
  13. sum(sum(x)) is actually smaller than sum(x, 1:2)
  14. Instead of initialising a matrix of 2s with 2 * ones(m,n), write repmat(2,m,n) (smaller in size by 1).
  15. If you have a matrix x and wish to initialize a matrix of 1s, instead of ones(size(x)), write x .^ 0 (works as long as x doesn’t contain any NaN) (smaller in size by 2).
  16. Unfortunately, x ^-Inf doesn’t provide any reduction compared to zeros(size(x)), and it doesn’t work when x contains 0 or 1.
  17. Beware of Operator Precedence and avoid unnecessary parentheses (special thanks to @Stefan Abendroth for bringing that to my attention ;)):
  • Instead of x * (y .^ 2), write x * y .^2 (smaller in size by 1).
  • Instead of x > (y == z), write y == z < x (smaller in size by 1).
  18. Ask for help from other solvers: ideas coming from a fresh pair of eyes can bring unexpected improvements!
That’s all I can see for now.
Having applied all those tips made me realise that concise yet powerful code, devoid of the superfluous, also has a beauty of its own kind that we can appreciate.
Yet we do not arrive at those minimalist solutions directly, but through several iterations, thanks to the presence of tests that allow us not to worry about breaking anything and therefore to try out sometimes audacious ideas.
That's why I think the main interest lies in the fact that this exercise prompts us to think of our solutions differently, thereby opening up ways to better understand the problem statement at hand and the inner workings of the possible solutions.
Hope you’ll find it fun and useful!
P.S.: Solvers, please come help us reduce even more the size of the leading solution for Problem 61069. Clueless - Lord Ned in the Game Room with the Technical Computing Language!
Hi Creative Coders!
I've been working my way through the problem set (and enjoying all the references), but the final puzzle has me stumped. I've managed to get 16/20 of the test cases to the right answer, and the rest remain very unsolvable for my current algorithm. I know there's some kind of leap of logic I'm missing, but can't figure out quite what it is. Can any of you help?
What I've Done So Far
My current algorithm looks a bit like this. The code is doing its best to embody spaghetti at the moment, so I'll refrain from posting the whole thing to spare you all from trying to follow my thought processes.
Step 1: Go through all the turns and fill out tables of 'definitely', 'maybe', and 'clue' based on the information provided in a single run through the turns. This means that the case mentioned in the problem, where information from future turns affects previous turns, does not matter yet. 'Definitely' information is for when I know a card must belong to a certain player (or to no-one). 'Maybe' starts off with all players in all cells, and when a player is found to not be able to have a card, their number is removed from the cell. Think of Sudoku notes where someone has helpfully gone through the grid and put every single possible number in each cell. 'Clue' contains information about which cards players were hinted about.
Example from test case 1:
definitelyTable =
6×3 table
G1 G2 G3
____________ ____________ ____________
{[ 0]} {0×0 double} {0×0 double}
{0×0 double} {[ -1]} {[ 1]}
{0×0 double} {[ 6]} {[ 0]}
{[ 3]} {[ 4]} {0×0 double}
{0×0 double} {[ 0]} {0×0 double}
{[ 5]} {[ 3]} {0×0 double}
maybeTable =
6×3 table
G1 G2 G3
_________ _______ _______
{[ 0]} {[2 5]} {[1 2]}
{[ 4]} {[ 0]} {[ 0]}
{[2 4 6]} {[ 0]} {[ 0]}
{[ 0]} {[ 0]} {[1 4]}
{[ 1 4]} {[ 0]} {[ 1]}
{[ 0]} {[ 0]} {[2 4]}
clueTable =
6×3 table
G1 G2 G3
____________ ____________ ____________
{0×0 double} {[ 5 6]} {[ 2 4]}
{[ 4 6]} {[ 4 6]} {0×0 double}
{[ 2 6]} {[ 5 6]} {0×0 double}
{0×0 double} {[ 4]} {[ 4 5 6]}
{[ 4]} {0×0 double} {[ 1 4 6]}
{[ 2 5]} {0×0 double} {[ 2 4 5 6]}
(-1 indicates the card is in the envelope. 0 indicates the card is commonly known.)
Step 2: While a solution has not yet been found, loop through all the turns again. This is the part where future turn info can now be fed back into previous turns, and where my sticky test cases loop forever. I've coded up each of the implementation tips from the problem statement for this stage.
Where It All Comes Undone
The problem is, for certain test cases (e.g., case 5), I reach a point where going through all turns yields no new information. I either end up with an either-or scenario, where the potential culprit card is one of two choices, or with so little information it doesn't look like there is anywhere left to turn.
I solved some of the either-or cases by adding a snippet that assumes one of the values and then tries to solve the problem based on that new information. If it can't solve it, then it tries the other option and goes round again. Unfortunately, however, this results in an infinite flip-flop for some cases as neither guess resolves the puzzle.
Essentially guessing the solution and following through to a logical inconsistency for however many combinations of players and cards sounds a) inefficient and b) not the way this was intended to be solved. Does anyone have any hints that might get me on track to solve this mystery?
% Recreation of Saturn photo
figure('Color', 'k', 'Position', [100, 100, 800, 800]);
ax = axes('Color', 'k', 'XColor', 'none', 'YColor', 'none', 'ZColor', 'none');
hold on;
% Create the planet sphere
[x, y, z] = sphere(150);
% Saturn colors - pale yellow/cream gradient
saturn_radius = 1;
% Create color data based on latitude for gradient effect
lat = asin(z);
color_data = rescale(lat, 0.3, 0.9);
% Plot Saturn with smooth shading
planet = surf(x*saturn_radius, y*saturn_radius, z*saturn_radius, ...
color_data, ...
'EdgeColor', 'none', ...
'FaceColor', 'interp', ...
'FaceLighting', 'gouraud', ...
'AmbientStrength', 0.3, ...
'DiffuseStrength', 0.6, ...
'SpecularStrength', 0.1);
% Use a cream/pale yellow colormap for Saturn
cream_map = [linspace(0.4, 0.95, 256)', ...
linspace(0.35, 0.9, 256)', ...
linspace(0.2, 0.7, 256)'];
colormap(cream_map);
% Create the ring system
n_points = 300;
theta = linspace(0, 2*pi, n_points);
% Define ring structure (inner radius, outer radius, brightness)
rings = [
1.2, 1.4, 0.7; % Inner ring
1.45, 1.65, 0.8; % A ring
1.7, 1.85, 0.5; % Cassini division (darker)
1.9, 2.3, 0.9; % B ring (brightest)
2.35, 2.5, 0.6; % C ring
2.55, 2.8, 0.4; % Outer rings (fainter)
];
% Create rings as patches
for i = 1:size(rings, 1)
r_inner = rings(i, 1);
r_outer = rings(i, 2);
brightness = rings(i, 3);
% Create ring coordinates
x_inner = r_inner * cos(theta);
y_inner = r_inner * sin(theta);
x_outer = r_outer * cos(theta);
y_outer = r_outer * sin(theta);
% Front side of rings
ring_x = [x_inner, fliplr(x_outer)];
ring_y = [y_inner, fliplr(y_outer)];
ring_z = zeros(size(ring_x));
% Color based on brightness
ring_color = brightness * [0.9, 0.85, 0.7];
fill3(ring_x, ring_y, ring_z, ring_color, ...
'EdgeColor', 'none', ...
'FaceAlpha', 0.7, ...
'FaceLighting', 'gouraud', ...
'AmbientStrength', 0.5);
end
% Add some texture/gaps in the rings using scatter
n_particles = 3000;
r_particles = 1.2 + rand(1, n_particles) * 1.6;
theta_particles = rand(1, n_particles) * 2 * pi;
x_particles = r_particles .* cos(theta_particles);
y_particles = r_particles .* sin(theta_particles);
z_particles = (rand(1, n_particles) - 0.5) * 0.02;
% Vary particle brightness
particle_colors = repmat([0.8, 0.75, 0.6], n_particles, 1) .* ...
(0.5 + 0.5*rand(n_particles, 1));
scatter3(x_particles, y_particles, z_particles, 1, particle_colors, ...
'filled', 'MarkerFaceAlpha', 0.3);
% Add dramatic outer halo effect - multiple layers extending far out
n_glow = 20;
for i = 1:n_glow
glow_radius = 1 + i*0.35; % Extend much farther
alpha_val = 0.08 / sqrt(i); % More visible, slower falloff
% Color gradient from cream to blue/purple at outer edges
if i <= 8
glow_color = [0.9, 0.85, 0.7]; % Warm cream/yellow
else
% Gradually shift to cooler colors
mix = (i - 8) / (n_glow - 8);
glow_color = (1-mix)*[0.9, 0.85, 0.7] + mix*[0.6, 0.65, 0.85];
end
surf(x*glow_radius, y*glow_radius, z*glow_radius, ...
ones(size(x)), ...
'EdgeColor', 'none', ...
'FaceColor', glow_color, ...
'FaceAlpha', alpha_val, ...
'FaceLighting', 'none');
end
% Add extensive glow to rings - make it much more dramatic
n_ring_glow = 12;
for i = 1:n_ring_glow
glow_scale = 1 + i*0.15; % Extend farther
alpha_ring = 0.12 / sqrt(i); % More visible
for j = 1:size(rings, 1)
r_inner = rings(j, 1) * glow_scale;
r_outer = rings(j, 2) * glow_scale;
brightness = rings(j, 3) * 0.5 / sqrt(i);
x_inner = r_inner * cos(theta);
y_inner = r_inner * sin(theta);
x_outer = r_outer * cos(theta);
y_outer = r_outer * sin(theta);
ring_x = [x_inner, fliplr(x_outer)];
ring_y = [y_inner, fliplr(y_outer)];
ring_z = zeros(size(ring_x));
% Color gradient for ring glow
if i <= 6
ring_color = brightness * [0.9, 0.85, 0.7];
else
mix = (i - 6) / (n_ring_glow - 6);
ring_color = brightness * ((1-mix)*[0.9, 0.85, 0.7] + mix*[0.65, 0.7, 0.9]);
end
fill3(ring_x, ring_y, ring_z, ring_color, ...
'EdgeColor', 'none', ...
'FaceAlpha', alpha_ring, ...
'FaceLighting', 'none');
end
end
% Add diffuse glow particles for atmospheric effect
n_glow_particles = 8000;
glow_radius_particles = 1.5 + rand(1, n_glow_particles) * 5;
theta_glow = rand(1, n_glow_particles) * 2 * pi;
phi_glow = acos(2*rand(1, n_glow_particles) - 1);
x_glow = glow_radius_particles .* sin(phi_glow) .* cos(theta_glow);
y_glow = glow_radius_particles .* sin(phi_glow) .* sin(theta_glow);
z_glow = glow_radius_particles .* cos(phi_glow);
% Color particles based on distance - cooler colors farther out
particle_glow_colors = zeros(n_glow_particles, 3);
for i = 1:n_glow_particles
dist = glow_radius_particles(i);
if dist < 3
particle_glow_colors(i,:) = [0.9, 0.85, 0.7];
else
mix = (dist - 3) / 4;
particle_glow_colors(i,:) = (1-mix)*[0.9, 0.85, 0.7] + mix*[0.5, 0.6, 0.9];
end
end
scatter3(x_glow, y_glow, z_glow, rand(1, n_glow_particles)*2+0.5, ...
particle_glow_colors, 'filled', 'MarkerFaceAlpha', 0.05);
% Lighting setup
light('Position', [-3, -2, 4], 'Style', 'infinite', ...
'Color', [1, 1, 0.95]);
light('Position', [2, 3, 2], 'Style', 'infinite', ...
'Color', [0.3, 0.3, 0.4]);
% Camera and view settings
axis equal off;
view([-35, 25]); % Angle to match saturn_photo.jpg - more dramatic tilt
camva(10); % Field of view - slightly wider to show full halo
xlim([-8, 8]); % Expanded to show outer halo
ylim([-8, 8]);
zlim([-8, 8]);
% Material properties
material dull;
title('Saturn - Left click: Rotate | Right click: Pan | Scroll: Zoom', 'Color', 'w', 'FontSize', 12);
% Enable interactive camera controls
cameratoolbar('Show');
cameratoolbar('SetMode', 'orbit'); % Start in rotation mode
% Custom mouse controls
set(gcf, 'WindowButtonDownFcn', @mouseDown);
function mouseDown(src, ~)
selType = get(src, 'SelectionType');
switch selType
case 'normal' % Left click - rotate
cameratoolbar('SetMode', 'orbit');
rotate3d on;
case 'alt' % Right click - pan
cameratoolbar('SetMode', 'pan');
pan on;
end
end
Hello,
I have an Arduino DIY Geiger counter that uploads data to my channel here in ThingSpeak (3171809), using an ESP8266 WiFi board. It sends CPM values (counts per minute), Dose, VCC, and Max CPM for 24 h. They are assigned to Fields 1 to 4 respectively. How can I duplicate Field 1, so I could create a different time chart for the same measured unit? Or should I duplicate the Field 1 chart, and how? I tried to find the answer here in the blog, but I couldn't.
I have to say that I'm not an engineer or a coder, I can just load some Arduino sketches and a few more things, so I'll be very thankful if someone could explain it as for non-IT users.
Regards,
Emo
Matt Tearle
Last activity on 13 Dec 2025 at 11:50

Fittingly for a Creative Coder, @Vasilis Bellos clearly enjoyed the silliness I put into the problems. If you've solved the whole problem set, don't forget to help out your teammates with suggestions, tips, tricks, etc. But also, just for fun, I'm curious to see which of my many in-jokes and nerdy references you noticed. Many of the problems were inspired by things in the real world, then ported over into the chaotic fantasy world of Nedland.
I guess I'll start with the obvious real-world reference: @Ned Gulley (I make no comment about his role as insane despot in any universe, real or otherwise.)