Link Notes
Don’t Let AI Write For You1. Nicely distilled essay on writing.
Markdown Ate The World1. Recommended reading. Always bet on plain text!
Some Things Just Take Time1. Recommended reading. About the value of slowing down in this fast-changing world.
Updating the design of Notion pages. Notion is refining the spacing rules across its pages, and that feels like the right direction. I use Notion for private writing and personal databases, so this kind of change could noticeably improve the writing experience for me. I have also spent a long time thinking about spacing while designing my own personal website, so I like that this article treats spacing as a serious design problem. It is a good deep-dive reference.
OpenAI to acquire Astral1. This one is big. uv, Ruff, and ty are all best-in-class tools from Astral that have significantly improved the Python developer experience. After the acquisition, the Astral team will join OpenAI’s Codex team. Given how quickly Codex has been improving recently, this feels like a strong combination for the future of developer tooling.
Here’s the blog post from Astral’s side: Astral to join OpenAI. Here’s the Hacker News discussion.
Another interesting angle is that Anthropic acquired Bun last year. Anthropic now has deep JS/TS tooling expertise, while OpenAI is making a similar move around Python.
Update Mar 20, 2026: Simon Willison published a detailed post on this acquisition: Thoughts on OpenAI acquiring Astral and uv/ruff/ty.
Introducing GPT-5.4 mini and nano1. OpenAI finally refreshed its smaller and cheaper models. The previous mini/nano release was GPT-5 in August 2025, while GPT-5.1-Codex-Mini in November 2025 was a more coding-specific branch.
It feels like OpenAI is gradually tidying up its model lineup. At least in the GPT-5.4 family, there is no separate “Codex”-suffixed specialized model, and the mini and nano variants seem to be catching up too.
I checked the Codex app this morning and saw that its built-in subagents are already using GPT-5.4 Mini.
Subagents using GPT-5.4 Mini
Use subagents and custom agents in Codex1. Subagents are now generally available in Codex, after previously being introduced as an experimental feature called “multi-agents”.
In short, you can ask Codex to spawn subagents that run in parallel with one another and with the main agent. They use their own context windows and report back to the main agent when they are done. Subagents can have their own models and instructions. There are built-in agents like explorer, and you can define your own ones with TOML configuration files under .codex/agents.
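As a rough illustration of what such a definition might look like, here is a hypothetical agent file. I have not verified the actual schema, so every key name below is an assumption; the real field names come from Codex's documentation, not from this sketch:

```toml
# .codex/agents/docs-writer.toml — hypothetical example.
# The source text only says agents can have their own models and
# instructions; the key names here are assumed, not confirmed.
model = "gpt-5.4-mini"

instructions = """
You are a documentation-focused subagent.
Explore the repo, then report findings back to the main agent.
"""
```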
One fun detail is that Codex picks a random nickname for each subagent. My first try gave me an explorer subagent called “Galileo”.
Galileo explored the repo for my websites
I first learned about subagents through Amp, which ships several curated custom agents like Oracle and Librarian. What I like about Amp’s approach is that it presents subagents as recommended workflows, not just as a feature to try on your own. By pairing carefully designed agents with specific models and tools, it also shows how subagents fit into everyday work. That has been a consistent strength of Amp: it packages what it sees as the current best practice.
Here are other comments on this Codex release:
- Simon Willison’s post
- Vaibhav (VB) Srivastav’s article: You Should Be Using Subagents in Codex!
Now that major coding agents like Claude Code, OpenCode, and Cursor all support subagents in broadly similar ways, will subagents become a standard, like skills?
The Value of Things. Another article about the AI trend (see the previous one). This time from Bob Nystrom, one of my favorite writers.
I Started Programming When I Was 7. I’m 50 Now, and the Thing I Loved Has Changed. The AI trend makes everyone who loves programming as a craft wonder whether what we love is disappearing. It leaves a sense of emptiness, though there’s still room for optimism.
Beautiful Mermaid1. Mermaid is the de facto tool for describing diagrams in plain text and embedding them in Markdown. GitHub supports it, for example2. But I’ve never liked the default theme — that’s why I still haven’t adopted it.
Today I found out the Craft team felt the same way, and they released a new rendering engine for Mermaid diagrams. It outputs both SVG and ASCII art, and the default theme looks great.
I haven’t looked into it deeply yet, but it looks promising at a glance. I hope it becomes a catalyst for better-looking diagrams — either by maturing into a drop-in replacement that the ecosystem adopts, or by pushing the Mermaid team to ship a better default theme.
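For context, a Mermaid diagram is just plain text in a fenced block; a minimal flowchart looks like the sketch below. How it ends up looking is entirely up to the renderer’s theme, which is exactly the part Craft’s engine changes:

```mermaid
graph LR
    A[Write Markdown] --> B{Renderer}
    B --> C[SVG diagram]
    B --> D[ASCII art]
```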
Beautiful Mermaid
Mermaid's default theme
- GitHub blog post: Include diagrams in your Markdown files with Mermaid; GitHub documentation: Creating diagrams ↵
Tw93, and his Mole. I had heard of the macOS cleaner app Mole before, but today I finally tried it out — a neat CLI utility that digs through and cleans up your macOS.
I checked out its author, Tw93. He’s also a Chinese programmer, and keeps a blog that caught my eye immediately. I’m glad to see programmers like him sharing tech thoughts and personal life, reminding me I’m not alone. He’s doing a great job — another role model to look up to. From his GitHub profile, I believe Simon Willison influenced him too.
Follow him on X: @HiTw93.
Simon Willison on Technical Blogging. Simon was the direct catalyst for me starting my own blog (see my post), so it’s great to see him share more about his blogging experience.
Zig’s new juicy main is here. I haven’t been following Zig’s new features closely lately, but a quick check yesterday revealed that the juicy main has landed!
Andrew Kelley proposed it directly (see #24510) to enhance the main function by passing useful values as parameters: memory allocators, an I/O instance, environment variables, and command line arguments. This reduces the boilerplate we previously needed to set these up ourselves.
Now there are three allowed argument signatures for the main function:
```zig
pub fn main() !void
pub fn main(init: std.process.Init.Minimal) !void
pub fn main(init: std.process.Init) !void
```
The definition of std.process.Init is as follows:
```zig
pub const Init = struct {
    /// `Init` is a superset of `Minimal`; the latter is included here.
    minimal: Minimal,
    /// Permanent storage for the entire process, cleaned automatically on
    /// exit. Not threadsafe.
    arena: *std.heap.ArenaAllocator,
    /// A default-selected general purpose allocator for temporary heap
    /// allocations. Debug mode will set up leak checking if possible.
    /// Threadsafe.
    gpa: std.mem.Allocator,
    /// An appropriate default Io implementation based on the target
    /// configuration. Debug mode will set up leak checking if possible.
    io: std.Io,
    /// Environment variables, initialized with `gpa`. Not threadsafe.
    environ_map: *std.process.Environ.Map,
    /// Named files that have been provided by the parent process. This is
    /// mainly useful on WASI, but can be used on other systems to mimic the
    /// behavior with respect to stdio.
    preopens: std.process.Preopens,

    /// Alternative to `Init` as the first parameter of the main function.
    pub const Minimal = struct {
        /// Environment variables.
        environ: std.process.Environ,
        /// Command line arguments.
        args: std.process.Args,
    };
};
```

The changeset is in #30644 and there’s a follow-up issue #30677 for a minimal CLI parsing mechanism.
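Since the whole point is boilerplate reduction, here is a rough before-and-after sketch. The “before” reflects the long-standing manual setup; the “after” leans on the Init fields. Treat both as illustrative only, since the new API may still shift:

```zig
const std = @import("std");

// Before: manually wiring up an allocator and the argument list.
pub fn main() !void {
    var gpa_state = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa_state.deinit();
    const gpa = gpa_state.allocator();

    const args = try std.process.argsAlloc(gpa);
    defer std.process.argsFree(gpa, args);
    // ... use gpa and args ...
}
```

```zig
// After: the runtime hands everything over, already initialized.
pub fn main(init: std.process.Init) !void {
    // init.gpa, init.io, init.environ_map, and init.minimal.args
    // are ready to use; no manual setup or cleanup needed.
    _ = init;
}
```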
Astro is joining Cloudflare. I always had a feeling Astro would be acquired — and now it’s happening. Astro has been my favorite framework for building static websites, and it’s my choice for my personal websites now. It reminds me of when I first discovered static site generators like Jekyll — just basic build-time templating and composition, making creating a blog very easy. Features like content collections and islands are genuinely innovative. I hope it stays productive and keeps its simplicity.
Ralph Wiggum as a “software engineer”. The AI field evolves as fast as your math classes in high school: miss a week and you’re suddenly lost. For me recently, that’s been Ralph, a new pattern for coding agents that pushes them to a higher level of automation.
Its name comes from a character called Ralph Wiggum in the show The Simpsons, who somehow captures the spirit of this technique.
To get familiar with Ralph, I skimmed (and watched) these materials, in addition to the original post by Geoffrey Huntley:
- Matt Pocock’s walkthroughs: Ship working code while you sleep with the Ralph Wiggum technique, and 11 Tips For AI Coding With Ralph Wiggum
- Greg Isenberg’s video: “Ralph Wiggum” AI Agent will 10x Claude Code/Amp
- Ryan Carson’s article on X: Step-by-step guide to get Ralph working and shipping code
In short, Ralph is a technique that runs your coding agent sessions in a loop. It pushes the typical coding agent workflow — you give the agent a task, watch it work, then give it a new task based on its output — one step further by making the agent itself assess the output and decide what’s next. Back in 2025, the community settled on the definition that an “agent” is simply an AI program running tools in a loop to achieve a goal1. Ralph extends that idea naively: it’s a bash script running agent sessions in a loop to achieve a goal.
To run agents the Ralph way, you basically need the following harnesses:
- A bash script that simply runs your coding agent in a for loop
- A PRD file that lists and tracks the tasks, commonly organized as prd.json
- A progress note that the agent appends to when completing tasks, providing relevant context to the next agent session, commonly organized as progress.txt
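Stripped to its essence, the loop really is just a few lines of bash. In this sketch the real agent invocation (something like codex exec with a prompt) is replaced with a stub run_agent function so the structure is visible without any actual agent installed:

```shell
#!/usr/bin/env bash
# Minimal sketch of a Ralph loop. `run_agent` is a stand-in for a real
# coding agent call; a real loop would invoke the agent with a prompt
# telling it to read prd.json, pick the next unfinished task, append
# its notes to progress.txt, and commit.
run_agent() {
  echo "session $1: picked next task from prd.json" >> progress.txt
}

rm -f progress.txt
for i in 1 2 3; do
  run_agent "$i"   # each iteration is a fresh agent session
done
cat progress.txt
```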
These elements reveal what’s truly valuable about the Ralph idea: it formalizes a context engineering approach for tackling large-scale development requirements. And that’s why Ralph differs from just using a single agent session for all tasks. Every time the session completes a task, it checks the tasks in prd.json, appends notes to progress.txt, and usually makes a git commit. Then a new agent session starts with the context window cleared, so the files the last session updated serve as the only memory of the Ralph loop.
Rough notes here. If you’re interested in the details, check the materials above. It’s indeed a new idea in the field and the community will explore it further to see if it’ll truly stand out.
- Simon Willison’s well-known article: I think “agent” may finally have a widely enough agreed upon definition to be useful jargon now ↵
Paul Graham’s post on X about taste. Another interesting post from Paul Graham1 — what struck me is how his posts spark real discussion. All the comments are worth reading. I’m following him now.
Oddly enough, I first learned about Paul Graham through his essays, and only later realized he co-founded Y Combinator and is such a central figure in Silicon Valley.
- My previous note: Paul Graham’s post on X about writing ↵
Paul Graham’s post on X about writing. I started writing recently (as you can probably tell), so I’ve been reading a lot about it. What’s interesting are the comments under this post. People are sharing their own thoughts on writing, and many are surprisingly inspiring. Reading them makes me feel less alone in my writing journey.
How uv got so fast1. I haven’t followed the Python ecosystem for maybe five years. But I know uv has taken off. I have it on my Mac, and it’s my go-to when I occasionally want to play with Python. It feels like pnpm or Cargo — fast and modern.
I assumed Rust was the main reason uv is so fast. Turns out, that’s actually the least important factor. Years of PEP standards made uv possible in the first place. Intentionally limited compatibility with pip, plus smart language-agnostic optimizations, did most of the heavy lifting. It’s the design choices, not the language choice, that really matter.
Zilong Liang / Hack