┌─
ARTICLE
─┐

└─
─┘

Emacs’s markdown-mode offers several preview options, but finding one that “just works” took some exploration.

The xwidget-webkit Approach

My first attempt used markdown-live-preview-window-function with xwidget-webkit—Emacs’s embedded WebKit browser. The idea: render markdown to HTML and display it in a split window.

(defun my/markdown-live-preview-window-xwidget (file)
  "Render FILE in an xwidget-webkit window and return that buffer for the live preview."
  (xwidget-webkit-browse-url (concat "file://" file))
  (xwidget-webkit-buffer))

Four problems killed this approach:

  1. External dependency — HTML generation requires markdown (default) or pandoc binary
  2. Emacs build requirement — xwidget support must be compiled in (--with-xwidgets), which isn’t universal
  3. Temp file pollution — Live preview generates HTML files that require cleanup
  4. Complexity — Managing the xwidget buffer lifecycle adds code I’d rather not maintain

The grip-mode Solution

grip-mode provides GitHub-flavored markdown preview using a local server. The key insight: use go-grip instead of Python’s grip to avoid GitHub API rate limits and work fully offline.

Setup

Install go-grip:

go install github.com/chrishrb/go-grip@latest

Configure Emacs:

(use-package grip-mode
  :config
  (setopt grip-command 'go-grip)
  (setopt grip-preview-use-webkit nil)
  (setopt grip-update-after-change nil)
  :bind (:map markdown-mode-map
         ("C-c C-c p" . grip-mode)))

Now C-c C-c p launches a local server and opens the preview in your default browser. The preview updates on save.

Why go-grip Over Python grip

The original Python grip uses GitHub’s Markdown API, which has rate limits (60 requests/hour unauthenticated). You can add a GitHub token, but that’s extra configuration.
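
If you do stay on Python grip, grip-mode can pass that token for you. A minimal sketch, assuming grip-mode's grip-github-user and grip-github-password variables (check its README for the exact names):

;; Sketch only: credentials for the GitHub API used by Python grip
(setopt grip-github-user "your-github-username")
(setopt grip-github-password "ghp_xxxxxxxx") ; personal access token (placeholder)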

go-grip renders locally using a Go markdown library—no network requests, no rate limits, no authentication.

Comparison

| Concern         | xwidget-webkit           | grip-mode + go-grip     |
|-----------------|--------------------------|-------------------------|
| External binary | Requires pandoc          | go-grip (single binary) |
| Emacs build     | Requires --with-xwidgets | Any build               |
| Temp files      | Generates HTML           | None                    |
| Rendering       | Basic HTML               | Full GFM                |
| Offline         | Yes                      | Yes                     |

Sometimes the best solution is a small, focused tool that does exactly one thing well.

┌─
ARTICLE
─┐

└─
─┘

I frequently work with escaped SQL strings from API logs and debugging sessions. The typical workflow involved copying the string, finding an online unescaper, pasting, copying the result, then finding a SQL formatter, pasting again… you get the idea. Too many context switches.

I wanted something like M-| (shell-command-on-region) but with predefined commands I could invoke by name. CLI2ELI made this trivial.

The Problem

When debugging data pipelines, I often encounter JSON-escaped SQL like this:

"SELECT date_trunc('month', at_timezone(kpis.\"time\",'UTC')) AS time\nFROM \"prod_analytics\".\"public\".\"machines_kpis_30_min\" kpis\nWHERE kpis.\"machine_id\" IN (UUID '3ee28f49-6792-48f9-9ca9-ba6f86d73753')"

I need to:

  1. Unescape the JSON string
  2. Format the SQL for readability

With M-|, I’d have to type jq -r '.' | sqlfmt - every time. Not hard, but tedious when you do it dozens of times a day.

The Solution: CLI2ELI with stdin Support

CLI2ELI wraps CLI tools as named Emacs commands. With the new stdin property, I can pipe buffer or region content directly to commands.

Here’s my configuration in cli-transform.json:

{
  "tool": "cli-transform",
  "cwd": "default",
  "commands": [
    {
      "name": "unescape SQL",
      "description": "Unescape JSON-escaped SQL string",
      "command": "jq -r '.'",
      "stdin": "region"
    },
    {
      "name": "format SQL",
      "description": "Format SQL using sqlfmt",
      "command": "sqlfmt -",
      "stdin": "region"
    },
    {
      "name": "unescape and format SQL",
      "description": "Unescape and format in one step",
      "command": "jq -r '.' | sqlfmt -",
      "stdin": "region"
    }
  ]
}

That’s it. Three or four short lines per command.

Usage

  1. Select the escaped SQL string
  2. M-x cli-transform-unescape-and-format-sql
  3. Formatted SQL appears in the output buffer

The output buffer shows the command in the header line, making it clear what ran. Copy the result and move on.

The stdin Property

The stdin field accepts two values:

  • "region": Selected text, or entire buffer if no selection
  • "buffer": Always uses entire buffer content

This covers most text transformation use cases.

More Examples

Once you have the pattern, adding more transforms is trivial:

{
  "name": "format JSON",
  "command": "jq '.'",
  "stdin": "region"
},
{
  "name": "minify JSON",
  "command": "jq -c '.'",
  "stdin": "region"
},
{
  "name": "base64 decode",
  "command": "base64 -d",
  "stdin": "region"
},
{
  "name": "url decode",
  "command": "python3 -c 'import sys,urllib.parse;print(urllib.parse.unquote(sys.stdin.read()))'",
  "stdin": "region"
}

Any CLI tool that reads from stdin works.

Why CLI2ELI?

Named commands: M-x cli-transform-format-json is discoverable and memorable. No need to recall exact command syntax.

JSON configuration: No Elisp required. Adding a new transform takes 30 seconds. More importantly, JSON is trivial for AI coding agents to generate. Ask Claude or Copilot to “add a command that converts CSV to JSON” and it can produce the correct JSON config immediately. Try asking it to write the equivalent Elisp—much harder to get right.

Composable: Pipe multiple tools together in the command field. Unix philosophy meets Emacs.

Consistent interface: All transforms work the same way—select text, run command, get output.

Getting Started

  1. Install CLI2ELI from GitHub
  2. Create a JSON config file with your transforms
  3. Load it: (cli2eli-load-tool "~/path/to/config.json"); see the sketch after this list
  4. Start transforming
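
Here’s roughly what step 3 looks like in an init file. A sketch, assuming the package provides a cli2eli feature and is already on your load-path (adjust to however you installed it from GitHub):

(use-package cli2eli
  :config
  ;; Placeholder path: point this at your own JSON config
  (cli2eli-load-tool "~/path/to/cli-transform.json"))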

The barrier to entry is low. Define a command in JSON, reload, use it. When you find yourself typing the same shell pipeline repeatedly, wrap it in CLI2ELI.

┌─
ARTICLE
─┐

└─
─┘

Markdown files deserve the same format-on-save treatment we give to code. I recently integrated rumdl (https://github.com/rvben/rumdl), a Rust-based markdown linter, into my Emacs setup using Apheleia. Here’s what I learned.

Why rumdl?

rumdl is fast—benchmarks show it processing 478 markdown files in under a second. It implements 54 lint rules, supports automatic fixing, and provides stdin/stdout support for editor integration. That last feature is key for Apheleia.
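
You can exercise that stdin/stdout mode straight from a shell before wiring it into Emacs. A quick sketch, assuming rumdl is on your PATH:

# Feed markdown on stdin; the fixed version comes back on stdout
rumdl fmt --stdin < README.md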

Apheleia Configuration

Apheleia expects formatters to read stdin and write to stdout. Configuration follows a two-step pattern:

;; 1. Define the formatter command
(setf (alist-get 'rumdl apheleia-formatters)
      '("rumdl" "fmt" "--stdin"))

;; 2. Associate with major modes
(setf (alist-get 'markdown-mode apheleia-mode-alist) 'rumdl)
(setf (alist-get 'gfm-mode apheleia-mode-alist) 'rumdl)

With apheleia-global-mode enabled, markdown files now format automatically on save.
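
Wrapped in use-package, the whole setup is a handful of lines (a sketch, assuming Apheleia itself is installed from your package archive of choice):

(use-package apheleia
  :config
  ;; Register rumdl as a stdin/stdout formatter
  (setf (alist-get 'rumdl apheleia-formatters)
        '("rumdl" "fmt" "--stdin"))
  ;; Use it for markdown buffers
  (setf (alist-get 'markdown-mode apheleia-mode-alist) 'rumdl)
  (setf (alist-get 'gfm-mode apheleia-mode-alist) 'rumdl)
  ;; Format on save everywhere
  (apheleia-global-mode +1))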

Format-on-save for markdown eliminates the mental overhead of consistent formatting. rumdl handles it fast enough that you won’t notice it’s running.

┌─
ARTICLE
─┐

└─
─┘

I recently explored an interesting architecture pattern: using Claude Code to invoke Gemini CLI for large codebase analysis. The idea was compelling—combine Claude’s superior instruction-following with Gemini’s massive context window. Gemini reads everything, Claude thinks and acts (an idea I picked up from a Reddit post).

After building it out, I deleted it. Here’s what I learned.

The Pitch

The setup is straightforward. Gemini CLI supports a non-interactive mode (gemini -p) that accepts a prompt and returns a response. You can include files with @ syntax:

gemini -p "@src/ @lib/ Find all authentication patterns. Return file:line for each."

The theory: Claude’s context fills up fast when exploring large codebases. Gemini can ingest everything at once. Let Gemini do the bulk reading, get structured results back, then let Claude reason about what to do.

The Critical Design Insight

If you’re orchestrating one AI to call another, output format is everything.

Gemini’s natural response looks like this:

The authentication system appears to be implemented across several files, primarily in the src directory, where we can observe patterns suggesting a JWT-based approach combined with session management…

Useless. You need this:

src/middleware/auth.ts:15 - JWT validation middleware
src/services/user.ts:42 - user lookup by token
src/db/sessions.ts:8 - session storage interface

The fix is explicit format instructions in every query:

gemini -p "@src/ @lib/ <QUESTION>

Return findings as:
- file:line - description
- Include relevant code snippets (brief)
- Direct answers, no preamble"

This transforms vague prose into actionable data Claude can immediately use with its Read tool.

Why I Killed It

For my actual workflow, the gains didn’t materialize. Here’s the honest breakdown:

| Task | Native approach | Does Gemini help? |
|------|-----------------|-------------------|
| Find specific pattern | ast-grep or Grep | No—these are precise |
| Read known files | Read tool | No |
| Trace end-to-end flow | Explore agent | Marginal at best |
| “Does X exist anywhere?” | Grep | Maybe, if pattern is fuzzy |
| First pass on unfamiliar massive codebase | Multiple searches | Yes—genuine win |

The problem: my codebase is well-structured and familiar. Targeted search followed by reading specific files already works well. The Explore agent (a subagent that investigates across files and reports back) already handles the “understand how X works” case.

The deeper issue: Gemini “seeing everything at once” sounds powerful, but understanding code flow is inherently sequential. A request hits middleware, then a handler, then a service, then a database. I need to trace that chain. Dumping all files into context doesn’t shortcut the reasoning.

And there’s the output problem—even with structured results, I still need to Read the files Gemini identified before I can act. I’ve added a step, not removed one.

When It Actually Helps

The pattern works when:

  • The subordinate has a capability the primary lacks (Gemini’s context window genuinely is larger)
  • The task requires bulk access (onboarding to a 500-file unfamiliar codebase)
  • You’ve solved the output problem with structured format enforcement

If you build it, bake format instructions into every query template and always verify by reading the files the subordinate identifies before acting.

The Takeaway

Before adding orchestration complexity, ask: “What’s actually the bottleneck?” If it’s reasoning, more data access won’t help. For most daily work on a familiar codebase, targeted search plus following the import graph wins.

┌─
ARTICLE
─┐

└─
─┘


After years of using WezTerm, I decided to try Ghostty—the new GPU-accelerated terminal that’s been generating buzz. The installation was simple enough, but getting it to look like my carefully tuned WezTerm setup turned into a journey through terminal rendering differences.

Here’s what I learned, and the configuration that finally got Ghostty looking right (almost).

The Problem: Everything Looks Wrong

Opening Ghostty for the first time on my MacBook, something felt off. The colors appeared washed out, the fonts looked thin, and the text seemed more spread out than in WezTerm. Same font (IosevkaTerm Nerd Font Mono), same size—completely different appearance.

This wasn’t just my imagination. These are documented issues in the Ghostty community.

Fix #1: Washed Out Colors

The most jarring difference was color saturation. My Gruvbox Light theme looked faded, like viewing it through a fog.

The fix:

window-colorspace = display-p3

That’s it. One line. Ghostty defaults to sRGB, but on macOS displays, display-p3 provides the color saturation you expect. This is a known issue (https://github.com/ghostty-org/ghostty/discussions/3470) that trips up many new users.

Fix #2: Thin Font Rendering

With colors fixed, the fonts still looked anemic. In WezTerm, I use:

config.font = wezterm.font({ family = "IosevkaTerm Nerd Font Mono", weight = "Bold" })

This loads the actual Bold variant of the font. Ghostty doesn’t support specifying font weight this way. Instead, it offers synthetic bolding:

font-thicken = true
font-thicken-strength = 150

The font-thicken option (macOS only) artificially adds stroke weight. The font-thicken-strength parameter (0-255) lets you dial in exactly how much—a feature discussed in https://github.com/ghostty-org/ghostty/discussions/3492.

The catch: Synthetic bold isn’t true bold. A properly designed Bold font variant has intentionally adjusted proportions. Synthetic bold just makes everything uniformly thicker. You’ll notice subtle differences—the bowl of a “d” becomes rounder, letterforms feel slightly different. Whether this matters depends on your sensitivity to typography.

Fix #3: Wide Letter Spacing

Even after the above fixes, text in Ghostty appeared more spread out horizontally. This is another documented difference (https://github.com/ghostty-org/ghostty/discussions/3842): roughly a 1-pixel difference in letter spacing. The fix:

adjust-cell-width = -5%

This tightens the horizontal spacing. Some users go as far as -10%, but I found -5% matched WezTerm closely enough.

The Complete Configuration

Here’s my final Ghostty config, matching my WezTerm setup as closely as possible:

# Font
font-family = IosevkaTerm Nerd Font Mono
font-size = 16
font-thicken = true
font-thicken-strength = 150
adjust-cell-width = -5%

# Fix washed out colors on macOS
window-colorspace = display-p3

# Gruvbox Light Soft (base16) - matching WezTerm
background = f2e5bc
foreground = 504945
cursor-color = 504945
selection-background = d5c4a1
selection-foreground = 504945

palette = 0=#f2e5bc
palette = 1=#9d0006
palette = 2=#79740e
palette = 3=#b57614
palette = 4=#076678
palette = 5=#8f3f71
palette = 6=#427b58
palette = 7=#504945
palette = 8=#bdae93
palette = 9=#9d0006
palette = 10=#79740e
palette = 11=#b57614
palette = 12=#076678
palette = 13=#8f3f71
palette = 14=#427b58
palette = 15=#282828

# Window
background-opacity = 0.96
window-padding-x = 8
window-padding-y = 0
macos-titlebar-style = hidden
confirm-close-surface = false

Ghostty vs WezTerm: Honest Comparison

| Aspect | WezTerm | Ghostty |
|--------|---------|---------|
| Font weight control | True bold via weight = “Bold” | Synthetic via font-thicken |
| Color saturation | “Correct” by default | Requires window-colorspace = display-p3 |
| Letter spacing | Tighter | Wider (fix with adjust-cell-width) |
| Config format | Lua (powerful, verbose) | INI-style (simple, limited) |
| Hot reload | Automatic | Manual (Cmd+Shift+,) |

Should You Switch?

Ghostty is fast, lightweight, and under active development—it has real potential. But after all the tweaking needed to match WezTerm’s appearance (and still not quite getting there), I’m sticking with WezTerm for now. It just works out of the box.

I’ll revisit Ghostty in a few months when it’s more mature. For now, the config above gets it 90% of the way there. Whether that last 10% matters is up to you.



┌─
ARTICLE
─┐

└─
─┘

Teaching Claude Code to Use ast-grep

I wanted Claude Code to understand the difference between searching for code structure and searching for plain text. That meant teaching it when to reach for ast-grep instead of ripgrep—and to make that decision automatically.

This is how I taught it to do that.

Figuring Out the Right Tools

Claude Code can be extended in several ways—skills, commands, hooks, sub-agents, MCP servers, and plugins.

Each serves a different purpose, but in this case, I needed two working together:

  • MCP (Model Context Protocol) to connect the actual ast-grep binary as an external tool.
  • Skill to teach Claude when and why to use that tool.

Think of it like this:

  • MCP gives Claude new abilities.
  • Skills give it judgment.

I didn’t want to type /ast-grep every time. I wanted Claude to decide on its own.

Step 1: Adding the ast-grep MCP Server

The first step was to register the ast-grep MCP server with Claude Code.
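
Claude Code registers MCP servers through its claude mcp add subcommand. A minimal sketch, assuming the ast-grep MCP server is installed locally and started by a hypothetical ast-grep-mcp command (substitute whatever actually launches your server):

# Register the server under the name "ast-grep"; everything after -- is the launch command
claude mcp add ast-grep -- ast-grep-mcp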

Once connected, it exposes several tools that Claude can call directly:

  • mcp__ast-grep__find_code — search code using structural patterns
  • mcp__ast-grep__find_code_by_rule — advanced YAML-based rule matching
  • mcp__ast-grep__dump_syntax_tree — inspect syntax trees
  • mcp__ast-grep__test_match_code_rule — test custom rules

After this, Claude had full access to ast-grep—but it still didn’t know when to use it.

That’s where the Skill came in.

Step 2: Teaching Strategy with a Skill

I created a new skill file at:

~/.claude/skills/ast-grep/SKILL.md

The goal was simple: teach Claude how to decide when to use ast-grep versus ripgrep.

Here’s the essence of what I wrote:

---
name: ast-grep
description: Use ast-grep for structural code search. Fall back to ripgrep for plain-text searches.
---

# ast-grep: Strategic Code Search Guidance

## Core Principle

**ast-grep = Code structure** (syntax-aware, AST-based)  
**ripgrep = Plain text** (fast, content-based)

## Decision Tree

Is this about CODE STRUCTURE?
├─ YES → Use ast-grep MCP tools
│   Examples:
│   ✓ Find function or method definitions
│   ✓ Locate class declarations
│   ✓ Search for loops or conditional patterns
│   ✓ Refactor code using syntax patterns
└─ NO → Use ripgrep
    Examples:
    ✓ Search comments or docs
    ✓ Find TODO or FIXME markers
    ✓ Scan config files or logs

This gave Claude a clear rule of thumb:

  • ast-grep for anything syntax-aware
  • ripgrep for everything else

I also added a few anti-patterns—things Claude should avoid:

  • ❌ Don’t use ast-grep for plain text
  • ❌ Don’t use ripgrep for structured code
  • ✅ Use the right tool based on intent, not habit

That’s it. The skill didn’t try to re-document every ast-grep parameter.

It just provided strategic guidance—the kind of context a human developer would know instinctively.
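
To make that guidance concrete, here’s the same hypothetical search done both ways (getUser is a made-up name, and the target is assumed to be TypeScript):

# ripgrep: plain-text match; also hits comments, strings, and names like getUserById
rg 'getUser' src/

# ast-grep: structural match; finds only actual getUser(...) call sites
ast-grep --pattern 'getUser($$$)' --lang ts src/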

Step 3: Telling Claude Code to Use the Skill

Add this line to your project’s CLAUDE.md:

Prefer ast-grep over Grep for structural code searches.

Or use the quick memory shortcut—type # Prefer ast-grep over Grep for structural code searches. and Claude Code will prompt you to save it.

What I Learned

The key takeaway was separation of responsibility:

  • MCP handles what tools exist and how they work.
  • Skills handle when and why to use them.

Keeping those layers distinct made everything easier to maintain:

  • MCP updates don’t break the skill.
  • Skill logic evolves independently.
  • Claude only loads the skill when it’s relevant.

It also keeps context light—since Skills use progressive disclosure, they load only when Claude detects the topic applies.

Final Thoughts

Teaching Claude to use ast-grep wasn’t just about wiring up another tool.

It was about teaching judgment.

By combining an MCP server (for capability) with a Skill (for reasoning), I gave Claude the intuition to pick the right search tool for the job—without me telling it what to do.

That’s the essence of extending Claude Code effectively:

tools give power, skills give intelligence.


┌─
ARTICLE
─┐

└─
─┘

Before arriving at a clean, stable setup in #Obsidian, I went through a long phase of experimentation. I tried to make it everything at once — a note-taking tool, an archive library, a task tracker, a project manager, even a reading queue.

Most of those ideas didn’t last. Some sounded clever but created friction in the wrong places. Others blurred the purpose of the tool entirely. Over time, I abandoned what didn’t align with how I think and create.

This post documents those abandoned ideas — the design dead ends that taught me what my Obsidian system shouldn’t be.

Separating the Archive Vault

At first, I kept everything in a single vault with an inbox/ directory for saved articles, videos, and tweets. It seemed efficient, but I quickly noticed I was mixing consumption with creation. My vault bloated with unread material, and it became unclear whether the inbox was a reading queue or an archive.

The easier it was to save things, the less I thought about them — violating my guiding principle that friction is good.

Now I maintain two vaults:

  • Main vault for thinking and note-making
  • ONote-Archive for processed reference materials

Only notes that have passed through active thinking make it to the archive. The main vault stays lightweight and focused, while the archive can grow endlessly without guilt. Two vaults, two purposes.

Quick Capture: Apple Notes Over Obsidian

In the beginning, I captured every fleeting thought directly in Obsidian. It filled quickly with incomplete fragments that blurred the line between brainstorming and structured writing. There was no natural filtering step.

Switching to Apple Notes for quick capture introduced just enough friction to make me pause. I can jot thoughts instantly, review them weekly, and only promote the valuable ones to Obsidian.

This keeps Obsidian intentional — a space for developed ideas, not raw fragments. Apple Notes is my fast capture buffer; Obsidian is where thinking happens.

Projects: Tags Instead of Folders

My first project structure had a projects/ folder with a subdirectory for each project. It looked organized but violated my preference for tags over subfolders.

The result was artificial separation — projects felt isolated from related notes, and my graph view lost context. I was constantly wondering, “Is this in main/ or projects/?”

Now, everything lives in main/, and I tag projects with #project/<name>. This makes projects appear naturally in the graph view and allows a project note to evolve into a general reference by simply removing a tag. It’s flexible, consistent, and matches how I think.

Short-Term Tasks Don’t Belong in Obsidian

For a while, I logged daily tasks in Obsidian using a day/ folder. It didn’t take long before my notes became cluttered with low-value content — grocery lists and quick reminders mixed with long-term ideas.

That’s when I realized: Obsidian is for thinking, not executing.

Now, short-term tasks live in dedicated apps (Apple Reminders, Things, Todoist), while long-term goals and context stay in Obsidian. This separation keeps my knowledge space focused and prevents mental overload.

No Reading Queue in Obsidian

I briefly considered managing a reading queue in Obsidian using Bases, complete with metadata and progress tracking. But it quickly became clear: that approach encouraged hoarding.

It made capturing effortless — and processing rare. Smooth workflows aren’t always better ones. I reminded myself: Obsidian isn’t a task manager or read-it-later app.

Now I rely on external tools or Apple Notes for temporary saves. The archive only grows after I’ve thought about something, not before.

The Core Principle: Friction Is Good

Every decision here reflects one underlying idea:

Friction is a good thing.

Friction between capture and archive prevents hoarding.

Friction between fleeting and permanent prevents clutter.

Friction between reading and saving forces evaluation.

Friction between thinking and executing preserves purpose.

My system evolved by removing convenience where it hurt and adding structure where it helped. Each abandoned idea taught me something about balance — how to build a system that supports deliberate thinking, not just organized information.

┌─
ARTICLE
─┐

└─
─┘

Note

Note format is important: use a proper template to provide insights.

Write learning notes in your own words, so your mind digests the material.

DO NOT copy and paste, otherwise you are not learning and thinking, only collecting information.

“If you’re thinking without writing, you only think you’re thinking.”

— Leslie Lamport

Adopt the atomic note philosophy. When a new concept comes up, create a new note and link to it rather than writing about it inside the current note, so you form a network of ideas instead of a few big notes.

Give context when discussing information, either by embedding text or by adding a callout.
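
In Obsidian, that means either embedding the relevant note or wrapping the context in a callout (the note name below is a placeholder):

![[Friction Is a Good Thing]]

> [!note] Context
> This decision builds on [[Friction Is a Good Thing]].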

Tag and Link

Use tags only for is-a relationships, and exercise restraint when tagging.

Use tags that will be reused: not too vague, not too specific.

Use nested tags as a virtual file system.

Use links for connecting. It is OK to use a HighOrderNote, which is just an empty note that exists to connect other notes.

A Guide On Links vs. Tags In Obsidian

General suggestions

Do not write notes in a Wikipedia style; it loses all the points that matter.

Note-taking is different from information collection. Distinguish between the two, and do not only collect information; keep proper friction, because friction is a good thing.

Write about the why: the motivation behind the idea.

┌─
ARTICLE
─┐

└─
─┘

Per-Display Layout Configuration in Yabai Using spacespy

The Problem

When running yabai with multiple displays, you often want different layouts per screen:

  • Built-in laptop display: Stack layout (one window at a time)
  • External displays: BSP tiling (multiple windows side-by-side)

The challenge? yabai -m query --displays gives you display indices but doesn’t tell you which is the built-in screen.

The Solution: spacespy

spacespy is a lightweight macOS utility that provides display information as JSON, including whether a display is “Built-in” or external.
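
The fields the script below relies on look roughly like this (illustrative output; only name and display_number matter here):

{
  "monitors": [
    { "name": "Built-in Retina Display", "display_number": 1 },
    { "name": "External Display", "display_number": 2 }
  ]
}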

Install from source:

git clone https://github.com/nohzafk/spacespy.git
cd spacespy
make
sudo make install

The Configuration Script

Create ~/.config/yabai/configure-displays.sh:

#!/usr/bin/env bash

# Get display information
spacespy_output=$(spacespy)
displays=$(yabai -m query --displays)

# Configure layout for all spaces on a display
configure_display() {
  local display_index=$1
  local layout=$2

  spaces=$(echo "$displays" | jq -r ".[] | select(.index == $display_index) | .spaces[]")

  for space in $spaces; do
    yabai -m config --space "$space" layout "$layout"
  done
}

# Find built-in display
builtin_display_number=$(echo "$spacespy_output" | jq -r '.monitors[] | select(.name | contains("Built-in")) | .display_number')
all_display_numbers=$(echo "$spacespy_output" | jq -r '.monitors[].display_number')
display_count=$(echo "$displays" | jq 'length')

if [ "$display_count" -eq 1 ]; then
  # Single display - use stack
  configure_display 1 "stack"
else
  # Multiple displays
  [ -n "$builtin_display_number" ] && configure_display "$builtin_display_number" "stack"

  # External displays - use BSP
  for display_num in $all_display_numbers; do
    [ "$display_num" != "$builtin_display_number" ] && configure_display "$display_num" "bsp"
  done
fi

Make it executable:

chmod +x ~/.config/yabai/configure-displays.sh

Event-Driven Reconfiguration

Add to your .yabairc:

# Configure on startup
bash ~/.config/yabai/configure-displays.sh

# Reconfigure when displays change
yabai -m signal --add event=display_added \
  action="bash ~/.config/yabai/configure-displays.sh"

yabai -m signal --add event=display_removed \
  action="bash ~/.config/yabai/configure-displays.sh"

yabai -m signal --add event=display_moved \
  action="bash ~/.config/yabai/configure-displays.sh"

yabai -m signal --add event=mission_control_exit \
  action="bash ~/.config/yabai/configure-displays.sh"

Why This Works

  1. Automatic: Connect/disconnect displays → layouts reconfigure automatically
  2. Clean separation: spacespy handles display detection, yabai handles window management
  3. SIP-compatible: Works with System Integrity Protection enabled
  4. Simple: One script, a few signal handlers

The Result

  • On the go (laptop only) → stack layout maximizes limited screen space
  • At my desk (external monitors) → BSP tiling on big screens, stack on laptop

The transition happens automatically. No manual intervention needed.

The key insight: spacespy provides the missing piece—identifying which display is built-in. Once you know that, per-display layout configuration becomes trivial.

┌─
ARTICLE
─┐

└─
─┘

Checkpointing Conversations with Claude

When I’m deep in a long Claude session, I use what I call a checkpoint strategy. It’s basically a conversational anchor.

Let’s say Claude throws out a list of ideas or questions. Before diving into one of them, I’ll drop a quick line like:

Use this as Checkpoint A for our conversation — when we come back, track the decisions made.

That way, I can explore one branch in depth, make a few choices, then jump back to the checkpoint and pick a different path — without losing the thread of what’s already been decided.

It’s like branching in Git, but for chat. Each checkpoint keeps the flow organized while I experiment with different directions.