<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Matt Venables</title><description>A humble Astronaut’s guide to the stars</description><link>https://venabl.es/</link><item><title>Your Agent Still Needs an Identity. But the Reason Has Changed.</title><link>https://venabl.es/your-agent-still-needs-an-identity/</link><guid isPermaLink="true">https://venabl.es/your-agent-still-needs-an-identity/</guid><description>People started running their agents on their home computers. Now how the hell do we tell them apart?</description><pubDate>Wed, 11 Feb 2026 00:00:00 GMT</pubDate><content:encoded>_A follow-up to
[Your Agent Needs an Identity. And It Should Be Decentralized.](https://venabl.es/your-agent-needs-an-identity)_

---

A few months ago, I wrote about
[why AI agents need decentralized, cryptographic identity](https://venabl.es/your-agent-needs-an-identity).
My argument was simple: agents are going to be acting on your behalf across the
internet, so they&apos;ll need a way to prove who they are without leaking your
personal information.

I still believe all of that. But something has shifted underneath the premise,
and it&apos;s worth talking about.

## My agent is me

When I wrote that post, the mental model was: your agent runs in the cloud,
talks to APIs, and acts _on your behalf_. It needs credentials, permissions,
identity — everything to prove it should be there — because it&apos;s operating
remotely, autonomously, in cloud environments that don&apos;t know you.

That was the trajectory. And for enterprise and cloud-native agents, it still
is.

But the thing that actually happened (and I didn&apos;t see this coming) is that
agents started running on your Mac Mini. On your home machine. Logged into your
browser. Using your cookies, your OAuth tokens, your everything. Not acting _on
your behalf_ in some delegated sense. Acting _as you_.

![On the internet, nobody knows you&apos;re a lobster.](nobody-knows-youre-a-lobster.png)

[OpenClaw](https://openclaw.ai) is probably the best example of this shift. It
runs on your machine, connects to your messaging apps, browses the web in your
browser sessions, reads your files, manages your calendar — all using your
existing credentials. It didn&apos;t wait for an identity layer. It just shipped. And
it works remarkably well precisely because it sidesteps the identity problem
entirely. Your agent doesn&apos;t need to prove it&apos;s authorized to use your Gmail —
it&apos;s already logged in. **It&apos;s not operating on your behalf. It _is_ you.**

[Claude Code](https://docs.anthropic.com/en/docs/claude-code) and a growing wave
of similar tools are doing the same thing. It&apos;s the same pattern: agents are
inheriting their operator&apos;s identity by default.

This is incredibly pragmatic — and in hindsight, inevitable. Why would we expect
every service on the internet to drop everything and support agents natively,
especially while the entire definition of &quot;agent&quot; is rapidly evolving? So here
we are.

## But Your Agent Still Needs an Identity

So if agents are just running as you, why bother with identity at all? Because
the moment you have more than one agent, &quot;it was me&quot; stops being a useful
answer. And the reasons only grow from there.

### Cryptographic Guardrails

Right now, if your agent is running with your credentials, it can do _anything
you can do_. Send emails. Delete files. Make purchases. Post on social media.
There&apos;s no granularity.

Cryptographic identity fixes this. Give your agent a keypair, issue it signed
credentials that scope its permissions, and now it can&apos;t accidentally spend
money because it can&apos;t cryptographically sign the transaction. The agent can&apos;t
escalate its own permissions because it can&apos;t forge the signature. Least
privilege, enforced by math instead of a config file.

And if the credential is _bound_ to the agent via cryptographic proof, then even
if the agent fumbles the credential and exposes it, nobody else can use it.

### The Audit Trail

If you&apos;re running one agent, identity is simple. You know who did it. Done.

When you have two or three — a coding agent, a personal assistant, something
monitoring your email — things get murky. Which one sent that message? Which one
made that API call that burned through your rate limit? OpenClaw alone can spawn
sub-agents for different tasks. When you&apos;ve got multiple agents operating with
the same credentials, &quot;show me the receipts&quot; becomes a real need.

And once agents start touching money — making purchases, managing subscriptions,
executing transactions — you need a clear, auditable trail of which agent did
what, when, and under what authority. This is where identity infrastructure like
[ACK-ID](https://www.agentcommercekit.com/ack-id/introduction), or a variation
of it, becomes a pain-killer rather than a vitamin.

## SSL: Yes, that analogy again

In my original post, I compared agent identity to SSL certificates for the web.
That comparison holds up even better now.

SSL didn&apos;t launch with the web. The early web was plaintext everything. HTTP,
not HTTPS. And it worked fine — until it didn&apos;t. Until people started doing
commerce, entering passwords, transmitting sensitive data. Then SSL went from
&quot;nice to have&quot; to &quot;the browser literally won&apos;t let you visit this site without
it.&quot;

Agents are in their HTTP era right now. Everything is running on ambient trust —
your credentials, your machine, your implicit authorization. And it works great.
OpenClaw users aren&apos;t losing sleep over agent identity. They&apos;re getting things
done.

But agents are about to start doing commerce. Making purchases. Managing money.
Coordinating with other agents they&apos;ve never interacted with before. Your agent
will need to verify that it&apos;s really dealing with Amazon&apos;s agent and not a
phishing agent. Amazon&apos;s agent will need to verify that your agent is authorized
to make purchases on your behalf. Ambient trust stops being enough.

The tools for this already exist. It&apos;s just cryptography -- keypairs and
signatures, the same primitives we already use for everything else. Protocols
like ACK-ID package this up for agents. The question is just about _when_ we
start using them.

## What I Think Happens Next

Bear with me while I speculate a bit; I&apos;m only an oracle-in-training. Here&apos;s how
I imagine it goes down:

**Short term (now):** Agents continue running locally with ambient credentials.
Identity is implicit. The ecosystem grows fast because there&apos;s zero friction.

**Medium term:** Multi-agent setups become common. People want audit trails,
permission scoping, and the ability to distinguish between agents. Local
identity — even if it&apos;s just &quot;this agent has a keypair and signs its actions&quot; —
starts to matter. Agent runtimes begin building this in.

**Long term:** Cloud-based agent explosion. Personal agents will still run
locally, but corporate agents won&apos;t be running on Mac Minis. Agent-to-agent
communication scales up. The external trust problem from my original post comes
roaring back, but now there&apos;s a foundation of local identity to build on.

The thing I missed in my original post wasn&apos;t the destination — it was the path.
I assumed agents would need identity _first_, as a prerequisite for operating.
Instead, they&apos;re operating first and identity is following. Which is, if you
think about it, exactly how the internet has always worked.

## So ... keypairs?

Your agent still needs an identity. And the cryptographic foundation is simpler
than it sounds. Forget the jargon — DIDs, VCs, W3C — nobody cares about the spec
name. It&apos;s just keypairs and signatures.

Give your agent a keypair. Sign its actions. Build audit trails. Scope
permissions with signed credentials. Start local. Extend outward.

We&apos;re in a golden age of agent development, one we&apos;ll look back upon with
nostalgia, but your agents are running around wearing your clothes. Sure, it&apos;s
pragmatic for now, but soon we&apos;ll want to know which one is which.

---

_At [Catena Labs](https://catenalabs.com), we&apos;re building agent-native financial
infrastructure on top of the [Agent Commerce Kit](https://agentcommercekit.com).
The identity layer —
[ACK-ID](https://www.agentcommercekit.com/ack-id/introduction) — is designed for
exactly this evolution: from local agent identity to internet-scale trust. If
you&apos;re building in this space, [let&apos;s talk](https://catenalabs.com/)._</content:encoded></item><item><title>wt: A Git Worktree Helper</title><link>https://venabl.es/wt/</link><guid isPermaLink="true">https://venabl.es/wt/</guid><description>A simple wrapper for managing git worktrees, built for running multiple coding agents in parallel.</description><pubDate>Wed, 21 Jan 2026 00:00:00 GMT</pubDate><content:encoded>Git worktrees are having a moment. With tools like
[Claude Code](https://code.claude.com), [opencode](https://opencode.ai), and
[Amp](https://ampcode.com), you can run multiple coding agents in parallel. But
they step on each other&apos;s toes when they run in the same directory (and I&apos;m
looking for maximum efficiency here!).

Git worktrees are a clean way to give each agent its own space, but they suffer
from a major problem: they don&apos;t copy over `.env` (and other gitignored) files,
so you can&apos;t test the changes without some manual setup.

So I created a git worktree wrapper called
[`wt`](https://github.com/venables/wt).

![wt git worktree helper](./wt.png)

## What it does

```sh
# Create a worktree for an existing branch
wt feature/login

# Worktree ends up at ../myproject-feature/login
```

It handles the path for you and copies over everything that was in your
`.gitignore`. Worktrees live at `../&lt;repo&gt;-&lt;branch&gt;`, which keeps them organized
and out of the way. And you can have it auto-cd you into the new worktree
directory, too.

A few other commands:

```sh
# List worktrees
wt ls

# Clean up
wt rm feature/login
```

## .worktreeinclude

Building on what Claude Desktop (oddly not Claude Code, yet) and Cline already
promote, `wt` supports a `.worktreeinclude` file, which is the same format as
`.gitignore` but allows you to copy over just a subset of items.

Create a `.worktreeinclude` file to specify what gets copied:

```
.env
.env.local
```

If the file doesn&apos;t exist, `wt` falls back to copying everything in `.gitignore`
that exists in the source worktree. If you don&apos;t want any files copied, you can
always opt-out with `--no-copy`.

## Bonus: Even more speed

Add a `wtc` function to your `.zshrc` or `.bashrc` that creates a worktree and
launches Claude Code in one go:

```sh
wtc() { wt &quot;$@&quot; &amp;&amp; claude; }
```

## Install

```sh
brew install venables/tap/wt
```

[Check out the repo on GitHub.](https://github.com/venables/wt)</content:encoded></item><item><title>Your Agent Needs an Identity. And It Should Be Decentralized.</title><link>https://venabl.es/your-agent-needs-an-identity/</link><guid isPermaLink="true">https://venabl.es/your-agent-needs-an-identity/</guid><description>The internet is going agent-native. Identity and trust need to scale with it.</description><pubDate>Sat, 27 Sep 2025 00:00:00 GMT</pubDate><content:encoded>By the time ChatGPT launched at the end of 2022, nearly half of all internet
traffic was driven by bots[^1]. And today, we&apos;ve already blown past that number.

Now, fast forward a few years and it&apos;s not hard to imagine how this story goes:
as more and more people bring AI agents into their lives, the lion&apos;s share of
traffic on the internet will be automated. Some of these agents will be
interacting directly with you on your phone, and others will operate behind the
scenes while you sleep.

And here&apos;s the kicker: agents won&apos;t operate in a vacuum. They&apos;ll coordinate and
interact with each other. Your scheduling agent will work with your shopping
agent. And those agents will be negotiating with outside agents and services.

Call them whatever you want: agents, bots, clankers. The outcome is the same:
The internet will become more of an agent-to-agent network than a human-to-human
one.

![On the Internet, nobody knows you&apos;re an agent](./nobody-knows-youre-an-agent.png)

## The Identity Challenge

When agents start operating at this scale, performing real work and making real
transactions, they&apos;re going to need to prove who they are. Not just to you
(making sure you&apos;re talking to your own agent and not a phony), but they&apos;ll also
need to prove who they are to every service they interact with. Beyond that,
they&apos;ll often need to prove that they&apos;re not operating on behalf of a bad actor.

Humans solve this through social constructs - driver&apos;s licenses, social security
numbers, credit scores, the whole bureaucratic apparatus. But these are
essentially workarounds we created because we can&apos;t run cryptographic
verification in our heads.

Agents are different. They&apos;re computers. And computers are pretty damn good at
signing and verifying cryptographic messages. They don&apos;t suffer from signature
fatigue like we do -- they won&apos;t accidentally sign away valuable assets due to
laziness. They can verify every signature, every time, with perfect consistency.

## Why Centralized Systems Won&apos;t Scale

The natural impulse is to think: &quot;We need a big, centralized identity system for
all these agents.&quot; And I&apos;ve spoken to a few BigCo&apos;s who think this is the
answer. But that fundamentally misunderstands how the internet actually works.

The internet doesn&apos;t have a single point of control. When one ISP goes down,
traffic routes around the failure. The architecture is distributed and
decentralized by design - it has to be to handle global scale and maintain
resilience.

Agent identity needs to work the same way. It has to be internet-native,
globally scalable, and decentralized. Any centralized approach will inevitably
become a bottleneck, a security risk, or both. Verifying an agent&apos;s identity
should not be beholden to some centralized API&apos;s rate limits, uptime, or license
agreement.

The only way to make identity work at internet scale is to decentralize it. Not
using closed, proprietary systems, but using **open standards that _anyone_ can
tap into**, like Verifiable Credentials
([VCs](https://www.w3.org/TR/vc-data-model-2.0/)) and Decentralized Identifiers
([DIDs](https://www.w3.org/TR/did-1.0/)). These serve as globally-unique,
verifiable, and self-controlled usernames, profiles, and permission slips.
They&apos;re already used in the wild and growing: social apps like
[Bluesky](https://bsky.social/about) are built upon DIDs via the
[AT Protocol](https://atproto.com/), while governments and traditional
OpenID/OAuth companies are issuing VCs via
[OIDC4VC](https://openid.net/specs/openid-4-verifiable-credential-issuance-1_0.html).

These decentralized, cryptographic &quot;proofs&quot; let any agent prove that they have
permission to do something, that they have done something in the past, that they
have a specific attribute, or that they are operating on behalf of a person or
business.

All while respecting the owner&apos;s privacy.

## Privacy by Design

No agent should ever leak their owner&apos;s personally identifiable information
(PII). Full stop. Sounds pretty obvious when you say it out loud, but you&apos;d be
surprised at how much of your personal information is transmitted in plaintext
on a day-to-day basis.

The only way to guarantee that an agent can&apos;t leak your information is to
never give that information to the agent in the first place. Through
**cryptographic trust chains** built
on Verifiable Credentials, agents can prove they&apos;re authorized to act while
simultaneously proving their owner is legitimate - all without exposing any
personal information.

When you visit Amazon.com, your browser shows you a green lock icon and you know
you&apos;re on the real Amazon website. Behind that icon is a whole set of
cryptographic operations and proofs that your browser went through to verify
that this website is really Amazon&apos;s website and not an imposter. But it doesn&apos;t
need to give you Jeff Bezos&apos;s address to prove it.

Emerging protocols like
[ACK-ID](https://www.agentcommercekit.com/ack-id/introduction) do this same
thing for agents. It&apos;s like SSL for agents. Recipients can verify they’re
dealing with a legitimate agent backed by a verified owner — **all in a legally
compliant way**, while the agent owner’s privacy remains protected. When
regulations require additional information (e.g., in payments), the owner can
selectively disclose the minimum necessary details.

This cryptographic approach is far superior to traditional identity systems that
rely on names and static identifiers. Instead of using a 9-digit social security
number as proof of your identity (one which has undoubtedly been leaked
already), these cryptographic identities are essentially impossible to
impersonate.

![I am agent spartacus](./i-am-spartacus.png)

## The Moment is Now

We&apos;re at exactly the right moment for this transformation. Agent traffic is
about to explode across the internet. Gartner predicts that within 3 years, a
third of user experiences will shift from native applications to agentic front
ends[^2]. The scale of this change will be enormous.

Traditional identity systems were not designed for this new wave of actors. We
have a rare opportunity to build global identity infrastructure correctly. Not
the &quot;Equifax leaked 143 million social security numbers&quot; way. Not the &quot;sorry,
your identity was stolen because we stored everything in plaintext&quot; way.

Instead: cryptographic proof that can&apos;t be faked. Verifiable credentials based
on open standards. Decentralized architecture that can&apos;t be compromised because
there&apos;s no central repository to attack.

Decentralized agent identity isn&apos;t just technically superior - given the
trajectory we&apos;re on, it&apos;s a requirement. The only question is whether we build
it intentionally or let it emerge from yet-another centralized identity failure.

At [Catena Labs](https://catenalabs.com), we&apos;ve been building exactly this kind
of infrastructure on top of the
[Agent Commerce Kit (ACK)](https://agentcommercekit.com). The identity layer -
[ACK-ID](https://www.agentcommercekit.com/ack-id/introduction) - establishes
verifiable links between agents and their owners using these proven
cryptographic standards, creating the foundation for trusted agent interactions
at internet scale.

The agents are coming. They need an identity system that works at internet
scale, preserves privacy, and enables new forms of economic interaction. Let&apos;s
build them right.

[^1]:
    [Imperva Bad Bot Report 2025](https://www.imperva.com/resources/gated/reports/2025-Bad-Bot-Report.pdf)

[^2]:
    [Gartner Predictions Aug 26, 2025](https://www.gartner.com/en/newsroom/press-releases/2025-08-26-gartner-predicts-40-percent-of-enterprise-apps-will-feature-task-specific-ai-agents-by-2026-up-from-less-than-5-percent-in-2025)

---

_We&apos;re building agent-native financial infrastructure at
[Catena Labs](https://catenalabs.com/). If this resonates with you, we should
talk._</content:encoded></item><item><title>Key Repeat for Vim in Cursor</title><link>https://venabl.es/key-repeat-in-cursor-vim/</link><guid isPermaLink="true">https://venabl.es/key-repeat-in-cursor-vim/</guid><description>How to enable key-repeat on MacOS in Cursor for tools like VSCode Vim</description><pubDate>Tue, 14 Jan 2025 00:00:00 GMT</pubDate><content:encoded>&gt; NOTE: This post is primarily a reminder for myself for when I set up a new
&gt; laptop.

I love vim (and neovim). But I also love the niceties of a modern editor,
especially ones with AI agents built-in, like [Cursor](https://cursor.com).

![The agents are in the computer?](./ai-agents-in-the-computer.jpg)

But using Vim keybindings on MacOS in Cursor (or VSCode) needs a bit of
tweaking. Specifically, you have to enable key-repeat so you can hold a key to
move around.

Here&apos;s how to do it:

```sh
defaults write $(osascript -e &apos;id of app &quot;Cursor&quot;&apos;) ApplePressAndHoldEnabled -bool false
```</content:encoded></item><item><title>Type-safe API Routes in Next.js</title><link>https://venabl.es/type-safe-api-routes-in-nextjs/</link><guid isPermaLink="true">https://venabl.es/type-safe-api-routes-in-nextjs/</guid><description>Introducing &apos;typed-route-handler&apos;, the easiest way to add type-safety to Next.js Route Handlers</description><pubDate>Tue, 09 Jan 2024 00:00:00 GMT</pubDate><content:encoded>Since the release of Next.js 14, I have been converting all of my existing (and
new) products to use the app directory and Server Components. The journey has
been exciting, albeit with its fair share of bumps in the road. But one of the
features I&apos;ve most enjoyed was the creation of **Route Handlers**. Built on web
standards, these incredibly powerful handlers allow us to return _anything_ from
a specific route -- a JSON API response, an image, a stream, React Components...
_anything_!

While these route handlers offer incredible flexibility, it also means they
don&apos;t offer many guard-rails. In fact, these handlers are extremely minimal by
default. And that&apos;s where they can bite you. Because you can return anything
from these handlers, there&apos;s no contract that you&apos;re actually responding with
the right thing, or even responding at all. And if you do respond, is it what
you promised?

## The missing API features for Next.js

Enter `typed-route-handler`, a new node module I built for Next.js Route
Handlers. It&apos;s a lightweight module that packs some serious Developer Experience
(DX) and productivity punch. With a couple of lines, you can have a fully
type-safe API endpoint, automatic validation handling, with request logging and
timing.

Here&apos;s a minimal example of how it works:

```ts
import { NextResponse } from &quot;next/server&quot;
import { handler } from &quot;typed-route-handler&quot;

type ResponseData = {
  result: string
  over: number
}

export const GET = handler&lt;ResponseData&gt;((req) =&gt; {
  // This response must match ResponseData
  return NextResponse.json({
    result: &quot;this response is type-checked&quot;,
    over: 9000
  })
})
```

Or, a more complete example with URL parameter types:

```ts
import { auth } from &quot;@/auth&quot;
import { NextResponse } from &quot;next/server&quot;
import { handler, unauthorized } from &quot;typed-route-handler&quot;

type ResponseData = {
  result: string
  over: number
}

type Context = {
  params: {
    id: string
  }
}

/**
 * GET /api/articles/:id
 */
export const GET = handler&lt;ResponseData, Context&gt;(async (req, context) =&gt; {
  // If the user is not authenticated, we can call `unauthorized()` which will
  // automatically be caught by the handler, and respond with an HTTP 401.
  const session = await auth()
  if (!session) {
    unauthorized()
  }

  // `context` is type-safe here
  const article = findArticle(context.params.id)

  // This response must match ResponseData
  return NextResponse.json({
    result: &quot;this response is type-checked&quot;,
    over: 9000
  })
})
```

## Going further with zod

I&apos;m a big user of `zod` for schema validation, and wanted to bring some of that
power to `typed-route-handler`.

First and foremost, if you use zod within a handler, any validation errors will
automatically be caught and returned to the client as an HTTP 400 response. This
works even if you don&apos;t specify any return types. For example:

```ts
import { NextResponse } from &quot;next/server&quot;
import { z } from &quot;zod&quot;
import { handler } from &quot;typed-route-handler&quot;

const bodySchema = z.object({
  name: z.string().min(3),
  age: z.number()
})

/**
 * POST /api/settings
 */
export const POST = handler(async (req) =&gt; {
  // Any parsing error here will throw a Validation Error
  const { name, age } = bodySchema.parse(await req.json())

  return NextResponse.json({
    ok: true
  })
})
```

You can use zod with route parameters, search params, and basically everything
else; it will all be handled properly by `typed-route-handler`.

## Making your APIs better

Developing this, I realized this was **the missing API endpoint layer for
Next.js**. By simply wrapping my existing API routes with `handler`, I got a
slew of basic features that I would expect Next.js to ship with. Namely:

- **Response type-checking:** Make sure your API endpoints are honoring their
  side of the contract.
- **Request Parameter type-checking:** Your route can be defined with a `zod`
  schema, allowing you to validate and coerce URL parameters. You can validate a
  token is a uuid/cuid, or even coerce an ID to a number.
- **Deep `zod` compatibility:** When you call a zod schema&apos;s `parse` method
  within a handler, any exceptions are caught and returned as a
  `400 Validation Error`
- **Helpful error methods:** Following Next&apos;s `notFound()` standard, we exposed
  `unauthorized()` (HTTP 401) and `validationError()` (HTTP 400) methods, which
  are of type `never` so they can be used anywhere in the handler.
- And, at a more basic level, I get request logging and timing.

## Give it a try

```sh
npm i typed-route-handler

# or better yet:
pnpm add typed-route-handler
```

[Check out the README for more examples](https://github.com/venables/typed-route-handler),
and let me know what you think.

The code is MIT-licensed and free to use. As always, pull requests and
improvements are welcome!</content:encoded></item><item><title>Say hello to `npx hello`</title><link>https://venabl.es/npx-hello/</link><guid isPermaLink="true">https://venabl.es/npx-hello/</guid><description>Introducing &apos;npx hello&apos;, the easiest way to browse GitHub profiles from the command line.</description><pubDate>Mon, 20 Nov 2023 00:00:00 GMT</pubDate><content:encoded>If you&apos;re anything like me, you spend your entire day in and out of the
terminal. And, often you want to learn more about the developers behind some of
your favorite code. But that requires the immense effort of opening up a browser
and going to GitHub and looking through their profile.

Suffer no more, my friend. Because today, I put together a tiny, fun npm package
called [`hello`](https://www.npmjs.com/package/hello) which lets you browse
GitHub profiles directly from the command line. Now you have one less reason to
leave the terminal.

The package is intentionally limited and simple. Just type `npx hello` followed
by a GitHub username and it will print out a summary of the user&apos;s profile,
including their name, bio, location, and links to their website, Twitter, and
GitHub.

For example, my profile:

```sh
npx hello venables
```

![npx hello venables](./npx-hello.png)

## But ... why?

That&apos;s a great question. I had reserved the `hello` name quite some time ago,
and had never found a good use-case for it. At first, it was going to be a
websocket-based web framework (think: a node.js version of rails that runs
entirely via sockets), but the world has plenty of frameworks. Then I thought it
would be a good name for a package that will just print &quot;Hello, world!&quot; in a
random language. But that&apos;s not very useful outside of maybe a demo or two.

So the name lay dormant.

Until today.

While browsing GitHub profiles for candidates at
[Catena Labs](https://catena.xyz) (psst: **We&apos;re hiring!**), the thought
occurred to me: **wouldn&apos;t it be great if everyone had a consistent CV?** And
even better, it should be accessible via the command line! So, I tossed together
v0 of this project.

Check out the [source code](https://github.com/hello-js/hello) if you&apos;re
interested. It&apos;s a simple TypeScript project built from the
[`startkit typescript`](https://github.com/startkit-dev/typescript) starter
template.</content:encoded></item><item><title>This stack</title><link>https://venabl.es/stack/</link><guid isPermaLink="true">https://venabl.es/stack/</guid><description>How this site is built.</description><pubDate>Sat, 04 Nov 2023 00:00:00 GMT</pubDate><content:encoded>**Update (Oct 15 2025):** This site is now running on Cloudflare instead of
Vercel.

I&apos;m always curious about the tech stack for products I use daily. Sometimes the
most wonderful user experiences are backed by the wildest tech choices (but,
like I&apos;ve said before, an ugly backend that ships is better than a perfect
backend that nobody sees).

For my personal site (this site), I&apos;ve changed stacks several times over the
years. It started retro with static HTML, then I migrated it to Ruby using
Jekyll, then to JavaScript with Gatsby, then to TypeScript with Next.js. But
once I discovered Astro, I knew it&apos;d be the perfect match for me.

### Tech used:

- [Astro](https://astro.build) using TypeScript, hosted on
  [Cloudflare](https://cloudflare.com). You can&apos;t beat Astro for content sites.
  It has it all built-in, and bundles everything into a tiny package.
- [Tailwind CSS](https://tailwindcss.com) for styling. Nothing compares.

The stack couldn&apos;t be simpler, thanks to Astro.

---

**Previous stack, circa 2023-&gt;2024:**

- [Next.js](https://nextjs.org) using TypeScript, hosted on
  [Vercel](https://vercel.com). I don&apos;t care that Vercel is &quot;expensive&quot;, it&apos;s a
  dream to work with. And Next for building static content, like it was
  originally intended. The dream.
- [StartKit](https://startkit.dev) for the Next.js boilerplate, running on
Cloudflare. I wrote it, so I&apos;m biased. But it got me running quickly and lets
  me git pull the latest and greatest with no headaches.
- [Tailwind CSS](https://tailwindcss.com) for styling. I have too much love for
  Tailwind.
- [Contentlayer](https://contentlayer.dev) for markdown content. I previously
  used [MDX](https://mdxjs.com) directly but Contentlayer was just too easy to
  use.

I once wrote that &quot;we&apos;ve gone too far&quot; with all the tech needed to host a static
site like this. I still believe that to an extent, but I love to write in
[markdown](https://daringfireball.net/projects/markdown/), and I love how this
site is built.

So I may be a hypocrite, but I&apos;m a happy hypocrite.</content:encoded></item></channel></rss>