I did not build this version of my portfolio by disappearing for a weekend and emerging with a grand master plan. I built it the way a lot of real product work happens now: in small, opinionated iterations with Codex inside the repo, tightening one layer at a time until the site started to feel like mine.
This project began as an AstroPaper-based site. That gave me a solid content and layout foundation, but it still looked like a starter. The work in this repo became a running conversation about how to push it from “good Astro template” to “personal portfolio with a distinct point of view.”
## The workflow was conversation first, code second
The pattern that worked best was simple:
- I described what felt wrong or unfinished.
- Codex inspected the actual codebase before suggesting changes.
- We made targeted edits, verified the result, and then refined again.
That sounds obvious, but it matters. The useful part was not “AI writes code.” The useful part was having a coding agent that could stay anchored to the current repository state, keep momentum between iterations, and translate vague direction into concrete edits.
In practice, that meant conversations like:
- This still feels too much like AstroPaper.
- The homepage needs more identity and stronger structure.
- The header/navigation is doing too much in the wrong way.
- Verify this in a browser instead of assuming the layout works.
- Tighten the content so it reads more like an operator profile than a generic developer bio.
Those are not fully formed tickets. They are directional prompts. Codex was most useful when turning that direction into specific implementation steps.
## What changed in the site
Looking at the evolution of this repo, a few themes stand out.
### 1. The template became a portfolio
One of the earliest shifts was moving away from the stock AstroPaper framing and converting the site into a personal portfolio for my own work. That meant changing site metadata, trimming template baggage, updating the README and config, and replacing generic blog defaults with something aligned to my name and use case.
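In AstroPaper, most of that identity lives in a single config module, so the reshaping centered on edits like the sketch below. The field names follow AstroPaper's `SITE` config as I understand it, and the values are placeholders, not this site's real metadata:

```typescript
// src/config.ts — sketch of the kind of edit involved. Field names follow
// AstroPaper's SITE export; exact fields vary by template version, and the
// values here are illustrative placeholders.
export const SITE = {
  website: "https://example.dev/", // placeholder domain
  author: "Your Name",             // replaces the template author
  desc: "Portfolio and writing on product, systems, and delivery.",
  title: "Your Name",              // replaces the "AstroPaper" default
  lightAndDarkMode: true,
  postPerPage: 5,
} as const;
```

Mechanically trivial, but it is the difference between a template that happens to be deployed and a site that claims an owner.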
That was the easy part. The harder part was making the site feel authored rather than merely re-labeled.
### 2. The homepage got a stronger visual model
The most visible redesign was the terminal-inspired homepage. Instead of a standard hero with a few cards underneath it, the landing page moved toward a more intentional “operator console” layout with status rows, runtime panels, proof points, and sharper visual hierarchy.
This was a good example of where conversational iteration helped. The first pass only needed to be directionally right. After that, Codex could keep reshaping spacing, labels, copy, panel structure, and motion until the page felt less like a generic SaaS landing page and more like a portfolio for someone who works across product, systems, and delivery.
### 3. Verification became part of the loop
One of the better decisions in this project was wiring in agent-browser as the default browser verification workflow. Instead of trusting that a refactor “should be fine,” we added a repeatable way to open the site, wait for the page to settle, capture screenshots, and inspect console errors.
That matters because AI-assisted development gets much better when the feedback loop is not just “the code compiles.” For frontend work especially, verification has to include what the page actually looks like and whether it behaves cleanly in a browser.
## What Codex was actually good at
The best use of Codex here was not one-shot generation. It was repo-aware acceleration.
It helped most when I needed to:
- inspect existing Astro components and find the right edit point quickly
- make cohesive multi-file changes without losing the current shape of the app
- preserve the established visual direction while refining details
- trace how blog content, pagination, and page layouts were wired together
- make a change, verify it, then narrow the next improvement
That last part is the important one. The value came from shortening the path between “this still feels off” and “here is the next concrete version.”
## Using multiple Codex threads in one project
One of the more practical patterns in a project like this is using multiple Codex threads against the same repo, each with a narrow objective.
That works especially well when the tasks are adjacent but not identical. One thread can stay focused on homepage structure and copy, while another handles a contained infrastructure or config change. In this repo, that split was useful for things like adjusting the Astro config so local preview worked cleanly over Tailscale MagicDNS and other environment-specific access paths, without derailing the main design conversation.
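For the MagicDNS case, the contained change looks something like the sketch below. The hostname is a placeholder, and `allowedHosts` assumes a recent Vite, so treat this as a starting point rather than this repo's exact config:

```typescript
// astro.config.ts — illustrative sketch. "machine.tailnet-name.ts.net" is a
// placeholder for a Tailscale MagicDNS hostname, not the real one used here.
import { defineConfig } from "astro/config";

export default defineConfig({
  // Listen on all interfaces so other tailnet devices can reach the server.
  server: { host: true },
  vite: {
    // Recent Vite versions reject unrecognized Host headers by default, so
    // the MagicDNS name has to be allow-listed for both dev and preview.
    server: { allowedHosts: ["machine.tailnet-name.ts.net"] },
    preview: { allowedHosts: ["machine.tailnet-name.ts.net"] },
  },
});
```

It is exactly the kind of edit worth isolating in its own thread: small, environment-specific, and irrelevant to the design conversation happening in parallel.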
The benefit is not just speed. It is separation of concerns. A thread working on content and layout can keep its own context tight, while a second thread can inspect config, host settings, or verification flow without dragging all of that detail into the main UI iteration loop.
Used well, multiple threads make Codex feel less like a single chat window and more like a set of active workstreams sharing the same codebase. That is a better fit for real software projects, where design, content, configuration, and verification often move in parallel.
## What still required judgment from me
Codex sped up implementation, but taste and direction still had to come from me.
I still had to decide:
- what kind of portfolio this should be
- what tone the writing should have
- which parts of the interface felt too generic
- when a section added clarity versus noise
- what was actually worth keeping after each iteration
That division of labor felt right. I do not want an agent deciding my voice for me. I want it compressing the distance between intention and execution.
## Why this approach worked
This project is a good example of how I think AI tools are most useful in real software work: not as replacements for engineering judgment, but as collaborators that make iteration cheaper.
Codex worked because it could operate inside the actual repo, inspect the current implementation, make precise edits, and stay with the thread of the work across multiple rounds. That let me move faster without pretending the first answer was the final answer.
The result is not “a site built by AI.” It is a portfolio shaped through a series of practical conversations with a coding agent, backed by real code changes, browser checks, and steady refinement.
That feels closer to how modern software is actually getting built.