Reviving bettertyping.org with AI Coding Agents
bettertyping.org is a typing practice site I built years ago. You can type quotes to measure your speed, and there’s a multiplayer mode where you race others in real time. The entire project is written in Elixir — a language I didn’t know when I started and learned along the way.
Over time, the project stalled. The Elixir version was badly outdated, the frontend was still built with Webpack, and I no longer had a working development setup. I tried once to upgrade everything manually, but between limited time and limited Elixir experience, I failed. From that point on, the codebase was effectively frozen.
Recently, I decided to try again — this time using AI coding agents.
Initial state
Before starting:
- Outdated Elixir and Phoenix versions
- Webpack-based frontend
- Broken local setup
- No realistic path to incremental changes
In short, I couldn’t confidently run, build, or deploy the project.
Tooling
I mainly used AMP, and switched to Cursor whenever I hit AMP’s usage limits.
AMP uses Claude Opus 4.5. The usage allowance is roughly $10 per day, currently free of charge, and for the amount of work completed this turned out to be extremely generous.
Upgrading the Elixir project
The first task was upgrading the Elixir and Phoenix stack.
This is usually where older Elixir projects die: dependency conflicts, breaking changes, cryptic errors, and a lot of guesswork. I gave AMP a clear description of the project state and asked it to upgrade step by step.
Here are the first two threads that kicked off the revival:
- The first AMP thread, used to understand the project
- Handoff thread to update the Elixir version — I gave a very simple task: “The project uses an old version of elixir. I want it to be updated to the current elixir version.” After that, it did a really good job, and I just had to tell it to fix a few things along the way.
Most of the upgrade worked one-shot. When it didn’t, the agent diagnosed failures correctly and proposed concrete fixes. After a small number of iterations, the project compiled and ran again.
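To give a sense of where the churn happens: mechanically, most of such an upgrade lives in mix.exs, and each version bump surfaces breaking changes that then have to be chased through the codebase. A hedged sketch of what a modernized dependency block might look like (versions illustrative, not the project’s actual ones):

```elixir
# mix.exs (illustrative versions, not the project's actual file)
defp deps do
  [
    {:phoenix, "~> 1.7"},
    {:phoenix_html, "~> 4.0"},
    {:ecto_sql, "~> 3.11"},
    {:postgrex, ">= 0.0.0"},
    {:jason, "~> 1.4"}
  ]
end
```

Each bump, especially Phoenix’s, is exactly the kind of diagnose-and-fix loop the agent handled on its own.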
At this point, I had something I hadn’t had in years: a working baseline.
Frontend: Webpack → Vite
Next, I replaced Webpack with Vite.
I didn’t want to manually reason through legacy Webpack config, so I let the agent handle the migration. AMP converted the build pipeline, updated asset handling, and integrated everything with Phoenix.
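Part of what makes this migration tractable for an agent is that Phoenix drives external bundlers through dev-time watchers: besides the new Vite config under assets/, the Elixir side mostly comes down to pointing the endpoint’s watcher at Vite instead of Webpack. A sketch, with app and module names hypothetical:

```elixir
# config/dev.exs (hypothetical app and module names)
config :bettertyping, BettertypingWeb.Endpoint,
  watchers: [
    # "dev" in assets/package.json runs Vite in watch mode,
    # replacing the old Webpack watcher.
    npm: ["run", "dev", cd: Path.expand("../assets", __DIR__)]
  ]
```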
The result built cleanly. I made a few stylistic adjustments afterward, but the hard part — getting off Webpack — was done entirely by the agent.
Dockerization and deployment
With the backend and frontend working, I dockerized the project and deployed it to my server.
AMP generated the Docker setup, and with minimal adjustments I was able to deploy a new version of bettertyping.org for the first time in a long while.
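The standard shape for this in Elixir is a multi-stage Docker build: one stage compiles an OTP release with `mix release`, and a slim runtime image ships only that release. The release itself is declared in mix.exs; a hedged sketch with hypothetical names:

```elixir
# mix.exs (hypothetical): the release a Dockerfile's `mix release`
# step builds and the runtime image starts
def project do
  [
    app: :bettertyping,
    version: "1.0.0",
    elixir: "~> 1.17",
    releases: [
      bettertyping: [
        include_executables_for: [:unix],
        applications: [runtime_tools: :permanent]
      ]
    ]
  ]
end
```

Phoenix 1.7 also ships a `mix phx.gen.release --docker` generator that produces this kind of scaffolding.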
At this point, the project was alive again.
Building new features
Once the foundation was stable, I continued using AMP and Cursor to build new features.
Importantly, the agent did most of the implementation work. I rarely wrote code myself. My role was mostly to:
- describe behavior
- validate results
- decide what to build next
New features include:
- Dark mode
- Migration from Bootstrap to Tailwind
- A multiplayer bot opponent for when no human is available
- A completely new typing lesson mode (progression logic sketched after this list):
  - finger usage visualization
  - level-based progression
  - unlocking letters step by step
- Visual cleanup of several neglected pages
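To make the letter-unlocking idea concrete, here is a minimal sketch of how such a progression can be modeled. Everything in it, the module name, the unlock order, and the one-letter-per-level pacing, is invented for illustration:

```elixir
# Illustrative only; not the site's actual lesson code.
defmodule LessonProgression do
  # Home-row keys first, then expanding outward (a made-up order).
  @unlock_order String.graphemes("fjdkslaghrueiwoqptyvmcnxbz")

  @doc "Letters available at a 1-based level: two to start, one more per level."
  def unlocked_letters(level) when level >= 1 do
    Enum.take(@unlock_order, min(level + 1, length(@unlock_order)))
  end
end
```

Lesson text for a level is then generated only from that set, e.g. `LessonProgression.unlocked_letters(3)` yields `["f", "j", "d", "k"]`.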
What worked well
A few patterns consistently produced good results:
- Planning features in a Markdown file, then letting the agent implement them step by step
- Keeping threads small and focused
- Giving the agent access to all build tools, so it could diagnose issues itself
- Letting the agent run tests for both new and existing code
- Using the browser extension so the agent could inspect and debug issues directly in the browser
This made the agent far more autonomous than I expected.
What didn’t work well
Some pitfalls became obvious:
- I once asked the agent to commit changes. After that, every subsequent change in the same thread was also committed. Earlier instructions persist in the thread’s context, so you have to be careful what you ask for early on.
- Implementing the multiplayer bot required the most back-and-forth. This was the hardest problem in the project, and the agent spent a lot of time trying approaches, testing them, failing, and retrying, mostly without my intervention but with noticeable iteration cost (a naive sketch of the core idea follows below).
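For a sense of the moving parts, here is a deliberately naive sketch of the bot’s core idea, assuming a GenServer that reports typing progress to the race process at a fixed pace. All names here are hypothetical, and the real implementation took many more iterations than this skeleton suggests:

```elixir
# Naive sketch with hypothetical module and message names.
defmodule RaceBot do
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts)

  @impl true
  def init(%{quote: text, wpm: wpm, race_pid: race_pid}) do
    # One "word" is conventionally 5 characters, so chars/min = wpm * 5.
    interval_ms = round(60_000 / (wpm * 5))
    send(self(), :type)
    {:ok, %{chars: String.graphemes(text), typed: 0, interval: interval_ms, race_pid: race_pid}}
  end

  @impl true
  def handle_info(:type, %{chars: chars, typed: typed} = state) do
    if typed < length(chars) do
      # Report progress the same way a human player's keystrokes would.
      send(state.race_pid, {:bot_progress, typed + 1})
      Process.send_after(self(), :type, state.interval)
      {:noreply, %{state | typed: typed + 1}}
    else
      {:stop, :normal, state}
    end
  end
end
```

The skeleton is trivial; the iteration went into everything around it.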
Conclusion
The agents were better than I expected.
They worked well in an existing, non-trivial codebase. They handled upgrades, refactors, and feature development without needing constant guidance. Claude Opus 4.5 is a very strong model, and AMP and Cursor are effective agentic tools when used with the right workflow.
This didn’t feel like “AI-assisted autocomplete.”
It felt closer to delegating real implementation work — and reviewing the results.
For an old project that was effectively abandoned, that made all the difference.
I am excited to continue improving bettertyping.org with AI coding agents, and I am looking forward to seeing how these tools evolve.