Building a Modern Personal Website with Claude, Cloudflare, and GitHub
How I leveraged Claude's assistance to build a serverless personal website using TypeScript, Tailwind CSS, and Cloudflare's edge services
TL;DR: Turned a 10-page LaTeX resume into a modern website by collaborating with Claude, an AI assistant. Beyond just coding, the key to success was establishing clear development patterns early, maintaining thorough documentation, and treating AI as a thoughtful collaboration partner rather than just a code generator. This post shares practical lessons learned about effective AI collaboration in software development. 🚀
The Challenge 🌐
For academics and professionals in technology, maintaining an up-to-date online presence is more than a nicety—it’s a necessity. I found myself in a common situation: maintaining a comprehensive LaTeX document that had evolved over a decade to include hundreds of publications, talks, patents, and other professional accomplishments. While LaTeX excelled at producing formatted documents, it created friction whenever I needed to use this information in other contexts.
> Just spent 6h filling out an EB1A intake form. Why cant I upload my CV/resume which already has the information, with links. It is simpler to
> - provide Google Scholar = papers, patents
> - provide linkedin
> - provide links to press and awards with URLs
> Parse, collate, and organise
>
> — Varun Singh (@vr000m) January 18, 2023
This tweet captured my frustration perfectly. The process of maintaining and reusing professional information was broken. Every time I gave a talk or published a paper, I would append it to my BibTeX file. This worked great for LaTeX compilation but meant manually copying and reformatting this information for other uses—visa applications, collaboration requests, or online profiles. The process was time-consuming and error-prone.
What I needed wasn’t just a website, but a system that could:
- Accept updates through familiar tools (text editor, git)
- Store information in a structured, queryable format
- Maintain the single-source-of-truth principle I had with LaTeX
This was where AI collaboration became interesting. The challenge wasn’t primarily about web development—I’d built websites before. The real opportunity was to explore how AI could help build a system that would evolve with my needs while maintaining the simplicity of my current workflow. Working with Claude presented a unique opportunity to rethink not just the technical solution, but the entire development approach. The tool open-sesame facilitated this interaction with Claude 3.5 Sonnet, setting the stage for an experiment in AI-assisted development that would prove more illuminating than I initially expected. 🤖
Technical Decisions: Building for Simplicity 🛠️
The technical architecture for this project emerged from a simple premise: minimize infrastructure complexity while maintaining flexibility for content updates. Rather than getting caught up in complex technology choices, I wanted the architecture discussions with Claude to focus on solving the core problem - managing professional information effectively.
Three key requirements drove our technical decisions. First, I needed a database that could be updated via CLI tools, maintaining my existing git-based workflow. Second, I needed a way to handle blog posts and profile images without managing a complex CDN setup. Finally, the site needed to be easily deployable and maintainable. These requirements led us to a serverless approach using Cloudflare’s edge services.
```mermaid
graph TD
    A[GitHub Repository] -->|GitHub Actions| B[Build Pipeline]
    B -->|Deploy| C[Cloudflare Pages]
    B -->|Migrate| D[Cloudflare D1]
    E[Content Updates] -->|Push| A
    F[Blog Images] -->|Upload| G[GitHub /images/blog/]
    C -->|Serve| H[Website]
    D -->|Data| H
    G -->|Assets| H
    I[Cloudflare KV] -->|Rate Limiting| H
```
This architecture aligned naturally with my workflow: I could continue maintaining content in text files and use simple CLI commands to sync updates to the website. More importantly, it provided a foundation for building tooling that matched my existing practices rather than forcing adaptation to a new content management paradigm.
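To make the picture concrete, here is roughly what a read endpoint looks like in this kind of setup: a Cloudflare Pages Function reading from a D1 binding. The binding name, table, and route below are illustrative assumptions rather than the project's actual code.

```typescript
// functions/api/publications.ts — a sketch; binding and table names are assumptions.
interface Env {
  DB: D1Database; // D1 binding declared in the Cloudflare Pages project settings
}

export const onRequestGet: PagesFunction<Env> = async ({ env, request }) => {
  const type = new URL(request.url).searchParams.get("type"); // e.g. "paper" | "patent" | "talk"

  // Parameterised query against the edge SQLite database (D1).
  const stmt = type
    ? env.DB.prepare("SELECT * FROM publications WHERE type = ? ORDER BY date DESC").bind(type)
    : env.DB.prepare("SELECT * FROM publications ORDER BY date DESC");

  const { results } = await stmt.all();
  return Response.json(results);
};
```

Because D1 is SQLite under the hood, the same schema that the CLI tooling migrates locally is what the deployed function ends up querying.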
The real challenge, however, wasn’t in choosing technologies—it was in effectively collaborating with AI to build this system in a maintainable way. As we began implementing features, it became clear that the technical decisions themselves were less important than how we approached the development process. This journey of collaboration would evolve through three distinct phases, each building upon lessons from the previous one:
1. Establishing the Basics: Learning to communicate effectively with AI
2. Developing Systematic Patterns: Creating repeatable processes
3. Mastering Complex Development: Leveraging AI’s strengths for sophisticated features
This progression from simple interactions to sophisticated collaboration would prove crucial in building a robust and maintainable system. 🔄
Evolution of AI Collaboration: From Code Generator to Development Partner 🤝
The journey of working with AI evolved naturally through distinct phases, each building upon lessons from the previous one. What began as simple code generation requests transformed into a sophisticated development partnership that improved both code quality and development practices.
My initial interactions with Claude followed a common pattern among developers new to AI collaboration - directly requesting code implementations. "I need an API endpoint for managing publications," I would say, and while the resulting code was functional, it often required significant refinement and didn’t leverage the AI’s full capabilities.
The first breakthrough came from a simple shift in approach. Instead of jumping straight to implementation, I began starting each feature with requirements discussions. "Let’s think about what we need for publications," I would begin. "How should we structure the data to match our LaTeX format? How will we handle different publication types? What search capabilities might we need?" This seemingly small change led to more thoughtful solutions and fewer revisions. More importantly, it established a pattern where Claude would ask clarifying questions before suggesting implementations.
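To give a sense of where those discussions landed, the publication model took roughly the following shape. The field names here are illustrative, not the project's exact schema.

```typescript
// Illustrative shape of the publication record (not the project's exact schema).
type PublicationType = "paper" | "patent" | "talk";

interface PublicationUrl {
  label: string; // e.g. "preprint" or "published"
  url: string;
}

interface Publication {
  id: number;
  type: PublicationType;
  title: string;
  venue?: string;                  // conference or journal, mirroring the BibTeX entry
  date: string;                    // ISO 8601, e.g. "2023-01-18"
  status?: "pending" | "granted";  // only meaningful for patents
  patentNumber?: string;           // required once a patent is granted
  urls: PublicationUrl[];          // persisted as a JSON array
}
```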
Our development process evolved into a systematic approach:
```mermaid
graph LR
    A[Problem Definition] --> B[Solution Exploration]
    B --> C[Test Design]
    C --> D[Implementation]
    D --> E[Validation]
    E --> A
```
As the project grew more complex, the need for more structured ways to maintain context and ensure consistency became apparent. Each development session began with a brief status update: "We’re working on search functionality. In our last session, we chose SQLite FTS5 for full-text search and implemented the basic schema. Now we need to handle result ranking and highlighting." This context-setting became crucial for maintaining continuity across sessions.
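For context, "result ranking and highlighting" with FTS5 boils down to a query along these lines; the table and column names are assumptions for illustration.

```typescript
// Hypothetical search helper: FTS5 ranking plus snippet() highlighting via D1.
// Table and column names are assumptions for illustration.
async function searchPublications(db: D1Database, query: string) {
  const { results } = await db
    .prepare(
      `SELECT p.id, p.title, p.type,
              snippet(publications_fts, 1, '<mark>', '</mark>', '…', 12) AS excerpt
         FROM publications_fts
         JOIN publications p ON p.id = publications_fts.rowid
        WHERE publications_fts MATCH ?
        ORDER BY rank   -- FTS5's built-in BM25-based ordering
        LIMIT 20`
    )
    .bind(query)
    .all();
  return results;
}
```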
A particularly valuable pattern emerged around testing. Claude’s approach to test generation was systematic and thorough, often catching edge cases before they became issues in production. For instance, when implementing publication validation, what started as a simple schema check expanded into comprehensive test coverage:
```typescript
describe('Publication Validation', () => {
  // Basic field validation
  test('requires title and type', () => {});
  test('validates publication date format', () => {});

  // Type-specific validation
  describe('Patent Publications', () => {
    test('requires status to be pending or granted', () => {});
    test('requires patent number for granted patents', () => {});
    test('validates patent number format', () => {});
  });

  // URL validation
  describe('Publication URLs', () => {
    test('handles multiple versions (preprint, published)', () => {});
    test('validates URL format for each type', () => {});
    test('maintains URL order', () => {});
  });

  // Edge cases
  test('handles unicode characters in titles', () => {});
  test('validates dates across timezone boundaries', () => {});
  test('handles malformed JSON in URL array', () => {});
});
```
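The stubs above capture the structure; each one then gets a small body. As an illustration, the granted-patent case might be filled in like this. The validatePublication helper and its result shape are assumptions, not the project's actual API.

```typescript
// Hypothetical example of fleshing in one stub.
// Assumes: import { validatePublication } from '../src/validate' (hypothetical module path).
test('requires patent number for granted patents', () => {
  const result = validatePublication({
    type: 'patent',
    title: 'Adaptive bandwidth estimation',
    date: '2022-06-01',
    status: 'granted',
    urls: [],
    // patentNumber deliberately omitted
  });

  expect(result.valid).toBe(false);
  expect(result.errors).toContainEqual(expect.stringContaining('patentNumber'));
});
```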
Claude didn’t just list test cases; it explained the rationale behind each one. "We should test timezone handling," it suggested, "because publication dates might be entered in different timezones during international conferences." This kind of contextual thinking about testing scenarios helped prevent issues that might have only surfaced in production.
Documentation evolved from an afterthought to a real-time activity. Important decisions were captured as they were made, creating a living reference for future discussions. When deciding how to handle publication URLs, for example, we documented not just the decision to store them as a JSON array, but also the rationale - publications often have multiple versions like preprints and final versions - and the implementation details around JSON validation in the data layer.
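A minimal sketch of what that data-layer validation can look like when reading the JSON column back out (the function name and error messages are illustrative):

```typescript
// A sketch of the data-layer guard for the JSON-encoded URL column (names are illustrative).
function parsePublicationUrls(raw: string): { label: string; url: string }[] {
  let parsed: unknown;
  try {
    parsed = JSON.parse(raw);
  } catch {
    throw new Error('publication urls column does not contain valid JSON');
  }
  if (!Array.isArray(parsed)) {
    throw new Error('publication urls must be a JSON array');
  }
  return parsed.map((entry, i) => {
    const { label, url } = (entry ?? {}) as { label?: string; url?: string };
    if (!label || !url) {
      throw new Error(`url entry ${i} is missing a label or url`);
    }
    new URL(url); // throws on malformed URLs
    return { label, url };
  });
}
```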
The real power of AI collaboration emerged when tackling complex features like search implementation. Rather than jumping straight to code, we began with thorough problem definition. "Let’s outline exactly what we need from search," I would say. "We need to search across publications, talks, and blog posts, handle partial matches, support filtering by type and date, and implement relevance ranking." This led to rich discussions about potential approaches, from using a single FTS table with type discrimination to implementing separate FTS tables with a unified API.
Each potential solution was evaluated through focused questions: "How would this handle cross-type relevance ranking? What about updates to primary records? How would it perform at scale?" This structured approach led to catching potential issues early and producing more maintainable code. Claude’s suggestions became increasingly nuanced, often identifying edge cases I hadn’t considered.
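Of those options, the single-table approach is the easiest to sketch: one FTS5 index carrying a content-type column, so cross-type queries and per-type filters share the same ranking. The migration and query below are illustrations, not the project's actual code.

```typescript
// Hypothetical migration and query for the "single FTS table with type discrimination" option.
const createSearchIndex = `
  CREATE VIRTUAL TABLE IF NOT EXISTS search_index USING fts5(
    content_type UNINDEXED,  -- 'publication' | 'talk' | 'post'
    content_id   UNINDEXED,  -- id of the row in the source table
    title,
    body
  );
`;

// One query serves both cross-type search and per-type filtering, with shared ranking.
const searchByType = `
  SELECT content_type, content_id, title,
         snippet(search_index, 3, '<mark>', '</mark>', '…', 12) AS excerpt
    FROM search_index
   WHERE search_index MATCH ?1
     AND content_type = ?2
   ORDER BY rank
   LIMIT 20;
`;
```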
The process wasn’t always smooth. Managing context across sessions proved challenging - a simple request to "update the search implementation" needed to become "update search ranking for publications, which currently uses basic FTS5 ranking, to prioritize recent publications." Scope creep was a constant concern, with Claude sometimes suggesting ambitious additions like automatic tagging and citation parsing. Learning to guide these conversations back to core functionality became an important skill.
The challenge of maintaining simplicity emerged repeatedly. When Claude suggested implementing complex caching mechanisms, I learned to redirect the discussion: "Before we add caching, what’s our actual performance bottleneck? How could we solve this with our existing tools?" These moments taught us to stay focused on immediate needs while maintaining a clear path for future enhancements.
More importantly, the testing-driven approach we had established began influencing our design decisions. Each feature discussion now naturally included consideration of edge cases and error conditions, with Claude proposing test scenarios that often revealed potential issues in our planned implementation. This "test-first" thinking helped us build more robust features from the start, rather than adding error handling as an afterthought.
Through this evolution, our collaboration with Claude progressed from basic code generation to sophisticated system design. Each phase taught valuable lessons about effective AI collaboration, from managing context to guiding complex discussions. But beyond the specific journey of this project, clear patterns emerged that could apply to any AI-assisted development work. These patterns, distilled from both successes and challenges, offer a framework for leveraging AI as a genuine development partner rather than just a coding tool. 🎯
Practical Patterns & Lessons in AI-Assisted Development 📚
Building a website might seem like a straightforward task, but collaborating with AI to do so revealed insights that could apply to any software project. The most profound lesson emerged early: time invested in establishing clear communication patterns with the LLM pays enormous dividends throughout the project lifecycle. Much like onboarding a new team member, those early conversations shape all future interactions. But unlike human teammates, AI assistants need this context-setting in each session. What could have been a limitation instead became a strength, forcing clarity and precision in our technical discussions. Writing itself became our common ground for communication.
The practice of documenting decisions in real-time transformed from a project requirement into a powerful development tool. Each major decision created a reference point for future discussions. When we later needed to extend the publication schema to handle multiple paper versions, having documented our initial reasoning about JSON storage for URLs made the decision pathway clear. This documentation served not just as a record but as a thinking tool, forcing us to articulate and examine our assumptions.
Testing became a crucial aspect of our collaboration pattern. Rather than treating tests as verification tools, they became design sessions in themselves. The systematic way Claude approached test generation helped us think through features more thoroughly. For search functionality, what started as basic query testing evolved into a comprehensive test suite:
- Testing search across different content types (publications, talks, posts)
- Verifying relevance ranking with mixed content
- Edge cases like partial matches and special characters
- Performance testing with large result sets
- Handling malformed queries and invalid filters (see the sketch after this list)
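As an example of that last category (again with hypothetical names), a malformed-query test mostly asserts that the API degrades gracefully instead of leaking an FTS5 syntax error:

```typescript
// Hypothetical edge-case test: unbalanced quotes are valid user input but invalid FTS5 syntax.
// searchContent is an assumed helper wrapping the search endpoint.
test('handles malformed queries without throwing', async () => {
  const response = await searchContent('"unbalanced quote AND (');

  expect(response.status).toBe(200);     // degrade gracefully rather than surfacing a 500
  expect(response.results).toEqual([]);  // no matches, but a well-formed, empty result
});
```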
Each test case Claude proposed revealed potential edge cases or user scenarios we hadn’t considered, transforming testing from a validation exercise into a design tool that shaped implementation before writing production code.
Counterintuitively, embracing AI’s context limitations led to better code organization. The need to explain feature context in each session naturally pushed us toward more modular, well-documented code. When adding blog support, for instance, each session focused on a specific aspect - data modeling, markdown processing, or search integration. This forced modularity made the code more maintainable and easier to test, benefits that extended far beyond AI collaboration.
The most surprising insight came from treating edge cases and error handling not as afterthoughts but as primary design considerations. Claude’s systematic approach to questioning implementation details led to more robust code from the start. When implementing the publication API, what began as a simple CRUD interface evolved to handle nuanced cases like draft states, multiple URLs per publication, and proper error handling for malformed requests. The AI’s tendency to thoroughly consider failure modes resulted in more resilient code than I might have written on my own.
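A condensed sketch of how those failure modes surface in a write endpoint; the route, binding, and validator signature below are assumptions for illustration.

```typescript
// Assumed validator from the tests above (signature is illustrative).
declare function validatePublication(
  input: unknown
):
  | { valid: true; value: { type: string; title: string; date: string; status?: string; urls: unknown[] } }
  | { valid: false; errors: string[] };

// Hypothetical write handler: validate first, then persist, with explicit error responses.
export const onRequestPost: PagesFunction<{ DB: D1Database }> = async ({ env, request }) => {
  let payload: unknown;
  try {
    payload = await request.json();
  } catch {
    return Response.json({ error: 'body must be valid JSON' }, { status: 400 });
  }

  const result = validatePublication(payload);
  if (!result.valid) {
    return Response.json({ errors: result.errors }, { status: 422 });
  }

  const pub = result.value;
  await env.DB
    .prepare('INSERT INTO publications (type, title, date, status, urls) VALUES (?, ?, ?, ?, ?)')
    .bind(pub.type, pub.title, pub.date, pub.status ?? null, JSON.stringify(pub.urls))
    .run();

  return Response.json({ ok: true }, { status: 201 });
};
```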
Another unexpected strength emerged in API design discussions. Claude’s ability to think through different use cases helped create more intuitive and flexible interfaces. For example, when designing the publication update endpoints, our discussion naturally covered:
- Handling partial updates
- Maintaining data consistency
- Managing concurrent edits
- Version history tracking
- Access control implications
The reality of AI collaboration proved different from initial expectations. Success came not from trying to get perfect code immediately, but from establishing a process that consistently produced maintainable, well-tested code that met project requirements. This meant being methodical, maintaining clear communication patterns, and regularly verifying that implementations aligned with project goals. The AI became most valuable not as a code generator but as a thoughtful collaborator that could challenge assumptions and suggest alternative approaches.
Perhaps most importantly, this project demonstrated that effective AI collaboration isn’t about working around AI’s limitations but about leveraging its unique characteristics. The need for explicit context in each session, far from being a drawback, encouraged better documentation and design practices. The AI’s systematic approach to problem-solving helped catch edge cases early. Even the tendency to suggest multiple alternative approaches, which could seem like overhead, often led to more robust and well-considered solutions.
These lessons extend beyond just working with AI. Many of the patterns that emerged - clear documentation, systematic problem-solving, thorough consideration of edge cases - represent solid software development practices in any context. The AI collaboration simply made their value more apparent and their implementation more systematic.
The key to successful AI collaboration lies in treating it as a partner rather than just a tool. This means:
- Starting with clear requirements and context
- Documenting decisions and rationale in real-time
- Using testing as a design tool
- Embracing systematic thinking for edge cases
- Maintaining focus on simplicity and maintainability
The complete source code for the project is available on GitHub. 🌟