Overall Capstone
This page is the story of the quarter: what it actually felt like to build the Interview Simulator, where things surprised me, and what I am walking away with. The technical details live on the Capstone Experience page; this is about the experience itself.
I came into the capstone with an idea I genuinely believed in. Job interview prep is something almost everyone in this program has stressed about, and I wanted to build something that could actually help with that, not just something that looked good on a rubric. The scope felt manageable at first because I had a clear picture of what the finished product should feel like. What I did not have yet was any sense of how messy the path to get there would be. The decision I made early on, to define the entire user flow before writing a single line of code, turned out to be the most useful thing I did all quarter. Locking in setup, then question, then answer, then feedback, then save gave me something concrete to build toward and a way to measure progress that was hard to fake.
Bedrock was where the project stopped being comfortable. I had expected the LLM integration to be the fun part, and eventually it was, but first it was the part that pushed back the hardest. Getting the request format right for Claude versus Llama required careful reading and a lot of trial and error, and the model would sometimes return JSON with small formatting issues that broke the frontend in ways that took forever to trace. Working through that changed something about how I approached the rest of the project. I stopped treating fallbacks as workarounds and started building them in from the beginning, because I had learned that every integration point would eventually fail in some way. That instinct spread outward: I tightened the request and response contracts, added defensive checks in places I would have skipped before, and started assuming that things would break rather than hoping they would not. The first time the backend returned a generated question and the first time a session appeared on the dashboard are still the moments I think about when I think about this project.
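The "build fallbacks in from the beginning" instinct can be made concrete with a small sketch of defensive parsing for model output. Everything here is illustrative: the function name, field names, and fallback shape are assumptions, not the project's actual code, but the layered recovery mirrors the failure modes described above (fences around JSON, prose around JSON, no JSON at all):

```python
import json
import re

# Hypothetical fallback object: a well-formed shape the frontend can always
# render, rather than a crash when the model's JSON is malformed.
FALLBACK_FEEDBACK = {
    "strengths": [],
    "suggestions": ["Feedback could not be parsed; please retry this question."],
    "score": None,
}

def parse_model_feedback(raw: str) -> dict:
    """Parse LLM output into a feedback dict, tolerating common formatting slips."""
    # 1. Try the happy path first.
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        pass
    # 2. Strip markdown code fences the model sometimes wraps around its JSON.
    cleaned = re.sub(r"```(?:json)?", "", raw).strip()
    try:
        return json.loads(cleaned)
    except json.JSONDecodeError:
        pass
    # 3. Pull out the first {...} block in case the model added prose around it.
    match = re.search(r"\{.*\}", cleaned, re.DOTALL)
    if match:
        try:
            return json.loads(match.group(0))
        except json.JSONDecodeError:
            pass
    # 4. Never hand the frontend something it cannot render.
    return FALLBACK_FEEDBACK
```

The design point is that every return path yields the same shape, so the small formatting issues that once took forever to trace become a visible fallback message instead of a silent frontend break.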
The evaluation rubrics did not come from the code. They came from conversations. Talking with instructors and peers who had been through real interviews clarified what useful feedback actually looks and feels like: start with what went well, then give specific and actionable suggestions, not just a number. That framing became the core of the rubric design. Explaining the architecture and the Bedrock integration to other people had a similar effect on me; to make it understandable to someone else I had to simplify my own mental model of it, and that simplification made the system itself cleaner. I also got better, slowly, at recognizing when I had been stuck on something too long and actually asking for help instead of just spinning.
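The ordering that came out of those conversations, strengths first, then specific suggestions, with the number last, can be sketched as a tiny data shape. The names and the 5-point scale are assumptions for illustration, not the project's actual schema:

```python
from dataclasses import dataclass

# Hypothetical feedback shape reflecting the rubric conversations:
# lead with what went well, then actionable suggestions, score last.
@dataclass
class RubricFeedback:
    strengths: list[str]    # what went well, stated first
    suggestions: list[str]  # specific and actionable, not vague
    score: int              # the number comes last, never alone

def render_feedback(fb: RubricFeedback) -> str:
    """Render feedback in the order the rubric design calls for."""
    lines = ["What went well:"]
    lines += [f"  - {s}" for s in fb.strengths]
    lines += ["Suggestions:"]
    lines += [f"  - {s}" for s in fb.suggestions]
    lines.append(f"Score: {fb.score}/5")
    return "\n".join(lines)
```

Encoding the order in the renderer, rather than trusting the model to get it right, keeps the "strengths first" framing intact even when prompts change.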
Looking back honestly, I over-invested in getting individual features exactly right before moving on, which left less time for testing and polish than I wanted. In the next project I would time-box those deep dives, write integration tests before building out the frontend, and write down deployment and configuration steps as I went rather than trying to reconstruct them later. What I am taking away is more than a list of technical skills. I have a much clearer sense of the distance between a feature working in isolation and a system actually working end-to-end, a habit of building for failure from the start, and a sharper instinct for when to ask for help. This project gave me something real to point to, and it changed how I think about building things.
Future Steps
After the course, I want to keep iterating on the Interview Simulator as a live project rather than letting it sit. The most meaningful next step is running actual user testing with people who are actively preparing for interviews, because I made a lot of assumptions about what useful feedback looks like and I want to see whether those assumptions hold up with real users. I also want to improve the dashboard, specifically adding trends over time and clearer summaries of recurring strengths and weaknesses, since right now it shows history but does not do much to help users understand their patterns. Further out, I am curious about A/B testing different prompt structures to see whether certain rubric framings produce more actionable feedback, and about expanding into more interview types. The goal is for this to be something that genuinely improves the longer I work on it, not just a finished capstone project.