The complete story of building a cross-platform app with no dev background: what worked, what broke, and what I learned.
Built by a project manager with zero software development background. Try the app →
Not a developer. Not pretending to be one. But not starting from zero, either.
The big boss roles gave me scope discipline, budget instincts, and the ability to make decisions under pressure. The coordinator role gave me something different: a front-row seat to how dev teams work, and more recently, hands-on collaboration with teams building tools for real users.
What I learned from those experiences wasn't how to write code. It was how people who solve problems think. How they organize workloads, communicate with their team, and engage their stakeholders. That turned out to matter more than the code.
The skills transferred from the office. The work didn't. ForkIt! was built on my own time, on my own equipment. 80% of commits land on evenings and weekends, with the rest on dedicated break times or automated work from background processes.
Early in a previous role, someone coached me on managing a meeting that was running long. When the exact scenario came up, I did exactly what I'd been told. The room went cold. Afterward, I was informed that I needed to "remember who pays the bills around here."
The lesson: the stated process isn't always the real process. Following instructions perfectly can still be wrong if you misread which instructions matter.
Someone says: "We need hard limits on meeting time."
What they mean: we aren't getting what we need out of this meeting, and trying to is just running us out of time.
Someone says: "Build me a dashboard."
What they need: a way to connect people with data. If the dashboard becomes "The Solution" instead of a tool, you've lost the thread.
With that in mind: this app doesn't solve the problem of "where should we eat?". It's a Solution to the decision fatigue that gets in the way of just living your life.
High-density city. Strangers in a hostel testing my app to pick dinner. Everyone loved it until one guy said: "I want ANYTHING BUT Indian food. That's what we eat all day." (He was Indian.) I'd only built for inclusion: pick a cuisine you want. He needed exclusion: remove what you don't. The real feature lives at the intersection of both. "Tacos, but not Taco Bell." That's keyword plus exclude. That's the picky eater problem, solved.
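That intersection, include plus exclude, can be sketched in a few lines. This is a hypothetical illustration; the `Restaurant` fields and `matches` function are invented, not the app's real data model.

```typescript
// Hypothetical sketch of "keyword plus exclude"; the Restaurant fields here
// are invented for illustration, not the app's real data model.
interface Restaurant {
  name: string;
  cuisine: string;
}

function matches(r: Restaurant, include: string[], exclude: string[]): boolean {
  const haystack = `${r.name} ${r.cuisine}`.toLowerCase();
  // Exclusion wins: any excluded keyword removes the restaurant outright.
  if (exclude.some((kw) => haystack.includes(kw.toLowerCase()))) return false;
  // With no include keywords, everything that survived exclusion is fair game.
  if (include.length === 0) return true;
  return include.some((kw) => haystack.includes(kw.toLowerCase()));
}

// "Tacos, but not Taco Bell."
const spots: Restaurant[] = [
  { name: "Taco Bell", cuisine: "Mexican" },
  { name: "La Taqueria", cuisine: "Tacos" },
  { name: "Curry House", cuisine: "Indian" },
];
const picks = spots.filter((r) => matches(r, ["taco"], ["taco bell"]));
```

The ordering matters: exclusion is checked first, so a veto always beats a match.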
An ex boycotted a Starbucks because someone he'd been on a bad date with worked there. Then he ran into her at a different location, because she'd moved. Blocking a single location wasn't enough. So the app offers both: block this location, or block every restaurant with this name. The second option blocks every Starbucks that exists, even unrelated ones. That's the risk he's willing to take to never run into her again.
A beta tester tried to show the app to a friend and didn't know where to start. The info modal was a wall of text. The problem wasn't the app; it was that onboarding assumed people would explore. They won't.
A Reddit thread bragging about an "underground sandwich shop, cash-only." Google doesn't know about every good place. The API can't find the taco truck that's only there on Thursdays. Users can. Custom spots are private to the user who adds them. Mom's house stays private. The secretly good food truck stays secret. Your spots are yours.
The pain
A v1 user in a walkable city opened the app. The prompt showed car mode active. Dozens of restaurants were within walking distance, but the UI defaulted to driving radii. "0.5mi? Maybe even 0.25. For us city folk."
The turning point
I'd built the app in a car-centric area. The default reflected my environment, not theirs. The fix was two things. First, a walk mode toggle that switches the UI to walkable distances, a walking icon, and walking directions. Second, a walk prompt: if you're in car mode but the app detects many restaurants within walking distance, it suggests switching. The app doesn't just let you walk. It notices when you should.
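The walk-prompt heuristic could look something like this sketch. The radius and count thresholds here are invented numbers for illustration, not the app's real tuning.

```typescript
// Illustrative sketch of the walk-prompt heuristic: if you're in car mode
// but plenty of results are within walking distance, suggest switching.
// Both thresholds below are invented, not the app's actual values.
const WALK_RADIUS_METERS = 800; // roughly 0.5 mi
const MIN_WALKABLE_SPOTS = 5;

function shouldSuggestWalkMode(
  mode: "car" | "walk",
  distancesMeters: number[]
): boolean {
  if (mode !== "car") return false; // already walking, nothing to suggest
  const walkable = distancesMeters.filter((d) => d <= WALK_RADIUS_METERS);
  return walkable.length >= MIN_WALKABLE_SPOTS;
}
```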
The impact
His gentle "For us city folk" was a quiet reminder that other people exist, and that my defaults had excluded him. The app was built to solve decision fatigue. But for a city walker, walk mode solved something deeper: walk somewhere you've never been, find a place you didn't know existed, run into someone along the way. Using this app lessened his sense of isolation. That was an impact I didn't design for, but it was the one that mattered.
The observation
A couple at the hostel used the solo app and laughed at themselves. They couldn't agree on the filters, so they couldn't agree on the result. One person's "Italian within 2 miles" was the other person's "anything but Italian within walking distance." They ended up at their usual spot. They thought it was funny. I thought it was a product gap: sharing one phone means sharing one set of filters.
The turning point
Group Fork. Everyone submits their own filters. The app merges them and picks at random. Nobody chose wrong, because nobody chose. The decision is shared. There's nothing to negotiate.
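One plausible merge policy, sketched with invented field names (the app's actual merge rules aren't documented here): the tightest radius wins so every participant can reach the pick, and exclusions accumulate so any one veto holds for the group.

```typescript
// A hypothetical Group Fork merge: field names and policy are assumptions.
interface Filters {
  maxDistanceMiles: number;
  exclude: string[];
}

function mergeFilters(all: Filters[]): Filters {
  return {
    // Tightest radius wins so everyone can reach the result.
    maxDistanceMiles: Math.min(...all.map((f) => f.maxDistanceMiles)),
    // Exclusions union: one person's veto removes it for the whole group.
    exclude: Array.from(new Set(all.flatMap((f) => f.exclude))),
  };
}

function pickRandom<T>(candidates: T[]): T {
  // Nobody chose wrong, because nobody chose.
  return candidates[Math.floor(Math.random() * candidates.length)];
}
```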
The impact
The app was built to solve decision fatigue. Group Fork solved the next layer: the friction of coordinating with other people. Send a code. Set your filters. Show up. The coordination that used to feel like work now takes 30 seconds.
While showing the app around, two restaurant owners asked about buying ad space. A hostel stranger asked if I was planning to sell to a big company. Everyone saw dollar signs on something I wanted to keep free.
A friend put it plainly: "Yeah everyone wants to monetize everything." He described the inevitable path: sell ads, then start hiding restaurants that don't pay.
Me: "Yelp 2.0."
That exchange crystallized the guiding principle: no ads, no tracking, no sponsored listings. Ever. The Pro tier exists to offset API costs, not to build a revenue model on the back of the restaurants it's supposed to help people find.
The mom groups were genuinely enthusiastic. High engagement. They gave detailed feature ideas shaped by the apps their families already use. The value wasn't in the specific requests; it was in understanding how a different audience thinks about food apps. Their input was signal about culture, not about features.
The hostel strangers were the highest value. Hungry, tired, zero patience. They didn't give feature requests; they gave failure points by walking away. One person searched the store and picked a different app with a nearly identical name and a lazier-looking icon. People don't have time to make decisions. That's the whole point of the app, and the store listing failed the same test.
On Reddit, some became actual users and some became alpha testers. All of them were genuinely nice and helpful. 110 upvotes, 54 comments, 20K views. Bug reports, feature ideas, DMs with structured feedback, and a fellow app developer who commiserated about the Google Play Console.
I weighted the mom group feedback as feature direction instead of cultural context. The See 'n Say imagery felt right because they were excited about it, and excitement is persuasive. Real users under real stress reframed the priorities. The mistake was mine in how I weighted the input, not theirs in how they gave it.
Two questions for every feature: Does it reduce friction? and Does it reduce decision fatigue? Both have to be yes, and it can't introduce new stress.
Copycat Recipes with Shopping Cart: Reduce friction? Yes, you don't have to go to the restaurant. But it adds a shopping list, ingredient management, a whole new task flow. Solving one friction point by creating five. Immediate discard.
Food Truck Mode: Reduce friction? Yes. Reduce decision fatigue? Yes. Fits current capabilities? No. Google doesn't have food truck data, and the cold-start problem killed the last app that tried it (TruckAround, 2019). Deferred to v4+. Good PM instinct: "not yet" is different from "no."
The "fix the platform to fix the fix" loop. Android sign-in was failing for users with non-Chrome default browsers. The fix required understanding Clerk's SSO flow, Chrome Custom Tabs, intent filters, and Android browser selection. Three sessions deep, the original bug was still there. The PM instinct: stop. What are we actually trying to solve? (Answer: show a toast message telling the user to try Chrome. Ship the workaround, not the platform fix.)
"Build me a button that picks a random restaurant"
Copy, paste, run. It either worked or it didn't. If it didn't, paste the error back in. Repeat.
"Here's my architecture doc. Here's the constraint. Here's what I tried. Here's the error."
The evolution wasn't about better prompts; it was about learning enough to ask better questions.
I was running a full product development workflow, just with an LLM instead of a team. Stakeholder engagement, communications, product management, development, UI/UX: most of the roles a product team fills, we covered between the two of us. Not by being experts, but by knowing the workflow existed and managing the handoffs.
The roles we didn't fill are where things broke. No dedicated QA (edge cases on devices we didn't own). No legal or compliance (Apple rejected the app). No DevOps (Sentry was added after crashes, not before). No marketing or growth strategy. No accessibility audit. No data analytics. We touched all of these eventually, but touching a role and filling it are different things.
The 31-review suite now covers most of these gaps on paper. There are reviews for accessibility, store compliance, operational readiness, testing coverage, data privacy, and investor readiness. But a review that checks for accessibility is not the same as a user who depends on a screen reader. A compliance checklist is not a lawyer. The reviews catch what I think to look for. They don't catch what I've never seen.
(Interactive walkthrough: how a check-in feature request became Group Fork, step by step.)
The app talks to a backend. The backend talks to Google Places. That's it. One button, one API call, one result.
"I pay Google to access the restaurant data behind those ads and give it back to you, without the ads."
Fork Around sessions need real-time state: who's in the session, what filters they've set, whether the host has picked yet. That's Redis (Vercel KV).
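A minimal in-memory sketch of that session state. In production this shape lives in Redis (Vercel KV), keyed by session code; the field names here are illustrative assumptions, not the app's real schema.

```typescript
// Hypothetical shape of a Fork Around session; in production this lives in
// Redis (Vercel KV), keyed by the session code. Names are assumptions.
interface Session {
  code: string;
  members: Map<string, string[]>; // userId -> that user's filter keywords
  hostPick: string | null;        // null until the host triggers the pick
}

function createSession(code: string): Session {
  return { code, members: new Map(), hostPick: null };
}

function joinSession(s: Session, userId: string, filters: string[]): void {
  s.members.set(userId, filters); // re-joining just overwrites your filters
}

function hostHasPicked(s: Session): boolean {
  return s.hostPick !== null;
}
```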
The web joiner lets people without the app join group sessions from a browser. Another service to connect.
User accounts need authentication (Clerk). History, favorites, and settings need a database (Neon PostgreSQL). Subscriptions need payment infrastructure (RevenueCat).
Suddenly: 13 services talking to each other. Things that worked in isolation started breaking at the seams.
The backend API had no rate limiting or origin checking for weeks. No user data was at risk (the app doesn't store or transmit personal data through this endpoint), but anyone who found the URL could have run up the Google Places API bill on my account.
An LLM built it. An LLM didn't flag the gap. I didn't know to check. Fixed with rate limiting, CORS, origin checking, and auth headers. Nobody exploited it, but the exposure was real.
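A framework-agnostic sketch of two of those checks: origin allow-listing and a fixed-window rate limit. The origin, window, and limit values are illustrative, not the app's real configuration.

```typescript
// Illustrative origin check plus fixed-window rate limiter.
// The allowed origin and the numeric limits below are invented examples.
const ALLOWED_ORIGINS = new Set(["https://forkit.example"]); // hypothetical
const WINDOW_MS = 60_000;
const MAX_REQUESTS_PER_WINDOW = 30;

const hits = new Map<string, { windowStart: number; count: number }>();

function isOriginAllowed(origin: string | undefined): boolean {
  return origin !== undefined && ALLOWED_ORIGINS.has(origin);
}

function isRateLimited(clientId: string, now: number = Date.now()): boolean {
  const entry = hits.get(clientId);
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    hits.set(clientId, { windowStart: now, count: 1 }); // start a new window
    return false;
  }
  entry.count += 1;
  return entry.count > MAX_REQUESTS_PER_WINDOW;
}
```

Either check failing means the request is rejected before it ever reaches the paid Google Places call.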
Sentry was added after users reported crashes, not before. One of the things I'd do differently: set up observability on day 1, not after the problems are already in production.
Apple rejected the app for missing subscription compliance. Auto-renewal disclosure, Restore Purchases button, Terms of Service links. None of this was in any tutorial I'd followed.
Work trip. Restaurant. Showing someone what I'd built. Forgot I was on a dev build. Claude had pushed a backend update I didn't realize was deployed.
App wouldn't work. Couldn't pick a restaurant. At a restaurant. Lost 3 potential users that night.
Asked Claude to clean up code. It renamed working API endpoints. Every deployed app was still calling the old names. Live users got server errors. The fix was backward-compatible rewrites mapping the old names to the new ones, and they're still there today.
I asked Claude directly: "Check my account. What's my build credit limit?" Answer: "Don't worry about it, you're on the unlimited trial plan!" I asked again weeks later. Claude read its own previous answer from memory and confirmed: "Unlimited!"
The LLM was citing itself as a source. Consistent every time I asked, and consistently wrong. I found out the truth from an email: 80% of monthly credits used, two weeks left in the billing cycle.
After that, I learned to build locally. The $19/month build service was a rookie tax. Now I'm back on the free tier.
- Add debug info to response
- Migrate to Places API
- Tier 2 visual polish: 8 issues (#88, #36, #85, #92, #93, #15, #14, #31)

30 commits and 68 days between the first commit and the first GitHub issue.
- 5 automated checks, run on every commit and in CI
- 26 manual reviews, run before deployment
- Priority order: functional bugs (Tier 1) → visual bugs (Tier 2) → new features
- The cycle: automated checks → manual reviews → file new issues (don't fix inline)
- New issues get tiered → functional before cosmetic → review again → repeat
Guiding principle: keep it as free as possible
| Service | Cost |
| --- | --- |
| Google Places API | Per call |
| Clerk, Neon, Vercel, RevenueCat, Sentry | Free tier |
| Apple Developer Program | $99/year |
| Google Play Developer | $25 one-time |
| Infrastructure | Pennies per user |
The code was never the hard part. These are the toolkits that don't exist for solo builders, the things I had to figure out by getting burned.
How to evaluate what cultural or ambient biases are shaping user feedback. When to approach someone for live testing vs. when to back off. I made someone mad by picking the wrong moment. How many questions are too many before you're a nuisance. Who to ask for what and when.
Most importantly: how to ask questions that surface the experience the app provides, not aesthetic opinions. "The button doesn't even work, yo. THAT'S what we're testing today", not "I liked the orange highlights a little thicker."
I built a color system (orange = problem, teal = solution), a typography system (Montserrat), and a design governance doc through months of trial and error. Every new feature gets measured against these before a line of code is written. Without them, scope creep would have won.
No one hands you a design system when you're a solo dev. You build it, or every screen feels like a different app.
How are subscription flows typically structured? What's the standard pattern for group sessions? How do apps handle forced upgrades? What's the right testing assertion strategy? How should API versioning work in serverless?
These are questions that would take 5 minutes from a mentor. I asked an LLM instead, not because it was better, but because it didn't make me feel like a nuisance.
I organically built a tiered triage system, a dev cycle (Build → Stabilize → Ship), deploy sequencing, a 31-review code quality suite, and testing gates. All learned through mistakes, not from a playbook.
The hardest skill: telling a tool that can build anything you ask for "not yet."
A way to distinguish what kind of value each source provides. Mom groups gave cultural context about how families think about food apps. Hostel strangers gave behavioral data by walking away. Reddit gave structured feedback, bug reports, and actual testers.
Three sources, three completely different types of value. The skill is knowing what each one tells you, not ranking them.
This was not a coding project. It was a project management project that happened to produce code.
The LLM wrote the code. I managed the product: who it serves, what to build, what to cut, how to evaluate feedback, when to ship, and when to stop.
The failures were never about bad code. They were about missing process: no security review, no demo prep, no vendor verification, no stakeholder register. The skills that fixed them were not technical. They were organizational.
Every one of you has expertise that transfers to something you haven't tried yet. The tool just has to meet you where you are.
You don't need to become an expert. You need to build systems that catch what you can't, and bring what you already know.