
The Store Is Its Own Product (Platform Engineering)

Apple and Google are not deployment targets. They are stakeholders with their own rules, billing systems, telemetry, and gates. Building for them is a discipline distinct from building the app, and most of these lessons don't show up in code review because they live in the platform contract.

2
Stores
4
IAP products
$0
April actuals
1
Review in 3 weeks

Part of the ForkIt! Case Study.


Two Stores, Two Philosophies

I planned to ship Android only. Easier to get into, $25 once, no gatekeeping. Then I shipped to both, and the differences taught me more than either platform alone.

Apple

$99/year. Reviews in roughly two days. Targeted, specific feedback when they reject you. The locked-down ecosystem means everything just works: signing, provisioning, StoreKit, TestFlight. Three rejections, zero broken builds reaching users.

The friction is the product. Apple caught a fatal crash that would have reached users. The $99 buys gatekeeping that protects your reputation.

The tradeoff: customization isn't as robust. The ecosystem is controlled for a reason, and you work within it.

Android

$25 once. Reviews can take a week. Feedback is negligible: generic, sometimes irrelevant. The platform isn't always intuitive, and cross-platform support is weak and changing. But Android will let you fork around and find out.

Every submission accepted. Iterated through six version codes while iOS was still in review. A crash hit production and real users saw a broken app. Speed without gates means quality is entirely on you.

The tradeoff: customization shines. Android gives you room to experiment in ways Apple won't allow.

Same bug shipped to both platforms. iOS rejected the build, protecting users. Android accepted it, and a real person saw the crash. The $99 isn't a tax. It's an insurance policy that you pay every year, and most of the time it pays out.
Subscriptions

Subscriptions Are Not IAP With a Recurring Flag

Recurring billing has its own state machine on each platform. Same products, same prices, completely different mechanics for upgrade, downgrade, and replacement. Getting any of it wrong means a user is paying twice.

Subscription Group Level Inversion (iOS)

What happened

App Store Connect groups subscriptions by tier and orders them by level number. Lower number means higher service tier. This is the opposite of what you'd expect from "level 1 is the entry point." Pro was set as level 1, Pro+ as level 2. Apple read that as Pro being the top tier and Pro+ being a downgrade.

The result: an existing Pro subscriber tapping the Pro+ button hit the "downgrade" path, which doesn't replace the active subscription. They were billed for both periods, with the second one queued to take effect when the first lapsed.

What it cost

One user in Scottsboro paid for both tiers in the same month. Filed as issue #20. The fix was a one-line change in App Store Connect; the cost was the trust hit.

What fixed it

Inverted the level ordering: Pro+ as level 1, Pro as level 2. Apple now correctly treats Pro+ as the upgrade path. The mental model: think of "level" as priority order in the queue, not feature level.
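That inverted mental model can be captured in a tiny sketch. The helper and product IDs below are hypothetical (the real configuration lives in App Store Connect, not in code); the point is only the direction of the comparison:

```typescript
// Sketch of App Store Connect subscription-group "level" semantics:
// a LOWER level number means a HIGHER service tier.
type Tier = { productId: string; level: number };

// Apple treats a switch to a lower level number as an upgrade
// (immediate replacement) and a switch to a higher number as a
// downgrade (queued until the current period lapses).
function isUpgrade(from: Tier, to: Tier): boolean {
  return to.level < from.level;
}

// Buggy configuration: Pro at level 1, Pro+ at level 2.
// isUpgrade(pro, proPlus) is false, so Pro -> Pro+ hits the
// "downgrade" path and both subscriptions bill.
const buggyPro: Tier = { productId: "forkit.pro", level: 1 };
const buggyProPlus: Tier = { productId: "forkit.proplus", level: 2 };

// Fixed configuration: Pro+ at level 1, Pro at level 2.
// isUpgrade(pro, proPlus) is now true: immediate replacement.
const fixedProPlus: Tier = { productId: "forkit.proplus", level: 1 };
const fixedPro: Tier = { productId: "forkit.pro", level: 2 };
```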

Replacement Params Are Not Optional (Android)

What happened

Switching a user from one Google Play subscription product to another requires explicit subscriptionProductReplacementParams (or, on older billing libraries, purchaseToken plus replacementMode). Without them, Google does not error. It does not warn. It treats the call as a brand-new first-time purchase and creates a parallel subscription alongside the existing one.

Same Pro to Pro+ flow. Same user. Two active subscriptions, both billing.

What it cost

The same Scottsboro charge, on the Android side. Discovered through analytics review, not through any error path the app could surface.

What fixed it

The purchase flow now passes the active purchaseToken and the appropriate replacement mode whenever a user already holds a subscription in the same group. The lesson: "first-time purchase" and "switch products" are completely different flows on Google Play, and the API will silently let you call the wrong one.
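The shape of that fix can be sketched as a pure decision function. Everything here is illustrative: the helper name, the request shape, and the mode strings (which mirror Play Billing's ReplacementMode names) are assumptions, and the actual store call goes through whatever billing library the app uses:

```typescript
// Hypothetical sketch of the fixed purchase flow's decision logic.
// "First-time purchase" and "switch products" must build different
// requests; Google Play will not tell you if you pick the wrong one.
type ActiveSub = { productId: string; purchaseToken: string } | null;

type PurchaseRequest = {
  productId: string;
  // Present only when replacing an existing subscription in the group.
  oldPurchaseToken?: string;
  replacementMode?: "CHARGE_FULL_PRICE" | "WITH_TIME_PRORATION";
};

function buildPurchaseRequest(
  productId: string,
  active: ActiveSub,
): PurchaseRequest {
  if (active && active.productId !== productId) {
    // Switch flow: without the token and mode, Google Play silently
    // creates a second, parallel subscription instead of replacing
    // the first one.
    return {
      productId,
      oldPurchaseToken: active.purchaseToken,
      replacementMode: "CHARGE_FULL_PRICE",
    };
  }
  // First-time purchase flow.
  return { productId };
}
```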

The Paywall Has To Know What the User Already Has

What happened

The paywall component rendered both "Buy Pro" and "Buy Pro+" buttons regardless of the user's current state. A Pro+ subscriber was happily shown a Pro button. The component had no concept of currentTier.

What it cost

This is what surfaced the two billing bugs above. A Pro subscriber who tapped Pro+ in the paywall hit both store-side bugs at once.

What fixed it

The paywall component now accepts a currentTier prop and adapts CTAs accordingly. If the user is on Pro, only Pro+ is offered, and the copy reflects that it's an upgrade. If they're on Pro+, only Manage Subscription is offered. The mental model: a paywall is not a shop. It's a state-aware decision surface.
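A minimal sketch of that decision surface, assuming the two-tier ladder described above (the tier names and CTA strings are illustrative, not the component's real copy):

```typescript
// State-aware paywall: the CTAs are a function of what the user
// already holds, never a fixed shop shelf.
type TierName = "free" | "pro" | "proplus";

const CTAS: Record<TierName, string[]> = {
  free: ["Buy Pro", "Buy Pro+"],       // nothing owned: both tiers offered
  pro: ["Upgrade to Pro+"],            // only the upgrade, never Pro again
  proplus: ["Manage Subscription"],    // nothing left to sell
};

function paywallCtas(currentTier: TierName): string[] {
  return CTAS[currentTier];
}
```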

Three bugs, one root cause: I treated subscriptions as a special case of IAP instead of a separate product. The store APIs reflect that distinction even when the SDKs make it easy to ignore. The first time you ignore it, a user pays twice.
Sales Reporting

Sales Reports Lag. Dashboards Lie About Real Time.

There is no single source of truth for "how is the app doing this month." There are at least four, on different schedules, with different definitions, served by different systems. If you don't pick deliberately, you'll trust whichever one loads fastest, and it will probably be the most misleading one.

Source | Latency | What it actually answers
App Store Connect dashboard | Near real time | Estimated proceeds, not finalized. Diverges from the report.
App Store Connect monthly sales report | ~5 days after month end | Authoritative finalized totals.
App Store Connect subscription events | Daily | Renewals, refunds, cancels, upgrades. Not in the sales report.
Google Play Console | Near real time | Estimates only.
Google Cloud Storage gs://pubsite_prod_*/sales/ | Daily | Per-transaction sales reports. Authoritative for current month.
Google Cloud Storage gs://pubsite_prod_*/stats/ | Daily | Active devices, install counts, ratings.

Service Account Permissions Are Not Cloud Storage Permissions

What happened

Granted "View financial data" plus admin to a Play Console service account, expecting that to immediately unlock gs://pubsite_prod_<DEVELOPER_ID>/. It did not. The bucket lives outside the Play Console permission graph; propagation can take 24+ hours and is not always observable from the Console UI.

What fixed it

For the analytics CLI, switched to Application Default Credentials with gcloud auth application-default login. ADC works immediately and uses the developer's own Google identity rather than the SA. Documented as the recommended path for solo developers in the project guide.

The Analytics CLI

What the case study now relies on: a small local CLI (scripts/analytics) that pulls Apple sales reports plus subscription events plus Google Play Cloud Storage stats and emits one monthly summary. Run with ./monthly-report.sh [current | YYYY-MM].
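The heart of that CLI is an aggregation step: collapse per-transaction rows from both stores into one monthly summary. A sketch of that step, with a hypothetical row shape (the real Apple and Google report schemas differ and are messier):

```typescript
// Hypothetical normalized row, produced after fetching and parsing
// the per-store reports; column names are illustrative.
type Txn = { store: "apple" | "google"; productId: string; proceeds: number };

// Collapse rows into proceeds per store/product pair.
function monthlySummary(rows: Txn[]): Record<string, number> {
  const out: Record<string, number> = {};
  for (const row of rows) {
    const key = `${row.store}:${row.productId}`;
    out[key] = (out[key] ?? 0) + row.proceeds;
  }
  return out;
}
```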

The first run made it visible that V4's tier-split actually worked.

Period | Nearby Search Enterprise | Nearby Search Pro | Net cost
Q1 2026 (pre-V4) | 7,374 | — | $176.75 (credited via trial)
April 2026 (post-V4) | 132 | 623 | $0.00

95% reduction in billable Enterprise calls; free users routed cleanly to Pro. April fell within Google's free monthly cap. The V4 prediction held against real data, not projection.

The OTA Contract

EAS Update lets a JS-only fix reach users without a store build. The convenience comes with a contract: bundles target a specific runtime, the runtime is derived from the binary version, and the binary version is the one thing you cannot change without orphaning every existing OTA. Most of the OTA pitfalls are downstream of that one rule.

The Runtime Version Trap

What happened

The project uses runtimeVersion.policy: "appVersion", which means the binary version string in app.json IS the OTA runtime version. Bumping app.json version to mark a new release feels like a clean version bump. It actually orphans every OTA bundle that was published against the old runtime: the new binary version doesn't match any existing bundle's runtime, so the OTA never reaches a user.

Discovered when an OTA published successfully and never showed up on devices.

What fixed it

Two-version convention, documented in the project guide:

  • app.json version (binary version, also the OTA runtime): bump only immediately before cutting a new EAS build for the stores. Three-segment MAJOR.MINOR.PATCH.
  • constants/config.js APP_VERSION (the value sent in the X-App-Version header and shown in the in-app version line): bump on every OTA push. Two-segment MAJOR.MINOR.

The lesson: when "what version is this?" has two correct answers, write down which one bumps when, and never let the OTA-side bump touch the binary-side string.
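That convention is mechanical enough to enforce. A sketch of a pre-publish guard (hypothetical helper; the segment counts come straight from the convention above):

```typescript
// Binary version (app.json "version", also the OTA runtime):
// three-segment MAJOR.MINOR.PATCH.
const BINARY_VERSION = /^\d+\.\d+\.\d+$/;
// OTA-side version (constants/config.js APP_VERSION):
// two-segment MAJOR.MINOR.
const OTA_VERSION = /^\d+\.\d+$/;

// Returns a list of violations; empty means the convention holds.
function checkVersions(binary: string, ota: string): string[] {
  const errors: string[] = [];
  if (!BINARY_VERSION.test(binary)) {
    errors.push(`binary version "${binary}" must be MAJOR.MINOR.PATCH`);
  }
  if (!OTA_VERSION.test(ota)) {
    errors.push(`OTA version "${ota}" must be MAJOR.MINOR`);
  }
  return errors;
}
```

Run before `eas update`, a non-empty result is a signal that the wrong version string was bumped.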

The Dependabot + RN/SDK Trap

What happened

A "minor-and-patch group" Dependabot PR included react-native: 0.83.2 → 0.85.1 alongside 28 other safe-looking bumps. Verification on expo run:ios --device passed; that path is lenient and skips the strict codegen step. The mismatch only surfaced when invoking expo export:embed (the eager bundle path), which broke both eas update and eas build --local.

SDK 55's babel-preset-expo@55.0.17 ships @react-native/codegen@0.83.4, which can't parse the new onModeChange event on RN 0.85.1's VirtualViewNativeComponent. RN minor bumps inside an Expo SDK major are not safe, and that constraint is not visible to Dependabot.

What it cost

OTA pipeline blocked. Reverted to the prior RN minor and lost an evening.

What fixed it

Pinned RN to the SDK-supported minor and added npx expo install --check as a custom CI step. The lesson: different toolchain paths exercise different code paths. A passing verification on one path can hide a different path's break. Dependency hygiene that doesn't account for which path runs in CI is theater.
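A minimal sketch of that CI step, assuming GitHub Actions (the step name and workflow context are illustrative; `npx expo install --check` exits nonzero when installed packages drift from the SDK's supported ranges):

```yaml
# Hypothetical workflow fragment; only the run command is load-bearing.
- name: Check Expo SDK dependency alignment
  run: npx expo install --check
```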

The What's New Ironic Gap

What happened

Built a What's New modal that shows release notes when users open the app on a new OTA version. Shipped it as part of v4.2. Realized after publishing: the feature has no lastSeenVersion to compare against on the OTA that introduces it, so every existing user sees nothing. The modal that announces what's new can't announce itself.

What it became

The modal becomes useful from the next OTA forward, when there's a prior version to compare against. The lesson is small but recognizable: any "show me what's new since last time" feature has a bootstrap problem on the release that introduces it. Document it in the same OTA so the silence isn't a bug, it's the design.
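The bootstrap-aware gate can be sketched as a pure function (hypothetical helper name; the real storage read/write is separate):

```typescript
// "Show me what's new since last time" has no "last time" on the
// release that introduces it. First call seeds the marker and stays
// silent; that silence is the documented design, not a bug.
function shouldShowWhatsNew(
  lastSeenVersion: string | null,
  currentVersion: string,
): { show: boolean; nextLastSeen: string } {
  if (lastSeenVersion === null) {
    // Bootstrap release: nothing to compare against yet.
    return { show: false, nextLastSeen: currentVersion };
  }
  return {
    show: lastSeenVersion !== currentVersion,
    nextLastSeen: currentVersion,
  };
}
```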

Silent Telemetry

Telemetry the Platform Won't Give You

Some platform-side actions are deliberately opaque. The store APIs accept the call, return success, and tell you nothing about whether the user actually saw, accepted, or even noticed what happened. The review prompt is the cleanest example, and it is currently an open question.

The Review Prompt Funnel Mystery (Issue #25)

What we have

Roughly three weeks of v4 in production across iOS and Google Play. Active devices and paid transactions suggest engaged users exist. Total reviews: one on the App Store (a 3-star "Promising, but flawed" addressing chains, since fixed in v4.3) and zero on Google Play. The review-prompt funnel may be silently failing.

What we know

The implementation is straightforward: maybePromptReview in AppFiles/utils/helpers.js fires StoreReview.requestReview() from expo-store-review after five solo forks, gated once-per-install by STORAGE_KEYS.REVIEW_PROMPTED.

What the platform won't tell us
  • Apple silently swallows requestReview() if the user has seen three prompts in the trailing 365 days, or if they've disabled review requests in iOS Settings → App Store. There is no return value to distinguish "shown" from "swallowed."
  • Google Play In-App Review API has the same opacity. The call always succeeds; the prompt may or may not have been shown.
  • Once-per-install gating is permanent for the user. A user who dismisses at five forks never sees the prompt again, even at fifty.
Open angles
  • Funnel measurement. How many installs cross five forks? Of those, how many leave a review? If the gap is large, the prompt is hitting the silent-swallow path.
  • Trigger moment. Post-fork is the "go eat" mindset, not the "rate me" mindset. Better candidates: after saving a favorite, after completing a Fork Around session, on third-day-of-use.
  • Custom soft prompt before the system prompt. Common iOS pattern: ask "Enjoying ForkIt?" in-app, only fire requestReview() for users who tap Yes. Reduces wasted system-prompt budget on users who'd give one star, and gives a route ("No → tell us why") for friction reports.
  • Re-engagement. Apple's three-per-year quota would actually allow re-prompting at a higher threshold for users who never accepted at five.
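The soft-prompt angle from the last two bullets can be sketched as a gate (hypothetical helper; the real gate lives in maybePromptReview, and the thresholds mirror the ones above). The idea: only spend the opaque system prompt's yearly budget on users who said yes in-app:

```typescript
type ReviewGateInput = {
  soloForks: number;
  alreadyPrompted: boolean;      // e.g. STORAGE_KEYS.REVIEW_PROMPTED
  saidEnjoying: boolean | null;  // null = soft prompt not answered yet
};

function reviewAction(
  s: ReviewGateInput,
): "none" | "softPrompt" | "systemPrompt" | "feedbackForm" {
  if (s.alreadyPrompted || s.soloForks < 5) return "none";
  // Ask the cheap, observable in-app question first.
  if (s.saidEnjoying === null) return "softPrompt";
  // Only likely promoters reach the black-box system prompt;
  // everyone else gets a route to tell us why.
  return s.saidEnjoying ? "systemPrompt" : "feedbackForm";
}
```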
Status

This is an open question, not a solved one. The platform contract here is: "trust us, we're showing the prompt at the right moment." When the data suggests it isn't, you have nothing to inspect and no instrumentation to add. Filed as issue #25.

Review counts are the most public health signal an app has, and they are gated by an API the platform deliberately makes opaque. A solo developer can't add telemetry around a black box. The fix is not "instrument the prompt." It's "design a funnel that doesn't depend on the prompt working."
The Pattern

What I Didn't Know to Ask

All of the above share a structure: a category of bug that doesn't surface in code review because it lives in the platform contract. Each one only became visible after it had already cost something.

Code review can't see it

The Pro to Pro+ duplicate charge looked correct in code. The bug lived in the App Store Connect group level ordering and the Android replacement params, neither of which appears in the diff.

Documentation buries it

Subscription group level semantics are a footnote in the App Store Connect docs. The cost of misreading the footnote is a real user paying twice.

Tests can't reproduce it

You can't unit-test "did Apple show the review prompt?" or "did Google treat this as a replacement?" The store is the test surface, and the store doesn't tell you what it did.

LLMs can't anticipate it

The training data covers the documented happy path. The platform contract gotchas live in postmortems, billing CSVs, and Stack Overflow comments from 2022. They're not in the model's "best practices" answer when you ask how to ship a subscription.

The mitigation is not "be smarter about the platform." It's to treat each store as a stakeholder with its own contract: read its docs end to end, look at the billing artifacts directly, and assume that any silent success is hiding something. That mindset is the thing I most wish I'd had on day one.

Cross-references in the rest of the case study: the iOS rejection and the IAP gauntlet are documented as incidents in Failures. The April billing actuals and the V4 cost model live in Business. The architecture and OTA convention live in Technical.

The Store Is a Stakeholder

Apple and Google don't show up in the project's stakeholder map. They should. They have requirements, review cycles, billing systems, telemetry surfaces, and gates. They will silently degrade your product if you build for them as if they were a deployment target.

The discipline is the same one that runs through the rest of the case study: take the relationship seriously, write down what each side owes the other, and assume any silent success is hiding something. The store is the most important user the app has. It just happens to be a corporation.

Two stores. Four IAP products. One paywall that has to know what every user already has. Zero trust in any silent success.