Comprehensive Rules for designing, implementing and maintaining a modern mobile-application test-automation suite.
Your mobile tests are brittle, your CI is red more than green, and you're spending more time fixing test infrastructure than shipping features. Sound familiar?
Mobile testing isn't just "web testing with extra steps." You're dealing with device fragmentation across Android and iOS versions, hardware-specific bugs and memory leaks that never surface on simulators, flaky network conditions, and timing behavior that varies from device to device.
Generic testing approaches fail because they ignore these mobile-specific challenges. You need a strategy built for the chaos of mobile development.
These Cursor Rules implement a battle-tested mobile testing architecture that handles the complexity for you. Instead of cobbling together tools and hoping they work together, you get a cohesive system designed around three core principles:
Real-Device-First Testing: Every commit runs on actual Android and iOS hardware, not just simulators. Your tests catch the memory leaks, performance issues, and hardware-specific bugs that only surface on real devices.
Smart Test Distribution: 70% unit tests for fast feedback, 20% integration tests for API/database validation, 10% E2E UI tests for critical user flows. This pyramid keeps your test suite fast while maintaining comprehensive coverage.
Production-Quality Test Code: Your test infrastructure gets the same rigor as your app code—static analysis, PR reviews, lint checks, and 80%+ coverage requirements.
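That rigor is enforceable in tooling, not just in review culture. As a minimal sketch, assuming Jest 28+ as the unit-test runner, the 80% coverage gate could live in jest.config.ts:

// jest.config.ts: a hedged sketch of the coverage gate matching the 80% rule
import type { Config } from 'jest';

const config: Config = {
  coverageThreshold: {
    // CI fails the build if any global metric drops below 80%
    global: { branches: 80, functions: 80, lines: 80, statements: 80 },
  },
};

export default config;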
// Before: Flaky tests with arbitrary waits
await driver.sleep(5000); // Hope the element loads
await element.tap();
// After: Deterministic execution with explicit waits
await driver.waitUntil(() => element.isDisplayed(), { timeout: 10000 });
await element.tap();
Your CI pipeline stops failing due to timing issues. Tests become predictable and trustworthy.
// Shared accessibility IDs between platforms
export const enum LoginScreen {
  EMAIL_INPUT = 'login_email_input',
  PASSWORD_INPUT = 'login_password_input',
  LOGIN_BUTTON = 'login_submit_button'
}
Write your test logic once, run it on both iOS and Android. Platform-specific implementations handle the differences automatically.
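Under the hood, those shared IDs can resolve through a small selector utility. A minimal sketch, assuming a WebdriverIO/Appium session (byA11yId and byPlatform are illustrative names, not part of these rules):

// utils/selectors.ts: hypothetical helpers around shared accessibility IDs.
// Accessibility IDs resolve natively on both platforms: content-desc on
// Android (UiAutomator2) and accessibilityIdentifier on iOS (XCUITest).
export function byA11yId(id: string) {
  return $(`~${id}`);
}

// Fallback for the rare element that has no shared accessibility ID
export function byPlatform(androidSelector: string, iosSelector: string) {
  return driver.isAndroid ? $(androidSelector) : $(iosSelector);
}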
// Page Object pattern eliminates selector brittleness
class LoginPage {
  get emailInput() { return $(`~${LoginScreen.EMAIL_INPUT}`); }
  get passwordInput() { return $(`~${LoginScreen.PASSWORD_INPUT}`); }
  get loginButton() { return $(`~${LoginScreen.LOGIN_BUTTON}`); }

  async login(email: string, password: string) {
    await this.emailInput.setValue(email);
    await this.passwordInput.setValue(password);
    await this.loginButton.tap();
  }
}
UI changes don't break your entire test suite. Update selectors in one place, and all tests continue working.
You're building a new checkout flow. Instead of writing the feature then scrambling to add tests:
// Step 1: Write the failing test first (red)
describe('Checkout Flow', () => {
  it('should complete purchase with valid card', async () => {
    await checkoutPage.fillShippingInfo(testData.validAddress);
    await checkoutPage.selectPaymentMethod('credit_card');
    await checkoutPage.fillCardDetails(testData.validCard);

    const result = await checkoutPage.submitOrder();

    expect(result.success).toBe(true);
    expect(result.orderId).toMatch(/^ORDER-\d+$/);
  });
});
Your test defines the expected behavior before you write a single line of app code. This drives better API design and catches edge cases early.
A critical bug report comes in: "App crashes on older Android devices." Instead of manual testing across device labs:
// Parallel execution across device matrix
const deviceConfigs = [
  { platformName: 'Android', platformVersion: '10', deviceName: 'Samsung Galaxy S10' },
  { platformName: 'Android', platformVersion: '13', deviceName: 'Google Pixel 7' },
  { platformName: 'iOS', platformVersion: '15.0', deviceName: 'iPhone 12' },
  { platformName: 'iOS', platformVersion: '17.0', deviceName: 'iPhone 15' }
];

// Auto-slice the device matrix by CI_NODE_TOTAL for horizontal scaling
const CI_NODE_INDEX = Number(process.env.CI_NODE_INDEX ?? 0);
const CI_NODE_TOTAL = Number(process.env.CI_NODE_TOTAL ?? 1);
const chunkSize = Math.ceil(deviceConfigs.length / CI_NODE_TOTAL);
const testSlice = deviceConfigs.slice(
  CI_NODE_INDEX * chunkSize,
  (CI_NODE_INDEX + 1) * chunkSize
);
Your CI automatically distributes tests across device configurations. You catch device-specific issues before they reach production.
Your latest release feels slower, but you need proof and root cause analysis:
// Automated performance monitoring in tests
beforeEach(async () => {
  // XCUITest performance recording (mobile: startPerfRecord); timeout in ms
  await driver.execute('mobile: startPerfRecord', { timeout: 60000 });
});

afterEach(async () => {
  // stopPerfRecord returns the recorded trace for post-processing
  const perfData = await driver.execute('mobile: stopPerfRecord');
  const startupTime = extractAppLaunchTime(perfData); // trace-parsing helper not shown

  // Fail if startup time regression > 10%
  expect(startupTime).toBeLessThan(BASELINE_STARTUP_TIME * 1.1);

  // Auto-attach performance data to CI report
  await attachPerfDataToReport(perfData);
});
Every test run captures performance metrics. Regressions get caught immediately with concrete data for debugging.
mkdir -p tests/{android,ios,flows,pages,data,utils}
npm install --save-dev @types/node typescript eslint-plugin-no-only-tests
Create your directory structure following the established patterns. Each component has a clear responsibility and location.
// tsconfig.json
{
  "compilerOptions": {
    "target": "ES2020",
    "moduleResolution": "node",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true
  },
  "include": ["tests/**/*"]
}
// caps/base.ts - Shared capabilities
export const baseCapabilities = {
  platformName: process.env.PLATFORM_NAME,
  automationName: process.env.PLATFORM_NAME === 'iOS' ? 'XCUITest' : 'UiAutomator2',
  noReset: false, // Fresh state for each test
  newCommandTimeout: 300,
  // Device health requirements: custom keys read by our beforeEach health
  // check, not standard Appium capabilities
  batteryLevel: 20, // Minimum 20% battery
  freeStorage: 200 // Minimum 200MB storage
};
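Per-suite capability files then extend this base instead of duplicating it, as the rules below require. A minimal sketch (androidE2ECaps and ANDROID_APP_PATH are assumed names):

// caps/android.e2e.ts: extend shared caps per-suite rather than copying them
import { baseCapabilities } from './base';

export const androidE2ECaps = {
  ...baseCapabilities,
  platformName: 'Android',
  automationName: 'UiAutomator2',
  app: process.env.ANDROID_APP_PATH, // APK path injected by CI (assumed env var)
};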
// pages/login.po.ts
// Result/Ok/Err/TestError are shared result helpers (see the Error Handling rules below)
export class LoginPage {
  async login(credentials: LoginCredentials): Promise<Result<void, TestError>> {
    try {
      await this.emailInput.setValue(credentials.email);
      await this.passwordInput.setValue(credentials.password);
      await this.loginButton.tap();
      await driver.waitUntil(() => this.isLoggedIn(), { timeout: 10000 });
      return Ok(undefined);
    } catch (error) {
      return Err(new TestError('Login failed', error));
    }
  }
}
# .github/workflows/mobile-tests.yml
- name: Run E2E Tests
  run: |
    # Auto-slice tests for parallel execution
    export CI_NODE_INDEX=${{ strategy.job-index }}
    export CI_NODE_TOTAL=${{ strategy.job-total }}
    npm run test:e2e

- name: Upload Test Results
  uses: actions/upload-artifact@v3
  if: always()
  with:
    name: test-results
    path: |
      screenshots/
      videos/
      logs/
      allure-results/
// Before: Manual testing across 10 devices = 4 hours per release
// After: Automated testing across device matrix = 30 minutes per commit
// Before: 40% of releases had device-specific bugs
// After: Device-specific issues caught in CI before release
// Before: Performance regressions discovered by users
// After: 10% performance threshold violations fail CI automatically
Your mobile testing strategy should be as sophisticated as your mobile app. These rules give you the foundation to build a test suite that scales with your development team and catches issues before your users do.
The difference between fragile tests that slow you down and reliable tests that accelerate development is having the right patterns and practices from day one. Start with these rules, and build the mobile test automation suite your team deserves.
The .cursor-rules file:
You are an expert in cross-platform mobile test automation using TypeScript/JavaScript, Kotlin, Swift, Java and Python with Appium, Detox, Espresso, XCUITest, Maestro and Selenium Grid.
Key Principles
- Shift-left: design tests while writing acceptance criteria; commit failing test first (red/green).
- Real-device-first: every commit must run on at least one Android & one iOS physical device.
- Test pyramid: 70% unit (device-hosted), 20% integration (API / DB), 10% E2E UI.
- Automate the repeatable; run critical flows manually on top 5–10 devices each release.
- Keep test code as production code: static analysis, PR reviews, lint and coverage gates ≥80%.
- One responsibility per test; follow Arrange → Act → Assert (AAA). No hidden assertions (see the sketch after this list).
- Page Object pattern for UI elements; never hard-code selectors.
- Deterministic execution: no variable sleep; use explicit waits on expected conditions.
- Tests must be idempotent and environment-agnostic; clean up created data.
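A minimal sketch of the one-responsibility AAA shape with cleanup (profilePage, createUser, and deleteUser are hypothetical helpers):

describe('Profile', () => {
  let userId: string | undefined;

  it('shows the display name after saving', async () => {
    // Arrange: build isolated test data
    userId = await createUser({ displayName: 'Ada' });

    // Act: exercise exactly one behavior
    await profilePage.open(userId);

    // Assert: one logical expectation, no hidden checks
    expect(await profilePage.displayName.getText()).toBe('Ada');
  });

  afterEach(async () => {
    // Idempotence: remove data the test created
    if (userId) await deleteUser(userId);
  });
});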
TypeScript Rules (for Appium/Detox harness)
- Target ES2020, moduleResolution=node, "strict":true.
- Directory structure:
tests/
⤷ android/
⤷ ios/
⤷ flows/ // high-level business journeys
⤷ pages/ // Page Objects, one file per screen
⤷ data/ // fixtures & data builders
⤷ utils/ // wrappers, helpers, waits
- File naming: <screen>.po.ts, <flow>.spec.ts, <util>.ts.
- Use async/await exclusively; every driver call returns a Promise.
- Wrap flaky selectors in get* helpers returning DetoxMatcher / WebdriverIO Element.
- Never commit .only / .skip modifiers; enforce via eslint-plugin-no-only-tests.
- Export const enums for accessibility-id values shared with app-code.
- Use data-driven tests via jest.each / mocha-param where input matrix >3.
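For that data-driven rule, a minimal jest.each sketch (formatPrice and the matrix values are illustrative only):

// Data-driven spec: the input matrix has more than 3 rows, so it.each applies
import { formatPrice } from '../utils/format'; // hypothetical util

it.each([
  [1000, 'USD', '$10.00'],
  [1000, 'EUR', '€10.00'],
  [99, 'USD', '$0.99'],
  [0, 'USD', '$0.00'],
])('formats %i %s as %s', (cents, currency, expected) => {
  expect(formatPrice(cents, currency)).toBe(expected);
});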
Error Handling & Validation
- Each test step returns Result<T, TestError>; convert into assertion with expect(result.ok).toBe(true). One possible shape is sketched after this section.
- Capture onFailure:
• screenshot (png)
• full device logs (adb/idevicesyslog)
• video (if device cloud supports)
• automatically attach artifacts to CI report.
- Early-fail guard: beforeEach verifies device health (battery >20 %, storage >200 MB).
- Validate network stubs in integration tests; surface mismatch immediately.
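One workable shape for those Result helpers, sketched as hand-rolled types (a library such as neverthrow would serve equally well):

// utils/result.ts: minimal Result helpers matching the usage above
export type Result<T, E> = { ok: true; value: T } | { ok: false; error: E };

export const Ok = <T>(value: T): Result<T, never> => ({ ok: true, value });
export const Err = <E>(error: E): Result<never, E> => ({ ok: false, error });

export class TestError extends Error {
  constructor(message: string, public readonly cause?: unknown) {
    super(message);
    this.name = 'TestError';
  }
}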
Framework-Specific Rules
Appium (Android & iOS)
- DesiredCapabilities defined in caps/base.ts; extend per-suite. Never duplicate.
- Set "noReset": false for E2E to guarantee fresh state; run adb uninstall in afterAll.
- Prefer accessibilityId; fall back to -ios predicate / Android UiAutomator only if unavoidable.
- Use driver.execute('mobile: scroll', {direction:'down'}) rather than coordinate swipes.
Detox (React Native)
- Run detox build & test in separate CI steps to parallelize shards.
- Use detox-instruments on iOS for timings; flag any step >1000 ms as potential performance regression.
- Stub out external HTTP calls with MSW running in JS context.
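A minimal harness-side MSW sketch, assuming the v1 setupServer/rest API (the endpoint and payload are illustrative); wiring MSW into the app's own JS context varies by setup:

// e2e/msw.setup.ts: stub external HTTP calls in the test harness
import { setupServer } from 'msw/node';
import { rest } from 'msw';

export const server = setupServer(
  rest.get('https://api.example.com/profile', (req, res, ctx) =>
    res(ctx.status(200), ctx.json({ displayName: 'Test User' }))
  )
);

beforeAll(() => server.listen({ onUnhandledRequest: 'error' })); // surface unstubbed calls
afterEach(() => server.resetHandlers());
afterAll(() => server.close());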
Espresso (Android native)
- Keep Espresso Idling Resources registered for every async task (e.g., WorkManager).
- Do not test business logic here—cover it in Robolectric/JUnit on JVM.
- Device annotation @SdkSuppress(minSdkVersion=23) when API constraints matter.
XCUITest (iOS native)
- Extend XCTestCase + BaseTest that launches app once per class for speed.
- Use XCTAttachment for screenshots & performance metrics.
- Tag tests with @available(iOS 16.0, *) to compile-gate features.
Maestro (flow YAML)
- Keep *.yml under tests/maestro and validate with maestro lint in CI.
- Use groups & tags to create dynamic test suites: smoke, regression, perf.
Additional Sections
Continuous Integration / Delivery
- Pipeline stages: lint → unit (Jest/JUnit) → build APK/IPA → instrumentation (Espresso/XCUITest) → E2E (Appium/Detox) → deploy to beta.
- Auto-slice E2E by CI_NODE_TOTAL & CI_NODE_INDEX for horizontal scaling.
- Publish test reports in JUnit XML + Allure for trend analysis.
Performance & Load
- Run Firebase Test Lab robo-crawl nightly; fail build if startup time regression >10 %.
- Use "adb shell am profile" and "xcrun xctrace" inside synthetic flows for CPU/memory baselines.
- Simulate network via Android emulator commands (gsm, network, delay) & Xcode Network Link Conditioner.
Security Testing
- Integrate OWASP MASVS automated checks post-build.
- Use TLS pinning bypass tests; fail if MitM allowed.
Accessibility
- Ensure every interactable element has accessibility-id/label.
- Run axe-android & axe-ios in CI; block PR if critical violations.
Common Pitfalls & Guards
- Avoid hard sleeps; enforce eslint-plugin-no-sleep.
- Protect against test data collisions with UUID v4 suffixes (helper sketched below).
- Reset push-notification state before each iOS test (e.g., reinstall the app); use xcrun simctl push to deliver simulated payloads.
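A minimal sketch of the UUID-suffix guard (uniqueEmail is an illustrative name):

// data/builders.ts: collision-proof test data for parallel shards
import { randomUUID } from 'node:crypto';

export function uniqueEmail(prefix = 'qa'): string {
  // UUID v4 suffix keeps parallel runs from colliding on the same account
  return `${prefix}+${randomUUID()}@example.com`;
}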