# Device Coverage Matrix

## The Principle: Data-Driven Device Selection
You cannot test on every device. The key is risk-based selection driven by your actual analytics data, not by gut feeling or "what the team has on their desks." A device matrix should answer: what is the minimum set of devices that covers the maximum percentage of our users?
## The Tiered Device Strategy

| Tier | Selection Criteria | Coverage Target | Testing Depth |
|------|--------------------|-----------------|---------------|
| Tier 1 (3-5 devices) | Top devices from analytics, latest OS | 50-60% of users | Full regression, every release |
| Tier 2 (5-10 devices) | Popular mid-range, one version behind | 25-30% of users | Critical paths, weekly |
| Tier 3 (10-20 devices) | Long-tail devices, oldest supported OS | 10-15% of users | Smoke tests, monthly |
| Edge cases (as needed) | Foldables, tablets, RTL locales | <5% of users | Targeted testing for specific features |
## Building Your Matrix
The process starts with analytics, not assumptions:
```python
# Example: building your device matrix from analytics data
import analytics  # your analytics client; get_device_breakdown() returns per-device user shares

def build_device_matrix(min_coverage=0.90):
    """Select the minimum device set covering 90% of users."""
    devices = analytics.get_device_breakdown(days=30)
    # Sort by user share, descending
    devices.sort(key=lambda d: d["share"], reverse=True)

    selected = []
    cumulative = 0.0
    for device in devices:
        selected.append(device)
        cumulative += device["share"]
        if cumulative >= min_coverage:
            break
    return selected

# Output example:
# [
#     {"model": "iPhone 15", "os": "iOS 18", "share": 0.18},
#     {"model": "iPhone 14", "os": "iOS 17", "share": 0.12},
#     {"model": "Samsung Galaxy S24", "os": "Android 14", "share": 0.09},
#     {"model": "iPhone 13", "os": "iOS 17", "share": 0.08},
#     {"model": "Samsung Galaxy A54", "os": "Android 14", "share": 0.07},
#     ...  # ~12 devices to reach 90% coverage
# ]
```
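To see the greedy cutoff in action without an analytics backend, the same selection can be run on a hard-coded breakdown (the models and shares below are illustrative, not real market data):

```python
# Greedy coverage selection on a stubbed device breakdown
devices = [
    {"model": "iPhone 15", "share": 0.30},
    {"model": "Galaxy S24", "share": 0.25},
    {"model": "Pixel 8", "share": 0.20},
    {"model": "iPhone SE", "share": 0.10},
    {"model": "Galaxy A14", "share": 0.05},
]
devices.sort(key=lambda d: d["share"], reverse=True)

selected, cumulative = [], 0.0
for device in devices:
    selected.append(device)
    cumulative += device["share"]
    if cumulative >= 0.70:  # stop once the coverage target is met
        break

print([d["model"] for d in selected])  # -> ['iPhone 15', 'Galaxy S24', 'Pixel 8']
```

Three devices reach 70% here; real breakdowns have a much longer tail, which is why 90% typically takes a dozen devices.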
### Enriched Device Matrix

```python
def build_enriched_matrix(min_coverage=0.90):
    """Build the matrix with testing metadata for each device."""
    base_matrix = build_device_matrix(min_coverage)
    for device in base_matrix:
        device["tier"] = assign_tier(device["share"])
        device["test_scope"] = get_test_scope(device["tier"])
        device["frequency"] = get_test_frequency(device["tier"])
        device["farm_available"] = check_device_farm_availability(device["model"])
    return base_matrix

def assign_tier(share):
    if share >= 0.05:
        return 1
    elif share >= 0.02:
        return 2
    else:
        return 3

def get_test_scope(tier):
    scopes = {
        1: "full_regression",
        2: "critical_paths",
        3: "smoke_test",
    }
    return scopes.get(tier, "smoke_test")

def get_test_frequency(tier):
    frequencies = {
        1: "every_pr",
        2: "weekly",
        3: "monthly",
    }
    return frequencies.get(tier, "monthly")
```
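A quick sanity check of the tier thresholds, with the logic repeated so the snippet runs standalone (the sample shares are made up):

```python
def assign_tier(share):
    """Same thresholds as above: >=5% -> Tier 1, >=2% -> Tier 2, else Tier 3."""
    if share >= 0.05:
        return 1
    elif share >= 0.02:
        return 2
    return 3

samples = {"iPhone 15": 0.18, "iPhone 13": 0.04, "iPhone SE (3rd gen)": 0.01}
tiers = {model: assign_tier(share) for model, share in samples.items()}
print(tiers)  # -> {'iPhone 15': 1, 'iPhone 13': 2, 'iPhone SE (3rd gen)': 3}
```

Note the boundaries are inclusive: a device sitting at exactly 5% share lands in Tier 1, and exactly 2% lands in Tier 2.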
## Sample Device Matrices by Application Type

### E-Commerce Application (Global)

| Device | OS | Tier | Why |
|--------|----|------|-----|
| iPhone 15 | iOS 18 | 1 | Highest revenue per user |
| iPhone 14 | iOS 17 | 1 | Second highest user share |
| Samsung Galaxy S24 | Android 14 | 1 | Top Android flagship |
| Samsung Galaxy A54 | Android 14 | 1 | Top mid-range, emerging markets |
| iPhone 13 | iOS 17 | 2 | Still widely used |
| Google Pixel 8 | Android 14 | 2 | Stock Android reference |
| Samsung Galaxy A14 | Android 13 | 2 | Budget Android, high volume |
| Xiaomi Redmi Note 13 | Android 14 | 2 | Popular in Asia |
| iPhone SE (3rd gen) | iOS 17 | 3 | Smallest screen still in use |
| iPad (10th gen) | iPadOS 17 | 3 | Tablet layout verification |
| Samsung Galaxy Z Fold5 | Android 14 | Edge | Foldable testing |
### Enterprise B2B Application

| Device | OS | Tier | Why |
|--------|----|------|-----|
| iPhone 15 Pro | iOS 18 | 1 | Corporate standard device |
| Samsung Galaxy S24 | Android 14 | 1 | Corporate Android standard |
| iPad Pro 12.9" | iPadOS 17 | 1 | Executives use tablets in meetings |
| iPhone 14 | iOS 17 | 2 | One-version-behind policy |
| MacBook (Safari) | macOS | 2 | Desktop access |
| Windows (Chrome) | Windows | 2 | Desktop access |
## Maintaining the Matrix Over Time
Device matrices are not static. Review and update quarterly:
```python
def quarterly_matrix_review():
    """Review and update the device matrix quarterly."""
    current_matrix = load_current_matrix()
    new_analytics = analytics.get_device_breakdown(days=90)
    # The matrix is a list of device dicts, so compare by model name
    current_models = {d["model"] for d in current_matrix}

    changes = []
    for device in new_analytics:
        # New device that crossed the Tier 2 threshold
        if device["share"] >= 0.02 and device["model"] not in current_models:
            changes.append({
                "action": "ADD",
                "device": device["model"],
                "share": device["share"],
                "reason": f"New device with {device['share']*100:.1f}% share",
            })

    for device in current_matrix:
        # Existing device that dropped below the relevance threshold
        current_share = get_current_share(device, new_analytics)
        if current_share < 0.01:
            changes.append({
                "action": "REMOVE",
                "device": device["model"],
                "share": current_share,
                "reason": f"Share dropped to {current_share*100:.1f}%",
            })
    return changes
```
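The same diff logic, stripped down and run on stubbed data in place of `load_current_matrix()` and the analytics client (all models and shares below are invented):

```python
# Current matrix and fresh 90-day analytics, as model -> share maps
current = {"iPhone 15": 0.18, "iPhone 12": 0.008}
fresh = {"iPhone 15": 0.17, "iPhone 12": 0.008, "Pixel 9": 0.025}

changes = []
# New devices that crossed the 2% add threshold
for model, share in fresh.items():
    if share >= 0.02 and model not in current:
        changes.append(("ADD", model))
# Existing devices that fell below the 1% removal threshold
for model in current:
    if fresh.get(model, 0.0) < 0.01:
        changes.append(("REMOVE", model))

print(changes)  # -> [('ADD', 'Pixel 9'), ('REMOVE', 'iPhone 12')]
```

The asymmetric thresholds (add at 2%, remove below 1%) give the matrix hysteresis, so a device hovering near the boundary does not churn in and out every quarter.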
### What Triggers an Immediate Matrix Update

| Trigger | Action |
|---------|--------|
| New OS major version released | Add latest OS to Tier 1 devices |
| New flagship device launched | Add if it enters top 10 within 2 weeks |
| Device share drops below 1% | Move to Tier 3 or remove |
| New form factor (foldable, etc.) | Add to edge case testing |
| New market expansion (new country) | Pull analytics for that market, update matrix |
| Bug report from specific device | Temporarily promote to Tier 1 for investigation |
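If you automate matrix updates, the trigger table can be encoded as a simple lookup; the keys and action strings below are illustrative, not an established schema:

```python
# Trigger -> action mapping, mirroring the table above
MATRIX_TRIGGERS = {
    "new_os_major_version": "add_latest_os_to_tier_1",
    "new_flagship_launch": "add_if_top_10_within_2_weeks",
    "share_below_1_percent": "demote_to_tier_3_or_remove",
    "new_form_factor": "add_to_edge_cases",
    "new_market": "pull_market_analytics_and_update",
    "device_specific_bug": "promote_to_tier_1_temporarily",
}

def action_for(trigger):
    """Return the matrix action for a trigger, defaulting to no action."""
    return MATRIX_TRIGGERS.get(trigger, "no_action")
```

Keeping the mapping in code (rather than a wiki page) lets a monitoring job raise the right matrix change the moment a trigger fires.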
## Mapping Matrix to CI Pipeline

```yaml
# .github/workflows/mobile-matrix.yml
name: Mobile Test Matrix

on:
  push:
    branches: [main]
  pull_request:
  schedule:
    - cron: "0 2 * * 1"  # weekly, Monday 02:00 UTC (Tier 2)
    - cron: "0 2 1 * *"  # monthly, 1st at 02:00 UTC (Tier 3)

jobs:
  tier-1-tests:
    # Run on every push to main and every PR
    if: github.event_name != 'schedule'
    runs-on: ubuntu-latest
    strategy:
      matrix:
        include:
          - device: "iPhone 15"
            os_version: "18"
            platform: "ios"
          - device: "Samsung Galaxy S24"
            os_version: "14.0"
            platform: "android"
          - device: "Samsung Galaxy A54"
            os_version: "14.0"
            platform: "android"
    steps:
      - uses: actions/checkout@v4
      - run: pytest tests/mobile/ --device="${{ matrix.device }}"

  tier-2-tests:
    # Run weekly on schedule
    if: github.event_name == 'schedule' && github.event.schedule == '0 2 * * 1'
    runs-on: ubuntu-latest
    strategy:
      matrix:
        include:
          - device: "iPhone 13"
            os_version: "17"
          - device: "Google Pixel 8"
            os_version: "14.0"
          - device: "Xiaomi Redmi Note 13"
            os_version: "14.0"
    steps:
      - uses: actions/checkout@v4
      - run: pytest tests/mobile/ -m "critical_path" --device="${{ matrix.device }}"

  tier-3-tests:
    # Run monthly
    if: github.event_name == 'schedule' && github.event.schedule == '0 2 1 * *'
    runs-on: ubuntu-latest
    strategy:
      matrix:
        include:
          - device: "iPhone SE"
            os_version: "17"
          - device: "Samsung Galaxy A14"
            os_version: "13.0"
    steps:
      - uses: actions/checkout@v4
      - run: pytest tests/mobile/ -m "smoke" --device="${{ matrix.device }}"
```
## The Coverage vs Cost Tradeoff

| Approach | Devices | Annual Cost (Cloud Farm) | Coverage | Maintenance |
|----------|---------|--------------------------|----------|-------------|
| Minimal (Tier 1 only) | 3-5 | $5K-10K | 50-60% | Low |
| Balanced (Tier 1+2) | 8-15 | $15K-30K | 80-90% | Medium |
| Comprehensive (All tiers) | 20-30 | $30K-60K | 95%+ | High |
| Exhaustive | 50+ | $100K+ | 99%+ | Very high |
The balanced approach is the sweet spot for most teams. Moving from 90% to 95% coverage roughly doubles cost and maintenance effort. Moving from 95% to 99% doubles it again. Make this tradeoff consciously based on your risk tolerance and user impact.
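A rough back-of-the-envelope shows why each step up gets more expensive per coverage point, using the midpoints of the cost and coverage ranges from the table (illustrative figures only, not vendor quotes):

```python
# (approach, coverage midpoint, annual cost midpoint in USD)
tiers = [
    ("minimal", 0.55, 7_500),
    ("balanced", 0.85, 22_500),
    ("comprehensive", 0.95, 45_000),
    ("exhaustive", 0.99, 100_000),
]

marginal = []
for (name_a, cov_a, cost_a), (name_b, cov_b, cost_b) in zip(tiers, tiers[1:]):
    # Extra dollars per extra percentage point of user coverage
    per_point = (cost_b - cost_a) / ((cov_b - cov_a) * 100)
    marginal.append((name_b, per_point))
    print(f"{name_a} -> {name_b}: ${per_point:,.0f} per extra coverage point")
```

The marginal cost climbs steeply at every step, which is the quantitative reason the balanced tier is usually where the curve stops being worth it.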