Investigating Web Content Delivery Performance over Starlink
Source: arXiv:2510.13710 · Published 2025-10-15 · By Rohan Bose, Jinwei Zhao, Tanya Shreedhar, Jianping Pan, Nitinder Mohan
TL;DR
This paper asks a practical but underexplored question: what happens to web content delivery when the access network is not a terrestrial ISP but a LEO satellite network like Starlink? The authors argue that prior Starlink work mostly measured link or path performance, while web delivery depends on a chain of decisions across multiple layers: Starlink’s chosen PoP, DNS resolver placement, and CDN server mapping. Their main contribution is a large measurement study that decomposes end-to-end page-fetch delay into those pieces and shows that the dominant bottleneck is often not “the satellite link” in the abstract, but the mismatch between Starlink’s egress topology and terrestrial content-delivery assumptions.
The result is a three-regime picture. In content-rich regions with local PoPs and dense CDN/DNS infrastructure, Starlink can get close to terrestrial performance, with the satellite segment accounting for roughly 80–90% of RTT. In sparse-edge regions, a nearby PoP can still produce poor web performance because DNS and CDN systems map requests to distant resolvers or caches, driving total latency above 200 ms. In remote-PoP scenarios, the access path itself dominates. Using Starlink’s 2025 infrastructure expansion as a natural experiment, they show that moving PoPs closer to users can cut median page-fetch times by 60% in Africa, but has much smaller effects in dense CDN regions such as Canada, where CDN placement already tracks both old and new PoPs.
Key findings
- The study combines 255K Cloudflare AIM tests, 6.1M traceroutes, 10.8M DNS queries, and 523K HTTP GET fetches collected from Nov 2023 to Sept 2025 across 145 countries and 99 Starlink-connected RIPE Atlas probes, plus six controlled Starlink terminals.
- The authors identify three Starlink web-performance regimes: content-rich PoPs, sparse-edge regions, and remote-PoP scenarios; in content-rich regions the user-to-PoP segment accounts for about 80–90% of RTT, while sparse-edge and remote-PoP cases add major PoP-to-CDN/DNS penalties.
- Cloudflare’s anycast CDN shows median latency about 18 ms lower than Akamai’s on average and about 6 ms lower than CloudFront’s; in sparse-edge cases the gap to Akamai can exceed 100 ms.
- For selected countries, end-to-end CDN RTTs exceed 200 ms when remote PoP assignments trigger resolver mislocalization and cache misses; the paper highlights cases such as PH users seeing about 180 ms to Akamai servers.
- DNS resolver response times are lowest when resolver PoPs align with Starlink PoPs; Cloudflare and Google show global median resolver RTTs around 28 ms, versus about 33 ms for Quad9.
- DNS cache hit rates differ materially by provider: Cloudflare 78%, Google 65%, Quad9 58% globally; for African content, hit rates drop by 20–25 percentage points, and Quad9 falls below 40%.
- The 2025 Starlink PoP expansion is used as a natural experiment: moving users in Africa from European PoPs to local African PoPs reduced median page-fetch times by 60% and increased cache hit rates from 60% to 85%.
- For Canada, comparable PoP reassignment had minimal effect because dense CDN deployment already existed near both old and new PoPs, showing that PoP proximity only helps when CDN/DNS infrastructure is also aligned.
Methodology — deep read
Threat model and assumptions: this is a measurement paper, not an attacker-model paper, so the implicit adversary is mostly “the Internet’s default mapping machinery,” not an active malicious actor. The authors assume Starlink users are ordinary subscribers whose traffic exits through Starlink-managed PoPs, and that CDN and DNS systems make placement decisions using their normal geolocation, anycast, and resolver-selection logic. They are not trying to defeat an on-path censor or emulate a sophisticated adversary; instead they want to observe how standard content-delivery systems behave when the access network is LEO-satellite based. The key assumption is that the last public hop before Starlink’s private network identifies the serving PoP, and that reverse DNS / IATA / provider metadata is sufficient to geolocate CDN edges and resolver instances. They also assume that comparing Starlink measurements to terrestrial baselines in the same countries is meaningful, even though Starlink and terrestrial access users may not be perfectly matched on device, application mix, or subscription class.
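The last-public-hop assumption above can be made concrete with a short sketch. The hop data and function name here are illustrative; only the Starlink ASNs (14593 and 45700) come from the paper:

```python
# Starlink ASNs named in the paper; traffic entering these marks the
# boundary between the public Internet and Starlink's network.
STARLINK_ASNS = {14593, 45700}

def infer_pop_hop(hops):
    """Given an ordered list of (ip, asn) traceroute hops toward the
    client, return the last public hop before traffic enters Starlink's
    network -- the heuristic used to identify the serving PoP."""
    last_public = None
    for ip, asn in hops:
        if asn in STARLINK_ASNS:
            return last_public  # hop just before Starlink's ASN
        last_public = (ip, asn)
    return None  # Starlink ASN never appeared on this path

# Hypothetical reverse-traceroute path: transit, PoP-adjacent hop, Starlink.
path = [
    ("195.66.224.1", 3356),    # transit provider
    ("206.224.64.10", 13335),  # hypothetical PoP-adjacent public hop
    ("149.19.109.5", 14593),   # Starlink (AS14593)
]
print(infer_pop_hop(path))  # → ('206.224.64.10', 13335)
```

In practice this heuristic inherits all the caveats of traceroute and ASN mapping noted in the Limitations section.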
Data provenance, scale, and labeling: the study has three sources. First, passive Cloudflare AIM measurements from Nov 2023 to Sept 2025, filtered to Starlink ASNs 14593 and 45700, yielding about 255K speed tests across 145 countries. The paper says each AIM record includes city-level geolocation for both client and edge server; the authors use median idle latency per location as a proxy for the typical CDN assignment at that network position. Second, M-Lab speed tests over the same time window provide reverse traceroutes from M-Lab servers back to clients, which the authors use to infer the active Starlink PoP by finding the last public hop before Starlink. Third, active measurements from 99 Starlink-connected RIPE Atlas probes in 32 countries and six controlled Starlink terminals across Canada, Germany, Ghana, and Zambia. The controlled terminals ran every two hours from March to September 2025 and produced 43K DNS CHAOS queries, 819K DNS A-record queries, 512K traceroutes, and 523K HTTP GET fetches. The targets came from the Tranco top-2K and were selected to represent Cloudflare, Akamai, and CloudFront. For DNS caching analysis, they issued A-record queries with recursion disabled (RD=0) so that a reply indicates a cache hit and no data/REFUSED indicates a miss. For resolver location, they sent CHAOS class id.server TXT queries to Cloudflare, Google, and Quad9. For path analysis, they used TCP traceroute with UDP fallback and geolocated hops via PTR semantics and IPInfo validation.
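The RD=0 cache-probing trick can be sketched with the standard library alone. The builder below is illustrative (actually sending the packet over UDP and parsing the response RCODE for NOERROR vs REFUSED are omitted):

```python
import struct

def build_query(name, qtype=1, rd=False, txid=0x1234):
    """Build a raw DNS query. rd=False clears the Recursion Desired
    bit, so a resolver may only answer from its cache (hit) or return
    no data / REFUSED (miss) -- the paper's cache-probing technique."""
    flags = 0x0100 if rd else 0x0000  # RD is bit 8 of the flags word
    header = struct.pack(">HHHHHH", txid, flags, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in name.split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, 1)  # QTYPE, class IN

pkt = build_query("example.org", rd=False)
flags = struct.unpack(">H", pkt[2:4])[0]
print(f"RD bit set: {bool(flags & 0x0100)}")  # → RD bit set: False
```

The same wire format, with the query class set to CHAOS and a TXT `id.server` name, underlies the resolver-location probes described above.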
Architecture / algorithm: the paper does not introduce a learned model or new protocol; the core “algorithm” is a measurement decomposition pipeline. The end-to-end webpage fetch is broken into user-to-PoP latency, PoP-to-DNS-resolver latency, and PoP-to-CDN-edge latency. For each probe/location, the authors take the minimum RTT across their targeted websites to isolate the path to the relevant CDN or resolver, then aggregate medians across vantage points. In Cloudflare AIM, the edge-selection mapping comes from Cloudflare’s own anycast behavior, so the authors use the reported server location as observed by the test. In M-Lab and traceroute data, they infer PoP from the last public hop before Starlink. In the controlled HTTP fetches, CDN cache status and edge location are read from HTTP headers: CF-Cache-Status and CF-Ray for Cloudflare, X-Cache and X-Amz-Cf-Pop for CloudFront, and a Pragma: akamai-x-cache-on request header for Akamai to coax cache status into X-Cache. A concrete example the paper walks through is Zimbabwe before and after the January 2025 Starlink reassignment: before the change, Zimbabwe traffic exited via a Frankfurt PoP in Germany, causing the DNS resolver to be chosen from a European viewpoint and CDN mapping to European caches; because the relevant African content was not cached there, users suffered cache misses and intercontinental origin traversals. After reassignment to a Kenya PoP, user-to-PoP distance dropped substantially, but performance still lagged terrestrial access because CDN caches in the new region were not as well provisioned for Zimbabwean content.
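The header-based cache-status readout can be sketched as a small classifier. The function name and header values are illustrative; the header names (CF-Cache-Status, X-Cache) are those named in the paper, and for Akamai the status only appears in X-Cache when the request carries `Pragma: akamai-x-cache-on`:

```python
def cache_status(provider, headers):
    """Classify a CDN response as HIT/MISS from provider-specific
    response headers: CF-Cache-Status for Cloudflare, X-Cache for
    CloudFront and (with the Pragma request header) Akamai."""
    h = {k.lower(): v for k, v in headers.items()}
    if provider == "cloudflare":
        return h.get("cf-cache-status", "").upper()  # HIT, MISS, DYNAMIC...
    if provider in ("cloudfront", "akamai"):
        value = h.get("x-cache", "").upper()
        return "HIT" if "HIT" in value else "MISS"
    raise ValueError(f"unknown provider: {provider}")

print(cache_status("cloudflare", {"CF-Cache-Status": "HIT"}))       # → HIT
print(cache_status("cloudfront", {"X-Cache": "Miss from cloudfront"}))  # → MISS
```

Edge location comes from separate headers (CF-Ray, X-Amz-Cf-Pop), which embed airport-code hints that the paper resolves via IATA metadata.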
Training regime / hardware / reproducibility: there is no model training, optimizer, batch size, or seed strategy because this is not a machine-learning paper. The closest analog is the measurement cadence and instrumentation. The controlled vantage points executed the same DNS/traceroute/HTTP suite every two hours over a six-month period, while RIPE Atlas measurements were repeated every 15 minutes during a fifteen-week window. The paper states that it spans two years of passive measurement and a separate six-month active campaign, but the truncated text does not specify all hardware details for the controlled Starlink terminals beyond their geographic distribution. Reproducibility is promising but not yet complete: the authors say they will release measurement datasets, analysis code, and the controlled methodology upon acceptance, so at the time of the text this is a planned release rather than a verified public artifact.
Evaluation protocol and concrete metrics: the paper evaluates three main quantities: latency to the Starlink PoP, latency from PoP to DNS resolver, and latency from PoP to CDN edge, plus end-to-end HTTP page-fetch time and DNS cache hit rate. The baselines are terrestrial ISPs in the same countries, Cloudflare vs Akamai vs CloudFront, and Cloudflare/Google/Quad9 for DNS. They compare across geography, across PoP reassignment events, and across CDN-provider mapping behavior. Important slices include countries with content-rich PoPs (US, DE, CL), sparse-edge regions (CO, HU, BJ, PH), and remote-PoP scenarios (ZW, ZM, MG). The paper reports that Cloudflare often outperforms the DNS-mapped alternatives by about 18 ms on average, with larger gaps when resolver mislocalization occurs; it also reports that relocating PoPs closer to users in Africa cut median page-fetch time by 60% and increased cache hit rate from 60% to 85%. The truncated text does not describe formal statistical tests, confidence intervals, or hypothesis testing, so the strength of the conclusions rests mainly on scale, repeated measurements, and before/after natural experiments rather than inferential statistics.
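The min-then-median aggregation described above can be sketched in a few lines; the data layout and function name are illustrative:

```python
from statistics import median

def decompose(samples):
    """samples: {vantage_point: {target: [rtt_ms, ...]}}.
    Per vantage point, take the minimum RTT across the targets of one
    CDN/resolver to isolate the path; then take the median across
    vantage points, mirroring the paper's aggregation."""
    per_vp = [
        min(min(rtts) for rtts in targets.values())
        for targets in samples.values()
    ]
    return median(per_vp)

# Hypothetical RTT samples from three vantage points to two sites of one CDN.
data = {
    "probe_de": {"site_a": [34.1, 36.0], "site_b": [39.5]},
    "probe_gh": {"site_a": [88.2], "site_b": [81.0, 83.3]},
    "probe_ca": {"site_a": [41.7], "site_b": [40.2, 44.9]},
}
print(decompose(data))  # → 40.2
```

Taking the minimum per vantage point filters out transient queueing and satellite-handover jitter, so the median reflects the stable path cost rather than load.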
Technical innovations
- A multi-layer decomposition of Starlink web performance into user-to-PoP, PoP-to-DNS, and PoP-to-CDN components rather than treating RTT as a single opaque metric.
- A longitudinal measurement design that combines passive Cloudflare AIM and M-Lab data with active RIPE Atlas probing and controlled HTTP/DNS experiments to observe both global patterns and PoP reassignments.
- Use of Starlink’s 2025 PoP expansion as a natural experiment to quantify how infrastructure changes alter page-fetch time and cache hit rate.
- Provider-comparative analysis showing how Cloudflare anycast, Akamai DNS mapping, and CloudFront DNS mapping diverge under Starlink’s egress topology.
- Cache-hit measurement via RD=0 A-record queries to separate resolver RTT from recursive lookup overhead.
Datasets
- Cloudflare AIM — 255K speed tests — Cloudflare aggregated internet measurements
- M-Lab speed tests — size not specified in excerpt — M-Lab public measurement platform
- RIPE Atlas Starlink probes — 99 probes across 32 countries; 6.1M traceroutes and 10.8M DNS queries — RIPE Atlas
- Controlled Starlink vantage points — 6 terminals across Canada, Germany, Ghana, Zambia; 43K CHAOS queries, 819K A-record queries, 512K traceroutes, 523K HTTP GETs — author-operated
Baselines vs proposed
- Starlink vs terrestrial ISPs (median latency): terrestrial 19–20 ms vs Starlink about 50 ms globally in the paper’s aggregate comparison
- Cloudflare CDN vs Akamai CDN: Cloudflare median latency about 18 ms lower
- Cloudflare CDN vs CloudFront CDN: Cloudflare median latency about 6 ms lower
- Cloudflare DNS vs Google DNS vs Quad9 DNS: median resolver RTT about 28 ms vs 28 ms vs 33 ms, respectively
- Cloudflare DNS cache hit rate vs Google vs Quad9: 78% vs 65% vs 58%
- Africa PoP reassignment before vs after: median page-fetch time 60% lower after relocation to local PoPs
- Africa cache hit rate before vs after PoP relocation: 60% vs 85%
Figures from the paper
Figures are reproduced from the source paper for academic discussion. Original copyright: the paper authors. See arXiv:2510.13710.

Fig 1: Median RTT difference (Starlink − Terrestrial) be…

Fig 2: TN subscribers reach local CDN edges hosting lo…

Fig 3: CDN latency breakdown over Starlink from selected…

Fig 4: Access latencies from Starlink subscribers to CDN…
Limitations
- The study relies heavily on inferred PoP and CDN locations from traceroute, PTR, headers, and geolocation databases; those inferences can be wrong or stale, especially for anycast and dynamically steered infrastructure.
- The controlled campaign covers only four countries with six terminals, so it cannot fully represent the diversity of Starlink deployments worldwide.
- The paper emphasizes latency and cache-hit rate but does not report deeper application QoE metrics such as page-render time, object-level waterfall analysis, or user interaction latency.
- The truncated text does not show formal statistical significance testing, confidence intervals, or robustness checks for the reported deltas.
- Results are tied to Starlink’s infrastructure state in 2023–2025; PoP placement and CDN peering can change quickly, so some findings may age fast.
- The active measurement schedule is periodic rather than continuous, so transient routing anomalies or short-lived CDN mapping changes may be missed.
Open questions / follow-ons
- How should CDN request mapping be redesigned to account for satellite ISP egress PoPs that are geographically distant from the end user but topologically central to Starlink traffic?
- Would satellite-aware DNS and CDN mapping policies outperform today’s anycast/DNS heuristics across both sparse and dense infrastructure regions?
- How much of the observed benefit from PoP relocation could be replicated by smarter peering, local caching, or resolver placement without changing Starlink’s access topology?
- Can in-orbit or regional satellite-native caching reduce the cache-miss penalty in content-sparse regions, and what content should be cached there?
Why it matters for bot defense
For bot-defense and CAPTCHA practitioners, the main lesson is that access-network topology can skew both latency and localization signals that many defenses quietly depend on. If a service uses IP geolocation, resolver geography, or RTT fingerprints to choose CAPTCHA difficulty, Starlink users may be systematically misclassified because their traffic emerges from a distant PoP whose location is not the user’s physical location. That can create false positives, inconsistent challenge rates, or broken risk scoring in regions where PoP assignment and CDN/DNS mapping diverge.
More broadly, the paper is a reminder that infrastructure-aware bot defenses should not assume that client-facing network signals reflect the browser’s real geography. For practitioners, that means validating challenge policies against LEO satellite access, checking whether CDN/DNS localization leaks into risk engines, and testing bot mitigations under PoP reassignment and resolver mislocalization. If your anti-abuse stack is tuned on terrestrial traffic alone, Starlink-like paths can be an unpleasant surprise.
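One hypothetical way to act on this (the function and country-code inputs are illustrative, not from the paper): before feeding IP-geolocation features into a risk engine, check whether the network-derived signals even agree with each other.

```python
def geo_signals_consistent(client_country, pop_country, resolver_country):
    """Hypothetical sanity check for a risk engine: on Starlink
    remote-PoP paths, the egress PoP and resolver geography can
    disagree with the client's claimed country. A mismatch is a
    reason to distrust IP-geolocation features for this session,
    not (by itself) a reason to raise challenge difficulty."""
    return client_country == pop_country == resolver_country

# The paper's Zimbabwe example: user in ZW, Frankfurt PoP, European resolver.
print(geo_signals_consistent("ZW", "DE", "DE"))  # → False
```

Treating the mismatch as a feature-reliability signal rather than a risk signal avoids the systematic false positives the paper's remote-PoP regime would otherwise produce.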
Cite
@article{arxiv2510_13710,
  title={Investigating Web Content Delivery Performance over Starlink},
  author={Rohan Bose and Jinwei Zhao and Tanya Shreedhar and Jianping Pan and Nitinder Mohan},
  journal={arXiv preprint arXiv:2510.13710},
  year={2025},
  url={https://arxiv.org/abs/2510.13710}
}