Near real-time satellite data: NRT vs same-day delivery and why the timestamps matter
2025-08-13 · 9 min read · Timeliness · Near real-time · Latency · SLA · Versioning

TL;DR: “Near real-time” should be a number, not a feeling. Define it as measurable latency from sensing to delivery, and publish timestamps so anyone can verify the promise. If your product claims “today”, do not fill gaps with tomorrow’s pixels.
What “near real-time” actually means
In Earth observation, people use “real-time”, “near real-time (NRT)”, “same-day”, and “near-now” as if they are interchangeable. They are not. In practice, these terms describe two different kinds of promises: a latency promise (deliver within X time of sensing) and a calendar promise (deliver by a stated deadline in a stated timezone).
NRT should mean a latency promise. Concretely, it is a bound on sensing-to-delivery latency, measured from when the scene was observed to when the output is available to the customer. NASA Earthdata frames NRT as data available about 1 to 3 hours after observation, while also emphasizing that faster products can differ from standard products because they are produced with fewer inputs and checks (see NASA Earthdata, “Near real-time versus standard products”).
Same-day delivery is different. It is a calendar promise: deliver by end of day, in a defined timezone, for a defined area and cut time. It can be operationally useful, but it is not the same as NRT because it depends on orbit timing, downlink timing, and where you draw the line for “today”.
If you want these terms to survive procurement, operations, and audits, write them as verifiable statements tied to timestamps, and put them into an SLA (service-level agreement) that defines exactly what is being measured.
A practical glossary you can copy into a spec
The table below is deliberately written as contract language. It separates the marketing term from the measurable promise.
| Term | What you are promising | What to define in writing | Typical window |
|---|---|---|---|
| Real-time | Streaming or alert-like delivery | Sensing-to-delivery latency and delivery mechanism | Seconds to minutes |
| Near real-time (NRT) | Strict latency from sensing | Max sensing-to-delivery latency, and whether sensing is a timestamp or time range | Hours (commonly 1 to 3 hours in NRT framing) |
| Nominal (not NRT) | Routine availability window | Processing mode, update policy, and expected availability window | Hours to a day |
| Same-day delivery | A calendar deadline | Timezone, AOI (area of interest), cut time, and reissue policy | Same calendar day |
| Standard or science-quality | Heavier processing for consistency | Expected latency and update rules | Longer and variable |
Some programs make “timeliness” explicit instead of leaving it as a vibe. The Copernicus Sentinel-2 User Handbook defines timeliness categories, including a Nominal mode (on-line availability between 3 and 24 hours after sensing) and an NRT mode (on-line availability between 100 minutes and 3 hours after sensing).
A good litmus test is simple: If someone cannot point to the exact timestamp the latency is measured from, and the exact timestamp it is measured to, then “NRT” is just a label.
The timestamps that make the promise provable
A timeliness claim is only defensible if you can reconstruct what happened and when. You do not need exotic metadata. You need a small, consistent timestamp set that travels with the product.
- sensing_time (or start and end): When the scene was observed
- published_at: When upstream inputs became available to you
- processed_at plus processing_version: When you ran the pipeline, and which exact version did it
- delivered_at: When the output became available to the customer
If you only store delivered_at, you cannot tell whether a delay came from the satellite schedule, upstream publishing, your processing, or your delivery layer. More importantly, you cannot prove what the product date actually represents.
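As a sketch, that timestamp set can travel as one small structured record per output. The class and field names below are illustrative, not a standard:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ProductTimestamps:
    """Minimal timestamp set that travels with every output."""
    sensing_start: datetime       # when the scene observation began
    sensing_end: datetime         # when the scene observation ended
    published_at: datetime        # when upstream inputs became available to you
    processed_at: datetime        # when your pipeline ran
    processing_version: str       # exact pipeline version that produced the output
    delivered_at: datetime        # when the output became available to the customer

# Illustrative values for a single scene, all in UTC
ts = ProductTimestamps(
    sensing_start=datetime(2025, 8, 13, 10, 0, tzinfo=timezone.utc),
    sensing_end=datetime(2025, 8, 13, 10, 5, tzinfo=timezone.utc),
    published_at=datetime(2025, 8, 13, 10, 45, tzinfo=timezone.utc),
    processed_at=datetime(2025, 8, 13, 11, 30, tzinfo=timezone.utc),
    processing_version="v2.3.1",
    delivered_at=datetime(2025, 8, 13, 12, 5, tzinfo=timezone.utc),
)
```

With all six fields present, any delay can be attributed to a specific segment instead of being argued about after the fact.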
Timeliness is a pipeline, not a single number
Even when people agree on a definition, they often measure the wrong segment. The practical model is a four-stage pipeline you can measure end to end.
- Acquisition (sensing time): The satellite observes the scene, which is the time your map claims to represent.
- Availability (publish time): The data becomes accessible in a ground system or cloud endpoint.
- Processing (compute time): Masking, fusion, resampling, indices, QA, packaging, and post-processing.
- Delivery (customer time): The result lands in your API, tiles, or storage with metadata attached.
“NRT” should be expressed as a bound on sensing-to-delivery. Operationally, you will also care about publish-to-delivery because that isolates your performance from upstream publication delays.
NASA’s LANCE (Land, Atmosphere Near real-time Capability for Earth observation) exists specifically to distribute NASA Earth observation data within about three hours of observation, which is exactly the kind of operational framing you want when someone claims “near real-time”.
“Near-now” is not a standard class, so treat it as a calendar promise
“Near-now” shows up in product names and conversations, but it is not a widely standardized class like “NRT”. If you use it, define it explicitly as same-day delivery, including a timezone and a cut time. Without those two fields, “near-now” is ambiguous by design.
A robust definition looks like this in plain language: “Same-day delivery means all outputs for the nominal date are delivered by 23:59 local time in the AOI timezone, with a stated cut time for late passes and a stated reissue policy.” That definition makes it clear you are not promising strict latency from sensing. You are promising a calendar deadline that depends on how you handle late data.
A little on why timeliness still matters
For many workflows, speed is only valuable if the date is defensible. Flood extent today is not the same product as flood extent yesterday, and “a clean map” is not automatically “a correct map”. Compliance, audit, and payout decisions often hinge on what the world looked like on a specific date, not on when a map was delivered.
This is where most avoidable disputes happen: Someone asked for “NRT”, someone delivered “same-day”, and nobody wrote down the timestamps that would have made the difference obvious.
The failure mode to watch: temporal leakage
The most common way timeliness definitions get silently broken is temporal leakage: filling today’s cloudy gaps with tomorrow’s acquisitions. The output looks better, coverage numbers improve, and the product becomes less truthful. If your contract or workflow depends on a date, leakage is not a minor detail. It changes what the map means.
You can prevent this with one simple rule in your spec: For any product that claims a nominal date, no sensing times after the nominal date end are allowed. If a provider cannot expose sensing timestamps, you cannot verify this rule, which means you are buying a black box.
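The rule is mechanical once sensing timestamps are exposed. A sketch, assuming the nominal date end is taken in UTC (substitute the AOI timezone if your spec says so):

```python
from datetime import date, datetime, timedelta, timezone

def nominal_date_end(nominal: date) -> datetime:
    """Exclusive end of the nominal date: midnight at the start of the next day, UTC."""
    nxt = nominal + timedelta(days=1)
    return datetime(nxt.year, nxt.month, nxt.day, tzinfo=timezone.utc)

def leaked_sensing_times(sensing_times: list, nominal: date) -> list:
    """Sensing times after the nominal date end; must be empty for a date-faithful product."""
    end = nominal_date_end(nominal)
    return [t for t in sensing_times if t >= end]

scenes = [
    datetime(2025, 8, 13, 9, 0, tzinfo=timezone.utc),   # on the nominal date: fine
    datetime(2025, 8, 14, 8, 30, tzinfo=timezone.utc),  # tomorrow's pixels: leakage
]
print(leaked_sensing_times(scenes, date(2025, 8, 13)))  # one offending timestamp
```

If a provider cannot feed this check, that absence is itself the finding.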
SLA wording that survives reality
Most SLA failures are not engineering failures. They are definition failures. A defensible SLA ties the marketing term to a measured latency, explicit cut times, and explicit provenance fields.
Here is an example pattern that avoids ambiguity while staying implementable:
Deliver an early cut within X hours of sensing for inputs published by Y. Deliver a same-day final cut by 23:59 local time in the AOI timezone. Outputs for a nominal date must not include sensing times after nominal date end. Every output must include sensing time bounds and processing version.
If you want to go one step further, require a declared update policy: Whether the same nominal date can be reissued, how superseding works, and whether clients are expected to pin versions for audits.
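One way to keep such wording implementable is to carry the SLA terms as data next to the outputs, so compliance can be checked mechanically rather than quoted. A hypothetical sketch; every field name and value here is illustrative, not a standard:

```python
# Hypothetical SLA terms, written as data so they can be checked, not just quoted
SLA = {
    "early_cut_max_latency_h": 3,         # "within X hours of sensing"
    "final_cut_local_deadline": "23:59",  # same-day deadline in the AOI timezone
    "aoi_timezone": "UTC",                # must be stated explicitly
    "forbid_future_sensing": True,        # no sensing after the nominal date end
    "required_fields": ["sensing_start", "sensing_end", "processing_version"],
    "reissue_allowed": True,              # declared update policy for the same nominal date
}

def missing_provenance(output_metadata: dict) -> list:
    """Provenance fields the SLA requires but the output does not carry."""
    return [f for f in SLA["required_fields"] if f not in output_metadata]

gaps = missing_provenance({"sensing_start": "2025-08-13T10:00Z",
                           "sensing_end": "2025-08-13T10:05Z"})
print(gaps)  # ['processing_version']
```

A delivery that fails this check is rejected before anyone argues about latency numbers.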
Validation and metrics
If you have the timestamps, you can keep definitions honest with a few blunt measures. Track median and P90 latency from sensing-to-delivery (and publish-to-delivery), report the share of pixels whose sensing time falls within the allowed window for the nominal date, and explicitly check that the share of pixels from the future is 0% for date-faithful products.
If you want one concrete example of “NRT” treated as a requirement rather than a label, Sentinel-5 publishes explicit NRT timeliness requirements per processing level, expressed as limits from sensing (for example, Level-1B under 135 minutes and Level-2 under 165 minutes; see the Copernicus Sentinel-5 data products documentation).
ClearSKY in practice
When we describe a workflow as NRT or same-day, we treat it as a promise expressed in timestamps. If an output claims a nominal date, we do not use future acquisitions to make the map look cleaner. Instead, we ship an early cut as soon as inputs are published, then reissue within the stated same-day deadline if late passes or additional sensors materially improve completeness. The contract is defined in sensing time bounds, publish time bounds, and a processing version, not in a slogan.
FAQ
What is the difference between near real-time (NRT) and same-day delivery?
Near real-time (NRT) is a latency promise measured from sensing time to delivery time. Same-day delivery is a calendar promise measured against an end-of-day deadline in a defined timezone. Same-day can be useful operationally, but it is not equivalent to a strict sensing-to-delivery bound.
Which timestamps should I require to make a timeliness claim auditable?
At minimum, require sensing time (start and end if relevant), an upstream published time, a processed time with a processing version, and a delivered time. Those fields let you separate upstream delays from your own pipeline performance. They also let you prove what the nominal date actually represents.
What does “near-now” mean in satellite products?
“Near-now” is not a widely standardized timeliness class in Earth observation. If you use it, treat it as a calendar promise and define it as same-day delivery with a timezone and a cut time. Without those details, two teams can use the same word while promising different things.
What is temporal leakage, and why is it a problem?
Temporal leakage is when a product that claims “today” quietly fills gaps with sensing from tomorrow. It usually improves visual completeness, but it changes what the map means and can break compliance, audit, or payout logic. If dates matter, you need a rule that forbids sensing times after the nominal date end.
How should NRT latency be measured and reported in an SLA?
Define latency as sensing-to-delivery and state exactly which timestamps are used for the start and end of measurement. Report distribution metrics like median and P90 so the SLA reflects real operations, not a best case. If you also report publish-to-delivery, you can show how much delay is under your control versus upstream publication.