Introduction
EDGX, a Belgian space technology startup, has closed a €2.3 million seed round to speed up the commercialization of Sterna, its edge AI computer for satellites. Sterna is a high-performance data processing unit powered by NVIDIA technology. It is designed to run complex algorithms in orbit so satellites can analyze raw sensor data on the fly, select only the most valuable insights, and transmit much smaller files to the ground. That approach targets a long-standing bottleneck in space communications where bandwidth is limited, downlinks are expensive, and legacy store-and-forward architectures can no longer keep pace with the data coming off modern sensors. With Sterna, satellite operators can push AI inference to space, deliver answers in minutes rather than hours, and reduce their total cost of delivering data-driven services.
What Changed
For years, many satellites followed a simple process. Capture data, store it, and wait to pass over a ground station before downlinking the full dataset for processing on Earth. That pattern worked when sensors produced modest volumes and customers were content to wait. It breaks when you add high-resolution imagers, multi-spectral payloads, synthetic aperture radar, and continuous monitoring campaigns. Files balloon. Contact windows slip. Latency climbs. Valuable insights can age out before anyone sees them. The shift now is about moving computation to where the data originates. Instead of shipping every pixel or radar return to the ground, Sterna runs trained models in orbit to detect, classify, fuse, and rank information. Only the distilled results come down. This is the same pattern that transformed computing on Earth when phones started running on-device AI and factories adopted local inference at the network edge. The seed funding gives EDGX the runway to turn that pattern into a dependable product for space operators.
Why It Matters
Satellites sit on a gold mine of information. Weather events evolve hour by hour. Ships move while you wait for the next pass. Wildfires can double in size before a traditional pipeline finishes processing the prior image. In that environment, the organization that delivers the right answer first often wins the customer. Edge AI in orbit cuts the dead time between sensing and decision. If a model can flag a change in infrastructure, detect smoke plumes as they form, or identify a drifting vessel that does not match its transponder identity, then the priority packet to downlink is small and actionable. That saves bandwidth and power. It also increases revisit value, because a customer does not need ten redundant captures to get the one frame that matters. For operators, the economics can improve across the board. Less data sent to Earth means lower downlink fees, fewer ground station contacts, shorter storage bills, and smaller cloud processing footprints. More timely alerts can support higher price points for premium services. Over time, this can reshape satellite business models from raw imagery sellers to real-time intelligence providers.
Meet Sterna: An Edge AI Computer For Space
Sterna is described as a data processing unit, or DPU, built around NVIDIA technology with a focus on AI acceleration. In practical terms, that means it is designed to handle parallel math efficiently, which is the backbone of neural network inference. It sits alongside the payload and the flight computer, ingesting sensor streams and running models that have been trained and validated on the ground. The output may be a small alert, a segmentation mask, a cropped image, or a ranked list of detections. Sterna is built to tolerate the realities of spaceflight. Power budgets are tight. Thermal conditions fluctuate. Radiation can flip bits and stress components. Although every mission profile is unique, the promise behind Sterna is to package enough compute into a compact, rugged module that fits within the constraints of small satellites while still delivering the performance needed for real workloads.
What Sterna Is Designed To Do
1) Run AI inference on orbit. Convolutional and transformer-based vision models, traditional machine learning pipelines, and signal processing chains can all be deployed to classify, detect, and regress features in real time. 2) Shrink downlink volumes. Instead of full frames or raw radar echoes, Sterna can produce lightweight metadata, thumbnails, or cutouts that still fully answer the customer’s question. 3) Operate within satellite constraints. The system is intended to manage power draw intelligently, maintain throughput in harsh conditions, and integrate with standard space bus interfaces so it fits into existing designs without a start-from-scratch retrofit.
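To make the first two points concrete, here is a minimal Python sketch of the edge pattern: keep only confident detections and emit a compact payload for the downlink queue. The field names, threshold, and payload shape are illustrative assumptions, not Sterna's actual interface.

```python
import json

def summarize_detections(frame_id, detections, score_threshold=0.5):
    """Keep only confident detections and emit a compact downlink payload.

    `detections` is a list of dicts with hypothetical keys: 'label',
    'score', and 'bbox' (pixel coordinates).
    """
    kept = [d for d in detections if d["score"] >= score_threshold]
    # Rank by confidence so the most valuable hits downlink first.
    kept.sort(key=lambda d: d["score"], reverse=True)
    return json.dumps({"frame": frame_id, "detections": kept})

# A full frame might be hundreds of megabytes; this payload is a few
# hundred bytes, yet it still answers "is there a ship, and where?"
payload = summarize_detections(
    "orbit-1234",
    [
        {"label": "ship", "score": 0.91, "bbox": [120, 48, 160, 80]},
        {"label": "ship", "score": 0.32, "bbox": [300, 200, 330, 225]},
    ],
)
```

The low-confidence detection is dropped on board, so it never consumes downlink budget at all.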
The Problem With Store-And-Forward
Legacy architectures collect data and wait to downlink it. Two issues dominate. First, you are limited by physics. Ground station passes are short and depend on orbital geometry. Weather and scheduling can narrow those windows further. Second, you pay in time and cost. You must transmit large files repeatedly and then process them on Earth, often in cloud environments that charge for ingress, storage, and compute. The result is a queue that grows faster than it drains. Edge AI cuts across both issues. It reduces the size of what must be sent and moves the heavy lifting to the source, which lowers incremental cost and latency. The upshot is simple. When operators move from a raw data pipeline to a question-and-answer pipeline, the customer stops waiting on physics and starts receiving results.
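A quick back-of-the-envelope comparison shows why the question-and-answer pipeline wins. The sketch below uses made-up but plausible figures: a raw pipeline downlinks every frame in full, while an edge pipeline sends small summaries for everything plus full frames only for the rare hits.

```python
def downlink_volume_mb(frames, frame_mb, hit_rate, summary_kb):
    """Compare a raw store-and-forward pipeline against an edge pipeline.

    Raw: every frame is downlinked in full.
    Edge: every frame yields a small summary, and only the fraction
    flagged as hits comes down in full. All figures are illustrative.
    """
    raw = frames * frame_mb
    edge = frames * summary_kb / 1024 + frames * hit_rate * frame_mb
    return raw, edge

# 1,000 frames of 250 MB each, 2% hit rate, 4 kB summaries.
raw, edge = downlink_volume_mb(frames=1000, frame_mb=250,
                               hit_rate=0.02, summary_kb=4)
# raw is 250,000 MB; edge is roughly 5,004 MB, about a 50x reduction.
```

The exact ratio depends entirely on the hit rate, which is the point: the rarer the event of interest, the more an edge pipeline saves.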
Use Cases That Benefit First
Earth observation. Satellites equipped with high-resolution imagers can run object detection to flag construction, vehicle counts, or changes in land use. Instead of sending the entire frame, Sterna can downlink a change map and a small set of cutouts with coordinates and confidence scores. Maritime domain awareness. By fusing imagery with AIS reports, models can identify non-cooperative vessels or spot anomalies like ships rendezvousing in restricted areas. Alerts arrive in near real time during the next pass, which is a better fit for time-sensitive interdiction. Wildfire and disaster response. Thermal and optical sensors can detect hotspots or flood boundaries, so first responders receive georeferenced perimeters while they still matter. Agriculture and forestry. Edge inference can score crop vigor, moisture stress, or pest risk at the field level, which avoids full frame transfers while giving growers the insights they need for treatment and irrigation. Infrastructure monitoring. Pipelines, power lines, and solar farms can be monitored for encroachment, damage, or panel soiling. In many cases, the actionable output is a small list of maintenance tickets with locations and thumbnails. Tasking, tipping, and cueing loops run faster when space assets return detections rather than raw feeds. Sterna can help close those loops by screening for patterns and handing off high-value hits to ground analysts. Space situational awareness. Payloads that watch the sky can use on-orbit inference to detect and track objects, then pass summarized tracks to the ground, improving catalog quality without saturating downlinks.
How The Software Lifecycle Works
Training. Teams train and validate models on the ground using a representative dataset. Quantization and optimization. Before deployment, models are trimmed and optimized for inference on the target hardware. That can include pruning, mixed precision, and compilation for the NVIDIA toolchain to meet performance and power targets. Deployment. Backups and rollbacks are part of the plan so the flight computer can revert if anything behaves outside limits. Inference and logging. During operations, Sterna executes the inference pipeline, writes small outputs to the downlink queue, and keeps a minimal log that supports model health monitoring without bloating storage. Continuous improvement. When the ground team reviews results, they label edge cases, retrain, and push updates. Over time, the model gets better at the specific scenes and seasons of each orbit.
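The deployment step above leans on backups and rollbacks. One common way to structure that is a two-slot scheme: stage the new model alongside the stable one and promote it only after health checks pass. The sketch below is a simplified illustration, not EDGX's actual update mechanism; real flight software would add signing, persistent storage, and watchdog supervision.

```python
class ModelSlot:
    """Minimal two-slot deploy-with-rollback sketch (illustrative)."""

    def __init__(self, stable_version):
        self.stable = stable_version
        self.candidate = None

    def deploy(self, version):
        # Stage the new model without touching the stable slot.
        self.candidate = version

    def commit_or_rollback(self, healthy):
        # Promote only if on-orbit health checks passed; otherwise the
        # stable version keeps running and the candidate is discarded.
        if healthy and self.candidate is not None:
            self.stable = self.candidate
        self.candidate = None
        return self.stable

slot = ModelSlot("v1.0")
slot.deploy("v1.1")
active = slot.commit_or_rollback(healthy=False)  # reverts to "v1.0"
```

Because the stable slot is never overwritten during staging, a misbehaving update can fail without interrupting operations.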
Integration Basics For Mission Designers
Interfaces. Satellite buses differ, but most missions standardize on a small set of electrical and data interfaces. Sterna is intended to connect to common payload and system buses so it can ingest sensor data and talk to the flight computer without custom rewiring. Thermal and power. Sustained inference generates heat, and power budgeting should account for model duty cycles, orbital day and night, and simultaneous loads from communications and attitude control. Fault tolerance. Space upsets happen. Memory protection, watchdog timers, and graceful degradation strategies help the DPU survive bit flips and resume work transparently. Security. Signed and encrypted updates, strong authentication, and tight control over who can access models and outputs protect the processing chain end to end.
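As one example of the fault-tolerance techniques mentioned above, triple modular redundancy keeps three copies of a value and takes a bitwise majority vote, so a single radiation-induced bit flip cannot corrupt the result. Whether Sterna uses this particular mitigation internally is not stated; the sketch is purely illustrative of the pattern.

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    """Triple modular redundancy: bitwise majority across three copies.

    A bit is set in the result iff at least two of the three copies
    agree, so one corrupted copy is always outvoted.
    """
    return (a & b) | (a & c) | (b & c)

# One copy suffered a multi-bit upset; the vote recovers the original.
recovered = tmr_vote(0b1010, 0b1010, 0b0010)  # 0b1010
```

Hardware implementations apply the same idea at the register or memory-word level, often combined with periodic scrubbing to repair the corrupted copy.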
Performance Considerations Without The Hype
Throughput versus latency. Many missions care more about getting a small, correct answer quickly than about processing every frame. Operators should define service-level goals early and tune pipelines for those goals. Accuracy versus size. A compact model that runs all day on a tight power budget can be better than a massive network that rarely fits into the duty cycle. Field results matter more than benchmark numbers. Clouds, smoke, glint, and sensor quirks can confuse models. Training data should include the messy reality of orbit, not just clean lab scenes. Observability. Teams need lightweight metrics to see how models behave in the wild. That means logging confidence distributions, false positive and false negative patterns, and model drift over time.
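Observability on orbit can be as light as a coarse histogram of confidence scores, small enough to ride down in routine telemetry yet informative enough to reveal drift when its shape changes over time. A hypothetical sketch:

```python
from collections import Counter

def confidence_histogram(scores, bins=5):
    """Bucket confidence scores in [0, 1] into a coarse histogram.

    A fixed-size list of counts is a cheap, drift-sensitive health
    metric: comparing this week's shape to last week's can flag model
    degradation without downlinking any raw data.
    """
    counts = Counter(min(int(s * bins), bins - 1) for s in scores)
    return [counts.get(i, 0) for i in range(bins)]

hist = confidence_histogram([0.05, 0.12, 0.55, 0.58, 0.97, 1.0])
# Two low-confidence, two mid-confidence, two high-confidence scores.
```

A healthy detector typically shows a bimodal shape (confident yes, confident no); a drift toward the middle buckets is a signal to review the training set.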
What The Funding Enables
The seed round gives EDGX the resources to grow manufacturing, finish flight qualification for more mission profiles, expand software tooling, and support early customers as they move from pilot to production services. Space hardware companies often face a long road between a working prototype and a dependable, repeatable product. Certification, environmental testing, supply chain stability, and documentation all take time and money. The round positions EDGX to make Sterna easier to buy, install, and support at scale.
Who Sterna Is For
New constellation builders. Teams with a clean-sheet design can plan their bus, payload, and downlink strategy around on-orbit inference from day one. That opens the door to smaller radios, fewer passes, or a different mix of sensors. Operators upgrading an existing fleet. Legacy satellites can gain new capabilities by adding a compute module on the next build or during a mid-life refresh. Companies that buy downlinked data today can partner with spacecraft operators to push their models into orbit and capture value earlier in the pipeline.
Practical Benefits In Plain Numbers
Every mission is different, but the benefits fall into a few obvious buckets. Less bandwidth. When a model shrinks a scene to a handful of detections, you can fit far more value into a single pass. Lower end-to-end latency. If a satellite can decide that only 2 percent of frames contain relevant change, the operator can prioritize those frames and get them to the customer faster. Lower cost to serve. Small outputs reduce downlink and storage bills, while in-orbit filtering reduces the cloud compute needed on Earth. More resilient operations. When passes are missed or weather interrupts, the backlog stays manageable because you are not trying to send everything.
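To put rough numbers on the bandwidth bucket, consider how many items fit into a single ground-station pass. All figures below are illustrative assumptions, not mission data.

```python
def items_per_pass(pass_seconds, link_mbps, item_kb):
    """How many items of a given size fit in one ground-station pass.

    Uses decimal units (1 Mbit = 1000 kbit) and integer math; every
    figure here is illustrative.
    """
    budget_kb = pass_seconds * link_mbps * 1000 // 8  # link budget in kB
    return budget_kb // item_kb

# An 8-minute pass at 100 Mbit/s moves about 6,000,000 kB (6 GB).
full_frames = items_per_pass(480, 100, 250_000)  # 250 MB raw frames
summaries = items_per_pass(480, 100, 4)          # 4 kB detection summaries
```

The same pass carries either a couple dozen raw frames or on the order of a million summaries, which is why in-orbit filtering changes what a single contact window is worth.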
A Realistic Look At Risks And Limits
No model is perfect. False positives waste downlink budget. False negatives miss events. In low-signal situations, the most accurate pipeline may still require human review. Operators should plan for a feedback loop where analysts can request the original frame when something looks important or questionable. Space environments are hard on hardware and software alike, so graceful degradation and conservative fallbacks belong in the plan. Regulation adds another limit: edge detection that identifies people or sensitive facilities must follow the laws and norms of each customer and region. The key is to engage legal and compliance teams early and often so that models respect constraints by design.
How To Evaluate Sterna For Your Mission
Start with the mission goal. Write down the specific answers you plan to deliver to a customer, like ship counts, fire perimeters, or construction change rates. Define your latency target. Is the customer happy with same-day results, or do they need an alert within one pass? Inventory your data sources. List the sensors, resolutions, cadences, and ancillaries you will use, along with likely scene conditions in your orbit. Map the pipeline. Work backward from the answer to the minimal set of transforms and model steps needed. Everything that does not help deliver that answer is a candidate for removal or deferral. Size the power and thermal envelope. Verify that duty cycles, heat dissipation, and placement will support your inference schedule. Plan for observability. Decide up front which lightweight metrics, such as confidence distributions and error patterns, you will log to track model health once the system is in orbit.
Implementation Checklist
Architecture and safety plan approved by systems engineering. Data interfaces defined and verified with payload and flight computer teams. Model optimized and tested against a representative, messy dataset. Deployment and rollback playbooks rehearsed in a hardware-in-the-loop lab. On-orbit resource scheduling defined for inference windows versus comms and ADCS operations. Ground analyst workflows updated so that teams can request full frames when an alert merits deeper review.
What To Expect After Deployment
During the first few weeks, expect to tune thresholds and schedules. False positives may be high until you calibrate for the real dynamics of your orbit and scene mix. Over time, analysts will trust the outputs as the model settles and the feedback loop improves. You will also see new corner cases that were not present in your training set. That is normal. A steady cadence of small updates is healthier than rare, sweeping revisions. Customers will often ask for new outputs once they see what is possible. Be ready to capture those requests, add them to the roadmap, and evaluate them against power and duty cycle constraints.
Timelines And Milestones To Track
Lab integration. Hardware-in-the-loop tests where your payload streams recorded data into Sterna while the flight computer supervises. Environmental tests. Thermal vacuum, vibration, and radiation exposure where appropriate for your mission profile. Ramp. Gradual increase in model duty cycles and downlink of prioritized outputs. Production steady state. A predictable cadence of updates, health checks, and customer deliveries.
How Sterna Can Change The Business Model
Today, many operators sell data by the scene, by the square kilometer, or by the downlink. Customers then purchase analytics on top. When models run in orbit, operators can sell alerts, confidence-scored detections, or guaranteed delivery windows for high-priority events. That aligns revenue with outcomes rather than raw pixels. It also reduces the barrier for customers who cannot handle large datasets but need reliable signals. Over time, this shift can build stronger recurring revenue for operators and higher satisfaction for end users who care most about timely answers.
Frequently Asked Questions
How is an edge AI computer different from the main flight computer? A flight computer manages the spacecraft. It handles attitude control, power, thermal, and communications. An edge AI computer like Sterna focuses on payload data processing. The two coordinate closely, but their responsibilities are distinct so that inference tasks never jeopardize spacecraft safety. Does running AI in orbit use too much power? AI workloads do require power, but duty cycles and model optimization keep usage in check. Operators schedule inference in bursts, choose quantized models, and balance compute against comms, so average draw stays within the satellite’s budget. What happens if the model is wrong? Operators monitor outputs and maintain the ability to request full frames when an alert looks suspicious or important. They use those cases to improve the training set and push updates. How are software updates sent safely? Updates are signed, encrypted, tested on the ground, and deployed with rollback support. If anything behaves unexpectedly, the system reverts to the last stable version. Can Sterna work with different sensors? Yes, in principle. The whole point of a general-purpose DPU is to support a range of payloads. Preprocessing pipelines translate raw streams into the tensors that a model expects, and models are trained on the specific characteristics of each sensor. What about security and compliance? Operators follow the rules of each jurisdiction and customer, implement strong authentication and encryption for uploads and storage, and keep tight control of who can access models and outputs.
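The duty-cycle argument in the power answer above is simple arithmetic: average draw is the weighted mix of burst power and idle power. A sketch with purely illustrative figures:

```python
def average_draw_w(peak_w, duty_cycle, idle_w):
    """Average power of a bursty inference workload.

    All figures are illustrative, not Sterna specifications: peak draw
    during inference bursts, the fraction of time spent inferring, and
    idle draw the rest of the time.
    """
    return peak_w * duty_cycle + idle_w * (1 - duty_cycle)

# Inferring 20% of the time at 25 W, idling at 3 W, averages near 7.4 W,
# a very different number from the headline peak figure.
avg = average_draw_w(peak_w=25.0, duty_cycle=0.2, idle_w=3.0)
```

This is why duty cycling, rather than raw peak power, is the figure that has to fit the satellite's energy budget.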
Conclusion
Space operators are running into a physics problem. Sensors are getting better faster than downlink capacity is growing. Store-and-forward pipelines that once made sense now leave value on the table because customers wait too long and pay too much to get answers. EDGX is leaning into that gap with Sterna, a compact, NVIDIA-powered data processing unit that moves AI to the edge of space. With €2.3 million in new funding, the company can push Sterna through the remaining product work that turns a promising engineering concept into reliable flight hardware with strong software support. The value proposition is direct. Run the model where the data is born. Transmit only what matters. Deliver answers faster. Lower the cost to serve. Build a service that customers keep because it helps them act at the speed of their own reality. If you operate, build, or buy from satellites, it is time to plan for on-orbit inference. The organizations that adopt it first will set the standard for responsiveness and efficiency in space data services, and they will be better positioned to turn every pass into real decisions on the ground.