The Architecture Behind Bluvolve’s Locate Earth API: Building a Real-Time, Flexible Geofencing System

I’ve spent years building software systems. Some worked beautifully, while others taught me hard lessons about complexity, scale, and maintainability. Over time, my focus shifted from just solving technical problems to building systems that stay robust and manageable over the long run. This journey led me to the architectural design behind Bluvolve’s Locate Earth API, a real-time geofencing and location-tracking platform.
In this article, I want to share exactly how I structured the Locate Earth API: how data flows through the system, the critical decisions I made, and the reasoning behind them. My goal is to go beneath the surface and explain why each component exists.
Designing for Real-Time Flexibility
When I first started thinking about this project, I knew it needed to handle diverse scenarios. Users might track vehicles, equipment, or deliveries, each with different update frequencies and priority levels. The system had to adapt quickly, process geofence events accurately, and reliably deliver notifications. It also needed to integrate easily into different industries without creating unnecessary complexity.
I realized early on that real-time location tracking required careful architectural choices. It was essential to ensure the system could scale smoothly without losing accuracy or performance. Here’s how I approached it.
Handling Location Data: Fast and Reliable
Location data is at the heart of the system. Every incoming location update must be processed quickly and stored efficiently. But not all location data has the same purpose. I decided to separate the data into two distinct storage models:
Latest Location Data
For the most recent location of an asset, speed was critical. This data powers live tracking and immediate geofence checks. I chose to store this in an SQL table called asset_location. This table always holds only the latest location per asset. When a new location comes in, it simply overwrites the previous one.
Why SQL? It’s straightforward and reliable for rapid reads and updates. Since this table only ever holds the latest information, it remains compact and performant, enabling fast lookups even with thousands of tracked assets.
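To make this concrete, here is a minimal sketch of the overwrite-on-update behavior in PostgreSQL. The asset_location table name comes from the design above; the column names, types, and sample values are illustrative assumptions, not the actual schema:

```sql
-- Hypothetical columns for the asset_location table described above.
CREATE TABLE IF NOT EXISTS asset_location (
    asset_id    BIGINT PRIMARY KEY,
    location    GEOMETRY(POINT, 4326) NOT NULL,
    recorded_at TIMESTAMPTZ NOT NULL
);

-- Each incoming update simply overwrites the previous row for that asset,
-- so the table never grows beyond one row per tracked asset.
INSERT INTO asset_location (asset_id, location, recorded_at)
VALUES (42, ST_SetSRID(ST_MakePoint(13.4050, 52.5200), 4326), now())
ON CONFLICT (asset_id) DO UPDATE
SET location    = EXCLUDED.location,
    recorded_at = EXCLUDED.recorded_at;
```

Because every asset maps to exactly one row, lookups stay index-only and the table size is bounded by the number of assets, not by update volume.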
Historical Location Data
Historical data is different. It grows continuously and can quickly become large. It’s vital for analytics, reporting, and investigating past events. To store historical location updates efficiently, I considered several options, including Azure Table Storage and a dedicated SQL history table.
A straightforward SQL history table works fine initially. As the data set grows, however, Azure Table Storage offers scalability and cost advantages. Either way, keeping historical data separate allows queries and analytics to run without slowing down real-time operations.
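As a sketch of the SQL-first option, an append-only, time-partitioned history table keeps writes cheap and makes old data easy to archive or migrate later. All names here are hypothetical:

```sql
-- Hypothetical append-only history table, partitioned by month so that
-- analytics queries and archival never touch the hot real-time path.
CREATE TABLE IF NOT EXISTS asset_location_history (
    asset_id    BIGINT NOT NULL,
    location    GEOMETRY(POINT, 4326) NOT NULL,
    recorded_at TIMESTAMPTZ NOT NULL
) PARTITION BY RANGE (recorded_at);

-- One partition per month; old partitions can be detached and migrated
-- (for example, to Azure Table Storage) without locking current writes.
CREATE TABLE IF NOT EXISTS asset_location_history_2025_01
    PARTITION OF asset_location_history
    FOR VALUES FROM ('2025-01-01') TO ('2025-02-01');

CREATE INDEX IF NOT EXISTS idx_history_asset_time
    ON asset_location_history (asset_id, recorded_at);
```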
Real-Time and Batch Processing: Best of Both Worlds
Not every asset update needs instant processing. Some assets, like emergency vehicles or high-value cargo, need immediate geofence checks. Others, such as routinely tracked equipment, can tolerate slight delays to optimize database load and reduce costs.
To achieve this balance, the Locate Earth API supports two modes:
Real-Time Processing
For critical assets, location updates are processed immediately. Each incoming update triggers a spatial query using PostGIS to detect geofence entries or exits. If the asset crosses a boundary, a geofence event is generated immediately and queued for webhook notification.
This immediate processing ensures the system responds instantly when it matters most. It also demands robust indexing and efficient spatial queries. PostgreSQL with PostGIS and GIST indexing perfectly met these needs.
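As an illustration, a GIST index on the geofence geometry is what keeps these containment checks fast. The geofence table and column names below are assumptions for the sake of the example:

```sql
-- Hypothetical geofence table; a GIST index on the boundary column lets
-- PostGIS prune candidates spatially before evaluating ST_Contains exactly.
CREATE TABLE IF NOT EXISTS geofence (
    geofence_id BIGSERIAL PRIMARY KEY,
    boundary    GEOMETRY(POLYGON, 4326) NOT NULL
);
CREATE INDEX IF NOT EXISTS idx_geofence_boundary
    ON geofence USING GIST (boundary);

-- Real-time check: which geofences contain the incoming position?
SELECT g.geofence_id
FROM geofence AS g
WHERE ST_Contains(g.boundary, ST_SetSRID(ST_MakePoint(13.4050, 52.5200), 4326));
```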
Batch Processing
For less critical scenarios, incoming location updates enter an event queue. A background worker periodically processes these events in batches, significantly reducing load on the database and improving scalability. Businesses can configure this interval, e.g. every 10 seconds, 30 seconds, or any other frequency that fits their needs.
Using a message broker like Azure Service Bus decouples batch processing from the main API flow. This design choice helps maintain smooth system performance even under high load.
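To show what the batching buys, here is a sketch of the flush a worker might run after draining a handful of messages from the broker: one multi-row upsert instead of one round trip per update. The asset IDs, coordinates, and the out-of-order guard are illustrative assumptions:

```sql
-- Hypothetical batch flush: after draining, say, 500 queued updates from the
-- broker, the worker applies them in a single multi-row upsert instead of
-- 500 individual statements.
INSERT INTO asset_location (asset_id, location, recorded_at)
VALUES
    (42, ST_SetSRID(ST_MakePoint(13.4050, 52.5200), 4326), now()),
    (43, ST_SetSRID(ST_MakePoint(2.3522, 48.8566), 4326), now())
ON CONFLICT (asset_id) DO UPDATE
SET location    = EXCLUDED.location,
    recorded_at = EXCLUDED.recorded_at
WHERE EXCLUDED.recorded_at > asset_location.recorded_at; -- drop out-of-order fixes
```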
Geofence Event Detection: Stateful and Efficient
Detecting geofence events accurately is challenging. The system must recognize precisely when an asset enters or exits a defined boundary. If done incorrectly, the user receives false alerts or misses critical notifications altogether.
I implemented a stateful event tracking mechanism. Here’s how it works:
- The system maintains a simple lookup table called geofence_state.
- Each row tracks the last known state (inside or outside) of an asset relative to each geofence.
- When a new location arrives, the system uses PostGIS’s ST_Contains() function to check if the asset is within any geofence.
- If the state changes from outside to inside, it triggers an ENTER event. If it changes from inside to outside, it triggers an EXIT event.
- If no state change occurs, the system quietly ignores the update, preventing unnecessary notifications.
To enhance stability, especially near boundaries, the API also supports optional debounce settings. For instance, users can have the system ignore rapid enter-exit fluctuations within a set time frame, such as five seconds.
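Putting the pieces together, here is a minimal sketch of the state table and the transition check, including the debounce window. The geofence_state name comes from the design above; every column name, and the geofence table it joins against, are assumptions:

```sql
-- Hypothetical shape of the geofence_state lookup table.
CREATE TABLE IF NOT EXISTS geofence_state (
    asset_id    BIGINT NOT NULL,
    geofence_id BIGINT NOT NULL,
    is_inside   BOOLEAN NOT NULL,
    changed_at  TIMESTAMPTZ NOT NULL DEFAULT now(),
    PRIMARY KEY (asset_id, geofence_id)
);

-- Recompute containment for one asset's new position and return a row only
-- when the state actually flips outside the 5-second debounce window.
-- (A brand-new asset/geofence pair is inserted and returned as its first event;
-- updates skipped by the WHERE clause produce no event, i.e. quiet ignores.)
INSERT INTO geofence_state (asset_id, geofence_id, is_inside, changed_at)
SELECT 42,
       g.geofence_id,
       ST_Contains(g.boundary, ST_SetSRID(ST_MakePoint(13.4050, 52.5200), 4326)),
       now()
FROM geofence AS g
ON CONFLICT (asset_id, geofence_id) DO UPDATE
SET is_inside  = EXCLUDED.is_inside,
    changed_at = EXCLUDED.changed_at
WHERE geofence_state.is_inside IS DISTINCT FROM EXCLUDED.is_inside
  AND EXCLUDED.changed_at - geofence_state.changed_at > interval '5 seconds'
RETURNING geofence_id,
          CASE WHEN is_inside THEN 'ENTER' ELSE 'EXIT' END AS event_type;
```

The returned rows are exactly the ENTER and EXIT events that need to be queued for webhook delivery; everything else never leaves the database.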
Reliable and Scalable Webhook Delivery
Webhooks play a vital role in alerting users when geofence events happen. Ensuring webhook notifications reliably reach their destination is crucial. To achieve this, I decided to handle webhook delivery asynchronously.
When the API detects a geofence event, instead of directly calling the webhook endpoint, it places the notification into a message broker (Azure Service Bus). This approach prevents API slowdowns or interruptions if a client webhook endpoint is temporarily unreachable.
A dedicated worker service consumes webhook events from the queue. It looks up the registered webhook subscriber, prepares the notification payload, and attempts delivery.
Webhook deliveries can fail for many reasons: network issues, client downtime, or incorrect URLs. The worker implements an exponential backoff strategy for retries, spacing attempts further apart each time. If retries reach a configured maximum, the event moves to a Dead Letter Queue for manual review.
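In this design the queue and dead-lettering live in Azure Service Bus; purely as an illustration of the same bookkeeping, here is how the retry schedule could be tracked in a hypothetical database table, with the wait doubling on every failure:

```sql
-- Hypothetical delivery-tracking table; the worker claims rows whose
-- next_attempt_at is due, calls the webhook, then records the outcome.
CREATE TABLE IF NOT EXISTS webhook_delivery (
    delivery_id     BIGSERIAL PRIMARY KEY,
    event_id        BIGINT NOT NULL,
    attempt_count   INT NOT NULL DEFAULT 0,
    next_attempt_at TIMESTAMPTZ NOT NULL DEFAULT now(),
    status          TEXT NOT NULL DEFAULT 'pending' -- pending | delivered | dead_lettered
);

-- After a failed attempt, double the wait each time (30s, 60s, 120s, ...)
-- and give up into the dead-letter state after, say, 8 attempts.
UPDATE webhook_delivery
SET attempt_count   = attempt_count + 1,
    next_attempt_at = now() + interval '30 seconds' * power(2, attempt_count),
    status          = CASE WHEN attempt_count + 1 >= 8
                           THEN 'dead_lettered'
                           ELSE 'pending' END
WHERE delivery_id = 1001;
```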
Successful webhook deliveries are logged in an audit table. Failures are clearly recorded, allowing manual intervention through an admin API if needed. This transparency helps maintain trust and allows swift issue resolution.
Configurable Architecture for Industry Flexibility
From the beginning, I wanted the Locate Earth API to remain flexible. Every architectural component can adapt to different industry scenarios. Tenants can configure:
- Real-time or batch processing for each asset
- Debounce times for geofence stability
- Webhook delivery preferences
- Event filtering options
This configurability ensures the system meets diverse business needs without additional complexity.
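As a sketch of how those options could be persisted per asset, here is one possible configuration table; the names, types, and defaults are illustrative assumptions, not the actual Locate Earth schema:

```sql
-- Hypothetical per-asset configuration mirroring the options listed above.
CREATE TABLE IF NOT EXISTS asset_config (
    tenant_id       BIGINT NOT NULL,
    asset_id        BIGINT NOT NULL,
    processing_mode TEXT NOT NULL DEFAULT 'batch',          -- 'realtime' | 'batch'
    batch_interval  INTERVAL NOT NULL DEFAULT '30 seconds', -- batch flush frequency
    debounce_window INTERVAL NOT NULL DEFAULT '5 seconds',  -- geofence stability
    webhook_url     TEXT,                                   -- delivery preference
    event_filter    TEXT[] NOT NULL DEFAULT '{ENTER,EXIT}', -- which events to send
    PRIMARY KEY (tenant_id, asset_id)
);
```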
A Simpler, Maintainable Approach
Throughout the design process, my guiding principle was simplicity. Complexity quickly becomes unmanageable. Each architectural decision focused on clearly defined, single-purpose components. Data handling, event processing, and notifications were carefully separated. This clarity significantly simplifies future maintenance and improvements.
Conclusion
Building the Bluvolve Locate Earth API was about more than technical solutions. It was about carefully choosing an architecture that remains simple, flexible, and robust under real-world conditions. By clearly separating concerns, thoughtfully handling real-time events, and ensuring reliability through asynchronous processing, the API became something genuinely useful and adaptable.
I hope sharing this architecture helps others facing similar challenges. If you plan to use asset tracking and geofencing in your business or product, let’s connect and discuss further.
Cheers!