Evaluating geocoding API capacity and speed

How fast is your geocoder? Some APIs hit 27 geocodes per second, others achieve 100,000+. Evaluate before committing to save time, cut costs, and improve data.
Davin Perkins • May 25, 2022

The right geocoding API can help your business reduce costs and increase profits through accurate location data. But how do you know if a service can handle your data needs? Capacity and speed are two key topics to consider when comparing geocoding services.

You can find the full list of questions to ask in our ebook, 8 Questions to Ask When Selecting a Geocoding API, at the link below:

Download '8 Questions to Ask When Selecting A Geocoding API' Now.

API capacity and speed: What you need to know

The higher the capacity and the faster your geocoding API, the sooner you can put that address data to work and make money. Your accounting department is already smiling just thinking about it.

Let's get into how we measure geocoding speed and what's considered fast versus slow.

Geocoding API speed is measured in “queries per second,” or QPS: a tally of how many addresses can be geocoded each second. Published geocoding API speeds span a wide spectrum, from 27 QPS all the way up to 100,000+ QPS.

Most businesses find that producing dozens, hundreds, or even thousands of geocodes per second simply isn't fast enough.
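
If you want to sanity-check a vendor's published QPS figure, a rough benchmark is easy to script. The minimal Python sketch below times a batch of sequential requests and divides the count by the elapsed time; the endpoint URL, query parameters, and API key are placeholders, not any particular provider's real API. A serious test would also add concurrency, since a single sequential client rarely saturates a fast API.

```python
import time
import requests  # third-party HTTP client

# Hypothetical endpoint and key, for illustration only. Substitute your
# provider's actual request format and credentials.
GEOCODE_URL = "https://api.example-geocoder.com/geocode"
API_KEY = "YOUR_API_KEY"

def measure_qps(addresses):
    """Geocode a list of addresses sequentially and report the observed
    queries-per-second rate."""
    start = time.monotonic()
    for address in addresses:
        requests.get(
            GEOCODE_URL,
            params={"address": address, "key": API_KEY},
            timeout=10,
        )
    elapsed = time.monotonic() - start
    return len(addresses) / elapsed

sample = ["1600 Pennsylvania Ave NW, Washington, DC 20500"] * 100
print(f"Observed throughput: {measure_qps(sample):.1f} QPS")
```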

Speed and capacity are closely linked. If a service provider doesn't have enough server capacity, they can't handle a mass quantity of geocodes at one time. When too many geocodes overload the system, there's typically one of two reactions:

  1. The geocoding system bogs down and slows to a turtle-like speed.
  2. The geocoder behaves as a cranky hall monitor and forces all users into a line.

Both scenarios have the same outcome—SLOW speeds.
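
On the client side, the second reaction usually shows up as HTTP 429 (Too Many Requests) responses. The sketch below, again using a hypothetical endpoint, shows the retry-with-backoff loop your integration code ends up running when a provider throttles, and why every throttled attempt translates directly into lost throughput:

```python
import time
import requests

GEOCODE_URL = "https://api.example-geocoder.com/geocode"  # hypothetical endpoint

def geocode_with_backoff(address, api_key, max_retries=5):
    """Retry a single geocode request when the provider throttles us.
    Every retry is time a batch job spends waiting instead of geocoding."""
    delay = 1.0
    for attempt in range(max_retries):
        response = requests.get(
            GEOCODE_URL,
            params={"address": address, "key": api_key},
            timeout=10,
        )
        if response.status_code == 429:  # the provider says "get in line"
            time.sleep(delay)            # wait, then try again
            delay *= 2                   # back off exponentially
            continue
        response.raise_for_status()
        return response.json()
    raise RuntimeError(f"Gave up after {max_retries} throttled attempts")
```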

So what's the solution to your need for speed?

Look for a cloud-based geocoding provider that can spin up new servers almost instantly to accommodate near-infinite usage. That approach delivers very high speeds and nearly unlimited capacity for users.

Think you don't need Sonic the Hedgehog-level speeds for your address data? Consider this use case:

In the property and casualty insurance industry, profitability hinges on the accuracy of risk assessments—location being a key factor in those assessments. Because location attributes change frequently, many insurance companies update geocodes for their whole database of hundreds of millions of addresses every month. That's simply not practical without a fast geocoding solution.
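
To see why, run the arithmetic. Assuming an illustrative 300 million addresses as a stand-in for “hundreds of millions,” here's how the published QPS range translates into wall-clock time for one full refresh:

```python
ADDRESSES = 300_000_000  # illustrative database size, not a customer figure

for qps in (27, 1_000, 100_000):
    hours = ADDRESSES / qps / 3_600
    print(f"{qps:>7} QPS -> {hours:,.1f} hours per full refresh")

# 27 QPS      -> ~3,086 hours (about 129 days: a monthly update is impossible)
# 1,000 QPS   -> ~83 hours (about 3.5 days)
# 100,000 QPS -> under an hour
```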

Operationally efficient, high-performing businesses shouldn't be forced to wait. Choose a geocoding provider with high capacity and high speeds to get the most benefit from your address data.

Questions to ask about geocoding API capacity & speed

  • Will the speed and capacity fit my business needs?
  • What hardware or software limitations might affect the provider's speed?
  • What does the provider's SLA (Service Level Agreement) guarantee regarding downtime, latency, outages, response time, QPS, and server capacity?

Of course, while important, speed and capacity aren't the only factors to consider when choosing your geocoding service.

You also want to consider things like on-premises versus cloud deployment, and whether a provider is compatible with third-party basemaps. To find the answers to these questions, download the full ebook at the link below:

Download '8 Questions to Ask When Selecting A Geocoding API' Now.
