When we launched the API back in July, we had some ideas about how to gauge success from a metrics perspective. Some of those success measures were around adoption by member stations, others were based on the total number of registrants, and still others on the number of requests. That said, with one of the first comprehensive content APIs, it was hard to determine what the actual numbers meant. In our first few weeks, we had over 300 registrants. Was that good? We think so, but it is hard to know. We know that many of those registrants were member stations, many were developers from the public, and some percentage were people who registered simply to take a look at what they had just read about in an article somewhere. After one month, we exceeded 1,000,000 requests to the API itself. We were pretty confident that number was a good one, but again, we had no real basis for comparison.
Despite the challenges in figuring out what our numbers mean, we do believe that our usage and registration numbers (published most recently two weeks ago in my last post) are a strong indication of success for the API.
Another challenge is how to actually gather our metrics. While our goal is to encourage the re-use of our content, we obviously want some way to measure success, so we have baked several mechanisms into the system that let us see how the API is being used. Keep in mind that there is no foolproof way to know how many eyes are seeing the content, only how people are implementing it and, in some cases, on which websites, blogs or applications the content from the API appears. The primary methods are as follows:
* All requests to the API require an access key. This lets us identify usage trends at the level of individual keys as well as in aggregate.
* For each request to the system, we will output a log entry on our servers that includes the request, the API key used in the request, and the stories and assets that were returned. Over time, this will let us see trends of use, the most popular requests, the most commonly distributed stories, and so on.
* For any rich-content request to the API (i.e., text elements that contain HTML), we have included a 1x1 pixel image that is served from NPR servers (an industry-standard approach for capturing metrics online) and that passes information back to our logs. This helps us identify some of the places where NPR content appears after it has been cached by a website, blog or application.
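To make the key-level logging concrete, here is a minimal sketch of the kind of aggregation those server logs enable. The tab-separated log format, the field layout, and the sample keys and story IDs are all invented for illustration; the real log layout is internal to NPR:

```python
from collections import Counter

# Hypothetical log format: one tab-separated line per API request,
# holding the API key, the request path, and the story IDs returned.
log_lines = [
    "key123\t/query?id=1001\t5001,5002",
    "key123\t/query?id=1001\t5001",
    "key456\t/query?id=2034\t5002,5003",
]

requests_per_key = Counter()   # usage trends at the key level
story_counts = Counter()       # most commonly distributed stories

for line in log_lines:
    api_key, request, stories = line.split("\t")
    requests_per_key[api_key] += 1
    story_counts.update(stories.split(","))

print(requests_per_key.most_common(1))  # busiest key: [('key123', 2)]
print(story_counts.most_common(1))      # top story:   [('5001', 2)]
```

The same pass over the logs can just as easily rank the most popular request paths, which is the "trends of use" idea in the list above.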
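The 1x1 pixel technique works by appending a tiny image tag to HTML content before it leaves the API, so that whenever a cached copy renders in a browser, the image request comes back to the origin servers and lands in the logs. A rough sketch of how such a tag might be attached; the hostname, image path, and query parameters here are hypothetical, not NPR's actual beacon:

```python
from urllib.parse import urlencode

def add_tracking_pixel(html, story_id, api_key):
    """Append a 1x1 image tag so rendering the cached HTML pings a
    metrics server. The host and parameter names are made up for
    illustration only."""
    params = urlencode({"storyId": story_id, "apiKey": api_key})
    pixel = ('<img src="https://metrics.example.org/b.gif?%s" '
             'width="1" height="1" alt=""/>' % params)
    return html + pixel

snippet = add_tracking_pixel("<p>Story text.</p>", 5001, "key123")
print(snippet)
```

Because the pixel URL carries the story and key identifiers, each hit on the image tells the logs which content was viewed and which registrant's implementation served it, even when the HTML itself was cached elsewhere.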
Like I said, this is not the complete picture, but these approaches yield metrics that give us a good indication of how the API is being used and by whom. That said, these numbers only carry weight if they translate into real-world consumption of the content. In my next post I will highlight some of the more interesting implementations and usages that we have heard about in the marketplace.