What is the current API v1 rate limit?


#1

After a couple of days of running API v1 fine at 59 RPM, I’m now getting 429 (Too Many Requests) errors even at 12 RPM. I’ve double-checked that I don’t have some hidden process running, and I’m logging all request times, retries included.
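For reference, my loop is roughly the following (a minimal Python sketch; the base URL, parameter names, and backoff handling are simplified placeholders, not my exact code):

    import logging
    import time

    import requests  # assumed HTTP client

    logging.basicConfig(level=logging.INFO)

    BASE_URL = "https://openstates.org/api/v1"  # placeholder
    MIN_INTERVAL = 60.0 / 59                    # stay at ~59 RPM

    def fetch(session, path, params):
        """GET with throttling; every request, retries included, gets logged."""
        url = f"{BASE_URL}/{path}"
        while True:
            start = time.time()
            resp = session.get(url, params=params)
            logging.info("GET %s -> %s (%.2fs)", url, resp.status_code,
                         time.time() - start)
            if resp.status_code == 429:
                # Back off and retry; honor Retry-After if the server sends one.
                wait = float(resp.headers.get("Retry-After", 60))
                logging.info("429 received; sleeping %.0fs", wait)
                time.sleep(wait)
                continue
            resp.raise_for_status()
            time.sleep(MIN_INTERVAL)
            return resp.json()

    session = requests.Session()
    # e.g. fetch(session, "bills/", {"apikey": "...", "state": "nh"})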

I should add, in case it matters: I’ve been running a big process, grabbing stuff from all states to do the quarterly “Which vote scrapers aren’t working” thing I do. But it’s never gone over 60 RPM since last Saturday.

I wonder if this actually reflects too much aggregate server load rather than my own usage? That’s not the usual meaning of a 429, but when I was truly running too hot on Saturday, I was getting a different error code (403, IIRC). If so, others should be getting 429s too.

[We need an API v1 category.]


#2

Your key is in a cool-off period, from what I can tell.

I’m traveling today, but I’ll look into temporarily raising your limit when I’m able to. Your key is making 120 RPM; I’d check whether you’re getting 301s (a followed redirect counts as a second request) and adjust your requests accordingly.
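For example, with Python’s requests, any redirects it followed are recorded on the response, so a quick check might look like this (a sketch; the URL and apikey parameter are placeholders):

    import requests

    resp = requests.get("https://openstates.org/api/v1/bills/",
                        params={"apikey": "..."})
    # Each followed redirect (e.g. a 301) was an extra request against your limit.
    for hop in resp.history:
        print(hop.status_code, hop.url)  # non-empty history = paying double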


#3

Thanks, James.

I found the issue: there’s an additional limit of 10,000 requests per day, which is a bit less than 7 RPM sustained, or a bit under 3 hours at 60 RPM.

quota exceeded: 10000/daily
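Sanity-checking that arithmetic (a quick Python check):

    DAILY_QUOTA = 10_000
    MINUTES_PER_DAY = 24 * 60

    print(DAILY_QUOTA / MINUTES_PER_DAY)  # ~6.94 sustained RPM ("a bit less than 7")
    print(DAILY_QUOTA / 60 / 60)          # ~2.78 hours to exhaustion at 60 RPM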

Is this documented somewhere? Is it new? It’s the first I’ve heard of it.

-Ed


#4

That limit has been in place since last year. What level of usage do you need? We can look at an increase.


#5

60 RPM sustained should be good enough, so 60 * 24 * 60 = 86,400 requests per day. I don’t need it often, and my usage averages out to a lot less; the less frequently I download, the bigger each job is. In this case, because of the NH outage, I haven’t done a full update since late March, so there are a lot of bills to catch up on. If that’s too much, I can reduce my rate to fit, though it will be tedious to run for days on end. The worst part was not knowing what the problem was.
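Put another way, as inter-request spacing (a quick Python sketch):

    # Seconds to sleep between requests so a day's traffic fits a daily cap.
    def spacing_for(daily_quota):
        return 24 * 60 * 60 / daily_quota

    print(spacing_for(86_400))  # 1.0 s/request: the 60 RPM I'm asking for
    print(spacing_for(10_000))  # 8.64 s/request: the current cap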

FWIW, I intend to try to help reduce the required workload for updating under v2, principally by looking at cases where queries need enhancing to support updatedAt, and/or at scrapers where updatedAt is being pointlessly updated. More on that elsewhere, later, when I have more experience with it.
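The sort of thing I have in mind, as a hypothetical sketch (field names and dates are illustrative, not the v2 schema):

    from datetime import datetime, timezone

    def needs_refresh(record, last_run):
        # Skip records whose updatedAt hasn't advanced since the last full pull.
        return datetime.fromisoformat(record["updatedAt"]) > last_run

    last_run = datetime(2019, 3, 25, tzinfo=timezone.utc)  # "late March", say
    bills = [  # stand-in for a fetched page of results
        {"id": "A", "updatedAt": "2019-06-01T00:00:00+00:00"},
        {"id": "B", "updatedAt": "2019-03-01T00:00:00+00:00"},
    ]
    todo = [b for b in bills if needs_refresh(b, last_run)]  # only "A" survives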