Refer to the Google Docs for regstuff / the Wayback Machine SPN2 API Docs.
You can also refer to this one, which does the same thing you're doing, though you'll need a translator to read it: https://qiita.com/yuki_2020/items/73307ddb2d286d79a5a9
@CY_Fung
Well, I'm pretty sure it would capture way too much trash I don't care about, and it would make the program take much longer to complete, so I don't think that would work.
I have read their old API documentation, searched online, and analyzed a couple of other scripts that do the same thing, but they all make only a single fetch request, not multiple ones like I'm trying to do. Is there a way to do this, or is there an API limitation I don't know of? Does the "concurrent captures limit (limit=3)" mean I can only save 3 pages per minute?
Below is what I found out about their API.
- Anonymous users have a lower concurrent captures limit (limit=3) than authenticated users (limit=5).
- The limit of daily captures for anonymous users is 5k.
- The size of screenshots is limited to 4 MB. Bigger screenshots are not allowed due to system overload.
- If a target site returns HTTP status=529 (bandwidth exceeded), crawling of that site is paused for an hour.
- If a target site returns HTTP status=429 (too many requests), crawling of that site is paused for a minute. All requests for the same host in that period get a relevant error message.
- Previously, these captures were started later, with a delay of 20-30 sec.
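Note that "concurrent captures limit (limit=3)" is about how many captures may be in flight at once, not a per-minute quota, so you can issue multiple requests as long as no more than 3 run simultaneously. A minimal sketch of that idea, assuming the anonymous `GET https://web.archive.org/save/<url>` endpoint (the `capture`/`capture_all` names and the use of `ThreadPoolExecutor` are my own illustration, not part of the official API docs):

```python
# Sketch: submit several SPN2 captures while keeping at most 3 in flight,
# matching the anonymous concurrent-captures limit quoted above.
from concurrent.futures import ThreadPoolExecutor
import urllib.request

SPN2_ANON_LIMIT = 3  # anonymous concurrent captures limit per the docs above

def capture(url: str) -> int:
    # Anonymous capture: a plain GET to /save/<url> asks SPN2 to crawl the page.
    # Returns the HTTP status; 429/529 mean the target host is paused (see above).
    req = urllib.request.Request(
        f"https://web.archive.org/save/{url}",
        headers={"User-Agent": "spn2-example"},  # hypothetical UA string
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

def capture_all(urls, worker=capture, limit=SPN2_ANON_LIMIT):
    # max_workers caps how many requests run concurrently; results keep
    # the input order. The worker is injectable so the logic is testable
    # without hitting the network.
    with ThreadPoolExecutor(max_workers=limit) as pool:
        return list(pool.map(worker, urls))
```

With an authenticated account you could raise `limit` to 5, but going above the documented limit would presumably just get you the relevant error message.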