Is there any way to run hourly backups using the API? Has anyone ever done this?
Do you mean backup of your Pipedrive data?
If you do, I would ask why you want to back it up in the first place (we already do that for you).
But if for some reason you want to have the data duplicated and kept in sync somewhere else, you can do it.
You should just make sure you don’t exceed the rate limit.
And using webhooks you can receive the updated data without affecting the rate limit.
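To make the webhook idea concrete, here is a minimal receiver sketch using only the Python standard library. The payload field names (`meta.action`, `meta.object`, `current`) follow the Pipedrive v1 webhook format as I understand it; the port and the storage step are illustrative assumptions, so check the webhook docs before relying on this.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle_event(payload: dict) -> str:
    """Summarize one webhook delivery; a real sync would persist it."""
    meta = payload.get("meta", {})
    action = meta.get("action", "unknown")  # e.g. "updated", "added", "deleted"
    obj = meta.get("object", "unknown")     # e.g. "deal", "person"
    # In a real sync you would upsert payload.get("current") into your own store here.
    return f"{action} {obj}"

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        print(handle_event(payload))
        self.send_response(200)  # answer quickly so Pipedrive does not retry
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), WebhookHandler).serve_forever()
```

Because Pipedrive pushes the changed object to you, none of these deliveries count against your API request quota.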
Dani, can you give my developer Dhruv and me step-by-step instructions to go from our current daily backups to hourly backups? Also, would once per hour exceed the current rate limits?
I’m sorry, Adam. I’m not really sure what you mean. But let’s see if I can help.
What’s your end goal?
If you’re talking about the backups that we do automatically, you can’t change the time interval on those…
And if you mean keeping your data in sync: if you use webhooks, your copy is updated in real time without affecting the rate limit.
Dani, it’s very simple. All we want to do is to make an hourly backup of our Pipedrive instance, that’s it!
Hey Adam, there is no “back up all my Pipedrive data” API call; instead, you need to identify which items you wish to back up and then call the appropriate API endpoint for each type of data. As long as you throttle your calls (https://pipedrive.readme.io/docs/core-api-concepts-rate-limiting) you will not hit the rate limits.
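A throttled export over one such list endpoint could be sketched like this. The pagination fields (`additional_data.pagination.more_items_in_collection` / `next_start`) match the Pipedrive v1 list endpoints as I understand them; the delay value is an illustrative assumption, and the fetch function is injected so you can plug in whichever endpoint (deals, persons, etc.) you are backing up.

```python
import time

def export_collection(fetch_page, delay_seconds=0.5):
    """Collect all items from a paginated list endpoint.

    fetch_page(start) must return one decoded API response dict;
    delay_seconds spaces out requests to stay under the rate limit.
    """
    items, start = [], 0
    while True:
        page = fetch_page(start)
        items.extend(page.get("data") or [])
        pagination = page.get("additional_data", {}).get("pagination", {})
        if not pagination.get("more_items_in_collection"):
            return items
        start = pagination["next_start"]
        time.sleep(delay_seconds)  # simple fixed-interval throttle between pages
```

In practice `fetch_page` would issue a GET to something like `https://api.pipedrive.com/v1/deals?start=...&api_token=...`, and you would run one such loop per object type you want in the backup.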
We sometimes use the API to extract all available data; on a large account this can take many hours, so a full export via the API every hour may not be practical. The “Recents” API endpoint lets you determine what changed recently for some object types, so you can make the extract much more efficient. Feel free to PM me if you would like to hear how we do this.
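An hourly incremental pull along those lines might look like the sketch below. The `since_timestamp` format (`YYYY-MM-DD HH:MM:SS`, UTC) follows the Pipedrive v1 Recents docs as I understand them; the helper name and token placeholder are my own, so verify against the docs before building on this.

```python
from datetime import datetime, timedelta, timezone
from urllib.parse import urlencode

BASE = "https://api.pipedrive.com/v1/recents"

def recents_url(api_token: str, since: datetime, start: int = 0) -> str:
    """Build the Recents URL for everything changed since the given UTC time."""
    params = {
        "api_token": api_token,
        "since_timestamp": since.strftime("%Y-%m-%d %H:%M:%S"),
        "start": start,
    }
    return f"{BASE}?{urlencode(params)}"

# An hourly job would ask for everything changed in the last hour:
last_hour = datetime.now(timezone.utc) - timedelta(hours=1)
url = recents_url("YOUR_API_TOKEN", last_hour)
```

Fetching only the changed items each hour keeps the request count far below what a full re-export would need.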
Tim, thanks very much. I will have Dhruv stay in touch with you as we build this.
Dhruv, let’s build this now, and I’ll ask Sina which bucket to put it in.