Hi,
I am looking for some assistance to set up a daily sync of organisation and people data in Pipedrive, from data in SQL Server. I can provide the SQL queries to retrieve the data that will need to be added, updated and removed in Pipedrive. Is this anyone's forte?
Thanks,
Michael
Hello,
I would be glad to assist you.
To discuss further in detail kindly reach me at garry@cisinlabs.com or Skype me: cis.garry
Looking forward to your response.
Thanks
Garry
Hi
Greetings!
I have gone through your post and would be happy to assist you.
I need a bit more information to confirm my understanding of the requirements. My rates are very competitive, so cost won't be an issue for you.
Please add me on Skype so we can discuss further.
Warm Regards,
Trish
Skype - live:.cid.baff7c7dd9471b54
Hi Michael,
Sure, this is within my expertise: migrating data and writing cron jobs to fetch and populate data into your Pipedrive base.
Please get in touch and share a sample of the SQL output along with your Pipedrive fields so I can map the columns.
Looking forward to your reply. Email: deepvyas71@gmail.com
Regards
Deep
Hi Michael,
I would recommend you check out the commercial COZYROC SSIS+ library. It is an extension library for Microsoft SQL Server Integration Services (SSIS) that includes connectors for Pipedrive, alongside connectivity for more than 100 other applications. No license key is required while you test and develop.
Hi Michael,
All that you have described is certainly well within my area of expertise and I would be glad to help you out on all this.
You can reach me on andrewjohnson56782@gmail.com
Cheers and have a great day ahead,
Andrew
Michael, since the replies are mostly contact offers, figured I’d add something on the technical side.
With your SQL queries already producing the add/update/remove sets, the main work is the Pipedrive API layer. A couple of things that bite people on this kind of sync: rate limits will throttle you on larger batches (v2 uses token-based limiting, so watch for 429 responses and build in backoff), and custom field keys in Pipedrive are hashed strings, not the human-readable labels you see in the UI. Your script needs to resolve those first via the fields endpoint before you can map your SQL columns correctly.
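To make those two points concrete, here is a minimal sketch of both helpers. The field list shape mirrors what a fields endpoint returns (objects with a label and a hashed key), but the exact endpoint path and response envelope depend on your Pipedrive API version, so treat the input format here as an assumption to verify against the docs:

```python
def build_field_map(fields_response, wanted_labels):
    """Map human-readable field labels to Pipedrive's hashed field keys.

    `fields_response` is assumed to be a list of dicts like
    {"name": "<label>", "key": "<hashed key>"}, i.e. the items returned
    by the person/organization fields endpoint (shape is an assumption).
    """
    by_label = {f["name"]: f["key"] for f in fields_response}
    # Only keep labels that actually exist in Pipedrive, so a renamed
    # field fails loudly at mapping time rather than silently dropping data.
    return {label: by_label[label] for label in wanted_labels if label in by_label}


def backoff_delay(attempt, base=1.0, cap=60.0):
    """Exponential backoff delay (seconds) for retrying after a 429.

    attempt 0 -> 1s, 1 -> 2s, 2 -> 4s, ... capped at `cap` seconds.
    Sleep for this long before retrying the throttled request.
    """
    return min(cap, base * (2 ** attempt))
```

The idea is to fetch the field list once per run, build the map, and translate every SQL column name through it before constructing request payloads; the backoff helper then wraps each batch of writes.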
For deletes, the v2 API does support bulk delete (up to 1000 per request), but it processes them asynchronously in the background and marks records for permanent deletion after 30 days, so you’ll want to account for that lag in your sync logic.
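Given that 1000-per-request ceiling, the delete side reduces to batching your remove set before issuing the bulk calls. A trivial sketch (the limit value reflects the cap described above; confirm it against the current API docs):

```python
def chunk_ids(ids, size=1000):
    """Split a list of record IDs into batches no larger than `size`,
    matching Pipedrive's bulk-delete per-request limit."""
    return [ids[i:i + size] for i in range(0, len(ids), size)]
```

Because deletions are processed asynchronously and records linger (marked for permanent deletion) for 30 days, it's also worth having the sync treat "already marked for deletion" as success rather than retrying those IDs on the next run.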
If you'd rather skip building all that plumbing, this is basically what we do at Stacksync: database-to-Pipedrive sync with automatic field mapping and conflict handling. Otherwise, a scheduled Python script against the v2 endpoints with proper rate-limit handling is the standard DIY path.
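For anyone reading this whose SQL side returns full snapshots rather than precomputed add/update/remove sets, the diff at the heart of such a script is small. A sketch, assuming each side exposes rows as dicts sharing a stable identifier (the `external_id` key name here is hypothetical; use whatever field you store the SQL primary key in on the Pipedrive side):

```python
def diff_records(sql_rows, pipedrive_rows, key="external_id"):
    """Compute add/update/remove sets from a SQL snapshot vs. Pipedrive state.

    Both inputs are lists of dicts keyed by a shared stable identifier.
    In practice you'd compare only the mapped fields, not whole dicts.
    """
    sql_by_key = {r[key]: r for r in sql_rows}
    pd_by_key = {r[key]: r for r in pipedrive_rows}
    to_add = [r for k, r in sql_by_key.items() if k not in pd_by_key]
    to_update = [r for k, r in sql_by_key.items()
                 if k in pd_by_key and r != pd_by_key[k]]
    to_remove = [r for k, r in pd_by_key.items() if k not in sql_by_key]
    return to_add, to_update, to_remove
```

Run that on a schedule (cron, Task Scheduler, whatever fits the SQL Server host), feed the three sets through the create/update/delete endpoints with the backoff described earlier, and the daily sync is essentially done.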