iPaaS vs. Custom Code: A Technical Breakdown
A technical comparison of iPaaS and custom code for integrations, covering cost, scalability, error handling and implementation details.
TL;DR Matrix
A high-level comparison of the two approaches across key technical dimensions.
| Dimension | iPaaS (e.g., Zapier, Make) | Custom Code (e.g., AWS Lambda, Azure Functions) |
|---|---|---|
| Speed to Deploy | Hours to days | Days to weeks |
| Maintenance Burden | Low. Managed by vendor. Connector updates are automatic. | High. Requires code updates, dependency management, infra config. |
| Customisation | Low to Medium. Limited by connector actions and logic blocks. | High. Unconstrained. Any API, any logic, any data transformation. |
| Observability | Opaque. Relies on platform-provided logs and history. | High. Full control over logging, tracing and metrics (e.g., CloudWatch, Datadog). |
| Error Handling | Basic. Pre-built retries, often with fixed backoff. Limited custom logic. | Granular. Custom retry policies, dead-letter queues, circuit breakers. |
| Cost Model | Per-task/operation. Predictable at low volume, expensive at scale. | Per-ms compute + invocations. Cheap at low volume, efficient at scale. |
| Scalability | Managed, but often has concurrency/rate limits per plan. | High. Scales to thousands of concurrent executions. Limited by downstream services. |
Use Cases
Choosing the right tool depends on the problem's context, not just each approach's technical merits.
Choose iPaaS for:
- Marketing & Sales Automation: Connecting a CRM (HubSpot) to an email tool (Mailchimp) where speed is more important than custom logic.
- Internal Operations: Posting new Stripe customer details to a Slack channel. It's a low-stakes, high-value task that doesn't warrant engineering time.
- Prototyping: Quickly validating an integration's value before committing development resources to build a custom solution.
Choose Custom Code for:
- Core Product Features: A user-facing integration that requires low latency, high reliability and custom branding. The integration is the product.
- Complex Data Transformation: Ingesting, normalising and enriching data from multiple sources before loading it into a data warehouse. The logic is too complex for a visual builder.
- High-Volume Webhooks: Processing thousands of incoming webhooks per minute from a service like Shopify, where the cost of an iPaaS would be prohibitive.
Technical Analysis
Let's analyse a common scenario: a webhook from Stripe for a charge.succeeded event needs to update a Contact record in Salesforce.
iPaaS Approach (Zapier)
The entire process is configured through a web interface. You don't write code; you connect pre-built blocks.
- Trigger: You select the 'Stripe' app and choose the 'New Charge' trigger. You authenticate your Stripe account via OAuth.
- Action: You select the 'Salesforce' app and choose the 'Update Record' action. You authenticate your Salesforce account.
- Data Mapping: You map fields from the Stripe trigger to the Salesforce action. For example, you'd map Billing Details Email from Stripe to the Email field in Salesforce to find the correct contact. Then you'd map Amount to a custom field like Last_Payment_Amount__c.
Zapier handles the underlying HTTP requests, authentication token refreshes, and basic data type conversions. You never see a line of code or a JSON payload.
Custom Code Approach (AWS Lambda + API Gateway)
Here, you control every part of the process. You define an API Gateway endpoint to receive the webhook and trigger a Lambda function to process it.
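For context, with a Lambda proxy integration the function receives the raw HTTP request as a JSON event. The sketch below shows only the fields the handler actually reads; all values are placeholders:

# Illustrative shape of the API Gateway proxy event (placeholder values)
example_event = {
    'headers': {
        'Stripe-Signature': 't=1716300000,v1=5257a869e7ec...'
    },
    'body': '{"type": "charge.succeeded", "data": {"object": {"amount": 4999, "billing_details": {"email": "jane@example.com"}}}}'
}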
import os
import json
import hmac
import hashlib

import requests

# Pulled from environment variables so no secrets live in the code
SALESFORCE_TOKEN = os.environ['SALESFORCE_TOKEN']
SALESFORCE_INSTANCE_URL = os.environ['SALESFORCE_INSTANCE_URL']
STRIPE_WEBHOOK_SECRET = os.environ['STRIPE_WEBHOOK_SECRET']


def validate_stripe_signature(payload_body, signature_header):
    # The bit most guides skip: you must validate the webhook signature
    # to ensure it's genuinely from Stripe. iPaaS does this for you automatically.
    # The header looks like "t=<timestamp>,v1=<signature>".
    try:
        parts = dict(item.split('=', 1) for item in signature_header.split(','))
        timestamp, received_signature = parts['t'], parts['v1']
        signed_payload = f"{timestamp}.{payload_body}"
        expected_signature = hmac.new(
            STRIPE_WEBHOOK_SECRET.encode('utf-8'),
            signed_payload.encode('utf-8'),
            hashlib.sha256
        ).hexdigest()
        return hmac.compare_digest(expected_signature, received_signature)
    except (ValueError, KeyError):
        return False


def handler(event, context):
    # 1. Validate the incoming request's signature
    signature = event['headers'].get('Stripe-Signature')
    if not signature or not validate_stripe_signature(event['body'], signature):
        return {'statusCode': 400, 'body': 'Invalid signature'}

    # 2. Parse the Stripe event payload
    stripe_event = json.loads(event['body'])
    if stripe_event['type'] != 'charge.succeeded':
        return {'statusCode': 200, 'body': 'Event not processed'}

    customer_email = stripe_event['data']['object']['billing_details']['email']
    amount = stripe_event['data']['object']['amount']  # Smallest currency unit (e.g., pence)

    # 3. Construct and send the request to Salesforce
    sf_url = f"{SALESFORCE_INSTANCE_URL}/services/data/v58.0/sobjects/Contact/Email/{customer_email}"
    headers = {
        'Authorization': f'Bearer {SALESFORCE_TOKEN}',
        'Content-Type': 'application/json'
    }
    payload = {
        'Last_Payment_Amount__c': amount / 100  # Normalise to major units (pounds)
    }

    try:
        response = requests.patch(sf_url, headers=headers, json=payload, timeout=10)
        response.raise_for_status()  # Raise on 4xx/5xx responses
    except requests.exceptions.RequestException as e:
        detail = e.response.text if e.response is not None else str(e)
        print(f"Error updating Salesforce: {detail}")
        # This error can be handled with retries or sent to a DLQ
        return {'statusCode': 502, 'body': 'Downstream API error'}

    return {'statusCode': 200, 'body': 'Success'}
This code gives you full control over the logic, from signature validation to data normalisation (e.g., converting the amount from pence to pounds).
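If you want to sanity-check the signature logic locally, you can forge a header the same way Stripe builds one. The values below are made up, and STRIPE_WEBHOOK_SECRET must be set to the same test secret before the handler module is imported:

import hmac, hashlib, json, time

secret = 'whsec_test_secret'  # hypothetical test secret
body = json.dumps({'type': 'charge.succeeded',
                   'data': {'object': {'amount': 4999,
                                       'billing_details': {'email': 'jane@example.com'}}}})
timestamp = str(int(time.time()))

# Stripe signs "<timestamp>.<raw body>" with the webhook secret using HMAC-SHA256
signature = hmac.new(secret.encode(), f"{timestamp}.{body}".encode(), hashlib.sha256).hexdigest()
header = f"t={timestamp},v1={signature}"

# With STRIPE_WEBHOOK_SECRET set to the same secret,
# validate_stripe_signature(body, header) returns True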
Error Handling
How each approach handles failure is a critical differentiator.
iPaaS
- Retries: Most platforms automatically retry failed steps on a fixed schedule (e.g., after 1, 5, and 15 minutes). This is effective for transient network issues or brief API outages (5xx errors).
- Rate Limits (429s): Behaviour varies. Some platforms may respect the Retry-After header. Others may just fail the step after their standard retry attempts, leaving you to manually replay it later.
- Failure State: When all retries are exhausted, the run is marked as 'Failed'. You typically get an email notification. There's no built-in mechanism for a dead-letter queue; the failed task and its payload sit in the run history.
Custom Code
- Granular Retries: You can implement any retry strategy you need. For a 429, you can parse the Retry-After header and wait for the specified duration. For a 503, you can implement exponential backoff with jitter to avoid thundering herd problems:

  # A simple exponential backoff with jitter (max_retries, base_delay and
  # should_retry stand in for your own request logic)
  import time
  import random

  for attempt in range(max_retries):
      # ... make request ...
      if should_retry(response):
          delay = (base_delay * 2**attempt) + random.uniform(0, 1)
          time.sleep(delay)
      else:
          break

- Dead-Letter Queues (DLQs): This is a major advantage. You can configure the Lambda function so that if an event fails after all processing attempts, it's automatically sent to an Amazon SQS queue. This preserves the event data, prevents data loss, and allows you to inspect and re-process failures later without blocking new events (a configuration sketch follows this list).
- Circuit Breakers: For integrations critical to system stability, you can implement a circuit breaker pattern. If a downstream service (like Salesforce) consistently returns errors, the function can 'trip the breaker' and stop sending requests for a period, preventing it from overwhelming a struggling service (a minimal sketch also follows below).
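To make the DLQ point concrete: Lambda's dead-letter config applies to asynchronous invocations (e.g., when the webhook is buffered through SQS or EventBridge rather than called synchronously by API Gateway). Pointing the function at a queue is a single configuration call; the function name and queue ARN below are placeholders:

import boto3

lambda_client = boto3.client('lambda')

# Send events that exhaust all processing attempts to an SQS queue for later inspection
lambda_client.update_function_configuration(
    FunctionName='stripe-webhook-handler',  # placeholder name
    DeadLetterConfig={
        'TargetArn': 'arn:aws:sqs:eu-west-2:123456789012:stripe-webhook-dlq'  # placeholder ARN
    }
)

The same setting is available in the console or any infrastructure-as-code tool; the point is that the failed payload ends up somewhere you control, not just in a vendor's run history.

And a minimal circuit breaker sketch, assuming in-memory state is acceptable. Module-level state only survives across warm invocations of the same Lambda container, so a production version would keep it in a shared store such as DynamoDB:

import time

FAILURE_THRESHOLD = 5    # consecutive failures before the breaker opens
COOL_OFF_SECONDS = 60    # how long to stop calling the downstream service

_consecutive_failures = 0
_opened_at = None

def circuit_is_open():
    return _opened_at is not None and (time.time() - _opened_at) < COOL_OFF_SECONDS

def record_failure():
    global _consecutive_failures, _opened_at
    _consecutive_failures += 1
    if _consecutive_failures >= FAILURE_THRESHOLD:
        _opened_at = time.time()

def record_success():
    global _consecutive_failures, _opened_at
    _consecutive_failures, _opened_at = 0, None

Before calling Salesforce, the handler would check circuit_is_open() and return a 502/503 immediately if it is, wrapping the PATCH call with record_failure() and record_success().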
Cost & Scalability
iPaaS
- Cost: You pay per 'task' or 'operation'. A single workflow can use multiple tasks (e.g., Trigger -> Filter -> Action = 3 tasks). This model is predictable but scales poorly. 100,000 executions per month on a mid-tier plan can become very expensive.
- Scalability: The platform handles scaling, but your plan dictates the limits. You'll face constraints on how frequently workflows can run (e.g., every 5 minutes) and potential concurrency limits. You're paying for a managed service with defined performance tiers.
Custom Code (AWS Lambda)
- Cost: You pay for what you use: the number of invocations and the compute duration (in GB-seconds). The AWS Free Tier often covers hundreds of thousands of simple integration executions per month at no cost, and at scale the cost remains exceptionally low compared to iPaaS task-based pricing (see the back-of-envelope calculation after this list).
- Scalability: It scales automatically to handle demand, from one request per day to thousands per second. The primary bottleneck isn't the Lambda service itself but the rate limits of the APIs you're calling. You have direct control over concurrency settings to manage this load effectively.
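A back-of-envelope calculation makes the cost difference concrete. Assuming the webhook handler above runs at 128 MB and averages around 200 ms per invocation (illustrative figures, not measurements):

# Rough monthly Lambda usage for the webhook handler (all figures are assumptions)
invocations_per_month = 100_000
memory_gb = 128 / 1024        # 0.125 GB allocated
avg_duration_seconds = 0.2    # assumed average execution time

gb_seconds = invocations_per_month * memory_gb * avg_duration_seconds
print(gb_seconds)  # 2500.0 GB-seconds

That sits comfortably inside the Lambda free tier (1 million requests and 400,000 GB-seconds per month), and even beyond the free tier this workload would cost only a few pence per month at current list prices, whereas the same 100,000 runs priced per task would push most iPaaS plans into their upper tiers.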