Performance

Cronflow is designed for exceptional performance, delivering 94x faster execution than traditional workflow tools. Learn how to optimize your workflows for maximum efficiency.

Performance Overview

Key Performance Metrics

Metric          Cronflow             Traditional Tools    Improvement
Response Time   0.5ms                27ms                 94x faster
Memory Usage    0.49MB/step          5MB+/step            10x less
Throughput      500+ workflows/sec   50 workflows/sec     10x more
Concurrency     True parallel        Sequential only      Unlimited

Architecture Benefits

  • Rust Core Engine: High-performance state management and database operations
  • Bun Runtime: 15-29% faster than Node.js in Cronflow's benchmarks
  • Optimized Architecture: Minimal overhead, maximum efficiency
  • Native TypeScript: No transpilation overhead
  • Smart Caching: 92.5% improvement in database queries
  • Connection Pooling: 70.1% improvement in database operations
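
The caching figure above comes from Cronflow's internals, but the underlying idea is ordinary TTL memoization: remember a result per key and skip the backend while the entry is fresh. A minimal standalone sketch (the `cached` helper and `demo` function are illustrative, not part of Cronflow's API):

```typescript
// Minimal TTL memoization: cache a function's result per key for `ttlMs`.
type Entry<T> = { value: T; expiresAt: number };

function cached<T>(fn: (key: string) => Promise<T>, ttlMs: number) {
  const store = new Map<string, Entry<T>>();
  return async (key: string): Promise<T> => {
    const hit = store.get(key);
    if (hit && hit.expiresAt > Date.now()) return hit.value; // fresh hit: skip the call
    const value = await fn(key);                             // miss or expired: recompute
    store.set(key, { value, expiresAt: Date.now() + ttlMs });
    return value;
  };
}

// Usage: repeated lookups within the TTL never reach the backend.
async function demo(): Promise<number> {
  let backendCalls = 0;
  const fetchProfile = cached(async (userId: string) => {
    backendCalls++;
    return { userId, plan: 'pro' };
  }, 30 * 60 * 1000); // 30-minute TTL

  await fetchProfile('u1');
  await fetchProfile('u1'); // served from cache
  return backendCalls;      // only the first lookup hit the backend
}
```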

Performance Optimization

1. Parallel Execution

Use parallel execution to process multiple operations simultaneously:

typescript
workflow
  .parallel([
    async ctx => {
      // Fetch user data
      return await db.users.findUnique({
        where: { id: ctx.payload.userId },
      });
    },
    async ctx => {
      // Fetch order data
      return await db.orders.findUnique({
        where: { id: ctx.payload.orderId },
      });
    },
    async ctx => {
      // Fetch product data
      return await db.products.findMany({
        where: { id: { in: ctx.payload.productIds } },
      });
    },
  ])
  .step('process-combined-data', async ctx => {
    // All three operations completed in parallel
    const [user, order, products] = ctx.last;
    return { processed: true, user, order, products };
  });
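
The gain from parallel execution is the usual `Promise.all` effect: total latency approaches the slowest branch rather than the sum of all branches. A standalone illustration in plain TypeScript (no Cronflow APIs; the 50ms `task` stands in for an I/O call):

```typescript
const sleep = (ms: number) => new Promise<void>(r => setTimeout(r, ms));

// Three independent 50ms "fetches", run both ways.
async function compare() {
  const task = () => sleep(50);

  const t0 = performance.now();
  await task(); await task(); await task();    // sequential: roughly 150ms
  const sequentialMs = performance.now() - t0;

  const t1 = performance.now();
  await Promise.all([task(), task(), task()]); // parallel: roughly 50ms
  const parallelMs = performance.now() - t1;

  return { sequentialMs, parallelMs };
}
```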

2. Caching Strategies

Implement intelligent caching to avoid redundant operations:

typescript
workflow
  .step(
    'fetch-user-profile',
    async ctx => {
      return await expensiveApiCall(ctx.payload.userId);
    },
    {
      cache: {
        key: ctx => `user-profile-${ctx.payload.userId}`,
        ttl: '30m', // Cache for 30 minutes
      },
    }
  )
  .step(
    'fetch-user-preferences',
    async ctx => {
      return await db.userPreferences.findUnique({
        where: { userId: ctx.payload.userId },
      });
    },
    {
      cache: {
        key: ctx => `user-preferences-${ctx.payload.userId}`,
        ttl: '1h', // Cache for 1 hour
      },
    }
  );

3. Batch Processing

Process large datasets in efficient batches:

typescript
workflow
  .step('fetch-all-users', async ctx => {
    return await db.users.findMany({ where: { active: true } });
  })
  .batch(
    'process-users-in-batches',
    {
      items: ctx => ctx.steps['fetch-all-users'].output,
      size: 100, // Process 100 users at a time
    },
    (batch, flow) => {
      flow
        .step('process-batch', async () => {
          return await processUserBatch(batch);
        })
        .log(ctx => `Processed batch of ${batch.length} users`);
    }
  );
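
Under the hood, batching reduces to splitting the item list into consecutive fixed-size chunks. A standalone helper (the `chunk` name is illustrative, not a Cronflow API):

```typescript
// Split an array into consecutive chunks of at most `size` items.
function chunk<T>(items: T[], size: number): T[][] {
  if (size <= 0) throw new Error('size must be positive');
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Usage: 10 items in batches of 3 yields chunk sizes [3, 3, 3, 1].
const batches = chunk(Array.from({ length: 10 }, (_, i) => i), 3);
```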

4. Connection Pooling

Optimize database connections:

typescript
// Prisma configures its connection pool via connection-string parameters:
// connection_limit caps open connections, pool_timeout (seconds) bounds the
// wait for a free one. Append them to (or merge them into) your database URL.
const db = new PrismaClient({
  datasources: {
    db: {
      url: `${process.env.DATABASE_URL}?connection_limit=20&pool_timeout=10`,
    },
  },
});

5. Memory Optimization

Minimize memory usage in your steps:

typescript
workflow.step('process-large-dataset', async ctx => {
  // Page through the table with cursor pagination instead of loading it all at once
  const pageSize = 1000;
  let processed = 0;
  let cursor: string | undefined;

  while (true) {
    const users = await db.users.findMany({
      where: { active: true },
      select: { id: true, email: true }, // Only select needed fields
      orderBy: { id: 'asc' },
      take: pageSize,
      ...(cursor ? { cursor: { id: cursor }, skip: 1 } : {}),
    });
    if (users.length === 0) break;

    // Process each page, then let it go out of scope to avoid memory buildup
    await processBatch(users.map(processUser));
    processed += users.length;
    cursor = users[users.length - 1].id;
  }

  return { processed };
});

Performance Monitoring

Built-in Metrics

Monitor workflow performance with built-in metrics:

typescript
const workflow = cronflow.define({
  id: 'monitored-workflow',
  hooks: {
    onSuccess: ctx => {
      // Track performance metrics
      metrics.timing('workflow.duration', ctx.run.duration, {
        workflowId: ctx.run.workflowId,
      });

      metrics.increment('workflow.success', {
        workflowId: ctx.run.workflowId,
      });
    },
    onFailure: ctx => {
      metrics.increment('workflow.failure', {
        workflowId: ctx.run.workflowId,
        errorType: ctx.error.constructor.name,
      });
    },
  },
});

Custom Performance Tracking

typescript
workflow.step('performance-tracked-step', async ctx => {
  const startTime = Date.now();

  try {
    const result = await expensiveOperation();

    // Track success metrics
    metrics.timing('step.duration', Date.now() - startTime, {
      step: 'performance-tracked-step',
      status: 'success',
    });

    return result;
  } catch (error) {
    // Track failure metrics
    metrics.timing('step.duration', Date.now() - startTime, {
      step: 'performance-tracked-step',
      status: 'failure',
    });

    throw error;
  }
});

Performance Benchmarks

Workflow Execution Times

Workflow Type   Steps   Cronflow   Traditional   Improvement
Simple          3       0.5ms      47ms          94x
Medium          10      1.2ms      156ms         130x
Complex         25      3.7ms      342ms         92x
Heavy           50      8.1ms      789ms         97x
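
Numbers like these are environment-dependent, and you can sanity-check your own workloads locally. A minimal timing harness (illustrative; not the harness used to produce the table above):

```typescript
// Run an async function repeatedly and report wall-clock statistics in ms.
async function bench(fn: () => Promise<unknown>, iterations = 100) {
  const samples: number[] = [];
  for (let i = 0; i < iterations; i++) {
    const t0 = performance.now();
    await fn();
    samples.push(performance.now() - t0);
  }
  samples.sort((a, b) => a - b);
  const mean = samples.reduce((a, b) => a + b, 0) / samples.length;
  return {
    mean,
    min: samples[0],
    max: samples[samples.length - 1],
    p95: samples[Math.floor(samples.length * 0.95)],
  };
}
```

Report p95 alongside the mean: latency distributions are skewed, and the tail is usually what users feel.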

Memory Usage Comparison

Operation            Cronflow   Traditional   Savings
Step Execution       0.49MB     5.2MB         90%
Workflow State       0.12MB     2.1MB         94%
Context Management   0.08MB     1.8MB         96%
Total per Workflow   0.69MB     9.1MB         92%

Optimization Techniques

1. Minimize Step Overhead

typescript
// Good: Efficient step with minimal overhead
workflow.step('process-data', async ctx => {
  // Direct processing without unnecessary operations
  return processData(ctx.payload);
});

// Avoid: Unnecessary operations
workflow.step('process-data', async ctx => {
  // Don't create unnecessary objects or perform redundant operations
  const data = ctx.payload;
  const processed = processData(data);
  const result = { processed, timestamp: new Date() };
  return result;
});

2. Use Efficient Data Structures

typescript
workflow
  .step('fetch-users', async ctx => {
    // Use efficient queries with proper indexing
    return await db.users.findMany({
      where: { active: true },
      select: {
        id: true,
        email: true,
        name: true,
        // Only select needed fields
      },
    });
  })
  .step('process-users', async ctx => {
    // Process data efficiently
    return ctx.last.map(user => ({
      id: user.id,
      email: user.email,
      processed: true,
    }));
  });

3. Implement Smart Caching

typescript
workflow
  .step(
    'fetch-user-data',
    async ctx => {
      // Cache expensive operations
      return await expensiveApiCall(ctx.payload.userId);
    },
    {
      cache: {
        key: ctx => `user-${ctx.payload.userId}`,
        ttl: '1h',
      },
    }
  )
  .step(
    'fetch-user-preferences',
    async ctx => {
      // Cache user preferences separately
      return await db.userPreferences.findUnique({
        where: { userId: ctx.payload.userId },
      });
    },
    {
      cache: {
        key: ctx => `preferences-${ctx.payload.userId}`,
        ttl: '24h', // Longer cache for stable data
      },
    }
  );

4. Optimize Database Queries

typescript
workflow.step('fetch-related-data', async ctx => {
  // Use efficient joins instead of multiple queries
  return await db.users.findMany({
    where: { active: true },
    include: {
      orders: {
        where: { status: 'pending' },
        select: { id: true, amount: true },
      },
      preferences: {
        select: { theme: true, notifications: true },
      },
    },
  });
});

5. Use Connection Pooling

typescript
// Prisma reads pool settings from the connection string; size connection_limit
// for high throughput and use pool_timeout (seconds) to bound waits for a
// free connection rather than per-connection lifecycle knobs.
const db = new PrismaClient({
  datasources: {
    db: {
      url: `${process.env.DATABASE_URL}?connection_limit=50&pool_timeout=30`,
    },
  },
});

Performance Best Practices

1. Profile Your Workflows

typescript
// Add performance profiling
workflow.step('profiled-step', async ctx => {
  const startTime = performance.now();

  const result = await expensiveOperation();

  const duration = performance.now() - startTime;
  console.log(`Step took ${duration.toFixed(2)}ms`);

  return result;
});

2. Use Appropriate Timeouts

typescript
// Set realistic timeouts based on operation complexity
workflow
  .step(
    'quick-validation',
    async ctx => {
      return validateData(ctx.payload);
    },
    {
      timeout: '5s', // Quick operations
    }
  )
  .step(
    'api-call',
    async ctx => {
      return await externalApi.call();
    },
    {
      timeout: '30s', // External API calls
    }
  )
  .step(
    'data-processing',
    async ctx => {
      return await processLargeDataset(ctx.payload);
    },
    {
      timeout: '5m', // Long-running operations
    }
  );
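
Cronflow enforces these timeouts for you. Outside the engine, the same guard can be sketched with `Promise.race` against a timer (the `withTimeout` helper below is illustrative, not a Cronflow API):

```typescript
// Reject if `work` does not settle within `ms` milliseconds.
function withTimeout<T>(work: Promise<T>, ms: number): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`timed out after ${ms}ms`)),
      ms
    );
    // Whichever settles first wins; always clear the timer afterwards.
    work.then(
      value => { clearTimeout(timer); resolve(value); },
      err => { clearTimeout(timer); reject(err); }
    );
  });
}
```

Note that the timed-out operation keeps running in the background; a timeout bounds how long the caller waits, not the work itself.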

3. Implement Circuit Breakers

typescript
workflow.step('external-service', async ctx => {
  const failureCount = await ctx.state.get('service-failures', 0);

  if (failureCount >= 5) {
    throw new Error('Circuit breaker open');
  }

  try {
    const result = await externalService.call();
    await ctx.state.set('service-failures', 0); // Reset on success
    return result;
  } catch (error) {
    await ctx.state.set('service-failures', failureCount + 1);
    throw error;
  }
});

4. Monitor Resource Usage

typescript
const workflow = cronflow.define({
  id: 'resource-monitored-workflow',
  hooks: {
    onSuccess: ctx => {
      // Monitor memory usage
      const memoryUsage = process.memoryUsage();
      console.log('Memory usage:', {
        rss: `${Math.round(memoryUsage.rss / 1024 / 1024)}MB`,
        heapTotal: `${Math.round(memoryUsage.heapTotal / 1024 / 1024)}MB`,
        heapUsed: `${Math.round(memoryUsage.heapUsed / 1024 / 1024)}MB`,
      });
    },
  },
});

Performance Comparison

Cronflow vs Industry Leaders

Feature        Cronflow         n8n                   Make.com              Zapier             Temporal
Performance    98% faster       🐌 Slow               🐌 Slow               🐌 Slow            🐌 Slow
Memory Usage   💚 90% less      ❌ High               ❌ High               ❌ High            ❌ High
Concurrency    True parallel    ❌ Sequential         ❌ Sequential         ❌ Sequential      ⚠️ Limited
Hot Reload     Instant          ❌ Restart required   ❌ Restart required   ❌ Not available   ⚠️ Limited
Deployment     Single package   ❌ Complex            ❌ Complex            ❌ Cloud only      ⚠️ Complex

Released under the Apache 2.0 License.