Managing large integrations


A question, an edited response

I was asked recently to describe my integration experience with platforms (think Trello, Google Analytics or similar), and how I've managed the challenges of scaling integrations across a wide ecosystem.

What follows is my response, with the benefit of hours of editing and formatting. I hope you (and future me) find it helpful.

Hello there. I will summarise my experiences with integrations below. I find the following to be true regardless of which integration you are undertaking; I itemize them here and go into more detail later in the article.

A few concerns are common to all of them, be it exposing an API that interfaces with a Selenium farm or workflows powered by Trello: external resources, managing access, resource limits, the shape of responses, testing, and monitoring.

Encapsulate your external resources

As much as we want to be fast and pragmatic, I have found that encapsulation actually speeds things up, and it also lets you focus on your logic rather than designing your system to fit a provider.

type CanProvideWorkflowMechanisms interface {
  proceedTo(ctx context.Context, boardId, cardId, toCardId string) error
}

The above is a sample encapsulation, but even this is already shaped like, say, a Trello integration. What happens if this workflow is later provided by a Miro board? cardId may become irrelevant.


type ProceedOptions struct {
  toCardId *string
  someOtherPropertyRequiredByOtherProvider *string
}

type CanProvideWorkflowMechanisms interface {
  proceedTo(ctx context.Context, from, to string, options ProceedOptions) error
}


Another benefit of encapsulating here is that it becomes a lot easier to test our integration and to fake certain scenarios (more on this in the #Testing section).

Using a common wrapper for HTTP clients, GraphQL clients, and the like is also very helpful.

Another benefit of this is that it provides an opportunity to introduce an anti-corruption layer over your external providers’ inputs and outputs.
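As a sketch of what that anti-corruption layer can look like in Go (trelloCard here is a simplified, hypothetical payload shape, not Trello's actual API model, and WorkItem is an invented domain type): translate the provider's payload into your own types at the boundary, so provider naming and quirks never leak inward.

```go
package main

import "fmt"

// trelloCard is a hypothetical, simplified provider payload shape.
type trelloCard struct {
	ID     string
	IDList string // provider-specific: the list (column) the card sits in
}

// WorkItem is our domain type; the rest of the system only sees this.
type WorkItem struct {
	ID    string
	Stage string
}

// toWorkItem is the anti-corruption mapping: provider names and
// conventions are translated here and nowhere else.
func toWorkItem(c trelloCard, listNames map[string]string) WorkItem {
	stage, ok := listNames[c.IDList]
	if !ok {
		stage = "unknown"
	}
	return WorkItem{ID: c.ID, Stage: stage}
}

func main() {
	lists := map[string]string{"l1": "in-progress"}
	fmt.Println(toWorkItem(trelloCard{ID: "c9", IDList: "l1"}, lists))
	// {c9 in-progress}
}
```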

Async where possible


// dependency injection
services.AddHttpClient();

public abstract class AppHttpClient
{
  protected readonly HttpClient Http;
  protected readonly string BaseRoute;

  protected AppHttpClient(IHttpClientFactory clientFactory, string baseRoute)
  {
    BaseRoute = baseRoute;
    Http = clientFactory.CreateClient();
  }

  protected async Task<TReturn> GetAsync<TReturn>(string relativeUri, AllowedHeaders headers, CancellationToken token = default)
  {
    // GetWithHeadersAsync is an in-house extension method over HttpClient
    HttpResponseMessage res = await Http.GetWithHeadersAsync($"{BaseRoute}/{relativeUri}", headers, token);
    return await res.Content.ReadFromJsonAsync<TReturn>(cancellationToken: token);
  }
}

The snippet above is an approximation of our initial solution. We thought this was good because it prevented us from initialising the client in each subclass (we had a subclass for each service).


// dependency injection
services.AddHttpClient();

// see: https://docs.microsoft.com/en-us/dotnet/architecture/microservices/implement-resilient-applications/use-httpclientfactory-to-implement-resilient-http-requests
services.AddHttpClient<ISomeService, SomeService>(client =>
{
    client.BaseAddress = new Uri(Configuration.GetValue<string>("ASPNETCORE_YOUR_Api_Url"));
});

// alternatively, the base URI can be read from the environment directly
var httpConfig = new HttpConfig();
httpConfig.someBaseUri = Environment.GetEnvironmentVariable("ASPNETCORE_YOUR_Api_Url");


public abstract class AppHttpClient
{
  protected readonly HttpClient Http;

  protected AppHttpClient(HttpClient http)
  {
    Http = http;
  }

  protected async Task<TReturn> GetAsync<TReturn>(string relativeUri, AllowedHeaders headers, CancellationToken token = default)
  {
    // the client's base address and lifetime are set up via dependency injection
    HttpResponseMessage res = await Http.GetWithHeadersAsync(relativeUri, headers, token);
    return await res.Content.ReadFromJsonAsync<TReturn>(cancellationToken: token);
  }
}


The above snippet shows how we refactored to be in line with best practices (more info in the learn.microsoft.com article linked earlier in this section). We avoided the subtle problem of socket exhaustion by using IHttpClientFactory, which maintains its own pool of message handlers and recycles them periodically.

Handling keys

Rotating keys should be supported by your abstractions. Support for multiple keys, perhaps to separate users (by geographical area or tier, say), should also be built in.


class XProviderImpl {

  private getAuthenticationInformation(customerId: number, customerOpts: TCustomerOpts) {
    return {
      clientId: '',
      clientSecret: '',
      region: customerOpts.region,
    };
  }

  triggerEffect(customerId: number, data: TBody, customerOpts: TCustomerOpts) {
    // ...
    return client.make(this.getAuthenticationInformation(customerId, customerOpts)).do();
  }
}

Take, for instance, a requirement (perhaps for legal reasons) to process some customer information in certain data regions, or to use a different account for requests from our higher-tier customers. This should also be provided for.

The pseudocode above shows how this may look.
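A slightly more concrete sketch of the credential lookup, in Go (the region keys, credential values, and authFor function are all my own illustration): keep a table of credentials keyed by the routing attribute and resolve it per request, so rotating or adding keys never touches calling code.

```go
package main

import "fmt"

type Credentials struct {
	ClientID     string
	ClientSecret string
}

// credsByRegion is a hypothetical table; in practice this would be
// loaded from a secret store so keys can rotate without a deploy.
var credsByRegion = map[string]Credentials{
	"eu": {ClientID: "eu-client", ClientSecret: "eu-secret"},
	"us": {ClientID: "us-client", ClientSecret: "us-secret"},
}

// authFor resolves the credentials for one request; failing loudly on
// an unknown region is safer than silently using a default account.
func authFor(region string) (Credentials, error) {
	c, ok := credsByRegion[region]
	if !ok {
		return Credentials{}, fmt.Errorf("no credentials for region %q", region)
	}
	return c, nil
}

func main() {
	c, _ := authFor("eu")
	fmt.Println(c.ClientID) // eu-client
}
```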

Testing

Code should be laid out in a way that eases unit testing of core logic and integration testing (replacing HTTP or GraphQL clients, for instance). Testing should also make it easy to fake provider scenarios, as mentioned in the encapsulation section.

Set timeouts on your clients and other sane defaults




Originally published on November 4, 2024.
integrations software