System integration has been one of the most consistent focus areas for businesses over many years. It is one of those pillars of IT that has constantly shifted shape and evolved, yet never lost its relevance, because the number and variety of systems that enterprises invest in has only increased. If anything, system integration is now more complicated and demanding than ever before, with a myriad of systems cropping up within enterprises, from minute sensors everywhere to humongous legacy mainframes to hosted SaaS solutions. In response, a variety of system integration approaches are being adopted to connect this diverse and scattered ecosystem into a coherent application network that can satisfy the operational efficiency needs of a hungry real-time enterprise.
In this post, we take you through some of the top system integration trends that we see emerging in today’s digital age and explain why it is essential that you start adopting these if you have not already begun to do so.
Real-time Information Exchange: Use Events and Event Streams
Real-time information exchange across the entire enterprise ecosystem is becoming vital for all industries that want to make quick decisions based on the immediate and accurate availability of information within digital experience channels. Events and event streams are emerging as the de facto mechanism for generating and pushing data across a distributed system landscape to interested recipients.
The proliferation of smart devices has provided the ability to gather huge volumes of real-time data in the form of event streams. From autonomous vehicles to smart factories, dirt-cheap sensors are now installed everywhere generating prolific volumes of streaming data which can now actually be collected and stored in big data systems running in scalable cloud infrastructure.
Beyond event streams, the inherent capability of modern-day systems to natively emit key business events whenever they perform their core functions makes event-driven architectures a natural style of integration for them.
Finally, the inherent need to deliver outcome-based services in real-time is pushing businesses to tap into events as soon as they occur and respond to them instantaneously so that they can deliver timely and high-quality solutions and experiences to their customers.
It is important to start by investing in a modern-day messaging platform that gives you the ability to collect and distribute events seamlessly across your entire application network. You should then look at some of the key business processes that stand to gain the most by moving to a more real-time execution and decision-making model. Next, properly define and catalogue this initial set of events so that they can be discovered later by future consumers.
Then, start rolling out your initial event-driven implementations and measure the time efficiency gained by your key business processes as a result. You should finally look to scale this model and repeat it until you have transformed into a truly real-time event-driven enterprise.
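To make the publish-subscribe model behind these steps concrete, here is a minimal in-memory sketch in Python. A real deployment would of course use a dedicated distributed broker rather than an in-process class, and the topic name and payload below are purely illustrative; the point is how publishers and subscribers stay decoupled through an event-driven contract.

```python
from collections import defaultdict


class EventBus:
    """A minimal in-memory publish-subscribe broker (illustrative only)."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        # Register a handler to be invoked for every event on this topic.
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Push the event to every subscriber. The publisher knows nothing
        # about who consumes the event, which is what keeps the systems
        # loosely coupled.
        for handler in self._subscribers[topic]:
            handler(event)


# Example: an order service emits an event; fulfilment and analytics
# both react to it independently.
bus = EventBus()
received = []
bus.subscribe("order.created", lambda e: received.append(("fulfilment", e["id"])))
bus.subscribe("order.created", lambda e: received.append(("analytics", e["id"])))
bus.publish("order.created", {"id": 42, "total": 99.50})
```

Adding a new consumer later (say, a fraud-detection service) is just another `subscribe` call; the order service itself never changes, which is why this style scales across an application network.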
Rubicon Red partners with Solace, whose best-of-breed distributed messaging platform, Solace PubSub+ Platform, fulfils many of the complex requirements that a modern-day event-driven architecture demands.
You stand to gain a lot by adopting an event-driven system integration approach. Firstly, you can reduce your operational costs significantly through increased application responsiveness and faster decision-making. You can also start building truly distributed systems capable of scaling elastically, because they are loosely coupled with each other through event-driven contracts.
Finally, you will move from an ‘always store but seldom process’ approach to an ‘always moving and always processing’ approach which will ensure that you don't miss those critical moments of engagement when delivering your customer experiences.
Unlocked & Commoditised Assets: Use self-service APIs
Unlocked and commoditised assets are becoming crucial for enterprises to harvest untapped potential hidden away within their existing legacy systems and data silos in order to make them available at mass scale as ‘off-the-shelf’ commodities to all parts of their digital innovation channels. Self-service APIs are emerging as the ubiquitous technique to implement a discoverable, secure and consistent access layer for all enterprise assets.
The need to constantly innovate the way business is done, without incurring significant capital investment every time, has forced enterprises to look into existing IT assets and analyse how value can be extracted from them to serve new digital demands. APIs provide a way to unlock this value and make it discoverable for lines of business. Digital transformation demands agility at the higher layers of the business, and the faster digital teams can self-serve functionality stored away in legacy systems, the less dependent they are on central IT to deliver key digital experiences.
Regulations and open standards have put further pressure on enterprises to comply and make customer data available transparently to different kinds of end consumers. APIs present a natural means of achieving this, and we have already seen multiple trends such as Open Banking emerge to facilitate access to consumer and financial data in a secure, self-service manner.
Finally, the proliferation of digital channels such as modern websites, mobile apps, chatbots and such has forced businesses to present access to backend data and systems in a headless mode so that it can be used in various ways to serve the diverse needs of each channel.
Start by investing in an API strategy. This will help you clearly articulate the specific business goals that an API investment will meet. I say this because APIs have diverse applicability, and your specific strategy will shape how you start the journey. For example, if one of your primary goals is to offer data at wide scale to the external world, and even monetise it, investing in a full-fledged API management platform becomes mandatory. However, if you are only interested in internal consumption within very specific lines of business, you can start small and build your initial set of APIs before you decide to manage and measure them using a dedicated API platform.
Next, define a clear set of standards, keeping in mind the consumers of the APIs, to ensure that the APIs are fully adopted once implemented. Then start creating focused API initiatives targeted at key business objectives and execute them through projects. Delivering an API is like making a sauce: you need to taste it constantly before you are satisfied with it. So measure your API consumption metrics and iterate until you reach the desired levels of adoption, which means you have got your recipe right.
At this stage, you can start rolling out more API projects and put dedicated, product-centric API delivery teams in charge of them; these teams can treat your APIs as products and constantly refine and improve them to deliver continuous value.
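The essence of unlocking a legacy asset behind a consistent, self-service contract can be sketched in a few lines of Python. The legacy lookup function, record shape and field names below are hypothetical stand-ins; in a real landscape the facade would sit behind an API gateway and be described by a published specification that consumers can discover.

```python
def legacy_customer_lookup(raw_id):
    # Hypothetical stand-in for a call into a legacy system of record.
    records = {"C001": ("Jane Citizen", "ACTIVE")}
    return records.get(raw_id)


def get_customer(customer_id):
    """A self-service API operation: it wraps the legacy call in a
    consistent, documented contract (status code plus structured body)
    that any digital channel can consume without knowing the legacy
    system's internals."""
    record = legacy_customer_lookup(customer_id)
    if record is None:
        return {"status": 404, "body": {"error": "customer not found"}}
    name, state = record
    return {"status": 200, "body": {"id": customer_id, "name": name, "state": state}}
```

The value is in the contract, not the plumbing: the legacy system can later be replaced wholesale and, as long as `get_customer` keeps its shape, every consumer keeps working.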
For someone looking to invest in a sophisticated, modern API management and integration platform, I would strongly recommend taking a look at the MuleSoft Anypoint Platform. We partner with MuleSoft and have rolled out large scale API-led system integration approaches for various customers.
Agility, scalability, innovation, partnerships: these summarise the main benefits you can reap from a well-implemented API-led application network. With APIs, you can respond to business change faster than ever before and offer existing legacy assets in a consistent manner as reusable services, in order to rapidly layer and build higher-order solutions on top of them.
Treating your APIs as products allows you to accelerate and scale your digital roadmap. You can easily leverage your legacy systems of record and build systems of innovation and differentiation which allow you to achieve constant innovation using a pace-layered application strategy. Finally, you can open new doors and form digital partnerships to expand your business footprint, powered by a strong consumer and developer community for your APIs.
Hybrid Integration: Use a swiss-army knife, not a golden hammer
Hybrid integration platforms and approaches are becoming necessary to cope with the diverse system integration needs of the modern enterprise. System integration vs data integration, batch vs real-time, synchronous vs asynchronous, centralised vs distributed: there is no one universal platform that can cater to all these disparate system integration approaches and needs. A golden-hammer approach, employing a single enterprise integration platform, does not work anymore. A carefully chosen set of specialised knives, each designed to solve specific integration problems and assembled into a swiss-army knife, needs to be the new norm.
The sheer increase in the number and variety of systems that need to be integrated these days has resulted in equally unique ways of integrating them together. Different integration platforms have emerged, each bringing their own unique value propositions, making it hard to apply a uni-dimensional lens while selecting them for your needs.
It has also now become evident that there is no universal system integration approach that trumps the others. While API-led and event-driven microservice architectures may be becoming more prevalent, there continue to be use cases for large-scale secure file transfer, bulk data loading using ETL techniques and so on, none of which can be discounted.
Lastly, integration is no longer a one-skillset game that can be played only by specialised players. All personas, from veteran middleware architects to millennial citizen integrators, have a role to play now, and there is no one-size-fits-all platform or approach that caters to all their unique requirements.
I recommend you start by defining an integration capability model listing out an exhaustive set of technology-agnostic integration capabilities broken down into two or more levels. This gives you a view of the holistic landscape and puts you in a position to prioritise and pick the capabilities that you see most appropriate for your specific integration scenarios.
Once you have chosen the set of applicable capabilities, try mapping them to both the integration platforms you already have within your enterprise and other best-of-breed platforms available on the market. Location preferences also play a role here, and you will need to consider the hosting feasibility of your integrations. For example, can you run all of them on a SaaS-based platform? If not, can you at least run tactical, non-critical integrations on such a platform, allowing you to build and roll out rapidly?
Based on your analysis you can distil down to a number of platform options and then apply other dimensions such as cost-benefit analysis, long term vision, immediate demands and so on, to finally arrive at a portfolio of integration platforms that suit your diverse integration needs.
Assuming you now have more than one platform in your kitty, it is absolutely essential that you lay out a clear decision tree for your delivery teams to follow so that they can pick the right platform for each problem they are trying to solve. Without that, it is quite easy to use the wrong tool for the job.
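Such a decision tree can even be codified so that delivery teams apply it consistently. The sketch below is a deliberately simplified example in Python; the platform categories and the rules are illustrative assumptions, and your own tree should reflect the actual portfolio you have selected.

```python
def choose_platform(requirement):
    """Route an integration requirement to a platform category.

    `requirement` is a dict describing the scenario, e.g.
    {"style": "batch"} or {"real_time": True, "fan_out": True}.
    The rules and category names here are illustrative only.
    """
    if requirement.get("style") == "batch":
        return "ETL / bulk data platform"
    if requirement.get("style") == "file":
        return "managed file transfer platform"
    if requirement.get("real_time") and requirement.get("fan_out"):
        # Real-time delivery to many consumers suits pub/sub messaging.
        return "event streaming / messaging platform"
    if requirement.get("consumers") == "external":
        # Externally exposed assets warrant full API management.
        return "full API management platform"
    # Default: tactical internal integrations can start lightweight.
    return "lightweight integration platform"
```

Encoding the tree as code (or even just a shared flowchart) makes the platform choice auditable: when a team picks a platform, they can point at the rule that led them there.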
We partner with multiple integration platform vendors including MuleSoft, Oracle, Solace, StreamSets and Workato and have seen them co-existing well within large-scale customer deployments where a number of different system integration approaches were needed to cater to a range of requirements.
The obvious benefit is that you can now use very specialised tools to solve specialised problems. This removes the weight that an erstwhile monolithic platform was forced to carry. Trying to retrofit every style of integration into the same platform exposes undesirable gaps within it, which is natural because no platform is purpose-built for every kind of use case.
It also allows you to be both tactical and strategic. For example, you can use one platform to quickly roll out tactical integrations in a citizen-development or point-to-point style, and later plan to move them in a more long-term strategic direction based on, say, API-led service-oriented architectures. It gives you the flexibility to time your movement and to weigh your options against criteria such as time-to-market, flexibility and reusability.
Infinite Observability: Use Big Data to collect every bit of information
Infinite observability pertains to the ability to watch and collect every bit of information as it passes through the software integration layer in order to unravel insights that individual systems can never yield on their own. The advent of Big Data platforms has made it possible to tap into this rich flow of information within the integrated application network and process it at scale to identify bottlenecks, optimise performance, eliminate redundant information flows and much more.
The affordability and speed with which terabytes of data can be ingested, with virtually no limits on how much you can store, have made it possible to tap into raw data sources that no one could have imagined a few years back. The distributed nature of system integration has naturally resulted in a huge number of systems and platforms constantly emitting different kinds of logs, traces and metrics. Enterprises want to use this data not only to troubleshoot and prevent integration failures but also to spot anomalies and improvement opportunities within business processes, which is now easily possible using big data solutions.
The first step is to make sure that your system integrations are fully observable, both at a platform and at an application level. Most modern integration platforms are capable of reporting and forwarding logs and metrics to central aggregators; for platforms that cannot, you need to put in components that can collect these logs and stream them continuously to data processing pipelines. Once the platform side is handled, enforce consistent application logging standards across your landscape so that every integration reports key metrics as information flows through it. Next, start building data pipelines that can continuously ingest these raw streams and wrangle them into cleaner data sets, augmented with context and supporting information and stripped of noise and duplicates. Finally, you need an affordable big data storage solution where you can store these large data sets for analytics and ad-hoc queries. Then you can let your analysts and data scientists loose on this rich data and use it to derive technical and business insights like never before.
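The wrangling step in the middle of this pipeline can be illustrated with a small Python sketch. The event shape (`id`, `source`, `level`) and the enrichment field are hypothetical assumptions; in practice this logic would run inside a streaming data pipeline tool rather than a single function.

```python
def wrangle(raw_events, context):
    """Clean a raw log stream: drop exact duplicates, discard noise,
    and augment each surviving event with platform context before
    it is stored for analytics."""
    seen = set()
    cleaned = []
    for event in raw_events:
        key = (event["id"], event["source"])
        # Skip duplicates and low-value debug chatter.
        if key in seen or event.get("level") == "DEBUG":
            continue
        seen.add(key)
        enriched = dict(event)
        # Augment with supporting context so downstream analysts
        # do not need to re-join against platform metadata.
        enriched["environment"] = context["environment"]
        cleaned.append(enriched)
    return cleaned


raw = [
    {"id": 1, "source": "esb", "level": "INFO", "msg": "order received"},
    {"id": 1, "source": "esb", "level": "INFO", "msg": "order received"},  # duplicate
    {"id": 2, "source": "esb", "level": "DEBUG", "msg": "heartbeat"},      # noise
    {"id": 3, "source": "mft", "level": "ERROR", "msg": "transfer failed"},
]
clean = wrangle(raw, {"environment": "prod"})
```

The output here keeps only the two meaningful events, each stamped with its environment; at enterprise scale the same dedupe-filter-enrich pattern is what turns raw log firehoses into query-ready data sets.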
We have built data engineering and big data solutions using tools such as StreamSets and AWS services like Kinesis, Athena and S3, and have ingested the resulting data sets into warehouse systems such as Snowflake. We partner with all of these vendors to provide integrated data management solutions for our customers.
After reading this far, you may wonder what the big deal is here: logs and metrics are nothing new and have long been leveraged to troubleshoot system integration issues. If so, I want you to expand your thinking far beyond the purely technical lens. Data that flows through your integrated application network is unique in many ways, because it can tell you about virtually every metric of your business. By tapping into this often underestimated auxiliary source of information, you can identify optimisation opportunities that you may never have thought of. These data sources are uniquely positioned to tell you about every business process execution, because those processes often span multiple systems and even geographical boundaries. By converting this raw data into actionable information, you gain the power to derive insight that can not only bring huge operational efficiency to your business but also identify trends you might want to capitalise on to open new business avenues.
Self-healing Integrations: Use AI to derive Smart Insights
Self-healing integrations are becoming a necessary reality, because modern-day integration complexity no longer lends itself to traditional troubleshooting and manual recovery processes: failures at scale are inevitable, and system downtime cannot be contained in a timely manner by hand. Evolution in Artificial Intelligence (AI) solutions has made it possible to develop integrations that can embrace failure, learn and derive smart insights from it, and ultimately heal themselves on their own to enable seamless, uninterrupted operations.
The availability of complex machine learning as simple, point-and-click service offerings has put it within reach of almost every enterprise that wants to derive smart insights from raw data. The wide-scale and distributed nature of today's system integration solutions has made it harder and harder to react to failures in a timely manner. Systems built using microservices have already popularised patterns such as circuit breakers and graceful degradation, embedding the assumption that everything will eventually fail. This has resulted in a fundamental shift in thinking: enterprises no longer want merely to avoid errors but to accept them as the norm, and to bring these considerations right upfront into their integration journey rather than dealing with them as an afterthought.
It is a long road to reach a state where systems start to self-heal. I am laying out only three broad steps here; in reality, there are many stages between each of them.
The first step you should take is to make sure that your entire application, and even the underlying infrastructure, can be monitored continuously and recreated fully through automation if need be. Investment and maturity in practices such as DevOps and infrastructure as code (IaC) will set you up to rapidly roll out changes to application code or platform configuration.
The next step is to start embedding simple rule-based decision engines into your integrations so that they are equipped to do more than usual when they face unexpected situations and anomalies. At this point, you should start cataloguing all the failure situations, and the actions you perform in a manual or semi-automated manner to restore or remediate the integration services.
Finally, start investing in AI-based machine learning services and train them by feeding them different exception conditions paired with the corresponding reactions. You can then start replacing your simple rules engines with these sophisticated AI-powered engines, which can use their acquired knowledge to understand your integrations and take the actions necessary to heal them automatically.
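A simple rule-based decision engine of the kind described in the intermediate step can be sketched as follows. The error codes, remediation action names and thresholds are illustrative assumptions; the real value of this stage is that every rule you catalogue here becomes labelled training data for the AI-powered engine that eventually replaces it.

```python
REMEDIATION_RULES = [
    # (predicate over the failure event, remediation action name)
    (lambda e: e["error"] == "connection_refused", "restart_endpoint"),
    (lambda e: e["error"] == "timeout" and e["retries"] < 3, "retry_with_backoff"),
    (lambda e: e["error"] == "bad_payload", "route_to_dead_letter_queue"),
]


def decide(failure_event):
    """Return the first remediation whose rule matches the failure
    event, or escalate to a human operator when no rule applies.

    The escalation path is important: every escalated case is a
    candidate for a new rule (and, later, a training example)."""
    for predicate, action in REMEDIATION_RULES:
        if predicate(failure_event):
            return action
    return "escalate_to_operator"
```

A timed-out call with one retry left maps to `retry_with_backoff`, while an unknown failure such as a full disk falls through to the operator; over time, fewer and fewer events should take that escalation path.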
The benefits are obvious, and though the road to truly self-healing systems is very ambitious, even taking the initial steps towards this goal will see you gain a lot. By investing in DevOps practices, you will reach a stage where production deployments become painless, almost a non-event. This will allow you to fix things manually and roll out those fixes more confidently than ever before.
Now, if you have actually been able to employ some AI in the mix (which would mean you have reached a heightened stage of maturity), then you can expect system integrations that require very little manual maintenance and can work almost on auto-pilot, freeing up both valuable support time and redundant maintenance expenses to be repurposed towards more strategic IT initiatives.
Lastly, the smart insights you can potentially gain from such systems allow you to start predicting exceptional scenarios and events well ahead of time, and to use them to build future solutions that not only self-heal but are immune to failures and cause zero disruption to your day-to-day business operations.
About Rubicon Red
Rubicon Red is a boutique consulting provider, specialising in APIs and Integration, Intelligent Automation and Data Engineering, with services spanning the entire development lifecycle including Advisory and Implementation Services, Managed Services and Solution-as-a-Service.
With the accelerating pace of business today, organisations must not only be good at what they do, but they must also be able to do it quickly. Getting the right data to the right place at the right time is crucial to enable the right decision and subsequent action in real-time. Rubicon Red helps organisations unlock their data and deliver it where it's needed in real-time, to allow the business to respond faster to customer needs while reducing costs and driving efficiencies.
If you're thinking about modernising your integration approaches to help unlock your own data, schedule a call. We'd love to chat and see how we could help.