Are you thinking beyond CAT? Practical guide to implementation strategy.

Top 10 things firms should be considering as they near initial go-live milestones.

It seems that the industry has resigned itself to the fact that CAT going live is no longer a matter of ‘IF’ but ‘WHEN’. FINRA’s ability to work through thorny issues and keep up with deliverables and promises to date has proven the naysayers wrong. The general view is that Phase 2a will most definitely happen on time!

While the industry at large is working very hard toward successful testing and the April 2020 go-live of Phase 2a, this article aims to give firms a moment to pause and consider other critical items. The list is not exhaustive, and there is no intention to cover the obvious challenges, e.g. Phases 2c/2d, legal entities and FDID, linkages, representative orders, customer data, error corrections, enhanced surveillance, etc. That is an entirely different topic and deserves its own focus.

Regardless of your CAT solution (internal build, vendor, etc.), the aim is to provide practical considerations that will yield significant benefit to your organization and make your CAT implementation more accurate, meaningful, and sustainable.

Lastly, 2020 is shaping up as one of the most challenging years for Reg Reporting implementation. The primary driver is that from April through December, the CAT community will experience eight independent go-live dates! Adding to the frenzy, there will be multiple important milestones for testing, new tech spec releases, etc. Further, each new go-live will introduce significant challenges and should be treated as an independent initiative. Please see the 2020 go-live cheat sheet referenced below; for actual dates, refer to the FINRA CAT Timelines.

Readiness Assessment

The modification of the CAT implementation timeline from a ‘big bang’ to a phased go-live has been of tremendous benefit to the industry, and according to some experts, CAT would not be anywhere near as far along if not for this change. There is a real opportunity for the industry to avoid the typical costly and draining remediation process.

With CAT there is a unique opportunity to take a pulse check, both very early on and as you progress through the phases, by conducting an independent ‘health check’. The output is valuable: it informs the soundness of the current implementation, influences future controls, feeds into upcoming phases, and makes overall change management much more cohesive.

Practical Recommendations

Engage internal stakeholders and/or external resources to assess and validate various aspects of the implementation. Some examples: (a) ensure the rule interpretation is complete and signed off; (b) requirements are consistent and traceable to the rule; (c) data sourcing is documented appropriately; (d) the RAID log is complete and closed; among various other points.

Expected Outcome

You will identify gaps and potential issues very early in the process. The ability to prioritize known issues, and to have the list available for internal/external audit or other interested parties, will prove invaluable!

BAU Transition

Due to multiple go-live dates, the transition to BAU is not a trivial or typical exercise as it relates to CAT. The resources working on the immediate implementation will likely have to continue rolling out future phases. The strategy will be unique to each firm’s size, location, etc. Note: it is not obvious at first glance, but as pointed out above, there are eight expected production go-live dates for CAT in 2020 alone; BAU should be designed to scale accordingly.

Practical Recommendations

To get you started, some low-hanging fruit: (a) ensure you have an ongoing process and plan for knowledge transfer; don’t leave critical knowledge about decisions, internal limitations, etc. solely with the implementation team; (b) create relevant content on a Confluence page, SharePoint, or in procedures, so it is easy to share with the appropriate team members; (c) keep documentation such as training materials and escalation procedures clearly mapped and updated; (d) design a process that fits your company and business, e.g. regional ownership vs. follow-the-sun; (e) lastly, and most critically, perform due diligence on initial headcount requirements to ensure your team can cope with the workload without generating a backlog.

Expected Outcome

This effort will yield much fruit. For starters, your firm will be ready to focus on exceptions, errors, and escalations. You will be able to scale as the scope grows, because the necessary components will be in place, and you will withstand queries from senior stakeholders and interested teams (auditors, compliance, etc.). Lastly, you will not be relying on any single ‘go-to person’ to keep the shop open.


Controls

Controls are the fabric that gives senior management, auditors, and regulators some level of comfort around the accuracy, timeliness, and completeness of regulatory reporting. Unfortunately, controls are typically built in hindsight, after a major flaw is uncovered or an audit points out a specific weakness. Although at times necessary, building controls on the back of an incident is far from ideal. Firms should consider their implementation and assumptions and build solid controls unique to their implementation, ‘new business’ process, and risk tolerance. Consider using independent tools to conduct some controls; this can help your firm establish credibility, and a ‘crowd-sourced’ approach to controls avoids a silo viewpoint.

Practical Recommendations

This section largely depends on the size of your organization but will likely be relevant to all in one way or another. Start with: (a) defining a control framework; (b) looking to existing controls already in place for other reg reporting obligations; (c) involving impacted teams to generate critical controls; (d) itemizing your list of controls spanning ‘critical’ to ‘important’ to ‘nice to have’ (sample buckets), as this will help you define your strategy; (e) thinking about the timing of controls (i.e. pre- vs. post-reporting); (f) ensuring that controls are owned by the correct actor; a control without an appropriate owner is not only useless but can cause pain points down the road (e.g. why do you have a control that no one is looking at?).
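As an illustration of one simple post-reporting control such a framework might include, the sketch below reconciles the event IDs a firm intended to report against those acknowledged back by the regulator. All identifiers and field names here are hypothetical, chosen only for the example.

```python
# Illustrative post-reporting completeness control: compare the set of
# event IDs sourced for reporting against the set acknowledged in the
# submission feedback. Event IDs below are invented for the example.

def completeness_check(source_event_ids, reported_event_ids):
    """Return events missing from the report and unexpected extras."""
    source, reported = set(source_event_ids), set(reported_event_ids)
    return {
        "missing": sorted(source - reported),     # sourced but never reported
        "unexpected": sorted(reported - source),  # reported with no source record
    }

result = completeness_check(
    ["EVT-1", "EVT-2", "EVT-3"],   # events the firm sourced
    ["EVT-1", "EVT-3", "EVT-9"],   # events acknowledged back
)
```

Both output buckets matter: ‘missing’ feeds the error-correction queue, while ‘unexpected’ usually signals a lineage or duplication problem upstream.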

Expected Outcome

Controls will be designed through a proactive, thoughtful approach rather than a reactive one.

Service Level Agreements (SLAs)

One of the hot-button topics for the industry is the ‘error correction cycle’ and its impact on ‘exception management’. Essentially, firms will have a day and a half to correct errors (the T+3 correction requirement runs from trade date, and FINRA will provide broker-dealers with errors by 12pm the next day). SLAs with the key players in the process, so that error corrections can be managed within that day and a half, are a very worthwhile consideration.

Practical Recommendations

(a) Identify the various actors in your business process flow; (b) further identify who needs to be involved in resolving each issue (e.g. Reg Reporting IT team, trade capture group, front office, etc.); (c) link error types with the relevant users; (d) generate a proposal of expected actions and timelines; (e) negotiate the final SLAs; (f) create an escalation process for all impacted teams for instances where SLAs are not adhered to or bottlenecks are created.
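To make the window these SLAs must fit inside concrete, here is a minimal sketch of the timeline described above: errors arrive by 12pm on T+1 and corrections are due by T+3 from trade date. The function names are hypothetical, and the sketch skips exchange holidays for simplicity.

```python
from datetime import date, datetime, time, timedelta

def next_business_day(d: date, days: int = 1) -> date:
    """Advance `days` business days, skipping weekends (holidays ignored)."""
    while days > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday-Friday
            days -= 1
    return d

def cat_correction_window(trade_date: date):
    """Per the cycle above: FINRA returns errors by 12pm on T+1, and
    corrections are due by T+3 measured from trade date, leaving
    roughly a day and a half to repair and resubmit."""
    errors_available = datetime.combine(
        next_business_day(trade_date, 1), time(12, 0))
    correction_deadline = next_business_day(trade_date, 3)
    return errors_available, correction_deadline

# Example: a trade on Monday, April 20, 2020
errs, deadline = cat_correction_window(date(2020, 4, 20))
# errs is Tuesday at noon; the deadline is Thursday
```

Walking each error type through this clock against the actors in (a)-(c) quickly shows where an SLA of hours, not days, is required.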

Expected Outcome

BAU teams will successfully manage exceptions and errors and will have a solid plan for dealing with anomalies.


Traceability

With the passing of time, and the natural attrition of the SMEs working on the implementation, knowing the ‘why’, ‘how’, ‘who’, and ‘when’ of your program will be critical. Inevitably, assumptions are made, unique rule interpretations specific to a business line are penned, and bespoke code is written to deal with one-off problems; all are important components of your program. It may be obvious now why something was done or implemented a certain way; that will not be the case as time passes. Ensuring clear traceability, evidence of sign-off, and approval of critical decisions will not only let your work withstand the test of time, it will make life much easier for the people who own the process after you. Although this item will not pay off for a very long time, eventually your due diligence will earn your work a solid reputation. This section is closely correlated with your data strategy, storage, and lineage.

Practical Recommendations

(a) Define a strategy for traceability; (b) ensure consistent tooling is used to capture and highlight traceability; (c) avoid any black-box solutions; make as much as possible transparent to all relevant users; (d) have a framework for how items trace to each other, e.g. regulatory rule to a specific rule interpretation to a specific user story, or reportable attribute to stored data attribute to system-generated attribute.
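One lightweight way to make the framework in (d) concrete is a chain of linked artifacts that can always be walked back to the regulatory rule. The sketch below is illustrative only; the artifact kinds and reference IDs are invented for the example.

```python
from dataclasses import dataclass

# Hypothetical minimal traceability model: each artifact records what it
# traces back to, so any user story or attribute can be walked to its rule.
@dataclass
class Artifact:
    kind: str                       # e.g. "Rule", "Interpretation", "UserStory"
    ref: str                        # e.g. "SEC Rule 613", "INT-042"
    parent: "Artifact | None" = None

def trace(a: Artifact) -> list:
    """Walk the chain from an artifact back to the regulatory rule."""
    chain = []
    while a is not None:
        chain.append(f"{a.kind}:{a.ref}")
        a = a.parent
    return chain

rule = Artifact("Rule", "SEC Rule 613")
interp = Artifact("Interpretation", "INT-042", parent=rule)
story = Artifact("UserStory", "STORY-1187", parent=interp)
```

In practice this lives in your requirements tooling rather than code, but the invariant is the same: no artifact without a parent that ultimately reaches a rule.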

Expected Outcome

By following a consistent, pre-agreed approach, your implementation will be easy to validate, change, and maintain.


Regulatory Use of CAT Data

Most firms are focused specifically on hitting the expected go-live dates and are having a hard time keeping up with requirements, changes to tech specs, internal implementations, etc. Who has the time or presence of mind to think about what happens two or three years from now? The answer: not many can afford that luxury! However, thinking about the implications of regulators leveraging CAT data for surveillance is of paramount importance. If you look back at how CAT was born, the initial whitepaper and legislation contain what feels like an infinite number of references to how regulators intend to utilize the data to improve surveillance, including examples of what they are currently unable to do versus what they intend to do with the new data points (e.g. new products, PII, and new event types). It is a worthwhile exercise to plan, and to speak with your technology and compliance teams, so your firm can be at the forefront of the initiative rather than caught off guard.

Practical Recommendations

(a) Review the initial whitepapers and public comments from before the NMS Plan was approved, as well as other public sources that speak to how the information is intended to be utilized; (b) make a list of new surveillance practices or limitations and determine whether they impact your firm or business lines; (c) work with your business partners to determine whether there are any requirements to improve internal surveillance capability or functionality.

Expected Outcome

You will have foresight into how the regulators will leverage the new data and can ensure it does not have an adverse impact on your business.

Data Lineage & Governance

Although data governance is distinct from lineage, the two are closely correlated. Therefore, as you go through the implementation process, it is important that the way your data is stored, transferred, and shared is fit for purpose.

Practical Recommendations

(a) Define your data strategy; (b) ensure that procedures are in place to govern data points that may impact your reporting obligations, with a proper escalation process; (c) highlight consistency and known dependencies for data points and their usage (e.g. ‘Sell’ is represented as ‘S’ across all systems).
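To illustrate recommendation (c), here is a minimal sketch of a canonical mapping layer that normalizes side codes from different upstream systems. The system names and raw codes below are assumptions, not a prescribed standard.

```python
# Illustrative normalization map: each upstream system encodes "sell"
# differently; the reporting layer maps them all to one canonical value.
# System names and raw codes are invented for the example.
SIDE_MAP = {
    ("OMS_A", "SELL"): "S",
    ("OMS_B", "2"): "S",    # FIX tag 54 (Side) value 2 means Sell
    ("LEGACY", "SL"): "S",
}

def canonical_side(system: str, raw: str) -> str:
    """Translate a system-specific side code to the canonical value."""
    try:
        return SIDE_MAP[(system, raw.upper())]
    except KeyError:
        # Unknown codes should be escalated, not silently passed through.
        raise ValueError(f"Unmapped side code {raw!r} from {system}")
```

The escalation path on unknown codes is the governance piece: a new upstream value should trigger a review, not a guess.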

Expected Outcome

Your data will be documented and fit for purpose, with known dependencies visible, so changes to upstream data points that affect your reporting obligations are caught and escalated rather than discovered through reporting errors.

Change Management

CAT is not a small or short program; it has multiple phases and runs for a few years (see the timeline referenced above). It is important that you design the program appropriately and plan for changes in personnel, new business, etc.

Practical Recommendations

(a) Build a Gantt chart that spans the entire program and accounts for all the phases, even though there are multiple dependencies (e.g. we do not yet have tech specifications for all the phases); (b) consolidate ownership as much as possible. A single vision and execution will yield stronger reporting results and reduce re-work. Naturally, there will be workstreams unrelated to each other, e.g. FDID, options business units, equity business units, etc. They will nevertheless have multiple dependencies and contingencies on each other, not to mention potential re-use. A single point to run and execute the program will yield continuous benefit.

Expected Outcome

Consistent implementation across the various business units, and an easier path to remediate and manage changes or strategy shifts for future implementations.

Personally Identifiable Information (PII)

The CAT requirement to send PII data for all relevant accounts is a very sizable challenge. Although the notion of FDID was made much more reasonable when FINRA introduced the concept of ‘Relationship ID’, be cautioned that FDID and the PII associated with each trading account, investment advisor, beneficiary, etc. will not be trivial to solve, even for broker-dealers with a sophisticated reference data governance strategy. There is still an opportunity for firms to review and improve their client and associated reference data.

Practical Recommendations

(a) Identify the groups accountable for client data (e.g. client onboarding, reference data team, etc.); (b) capture the various processes and use cases and determine the implications for CAT; (c) ensure you have access to the key PII for all use cases (this will be an especially important point for wealth-management-related accounts, e.g. custody accounts for minors).

Expected Outcome

Either you will have a clear way to identify and tag the appropriate PII data for CAT reporting, or you will understand the known gaps well enough to articulate them to senior management and regulators as appropriate. Knowing the gaps, managing the talking points, and having a solution your firm is working toward can make a meaningful difference in risk assessment and regulatory review.

Cyber Security

Last but certainly not least, cyber security plays a paramount role in the story of CAT. In the majority of instances, the reporting data and personal information is already being provided and is accessible to the regulators. So why has ‘security’ been such a focal point when speaking about CAT? There are numerous answers and considerations. One that is top of mind: all reporting to date has been done in silos, e.g. OATS provides the trade activity, while Electronic Blue Sheets provide the ‘actor’ (there are multiple other examples). With CAT, for the first time, firms will be associating the ‘WHAT’ with the ‘WHO’ on each order. That is a very significant change from how reporting is done today, and firms, as well as other interested parties, should rightfully be concerned about the security of this data and the stability of financial markets.

Practical Recommendations

(a) Involve the experts! Identify the right people within your firm who own and understand cyber security, so they can properly evaluate the tools being proposed by the industry and their various implications. Don’t confuse ‘technology’ experts with ‘cyber security’ experts; the two are often not the same.

Expected Outcome

With proper input from internal experts, your firm will have a stronger appreciation of the risks and tools associated with reporting PII and trading activity, and can take appropriate precautions or advocate for alternative tools and solutions.


All in all, as with any other complicated topic, there are multiple other items firms should be thinking about now. The ones covered above seem the most practical to tackle at this stage, but you should NOT stop here! Use this as an opportunity to have an internal discussion and create the critical list of items your firm should be focusing on. Wishing you a successful go-live and an overall smooth implementation program.

2020 CAT Implementation Timelines Cheat Sheet

Phase 2a

April 20, 2020 – File submission for 2a begins; data integrity validations
July 27, 2020 – Intra-firm Linkage validations
October 26, 2020 – Firm Linkage validations
October 26, 2020 – Exchange and TRF Linkage validations

Phase 2b

May 18, 2020 – File submission for 2b begins; data integrity validations
August 24, 2020 – Intra-firm Linkage validations
December 30, 2020 – Firm Linkage validations
December 30, 2020 – Exchange and TRF Linkage validations
O365 – Digital Lipstick?

Having had a few days to clear my head after a full week at my first AWS re:Invent in Las Vegas, I got to thinking about how to make sense of the announcements, customer testimonials, conversations, and sheer magnitude of the event. Was there some common thread tying together the hundreds of sessions spread across multiple casinos, 65,000 visitors from around the world, and a fantastic exhibitors’ hall (including a really cool analytics demo from TIBCO via a stationary-bike time trial)? Everyone is trying to capture and define the elusive concept of Digital Transformation to be able to RE-INVENT their business, their technology, and/or themselves. This raises the questions: what does REAL DIGITAL TRANSFORMATION look like, who will be the ultimate winners, and who will go the way of the Dodo?

Despite the noise of Microsoft Azure nipping at their heels, AWS is still the undisputed King of the Cloud. Attendance at re:Invent was up around 10-15% YoY, a 10x increase since the first show seven years ago! Amazon announced 77 new products and services, 20 of them around Machine Learning alone (no surprise, since this has been a steady drumbeat from AWS over the past year). We also heard compelling stories from their Enterprise clients, and I was glad to see Amazon moving to make AWS a more enterprise-friendly platform with new products like Control Tower.

A high point of the week was a dinner hosted by Tim Horan from Oppenheimer, where we discussed Cloud, Digital Transformation and the impacts of politics with industry experts. A key topic of conversation was what to make of Microsoft Azure’s gain in cloud market share over the past few years. AWS cloud market share has dropped from ~80% in 2016 to an estimated 63% in 2019, while Azure’s share has climbed from 16% to 28% over that same time period. When looking at Enterprise workloads the race is much tighter; a RightScale 2018 survey shows Azure adoption grew 35% YoY while AWS adoption in this group increased by 15%. But the Azure numbers are worth a closer look.

Microsoft buries its Azure revenues in a much larger pile of “Commercial Cloud” revenues that includes Office 365. So, while Microsoft announced 73% growth in Azure cloud revenue, it is impossible to put a hard dollar figure to that number. Industry experts agree that the lion’s share of Microsoft’s commercial cloud growth comes from Office 365. Therefore, it is safe to assume that the majority of Enterprise workloads running on Azure are O365, which raises the question: “is this real digital transformation?”

In Roxane Googin’s article on July 25, 2019 in the High Technology Observer entitled “Reality v. MSFT: Real versus Fake Digital Transformations”, she concludes that “a true digital transformation is about more than replatforming existing operations. In fact, it does not happen by making ‘personal productivity’ better. Rather, it is about rethinking operations from the ground up from the customer point of view, typically using real-time ‘AI infused’ algorithms to replace annoying, time-consuming and unpredictable manual efforts.”

I’d argue that the shift from PC windows + office to O365 is merely a replatforming exercise to improve productivity. While this move can certainly help businesses reduce expenses by 20 to 30% and drive new revenues, it does not fundamentally alter the way a business operates or interacts with clients. Therefore, perhaps this change should be viewed as Digital Transformation “lipstick”. We do, however, have great examples of Real Digital Transformations; AWS re:Invent was full of transformational testimonials and, at RiskFocus, we are fortunate to be partnering with a number of firms that are also embarking on Real Digital Transformations. I’d like to highlight a couple below.

The first story is about a NY-based genomics company looking to re-invent healthcare. They understand that current healthcare providers use just a tiny portion of information from the human body and little or no environmental data to categorize a patient as either sick or well. They are building predictive patient health solutions leveraging a much richer, deeper and broader set of information. To deliver on this mission they must unleash the power of the cloud; that is the only way they can meet the challenges presented by the scale, sensitivity and complexity of the data and sophistication of their probabilistic testing algorithms. They are not leveraging the cloud to run traditional health-care solutions, but re-inventing what healthcare looks like.

The second use case is an institutional, agency-model broker known for their technology-first approach. They were a FinTech company before the term existed. Sitting on years of institutional data consisting of 100s of petabytes of tick trade data, they are looking to harness the power of this information as a vehicle for changing how they do business. Leveraging a highly performant data lake and building sophisticated AI algorithms, the firm wants to crunch billions of records in seconds to deliver recommendations on trade completion strategies both for their internal consumers and ultimately in an “as a Service” offer. Once again, this is a mission that can only be tackled leveraging the scale and flexibility of the cloud.

Who wins? Do large, multi-national organizations have enough size and staying power that they can afford to take a “lift and shift” approach to the Cloud, replatforming their existing enterprise workloads and then taking a slow methodical approach to transformation? Or is the pressure from upstarts across every industry – the new HealthTechs and FinTechs – going to be so disruptive that the incumbents need to rethink their transformation strategy and partners?

The race is just beginning as, by most estimates, only 10-20% of workloads have moved to public cloud. Over the next two years we will reach a tipping point with more than half of all workloads predicted to be running in public cloud. Microsoft is well-positioned with Enterprises over this timeframe. However, if Amazon continues their pace of delivering innovative, disruptive services and couples that with their increased focus on Enterprise marketing and sales, expect them to retain the throne. One thing is certain, the rate of change will only continue to accelerate, and the winners won’t win by sitting still.

Flagship Application Migration to AWS Leveraging Cloud-Native Technology and DevOps Best-Practices

Flagship Application Migration to AWS Leveraging Cloud-Native Technology and DevOps Best-Practices

The Client

A market-leading fund administrator with a global presence and over $1 trillion in AUM.

Risk Focus delivered excellent technical staff with deep knowledge of and insight into the operations and applications of Amazon Web Services, and strong governance and oversight of projects with effective use of AGILE methodologies. The Risk Focus team was able to quickly understand the complexity of our on-prem environment and apply their experience with other customers to guide our solution. Most importantly, the Risk Focus team created a strong partnership with my team to deliver the project.

Managing Director, Head of Development and Application Support

The Challenge

The Client’s flagship application had grown unwieldy due to a deeply layered architecture, the result of multiple generations of development. The Client was also looking to exit their data centers and wanted to use this as a catalyst to re-architect the application and implement DevOps best practices. After two other PS firms had initially failed, this leading Financial Services provider engaged Risk Focus to take over and complete the migration of their flagship product to AWS.

The Solution

Risk Focus helped the client by:

  • Introducing good development and release processes, increasing velocity and reliability
  • Introducing build and release automation, increasing reliability and repeatability
  • Introducing on-demand environments, decreasing cost
  • Rearchitecting the Flagship app, increasing flexibility

Releasing frequently requires automation, as there is no time for long, error-prone manual processes. Risk Focus specializes in identifying and opening bottlenecks through a combination of technology introduction and process coaching. Prior to the migration, many of the client’s processes were time-intensive, manual, and error-prone. This led to severe contention in areas such as test environments for QA, where new application releases were bottlenecked by the availability of those environments. Additionally, though the implementation was technical, the firm’s base application supports back-office processes for hedge funds. Strong domain knowledge derived from building similar trading and risk systems at some of the largest banks and exchanges in the world guided Risk Focus in the design of the architecture and recovery processes.

For this client, Risk Focus:

  • Introduced CI/CD pipelines for both applications and the infrastructure components such as networking and VPC on which applications depended.
  • Worked with the client to design and implement customer onboarding automation using DynamoDB and Lambda.
  • Used CloudFormation templates and CodePipeline to create On-Demand environments for QA processes, automating the process of creating and destroying test environments as needed.
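The on-demand environment pattern above can be sketched roughly as follows. This is an illustrative outline, not the client’s actual code: the template URL, parameter names, and stack-naming scheme are all assumptions.

```python
# Hedged sketch of on-demand QA environments: one CloudFormation stack
# per throwaway environment, created on request and deleted when testing
# completes so it stops accruing cost.

def qa_stack_params(env_name: str, app_version: str) -> dict:
    """Build the create_stack arguments for a disposable QA environment."""
    return {
        "StackName": f"qa-{env_name}",
        # Hypothetical template location; in practice this would point at
        # the versioned environment template stored in S3.
        "TemplateURL": "https://example-bucket.s3.amazonaws.com/qa-env.yaml",
        "Parameters": [
            {"ParameterKey": "AppVersion", "ParameterValue": app_version},
        ],
        "Capabilities": ["CAPABILITY_IAM"],
    }

def create_qa_env(env_name: str, app_version: str) -> None:
    """Spin up a QA environment and block until it is ready."""
    import boto3  # requires AWS credentials at call time
    cfn = boto3.client("cloudformation")
    cfn.create_stack(**qa_stack_params(env_name, app_version))
    cfn.get_waiter("stack_create_complete").wait(StackName=f"qa-{env_name}")

def destroy_qa_env(env_name: str) -> None:
    """Tear the environment down when QA is finished with it."""
    import boto3
    cfn = boto3.client("cloudformation")
    cfn.delete_stack(StackName=f"qa-{env_name}")
```

Wiring `create_qa_env`/`destroy_qa_env` into pipeline stages is what turns environment contention into a non-issue: environments exist only while a test run needs them.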

The solution relied on CodePipeline, CodeBuild, and CodeDeploy, with code stored in CodeCommit to create the pipelines.

Migrating to the cloud is most effective when combined with application refactoring. One of the key victories of the client’s migration to AWS was an update to the architecture of their application.

Risk Focus assisted the client with:

  • Migrating their core application from a monolithic Java application in WebSphere backed by an Oracle store to a micro-services framework.
  • Designing a complete multi-region DR solution, effectively lowering their exposure to unforeseen regional events.

The services are run in Docker containers on ECS in AWS, with a managed-services backend running on a set of Postgres RDS databases.

The Benefits

Risk Focus’s work has substantially increased the client’s efficiency by introducing automation in numerous areas.

  • The newly introduced pipelines allowed the client to migrate from an arduous build process on a developer’s laptop to continuous integration complete with an automated test framework built into the deployment pipeline.
  • Automating onboarding has replaced a manual, error-prone system that took weeks with an automated system where the heavy lifting was complete in minutes.
  • Automating QA has not only removed the bottleneck in creating test environments, it has allowed for much more efficient resource management and significant cost savings. Combined with automated tests, the weeks-long testing cycle was reduced to days. Risk Focus continues to work with the client to shrink that cycle from days to hours, as the legacy manual validations are increasingly captured in automation.

In the end, the client has achieved their key goals. Their application is onboarding customers, their team is moving forward at a greater velocity, and their lead time to release new features is starting to shrink dramatically. The client continues to move forward aggressively in their adoption of AWS. Risk Focus remains engaged with this client, continuing to provide guidance and focused technical deliverables.

Application Refactoring and DevOps Implementation for Industry Leading Regulatory Reporting Software Solution 

Application Refactoring and DevOps Implementation for Industry Leading Regulatory Reporting Software Solution 

The Client

RegTek.Solutions, now part of Bloomberg, is one of the premier software providers in the Regulatory Reporting space, with 15 of the 20 largest global banks as customers. RegTek.Solutions provides modular software solutions built around actionable regulatory intelligence.

Due to business developments, we needed to quickly evacuate our data center. Risk Focus moved us quickly into the cloud and re-engineered our development and deployment processes, allowing us to become more agile and increase delivery velocity. Moreover, they helped us deliver our first SaaS offering and onboard our first clients quickly and seamlessly. The key differentiator that Risk Focus brought to the table is the unique combination of deep domain knowledge coupled with technical expertise and delivery excellence.

Brian Lynch

CEO, RegTek.Solutions, A Bloomberg company

The Challenge

RegTek’s software was developed and tested inside its own datacenter, with VMs running on a small set of Hyper-V racks. The software was delivered exclusively to run inside its clients’ datacenters as binary artifacts, but RegTek was also asked by clients to offer some of its products as a SaaS offering.

RegTek approached Risk Focus to help it:

  • Create a fully-automated CI/CD pipeline outside of its datacenter that would allow it to provision environments on demand, build the binary artifacts, and run large-scale testing on its suite of products
  • Create the ability for RegTek to produce different deployable binary artifacts, both traditional WARs as well as Docker Containers
  • Create a secure automated ability for it to offer some of its products as a SaaS offering to prospective clients
  • Onboard its first batch of clients onto the newly-built SaaS offering

The Solution

The project consisted of three parts:

  • Constructing CI/CD Pipelines
  • SaaS Architecture Design
  • Client Onboarding

We proposed that RegTek move to AWS, and because its software needs to retain the ability to run in any datacenter, we ensured that the software is cloud portable and not tightly bound to the AWS cloud native offerings. Additionally, because of the sensitivity of the data being reported, all SaaS clients had requested complete isolation from one another.

AWS was used to:

  • Create single-tenant VPCs, with an Oracle RDS instance provisioned for each client, using CloudFormation templates.
  • Deploy some of the RegTek products as SaaS offerings into separate AWS accounts under an Organization leveraging Consolidated Billing.
  • Send logs to CloudWatch. All access to the deployed resources is monitored by CloudTrail.
  • Achieve resiliency by relying on ELB, Multi-AZ deployments and Auto-scaling groups.

The domain knowledge of Risk Focus was especially useful in Client Onboarding, which RegTek had subcontracted fully to Risk Focus.

This involved:

  • Placing Business Analysts at the client sites to identify the necessary feeds and design their delivery process.
  • Performing the required data mappings and enrichment to ensure that the raw trading feeds delivered by the client could be submitted to the DTCC’s SDR (Swaps Data Repository).

Additional 3rd party technology used for this solution included:

  • Jenkins
  • Ansible
  • OneLogin for Authentication and Authorization of client users
  • DataDog for monitoring

The Benefits

Risk Focus’s work has helped RegTek.Solutions operate a highly successful business that features both client data center installations and a SaaS reporting solution.

In particular:

  • The elasticity provided by AWS allows RegTek to develop and test much faster by provisioning and tearing down environments in an automated way.
  • It also allows their clients to keep growing their Financial Services business, while staying compliant and avoiding the hefty fines levied on businesses that do not report in an accurate and timely manner.

This allows RegTek.Solutions to provide a higher value service to their clients.

Deutsche Börse – Leveraging AWS For Rapid Infrastructure Evolution

The Client

Deutsche Börse is the German stock exchange, providing a marketplace for the trading of shares and other securities. It is also a transaction services provider, giving companies and investors access to global capital markets.

Given the extreme time pressure that we were under to deliver a mission-critical platform, together with Risk Focus we decided to use AWS for Development, QA, and UAT, which proved to be the right move, allowing us to hit the ground running. The Risk Focus team created a strong partnership with my team to deliver the project.

Maja Schwob

CIO, Data Services, Deutsche Börse

The Challenge

In 2017, Deutsche Börse needed an APA (Approved Publication Arrangement) developed for its RRH business to support MiFID II regulations, to be fully operational by January 3, 2018. The service provides real-time MiFID II trade reporting to around 3,000 financial services clients. After an open RFP process, Deutsche Börse engaged Risk Focus to build this system, resulting in an on-time, problem-free launch. Deutsche Börse then approached Risk Focus again in May 2018 to expand the system to handle twenty times the message volume, with no increase in latency, delivered onto its on-premises hardware within four months.

The Solution

Though the implementation was technical, Risk Focus was ultimately engaged by the business unit at Deutsche Börse to deliver its service, requiring us to determine and implement both technical and business requirements. The stakeholder group also included the internal client IT team and BaFin (the German financial regulator), as the choice of technology, infrastructure, and cloud provider was decided in tandem with all three groups. Risk Focus's deep domain knowledge in Regulatory Reporting and Financial Services was crucial to understanding the client's need and proposing a viable solution that satisfied all stakeholders. That domain expertise, combined with Risk Focus's technology acumen, allowed delivery of the service under very tight constraints.

The client hardware procurement timelines and costs precluded the option to develop and test on-premises. Instead, Risk Focus developed, tested and certified the needed infrastructure in AWS and applied the resulting topology and tuning recommendations for the onsite infrastructure. Risk Focus:

  • Proposed a radical infrastructure overhaul of the client systems that included the replacement of their existing Qpid bus with Confluent Kafka, involving architecture changes and configuration tuning.
  • Implemented an automated CI/CD system that built both environment and application to find the optimal configuration, allowing developers and testers to create production-scale infrastructure on-demand cost- and time-effectively.

Finding the optimal configuration required executing hundreds of performance tests, with hundreds of millions of messages flowing through a complex, mission-critical infrastructure; this would have been impossible in the few weeks available without the elasticity and repeatability provided by AWS.
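Conceptually, that tuning exercise is a parameter sweep over candidate configurations, with each point requiring a full environment build and load test. The sketch below substitutes a toy latency model for the real load test, and the parameter names are illustrative assumptions, not the actual Kafka settings that were tuned.

```python
from itertools import product

def sweep(grid: dict, run_test) -> tuple:
    """Grid-search candidate configurations, returning (best_config, best_latency).

    `run_test` stands in for provisioning production-scale infrastructure
    and replaying a full message load against it.
    """
    best = None
    for partitions, batch_kb in product(grid["partitions"], grid["batch_kb"]):
        cfg = {"partitions": partitions, "batch_kb": batch_kb}
        latency = run_test(cfg)
        if best is None or latency < best[1]:
            best = (cfg, latency)
    return best

# Toy latency model purely to make the harness runnable: more partitions
# help, larger batches hurt. Real numbers come from real load tests.
fake_load_test = lambda c: 100 / c["partitions"] + c["batch_kb"] * 0.1

best_cfg, best_latency = sweep(
    {"partitions": [8, 16, 32], "batch_kb": [16, 64, 256]},
    fake_load_test,
)
```

With CI/CD able to stand up and tear down production-scale environments on demand, each grid point becomes an automated job rather than a week of manual provisioning, which is what made hundreds of test runs feasible.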

Additional 3rd party technology used for this solution included:

  • Docker Swarm: client chosen Docker orchestration framework
  • Redis: in-memory cache
  • Confluent Kafka: scalable replay log
  • TICK: monitoring framework
  • Graylog: log aggregation
  • Jenkins: CI/CD pipeline

The Benefits

The system was delivered to the client data center on time, within a startlingly short timeframe. Risk Focus worked with the client so that their internal IT departments could take over the delivered solution, allowing Risk Focus to disengage from the process. All test environments and automation were made available to the client, allowing them to further tune and evolve the system.

The ability for the client to continue developing and experimenting in AWS enables them to:

  • Make precise hardware purchasing decisions as volume demands change
  • Maintain an environment for further development to adapt to new regulatory requirements.

Risk Focus’s work provides a pathway to public cloud migration once that path is greenlighted by regulators.

CFRA-AWS Equity Publishing Platform Case Study

The Client

CFRA Research is one of the best-known Financial Research companies and offers global coverage of individual stocks, ETFs and Mutual Funds to its retail and institutional clients.  The research that CFRA provides is based on quantitative and qualitative processing of large amounts of financial information on companies and their financial instruments.

Risk Focus understood our business and had the right technical expertise to design and execute a modern serverless architecture. This allowed us to increase agility, improve resiliency, and cut costs at the same time. They were able to engineer and rebuild the core of our platform in timeframes so compressed that we did not believe they were possible. Risk Focus replaced a low-cost outsourcing firm but proved to be much better value for money due to their expertise, productivity, and ownership of delivery. They were committed to our success and were obsessed with automation, security, and the total cost of ownership of the deliverable.

More impressively, they worked as true partners to our organization and helped us build out our teams, processes and IT infrastructure.

Eram Schlegel

CTO, CFRA Research

The Challenge

In 2016, CFRA acquired S&P Global’s Equity and Fund Research business including the core systems that were supporting it.  The systems were moved to AWS in a classic Lift & Shift approach.  In May of 2019, CFRA engaged Risk Focus to help them rationalize their whole architecture, replace certain raw data sources, integrate a new publishing platform from Eidos Media and create a new API layer to facilitate modern data access. Everything had to go live by October with no disruption to existing clients. 

The Solution

Our development team and business analysts worked with CFRA to map the use cases onto a logical architecture. This was then converted into a physical architecture, and we created mocks for the various service implementations. The API and service design, the ingestion pipelines, and the database schemas all required an intimate understanding of the business domain. Given the very tight timeframes, Risk Focus approached the challenge with a solution that allowed the teams to work in parallel. Risk Focus:

  • Partitioned the system into microservices
  • Agreed on the interfaces between them
  • Aligned the team structure to the microservices
  • Rolled out a simple CI/CD solution based on Jenkins, allowing teams and developers to quickly release their pieces without breaking anyone else's

The diagram below reflects the high-level architecture.

For each of the components, Risk Focus looked at the most applicable AWS offerings. The system demands of CFRA's business are very spiky: multi-hour flat lines with big surges at various times.

This is a great use-case for many of AWS’s serverless services. Ultimately, Risk Focus used:

  • AWS Glue for the ETL service and AWS Lambda for most processing tasks
  • AWS API Gateway and Cognito for both the internal and client-facing APIs
  • Aurora for the main RDBMS, as it was both very performant and cost effective.
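As a rough illustration of the API Gateway + Lambda pattern, a handler for a proxy integration might look like the sketch below. The route, payload shape, and stubbed report data are assumptions for illustration, not CFRA's actual API; in the real service the handler would query Aurora, and Cognito would attach the authenticated caller's claims to the incoming event.

```python
import json

def handler(event, context=None):
    """Minimal Lambda handler for an API Gateway proxy integration.

    Illustrative only: the /{ticker} route and response fields are
    invented, and the database lookup is stubbed out.
    """
    # API Gateway proxy integrations deliver path parameters in the event.
    ticker = (event.get("pathParameters") or {}).get("ticker")
    if not ticker:
        return {"statusCode": 400,
                "body": json.dumps({"error": "ticker required"})}

    # Stub: the real implementation would fetch the report from Aurora.
    report = {"ticker": ticker.upper(), "rating": "hold"}
    return {"statusCode": 200, "body": json.dumps(report)}

# Simulate an API Gateway invocation locally.
resp = handler({"pathParameters": {"ticker": "ivz"}})
```

Because Lambda bills per invocation and scales to zero, this pattern maps naturally onto the spiky, flat-line-then-surge load profile described above.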

Additional AWS technology used for this solution included:

  • Glue
  • IAM
  • SQS
  • SSO
  • Elastic Beanstalk
  • CloudWatch

Additional 3rd party technology used for this solution included:

  • Jenkins
  • Terraform
  • Ansible

The Benefits

CFRA was able to go live within a very compressed timeframe with a new architecture and software stack that

  • Allows them to run with much lower data and software licensing fees
  • Resulted in over 90% cost reduction on average
  • Increased the resilience of the solution by leveraging various AWS serverless offerings such as AWS Glue, Lambda, and Elastic Beanstalk

The delivery encompassed not just the end software, but also the necessary tooling and processes that enable CFRA to have a fast, automated, predictable Software Delivery Lifecycle.  In particular:

  • The CI/CD pipeline encompasses both the application and the underlying infrastructure
  • New environments can be provisioned on-demand with a partial or full stack of the software in minutes
  • Production releases can be made daily instead of quarterly
  • Fully automated pipelines ensure a consistent and repeatable security posture that has tightly controlled role access and secrets which are centrally managed and rotated

Risk Focus’s work has allowed CFRA to grow their analysis and distribution platform with increased agility and improved predictability.