Featured Insight

Equities Trading & The Cloud:
A Webinar Series

About the Program

Equities Trading & The Cloud is a three-part virtual event series exploring Instinet’s Digital Transformation with Risk Focus and the AWS Cloud.

The Webinars

Part 1

Pre-Migration Strategy Formation

Part 2

Coming Soon

Part 3

Coming Soon


    About the Content

    The cloud offers businesses the ability to unlock, amplify, and integrate enterprise-scale data like never before. Cloud capabilities like machine learning (ML) and artificial intelligence (AI) now allow businesses to apply real-time analytics, accelerate decision-making, and enhance the execution of business processes.

    The fast pace of Equities Trading offers an ideal context to leverage advanced cloud technology. The question becomes: How do institutional investors use the cloud to add value to each trade without introducing unwanted latency, security and compliance issues, or systemic risk?

    Panelists from Risk Focus, Instinet, and AWS draw insight from Instinet’s cloud journey, offering a candid discussion about the challenges, opportunities, and emerging best practices around taking Equities Trading to the cloud.

    Discussion Topics

    • How do you prepare and manage the disparate teams that are involved—both operationally and culturally—and what does it mean to form a “cloud culture”?
    • What are some common approaches to moving massive amounts of on-prem historical data, and how are security and compliance being addressed in the process? 
    • What are some of the key benefits to moving data off premises?
    • Is a cloud strategy critical to the development of ML and AI? How and why?
    • What is the best way to formulate a strategy for identifying the right goals and KPIs?
    • What are the pre-, during, and post-migration steps for Quant and Execution teams, the technology organization, product management, and operations?
    • How should you break the cloud journey down into achievable, incremental steps over a migration time horizon?
    • How do you create a strategy for applying the cloud to new, advanced offerings and services—including ML and AI capabilities? How can you be sure you’re building effective solutions to real problems, versus “solutions in search of problems”?

    Moderator

    Irene Aldridge

    Adjunct Professor,
    Big Data & AI at Cornell University
    and Managing Director, Research

    AbleMarkets

    Panelists

    Minor Huffman

    Chief Technology Officer

    Instinet Incorporated

    Peter Meulbroek

    Partner, Head of DevOps and Cloud Solutions

    Risk Focus

    Peter Williams

    Global Head of Financial Services,
    Partner Technology

    Amazon Web Services

    Learn More

    Risk Focus is a consultancy solving capital-markets business problems with technology and insight. We combine business domain knowledge, technology expertise, and a disciplined process to ensure the success of the most challenging projects in the industry. Many of the largest exchanges and investment banks operate on systems built by Risk Focus teams. Our practices include Custom Application Development, Regulatory Reporting & Compliance, DevOps & Cloud, Streaming Architectures, and IT Strategy. We’re a Premier Confluent Systems Integrator and an AWS Advanced Consulting Partner with Financial Services, Migration, and DevOps Competencies. Clients count on us to provide outcomes that advance their objectives on time and on budget.

    Featured Insight

    Refactoring for Cloud:
    No Magic, Hard Work, but Less Risk 

    Author

    Vassil Avramov
    Founder & CEO
    Vassil.Avramov@RiskFocus.com

    Introduction

    One of the common challenges that companies face when moving legacy applications to the cloud is that a simple “lift-and-shift” frequently isn’t an option. The reason why some applications can’t be migrated without modification could be that a workload assumes a specific physical infrastructure that isn’t replicable in a cloud environment. More often, a lift-and-shift doesn’t make sense because the application can’t take advantage of elastic compute or cloud-native offerings. As a result, institutions frequently face the daunting task of a full refactoring of their critical business applications into microservices. While doing so may be beneficial once realized, it can be risky and is almost always expensive. 

    At Risk Focus, we’re proud of our perfect track record of rebuilding and replatforming applications for the cloud. There’s no secret sauce to our success with migrating legacy applications to the cloud. Instead, we base our practice on the following pillars:

    Focused Approach
    Domain Knowledge 
    Technical Expertise 
    Iterative Process 
    Cost Efficiency

    Focused Approach 

    • When moving legacy applications, a common practice is to replicate the functionality of the existing system exactly. Doing so may be unnecessary: applications that are 10+ years old frequently aren’t built to serve the current needs of a business. Instead of trying to force a square peg in a round hole, we might be better off focusing on what the application should be doing rather than on what it’s currently doing. 

    Domain Knowledge 

    • Technology excellence is required, but not enough, when refactoring functionally rich applications. Our senior staff has decades of financial services domain expertise, and we lean heavily on our business analysts throughout the process, beginning with planning and design. 
• Refactoring into microservices requires that the boundaries of services are appropriately defined. These definitions aren’t engineering constructs but are determined at an organizational and business-function level. Domain-driven design (DDD) is an approach that can help define these boundaries, but to be effective, it’s crucial that the domain be well understood.

    Technical Expertise 

• Over the years, we’ve built critical trading, risk-management, and reporting systems for many of the largest banks, hedge funds, and industry utilities. We’re also one of a handful of AWS Consulting Partners who hold the AWS Financial Services Competency. We have more AWS Certifications than we have staff in the company (counting staff inclusive of Operations and Sales). This level of certification ensures that everyone on a project, even our business analysts, has rigorous technical training.
• Traditionally, legacy systems rely on a “shared state” (e.g., using the same database) as an integration point. One of the patterns facilitating an application’s breakup into microservices is a move toward “shared flow” (e.g., using a replayable commit log such as Kafka), as sketched below. This approach allows us to more easily develop components in parallel, scale them, and move some into the cloud while leaving others in a data center as needed.
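
To make the “shared flow” idea concrete, here is a minimal sketch in Python using the confluent-kafka client. It is illustrative only and not taken from the Instinet or Risk Focus implementation: the topic name, consumer group, and field names are assumptions. The point is simply that the producing component publishes state changes to an append-only log, and each consuming component rebuilds its own local view from that stream instead of every component reading and writing one shared database.

import json
from confluent_kafka import Producer, Consumer

producer = Producer({"bootstrap.servers": "localhost:9092"})

def publish_trade_event(trade_id, event):
    # Publish a trade state change instead of updating a shared table.
    producer.produce("trade-events", key=trade_id, value=json.dumps(event))
    producer.flush()

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "risk-view-builder",   # each component keeps its own view
    "auto.offset.reset": "earliest",   # new components can replay history
})
consumer.subscribe(["trade-events"])

def update_local_view(local_store):
    # Consume the flow and maintain a component-local projection.
    msg = consumer.poll(timeout=1.0)
    if msg is not None and msg.error() is None:
        local_store[msg.key().decode()] = json.loads(msg.value())

Because every consumer can replay the log, individual components can be rebuilt, scaled, or moved to the cloud independently, which is what makes this pattern useful during an incremental migration.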

    Iterative Process 

    • Whenever we start on a big re-architecture or refactoring project, our first milestone is to provide a quick working Proof of Value: Will the suggested technology and architecture meet the business and technical requirements? The only way to be certain is to get real data. 
    • We follow a disciplined, iterative process that has regular demos of progress at the end of each sprint. Even if an application can’t go into production at the end of each sprint, the regular demos of functionality and integration points allow for incremental testing. This removes nasty surprises around go-live, provides early feedback, and enables robust test automation. 

    Cost Efficiency

    • We work very closely with AWS and can leverage several AWS funding programs. Our AWS partnership has allowed us to cut our clients’ migration costs by up to half. 


    Featured Insight

    Are You Thinking Beyond CAT?
    A Practical Guide to Implementation Strategy

    The top 10 things firms should consider as they near initial go-live milestones.

    Author

    Alex Rabaev
    Head of Regulatory Reporting Practice
    Alex.Rabaev@RiskFocus.com

It seems that the industry has resigned itself to the fact that CAT going live is no longer a matter of ‘IF’ but ‘WHEN’. FINRA’s ability to work through thorny issues and keep up with deliverables and promises to date has been proving the naysayers wrong. The general view is that Phase 2a will most definitely happen on time.

While the industry at large is working very hard to achieve successful testing and the April 2020 go-live of Phase 2a, this article aims to give firms time to pause and consider other critical items. The list is not exhaustive, and there is no intention to cover the obvious challenges, e.g. Phases 2c/2d, Legal Entities and FDID, linkages, representative orders, customer data, error corrections, enhanced surveillance, etc. Those are entirely different topics and would require their own focus. Regardless of your CAT solution (internal, vendor, etc.), the aim is to provide practical considerations that will yield significant benefit to your organization and make your CAT implementation more accurate, meaningful, and sustainable.

Lastly, 2020 is shaping up as one of the most challenging years for regulatory reporting implementation. The primary driver is that, from April through December, the CAT community will experience eight independent go-live dates. To add to the frenzy, there will be multiple important milestones for testing, new tech spec releases, etc. Further, each new go-live will introduce significant challenges and should be treated as an independent initiative. For actual dates, refer to the FINRA CAT Timelines.

    Readiness Assessment

The modification of the CAT implementation timeline from a ‘big bang’ to a phased go-live has been of tremendous benefit to the industry; according to some experts, CAT would not have been anywhere near as far along if not for this change. There is a tremendous opportunity for the industry to avoid the typical costly and draining ‘remediation process’. With CAT, there is a unique opportunity to take a pulse check very early on, and as you progress through the phases, by conducting an independent ‘Health Check’. This will yield very important output: it informs the soundness of the current implementation, influences future controls, informs upcoming phases, and makes overall change management much more cohesive.

    Practical Recommendations:
Engage with internal stakeholders and/or external resources to assess and validate various aspects of the implementation. Some examples include: (a) ensuring the rule interpretation is complete and signed off (b) requirements are consistent and traceable to the rule (c) data sourcing is documented appropriately (d) the RAID log is complete and closed, among various other points.

    Expected Outcome:
You will identify gaps and potential issues very early in the process. The ability to prioritize ‘known’ issues, and to have the list available for external/internal audit or other interested parties, will prove invaluable.

    BAU Transition

Due to the multiple go-live dates, the transition to BAU is not a trivial or typical exercise as it relates to CAT. The resources working on the immediate implementation will likely have to continue to roll out future phases. The strategy will be unique to each firm’s size, location, etc. Note: it’s not obvious at first glance, but as pointed out above, there are eight expected go-live production dates for CAT in 2020 alone; BAU should be designed to scale accordingly.

    Practical Recommendations:
To get you started, some low-hanging fruit: (a) Ensure that you have an ongoing process and plan for knowledge transfer; don’t leave critical knowledge about decisions, internal limitations, etc. only with the implementation team (b) create relevant content on a Confluence page, SharePoint, or in procedures, to share easily with the appropriate team members (c) keep documentation such as training materials and escalation procedures clearly mapped and updated (d) design a process that fits your company and business, e.g. regional ownership vs. follow-the-sun (e) lastly, and one of the most critical components, perform due diligence on initial headcount requirements to ensure your team can cope with the workload and not generate a backlog.

    Expected Outcome:
This effort will yield much fruit. For starters, your firm will be ready to focus on exceptions, errors, and escalations. You will be able to scale as the scope grows, because you will have all the necessary components in place, and you will be able to withstand queries from senior stakeholders and interested teams (auditors, compliance, etc.). Lastly, you will not be relying on any single ‘go-to’ person to keep the shop open.

    Controls

Controls are the fabric that gives senior management, auditors, and regulators some level of comfort about accuracy, timeliness, and completeness in regulatory reporting. Unfortunately, controls are typically built in hindsight, after a major flaw is uncovered or an audit points out a specific weakness. Although at times necessary, building controls on the back of an incident is far from ideal. Firms should consider their implementation and its assumptions and build solid controls unique to that implementation, the ‘new business’ process, and their risk tolerance. Consider using independent tools to conduct some controls; doing so can help your firm establish credibility, in addition to benefiting from a ‘crowd-sourced’ approach to controls and thereby avoiding a siloed viewpoint.

    Practical Recommendations:
This section largely depends on the size of your organization but will likely be relevant to all in one way or another. Start by (a) defining a control framework (b) looking at existing controls already in place for other regulatory reporting obligations (c) involving impacted teams to generate critical controls (d) itemizing your list of controls from ‘critical’ to ‘important’ to ‘nice to have’ (sample buckets), as this will help you define your strategy (e) thinking about the timing of controls (i.e. pre- vs. post-reporting) (f) ensuring that each control is owned by the correct actor; a control without an appropriate owner is not only useless but can cause pain points down the road (e.g. why do you have a control that no one is looking at?).

    Expected Outcome:
Controls will be designed based on a proactive, thoughtful approach rather than a reactive one.

    Service Level Agreements (SLAs)

One of the hot-button topics for the industry is the ‘error correction cycle’ and its impact on ‘exception management’. Essentially, firms will have 1 ½ days to correct errors (the T+3 correction requirement is measured from Trade Date, and FINRA will provide broker-dealers with errors by 12pm the next day). Putting SLAs in place with the key players in the process to manage error corrections within that 1 ½ days is a very worthwhile consideration.

    Practical Recommendations:
(a) Identify the various actors in your business process flow (b) further identify who needs to be involved in resolving issues (e.g. the Reg Reporting IT team, Trade Capture group, Front Office, etc.) (c) link error types to the relevant users (d) generate a proposal of expected actions and timelines (e) negotiate the final SLAs (f) create an escalation process for all impacted teams for instances where SLAs are not adhered to or bottlenecks are created.

    Expected Outcome:
BAU teams will be able to manage exceptions and errors successfully and will have a solid plan for dealing with anomalies.

    Traceability

With the passing of time and the natural attrition of the SMEs working on the implementation, knowing the ‘why’, ‘how’, ‘who’, and ‘when’ of your program will be critical. It is inevitable that assumptions are made, rule interpretations specific to a business line are penned, and bespoke code is written to deal with unique problems; all of these are important components of your program. It may be obvious now why something was done or implemented a certain way; that will NOT be the case with the passing of time. Ensuring that you have clear traceability, evidence of sign-off, and approval of critical decisions will not only shield your work and withstand the test of time, it will make the lives of the people who own the process after you that much easier. Although the benefit will not show up for a very long time, eventually your due diligence will pay off and earn your work a solid reputation. This section is closely correlated with your data strategy, storage, and lineage.

    Practical Recommendations:
(a) Define a strategy for traceability (b) ensure consistent tooling is used to capture and highlight traceability (c) avoid any black-box solutions; make as much as possible transparent to all relevant users (d) have a framework for why items trace to each other, e.g. a Regulatory Rule to a specific Rule Interpretation to a specific User Story, or a Reportable Attribute to a Stored Data Attribute to a System-Generated Attribute (see the sketch at the end of this section).

    Expected Outcome:
By following a consistent, pre-agreed approach, your implementation will be easy to validate, change, and maintain.
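
As an illustration of recommendation (d) above, traceability is easiest to preserve when it is captured as structured data rather than prose. The sketch below is hypothetical; every identifier and field name is invented for illustration, and real programs will use whatever requirements-management tooling they already have. The point is that each reportable attribute can be traced back through a user story and a rule interpretation to the regulatory rule, with sign-off evidence attached.

from dataclasses import dataclass

@dataclass
class TraceabilityRecord:
    regulatory_rule: str        # e.g. a CAT NMS Plan section reference
    rule_interpretation: str    # internal interpretation document ID
    user_story: str             # backlog item implementing the interpretation
    reportable_attribute: str   # attribute in the CAT submission
    stored_data_attribute: str  # where the value lives internally
    signed_off_by: str          # evidence of approval
    signed_off_on: str          # date of sign-off

example = TraceabilityRecord(
    regulatory_rule="CAT NMS Plan section reference (illustrative)",
    rule_interpretation="INT-0042",
    user_story="CAT-1234",
    reportable_attribute="orderKeyDate",
    stored_data_attribute="oms.orders.created_ts",
    signed_off_by="Regulatory Reporting Lead",
    signed_off_on="2020-01-15",
)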

    Surveillance

Most firms are focused on hitting the expected go-live dates and are having a hard time keeping up with requirements, changes to the tech specs, internal implementations, etc. Who has the time or presence of mind to think about what happens two or three years from now? The answer: not many can afford that luxury. However, thinking about the implications of regulators starting to leverage CAT data for surveillance purposes is of paramount importance. If you look back at how CAT was born, the initial whitepaper and legislation contain what feels like an infinite number of references to how regulators intend to utilize the data to improve surveillance, and they give examples of what they are currently unable to do versus what they intend to do with the new data points (e.g. new products, PII information, and new events). It’s a worthwhile exercise to plan and to speak with your technology and compliance teams, giving your firm an opportunity to be at the forefront of the initiative rather than being caught off guard.

    Practical Recommendations:
(a) Review the initial whitepapers and public comments made prior to the NMS Plan being approved, as well as other public sources that speak to how the information is intended to be utilized (b) make a list of new surveillance practices or limitations and determine whether they impact your firm or business lines (c) work with your business partners to determine whether there are any requirements to improve internal surveillance capability or functionality.

    Expected Outcome:
You will have foresight into how the regulators will leverage the new data and can ensure that it doesn’t have an adverse impact on your business.

    Data Lineage & Governance

Although data governance is distinct from lineage, the two are very much correlated. Therefore, as you go through the implementation process, it’s important that the way your data is stored, transferred, and shared is fit for purpose.

    Practical Recommendations:
(a) Define your data strategy (b) ensure that procedures are in place to govern data points that may impact your reporting obligations, with a proper escalation process (c) ensure that consistency and known dependencies for data points and their usage are highlighted (e.g. ‘Sell’ is represented as ‘S’ across all systems; see the sketch at the end of this section).

    Expected Outcome:
Your data will be stored, transferred, and shared in a way that is fit for purpose, and the data points that drive your reporting obligations will be governed consistently across systems.
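
As a small illustration of recommendation (c) above, a canonical mapping table is one simple way to make cross-system consistency explicit. The system names and codes below are invented for illustration; the idea is only that every source system’s representation of a value such as ‘Sell’ is translated to one canonical form before it reaches the reporting layer, and anything unmapped is surfaced rather than silently defaulted.

CANONICAL_SIDE = {
    ("OMS_A", "2"): "S",       # FIX-style sell code
    ("OMS_B", "SELL"): "S",
    ("LEGACY", "SL"): "S",
    ("OMS_A", "1"): "B",
    ("OMS_B", "BUY"): "B",
    ("LEGACY", "BY"): "B",
}

def canonical_side(system, raw_side):
    # Translate a source-system side code to the canonical reporting value.
    try:
        return CANONICAL_SIDE[(system, str(raw_side).upper())]
    except KeyError:
        # Unknown combinations should be escalated, not silently defaulted.
        raise ValueError("Unmapped side code %r from system %r" % (raw_side, system))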

    Change Management

CAT is not a small or short program; it has multiple phases and stretches out over several years (see the FINRA CAT Timelines referenced above). It’s important that you design the program appropriately and plan for changes in personnel, new business, etc.

    Practical Recommendations:
(a) Maintain a Gantt chart that spans the entire program and accounts for all the phases, even though there are multiple dependencies (e.g. we don’t yet have Technical Specifications for all the phases) (b) Consolidate ownership as much as possible. A single vision and execution will yield stronger reporting results and reduce re-work. Naturally, there will be workstreams unrelated to each other, e.g. FDID, Options business units, Equity business units, etc.; however, they will nevertheless have multiple dependencies and contingencies on each other, not to mention potential for re-use. A single point of ownership to run and execute the program will yield continuous benefit.

    Expected Outcome:
Consistent implementation across business units, and an easier path to remediate and manage changes or strategy shifts for future phases.

Personally Identifiable Information (PII)

The CAT requirement to send PII data for all relevant accounts is a very sizable challenge. Although the notion of FDID was made much more reasonable when FINRA introduced the concept of ‘Relationship ID’, I still want to caution you that FDID and the PII associated with each trading account, investment advisor, beneficiary, etc. will not be trivial to solve, even for broker-dealers with sophisticated reference data governance and strategy. There is still an opportunity for firms to review and improve their client and associated reference data.

    Practical Recommendations:
(a) Identify the groups accountable for client data (e.g. client onboarding, the reference data team, etc.) (b) capture the various processes and use cases and determine the implications for CAT (c) ensure you have access to the key PII for all use cases (this will be an especially important point for Wealth Management related accounts, e.g. custody accounts for minors).

    Expected Outcome:
Either have a clear way to identify and tag the appropriate PII data for CAT reporting, or understand the known gaps so they can be articulated to senior management and regulators as appropriate. Knowing the gaps, managing the talking points, and having a solution your firm is working towards can make a meaningful difference in risk assessment and regulatory review.

    Cyber Security

Last but certainly not least, cyber security plays a paramount role in the story of CAT. In the majority of instances, the reporting data and personal information are already being provided and are accessible to the regulators. So the question is: why has ‘security’ been such a focal point when speaking about CAT? There are numerous answers and considerations. One that’s top of mind is the fact that all previous reporting to date has been done in silos, e.g. OATS provides trade activity, while Electronic Blue Sheets provide the ‘actor’ (there are multiple other examples). With CAT, for the first time, firms will be associating the ‘WHAT’ with the ‘WHO’ on each order. That is a very significant change in how reporting is done today, and firms, as well as other interested parties, should rightfully be concerned about the security of this data and the stability of financial markets.

    Practical Recommendations:
(a) Involve the experts! Identify the right people within your firm who own and understand cyber security, so that they can appropriately evaluate the tools being proposed by the industry and their various implications. Don’t confuse ‘Technology’ experts with ‘Cyber Security’ experts; the two are often not the same.

    Expected Outcome:
    With proper input from internal experts, your firm will have a stronger appreciation of the risks and tools associated with reporting PII and trading activity and take appropriate precautions or advocate for alternative tools / solutions.

    Summary

All in all, as with any other complicated topic, there are multiple other items that firms should be thinking about now. The ones covered above seem the most practical to tackle at this stage, but you should NOT stop here! Use this as an opportunity to have an internal discussion and create the critical list of items your firm should be focusing on. Wishing you a successful go-live and an overall smooth implementation program.


    Featured Insight

    Consolidated Audit Trail (CAT) is Live. What’s Next?

    Author

    Alex Rabaev
    Head of Regulatory Reporting Practice
    Alex.Rabaev@RiskFocus.com

    Compliance beyond Phase 2a/b – Top 5 things firms should be considering as they near initial go-live milestones.

It seems that the industry has resigned itself to the fact that CAT going live is no longer a matter of ‘IF’ but ‘WHEN’. FINRA’s ability to work through thorny issues and keep up with deliverables and promises to date has been proving the naysayers wrong. The general view is that Phase 2a will most definitely happen on time.

While the industry at large is working very hard to achieve successful testing and the April 2020 go-live of Phase 2a, this article aims to give firms time to pause and consider other critical items. The list is not exhaustive, and there is no intention to cover the obvious challenges, e.g. Phases 2c/2d, Legal Entities and FDID, linkages, representative orders, customer data, error corrections, etc. Those are entirely different topics and would require their own focus.

Regardless of your CAT solution (internal, vendor, etc.), the aim is to provide practical considerations that will yield significant benefit and make your CAT implementation more accurate, meaningful, and sustainable.

    Readiness Assessment

The modification of the CAT implementation from a ‘big bang’ to a phased go-live has been of tremendous benefit to the industry; according to some experts, CAT would not have been anywhere near as far along if not for this change. There is a tremendous opportunity for the industry to avoid the typical costly and draining ‘remediation process’.

With CAT, there is a unique opportunity to take a pulse check very early on, and as you progress through the phases, by conducting an independent ‘Health Check’. This will yield very important output: it informs the soundness of the current implementation, influences future controls, informs upcoming phases, and makes overall change management much more cohesive.

    BAU Transition

Due to the multiple go-live dates, the transition to BAU is not a trivial or typical exercise as it relates to CAT. The resources working on the immediate implementation will likely have to continue to roll out future phases. The strategy will be unique to each firm’s size, location, etc.

To get you started, some low-hanging fruit:

    • Knowledge transfer, documentation
    • Training materials
    • Regional ownership vs. follow the sun
    • Initial headcount requirements
    • Ways to scale as the scope grows

    Controls

Controls are the fabric that gives senior management, auditors, and regulators some level of comfort about accuracy, timeliness, and completeness in regulatory reporting. Unfortunately, controls are typically built in hindsight, after a major flaw is uncovered or an audit points out a specific weakness. Although at times necessary, building controls on the back of an incident is far from ideal. Firms should build solid controls unique to their implementation, ‘new business’ process, and risk tolerance. Consider using independent tools to conduct some controls; doing so can help your firm establish credibility, in addition to benefiting from a ‘crowd-sourced’ approach to controls and thereby avoiding a siloed viewpoint.

    Service Level Agreements (SLAs)

One of the hot-button topics for the industry is the ‘error correction cycle’ and its impact on ‘exception management’. Essentially, firms will have 1 ½ days to correct errors (the T+3 correction requirement is measured from Trade Date, and FINRA will provide broker-dealers with errors by 12pm the next day). Drafting and finalizing SLAs with the key players in the process (e.g. Middle Office, Trade Capture, the Technology team, etc.) to make the changes needed to facilitate a reasonable exception management and error correction process is a very worthwhile exercise.

    Traceability

With the passing of time and the natural attrition of the SMEs working on the implementation, knowing the ‘why’, ‘how’, ‘who’, and ‘when’ of your program will be critical. It is inevitable that assumptions are made, rule interpretations specific to a business line are penned, and bespoke code is developed to deal with unique problems. It may be obvious now why something was done or implemented a certain way; that will NOT be the case with the passing of time. Ensuring that you have clear traceability, evidence of sign-off, and approval of critical decisions will not only shield your work and withstand the test of time, it will make the lives of the people who own the process after you that much easier. Although the benefit will not show up for a very long time, eventually your due diligence will pay off and earn your work a solid reputation.

All in all, as with any other complicated topic, there are multiple other items that firms should be thinking about now, e.g. the impact on surveillance, data lineage and governance, change management, etc. The five covered above seem the most practical to tackle at this stage, but you should NOT stop here! Wishing you a smooth implementation and a successful go-live!


    Featured Insight

    Digital Lipstick?

    Author

    Cary Dym
    Head of Business Development, DevOps
    Cary.Dym@RiskFocus.com

Having had a few days to clear my head after a full week at my first AWS re:Invent in Las Vegas, I got to thinking about how to make sense of the announcements, customer testimonials, conversations, and sheer magnitude of the event. Was there a common thread tying together the hundreds of sessions spread across multiple casinos, 65,000 visitors from around the world, and a fantastic exhibitor hall (including a really cool analytics demo from TIBCO via a stationary-bike time trial)? Everyone is trying to capture and define the elusive concept of Digital Transformation in order to RE-INVENT their business, their technology, and/or themselves. This raises the questions: what does REAL DIGITAL TRANSFORMATION look like, who will be the ultimate winners, and who will go the way of the Dodo bird?

Despite the noise about Microsoft Azure nipping at its heels, AWS is still the undisputed King of the Cloud. Attendance at re:Invent was up around 10-15% YoY, a 10x increase since the first show seven years ago. Amazon announced 77 new products and services, 20 of them around Machine Learning alone (no surprise, since this has been a steady drumbeat from AWS over the past year). We also heard compelling stories from their Enterprise clients, and I was glad to see Amazon moving to make AWS a more enterprise-friendly platform with new products like Control Tower.

    A high point of the week was a dinner hosted by Tim Horan from Oppenheimer, where we discussed Cloud, Digital Transformation and the impacts of politics with industry experts. A key topic of conversation was what to make of Microsoft Azure’s gain in cloud market share over the past few years. AWS cloud market share has dropped from ~80% in 2016 to an estimated 63% in 2019, while Azure’s share has climbed from 16% to 28% over that same time period. When looking at Enterprise workloads the race is much tighter; a RightScale 2018 survey shows Azure adoption grew 35% YoY while AWS adoption in this group increased by 15%. But the Azure numbers are worth a closer look. Microsoft buries its Azure revenues in a much larger pile of “Commercial Cloud” revenues that include Office 365. So, while Microsoft announced a 73% growth in Azure cloud revenue, it’s impossible to put a hard dollar to that number. Industry experts are in agreement that the lion’s share of Microsoft’s commercial cloud growth comes from Office 365. Therefore, it’s safe to assume that the majority of Enterprise workloads running on Azure are O365 which begs the question, “is this real digital transformation?”

    In Roxane Googin’s article on July 25, 2019 in the High Technology Observer entitled “Reality v. MSFT: Real versus Fake Digital Transformations”, she concludes that “a true digital transformation is about more than replatforming existing operations. In fact, it does not happen by making ‘personal productivity’ better. Rather, it is about rethinking operations from the ground up from the customer point of view, typically using real-time ‘AI infused’ algorithms to replace annoying, time-consuming and unpredictable manual efforts.”

I’d argue that the shift from PC Windows + Office to O365 is merely a replatforming exercise to improve productivity. While this move can certainly help businesses reduce expenses by 20 to 30% and drive new revenues, it does not fundamentally alter the way a business operates or interacts with clients. Therefore, perhaps this change should be viewed as Digital Transformation “lipstick”. We do, however, have great examples of Real Digital Transformations; AWS re:Invent was full of transformational testimonials and, at Risk Focus, we are fortunate to be partnering with a number of firms that are also embarking on Real Digital Transformations. I’d like to highlight a couple below.

The first story is about a NY-based genomics company looking to re-invent healthcare. They understand that current healthcare providers use just a tiny portion of the information available from the human body and little or no environmental data to categorize a patient as either sick or well. They are building predictive patient-health solutions that leverage a much richer, deeper, and broader set of information. To deliver on this mission they must unleash the power of the cloud; that is the only way they can meet the challenges presented by the scale, sensitivity, and complexity of the data and the sophistication of their probabilistic testing algorithms. They are not leveraging the cloud to run traditional healthcare solutions; they are re-inventing what healthcare looks like.

    The second use case is an institutional, agency-model broker known for their technology-first approach. They were a FinTech company before the term existed. Sitting on years of institutional data consisting of 100s of petabytes of tick trade data, they are looking to harness the power of this information as a vehicle for changing how they do business. Leveraging a highly performant data lake and building sophisticated AI algorithms, the firm wants to crunch billions of records in seconds to deliver recommendations on trade completion strategies both for their internal consumers and ultimately in an “as a Service” offer. Once again, this is a mission that can only be tackled leveraging the scale and flexibility of the cloud.

    Who wins? Do large, multi-national organizations have enough size and staying power that they can afford to take a “lift and shift” approach to the Cloud, replatforming their existing enterprise workloads and then taking a slow methodical approach to transformation? Or is the pressure from upstarts across every industry – the new HealthTechs and FinTechs – going to be so disruptive that the incumbents need to rethink their transformation strategy and partners?

    The race is just beginning as, by most estimates, only 10-20% of workloads have moved to public cloud. Over the next two years we will reach a tipping point with more than half of all workloads predicted to be running in public cloud. Microsoft is well-positioned with Enterprises over this timeframe. However, if Amazon continues their pace of delivering innovative, disruptive services and couples that with their increased focus on Enterprise marketing and sales, expect them to retain the throne. One thing is certain, the rate of change will only continue to accelerate, and the winners won’t win by sitting still.


    Featured Insight

    Using Salt and Vagrant for Rapid Development

    Author

    Peter Meulbroek
    Head of DevOps & Cloud
    Peter.Meulbroek@RiskFocus.com

    One of our main jobs at Risk Focus is to work closely with our clients to integrate complex tools into their environments, and we use a plethora of technologies to achieve our clients’ goals. We are constantly learning, adopting, and mastering new applications and solutions, and we find ourselves constantly creating demos and proofs of concept (PoCs) to demonstrate new configurations, methods, and tools.

    We use a variety of applications in our deliverables but often rely on Salt for the foundation of our solutions. Salt is amazing: it combines a powerful declarative language that can be easily extended in Python, a set of components that supports a diverse array of use cases from configuration and orchestration to automation and healing, and a strong supportive community of practitioners and users.
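
As a small illustration of that extensibility (a sketch, not something taken from Salt’s own distribution): a custom execution module is just a Python file dropped into the Salt file roots under _modules. After syncing it to the minions with saltutil.sync_modules, it can be called like any built-in module, e.g. salt '*' portcheck.listening 8080. The module and function names here are hypothetical.

# _modules/portcheck.py -- hypothetical custom Salt execution module
import socket

def listening(port, host="127.0.0.1", timeout=1.0):
    # Return True if something is listening on the given TCP port.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(float(timeout))
    try:
        return sock.connect_ex((host, int(port))) == 0
    finally:
        sock.close()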

In my own work I’m often off the grid, traveling to clients, at a client site, or in situations where a lack of internet access precludes me from doing work in the public cloud. In these situations, I rely on technology that allows me to experiment with or demonstrate some of the key concepts in DevOps from the laptop. Towards this end, I do a lot of this work using Vagrant by HashiCorp. I love Vagrant. It’s a fantastic platform for quickly creating experimental environments to test distributed applications. Vagrant’s DSL is based on Ruby, which fits in well with my past developer experience. Finally, Vagrant is easily extensible: it delegates much of the work of provisioning to external components and custom plugins and comes with Salt integration out of the box.

    After working with Salt and Vagrant on a set of demos, I’ve decided to share some of the tools I put together to improve and extend this basic integration. The tools are aimed at two goals: faster development of new environments and environmental validation. In pursuit of the first goal, this post is about host provisioning and integrating Salt states. In pursuit of the second, I will post a follow-up post to describe how to generate ServerSpec tests to validate newly created hosts. These tools offer a quick path to creating and validating virtual infrastructure using Salt.

    Background

The ironic aspect of using Vagrant is that, although it is implemented in Ruby and the central configuration file (the Vagrantfile) is written in Ruby, the typical implementation of Vagrantfiles is un-Ruby-esque. The syntax can be ugly, validation of the file is difficult, and the naïve implementation is not DRY (Don’t Repeat Yourself). However, with a bit of coding, the ugliness can be overcome. I’ve put together a set of provisioning configuration helper classes written in Ruby that allow me to more succinctly define the configuration of a test cluster and share code between projects. The idea behind the classes is very simple: extract from the Vagrantfile all of the ugly and repetitive assignments so that creating a simple Salt-ified platform is trivial.

    The code for this post is found at https://github.com/riskfocus/vagrant-salt. Readers are encouraged to check it out.

    TL;DR

    The post assumes you have Vagrant installed on your local machine, along with a suitable virtualization engine (such as Oracle’s VirtualBox).

    1.) In your vagrant project directory, check out the code
    git clone https://github.com/riskfocus/vagrant-salt

    2.) Copy the configuration file (vagrant-salt/saltconfig.yml.example) to the vagrant project directory as saltconfig.yml

    3.) Copy the sample vagrant file (vagrant-salt/Vagrantfile.example) to the vagrant project directory as Vagrantfile (replacing the default), or update your Vagrantfile appropriately

    4.) Set up default locations for Pillars and Salt States. From the Vagrant project directory:
    ./vagrant-salt/bin/bootstrap.sh

    5.) Initialize the test machine(s)
    vagrant up

    Congratulations. You now have an example Salt cluster with one minion and one master.

    Uses

    The examples directory contains a few sample configuration files that can be used to explore Salt. These are listed in vagrant-salt/examples/topology and include:

    • One master, two minions
    • One master, one syndic, two minions

    To use either of these topologies, copy the example Vagrantfile and saltconfig.yml to the Vagrant project directory and follow instructions 2-5, above.

    Deep Background: The Classes

The main development necessary to get Vagrant and Salt to work together seamlessly is key management and configuration: creating and installing a unique set of keys per cluster to avoid key reuse or potential vulnerability.

    The code consists of a factory class to create the configuration for Salt-controlled hosts and a set of classes that represent each type of Salt host (minion, syndic, master). Each class loosely follows the adapter pattern.

    When put into action, the factory class takes a hash that specifies the hosts to be created. Each host created is defined by a value in this hash. It is quite convenient to use a yaml file to initialize this hash, and all examples given below list the configuration in yaml. Example yaml configurations are provided in the code distribution.

There are three classes that can be instantiated to create configuration objects for each of the Salt host types. All configuration objects need two specific pieces of information: a hostname and an IP. If a host is to be included in the Salt topology, the configuration must also include the name of its Salt master.

    • The base for “salt-ified” hosts is the minion class, which corresponds to a host running a Salt minion.
    • The master class corresponds to a host running a Salt minion and also the Salt master process.
    • The syndic class corresponds to a Salt master that also runs the syndic process.

    The Configuration Structure

    The configuration structure is used by the factory class to instantiate a hash of host objects. It has three sections: defaults, roles, and hosts. The defaults section specifies project-wide defaults, such as number of virtual CPUs per host, or memory per host, as follows:
defaults:
  memory: 1024
  cpus: 1


    Entries in this section will be overwritten by the role- and host-specific values. This section is particularly useful for bootstrapping strings. The roles section specifies values per-role (minion, master, syndic). This gives a location to specify role-specific configuration, such as the location of configuration files.
roles:
  minion:
    grains: saltstack/etc/minion_grains


    The hosts section specifies the hosts to create and host-specific configuration. Each host must, at minimum, contain keys for role, IP, and the name of its master (when it has one):
hosts:
  minion1:
    ip: 1.2.3.4
    role: minion
    master: master
  master:
    ip: 1.2.3.4
    role: master
    master: master


Note that per-host values (such as memory or CPU count) can be added here to overwrite the defaults:
hosts:
  master:
    cpus: 2
    memory: 1536
    ip: 1.2.3.4
    role: master
    master: master

    The Vagrantfile

    Incorporating the configuration classes into a Vagrantfile greatly simplifies its structure. The factory class creates a hash of configuration objects. Executing the Vagrantfile iterates through this hash, creating each VM in turn. The objects store all necessary cross-references to create the desired topology without having to write it all out. The configuration objects also hide all the messy assignments associated with the default Salt implementation in Vagrant and allow the Vagrantfile to remain clean and DRY.

    The file Vagrantfile.example in the distribution shows this looping structure.

    Integrating Salt States

    The above description shows how hosts can be bootstrapped to use Salt. Of much more interest is integrating Salt with the configuration and maintenance of that host. This integration is fairly trivial. Included in the vagrant-salt/bin directory is a bash script called “bootstrap.sh” that will create a skeleton directory for Salt states and pillars. This directory structure can be used by the Salt master(s) by including the appropriate Salt master configuration. For example, with default setup, the included Salt master configuration will incorporate those directories:
hosts:
  master:
    role: master
    ip: 10.0.44.2
    memory: 1536
    cpus: 2
    master: master
    master_config:
      file_roots:
        base:
          - /vagrant/saltstack/salt
      pillar_roots:
        base:
          - /vagrant/saltstack/pillar

    Conclusion

    Salt is a very powerful control system for creating and managing the health of an IT ecosystem. This post shares a foundational effort that simplifies the integration of Salt within Vagrant, allowing the user to quickly test deployment and implementation strategies both locally and within the public cloud. For developers, it gives the ability to spin up a new Salt cluster, validating configuration, states, and the more advanced capabilities of Salt such as reactors, orchestrators, mines, and security audits. The classes also provide an easy way to explore Salt Enterprise edition and the visualization capabilities it delivers.

    The next post in this series will focus on validation. As a preview, at Risk Focus we strongly believe in automated infrastructure validation. In the cloud or within container management frameworks, addressable APIs for all aspects of the environment mean that unit, regression, integration, and performance testing of the infrastructure is all automatable. The framework described in this post also includes a test generation model, to quickly set up an automated test framework for the infrastructure. Such testing allows for rapid development and can be moved to external environments. In the next post, we’ll go through using automatically generated, automated tests for Salt and Vagrant.


    Featured Insight

    Data Masking: A Must for Test Environments on the Public Cloud

    Author

    Subir Grewal
    Principal Solutions Architect
    Subir.Grewal@RiskFocus.com

    Eat your own cooking

    Why mask data? Earlier this month, the security firm Imperva announced it had suffered a significant data breach. Imperva had uploaded an unmasked customer DB to AWS for “test purposes”. Since it was a test environment, we can assume it was not monitored or controlled as rigorously as production might be. Compounding the error, an API key was stolen and used to export the contents of the DB.

In and of itself, such a release isn’t remarkable; it happens almost every day. What makes it unusual is that the victim was a security company, and one that sells a data masking solution: Imperva Data Masking. This entire painful episode could have been avoided if Imperva had used its own product and established a policy requiring all dev/test environments to be limited to masked data.

    The lesson for the rest of us is that if you’re moving workloads to AWS or another public cloud, you need to mask data in all test/dev environments. In this blog post, we will consider how such a policy might be implemented.

    Rationale for Data Masking

    Customers concerned about the risk of data loss/theft seek to limit the attack surface area presented by critical data. A common approach is to limit sensitive data to “need to know” environments. This generally involves obfuscating data in non-production (development, test) environments. Data masking is the process of irreversibly, but self-consistently, transforming data such that the original value can no longer be recovered from the result. In this sense, it is distinct from reversible encryption and has less inherent risk if compromised.

As data-centric enterprises move to take advantage of public cloud, a common strategy is to move non-production environments first; the perception is that these environments present less risk. In addition, the nature of the development/test cycle means that these workstreams can strongly benefit from the flexibility in infrastructure provisioning and configuration that public cloud infrastructure provides. For this flexibility to have meaning, dev and test data sets need to be readily available, and as close to production as possible so as to represent the wide range of production use cases. Yet some customers are reluctant to place sensitive data in public cloud environments. The answer to this conundrum is to take production data, mask it, and move it to the public cloud. The perception of physical control over data continues to provide comfort (whether false or not). Data masking makes it easier for public cloud advocates to gain traction at risk-averse organizations by addressing concerns about the security of data in the cloud.

    Additionally, regulations like GDPR, GLBA, CAT and HIPAA impose data protection standards that encourage some form of masking in non-production environments for Personal Data, PII (Personally Identifiable Information) and PHI (Personal Health Information) respectively. Every customer in covered industries has to meet these regulatory requirements.

    Base Requirements

Masking solutions ought to provide some or all of the following capabilities:

• Data Profiling: the ability to identify sensitive data across data sources (e.g. PII or PHI)
    • Data Masking: the process of irreversibly transforming sensitive data into non-sensitive data
    • Audit/governance reporting: A dashboard for Information Security Officers responsible for meeting regulatory requirements and data protection

    Building such a feature set from scratch is a big lift for most organizations, and that’s before we begin considering the various masking functions that a diverse ecosystem will need. Masked data may have to meet referential integrity, human-readability or uniqueness requirements to support distinct test requirements. Referential integrity is particularly important to clients who have several independent datastores performing a business function or transferring data between each other. Hash functions are deterministic and meet the referential integrity requirement, but do not meet the uniqueness or readability requirements.

Several different algorithms to mask data may be required depending on application requirements (a simplified sketch follows the list below). These include:

    • Hash functions: e.g., use a SHA1 hash
    • Redaction: (truncate/substitute data in the field with random/arbitrary characters)
    • Substitution: with alternate “realistic” values (a common implementation samples real values to populate a hash table)
    • Tokenization: substitution with a token that can be reversed, generally implemented by storing the original value along with the token in a secure location
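
The sketch below shows, in simplified Python, what a few of these approaches can look like. It is illustrative only: a production implementation would need keyed or salted hashing managed properly, format preservation, and a hardened token store rather than an in-memory dictionary.

import hashlib
import uuid

def hash_mask(value, salt="per-environment-salt"):
    # Deterministic: the same input always yields the same output,
    # which preserves referential integrity across datastores.
    return hashlib.sha1((salt + value).encode("utf-8")).hexdigest()

def redact(value, keep_last=0):
    # Replace characters with a fixed symbol, optionally keeping a suffix.
    kept = value[-keep_last:] if keep_last else ""
    return "*" * (len(value) - len(kept)) + kept

def substitute(value, realistic_values):
    # Deterministically pick a "realistic" replacement from a sample list.
    index = int(hashlib.sha1(value.encode("utf-8")).hexdigest(), 16) % len(realistic_values)
    return realistic_values[index]

_token_vault = {}  # stand-in for an encrypted, access-controlled store

def tokenize(value):
    # Reversible: store the original against a random token.
    token = uuid.uuid4().hex
    _token_vault[token] = value
    return token

Note that only hash_mask and substitute are deterministic; as discussed above, deterministic approaches preserve referential integrity across systems, while tokenization trades irreversibility for the ability to recover the original value from a secured vault.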

    Data Masking at Public Cloud Providers

AWS has published several whitepapers and reference implementations, including:

    • An AI powered masking solution for personal health information (PHI) which uses API gateway and Lambda to retrieve and mask PHI in images on S3, and returns masked text data posted to API gateway
    • A design case-study with Dataguise to identify and mask sensitive data in S3
    • A customer success story of a PII masking tool built using EMR and DynamoDB
    • An AWS whitepaper which describes using Glue to segregate PHI into a location with tighter security features.

    However, none of these solutions address masking in relational databases or integrate well with the AWS relational database migration product, DMS.

Microsoft offers two versions of its SQL masking product on Azure:

    • Dynamic Masking for SQL Server: which overwrites query results with masked/redacted data
    • Static Masking for SQL Server: which modifies data to mask/redact it.

For the purposes of this discussion, we focus on what Microsoft calls “static masking”, since “dynamic masking” leaves the unmasked data present in the database, failing the requirement to shrink the attack surface as much as possible. We will also limit this discussion to AWS technologies to explore cloud-native versus vendor implementations.

    Build your own data masking solution with AWS DMS and Glue

    AWS Database Migration Service (DMS) currently provides a mechanism to migrate data from one data source to another, either as a one-time migration or via continuous replication, as described in the diagram below (from the AWS documentation):

    DMS currently supports user-defined tasks that modify the Data Definition Language (DDL) during migration (e.g., dropping tables or columns), and it supports character-level substitutions on columns with string-type data. A data masking function using AWS’ ETL service, Glue, could be built to fit into this framework, operating on field-level data rather than on DDL or individual characters. An automated pipeline to provision and mask test datasets and environments using DMS, Glue, CodePipeline, and CloudFormation is sketched below:

    When using DMS and Glue, the replication/masking workload runs on AWS, not in the customer’s on-premises datacenter, so unmasked or unredacted data briefly exists in AWS prior to transformation. This solution therefore does not address security concerns about placing sensitive data (and the accompanying compute workloads) on AWS for clients who remain wary of public clouds. Still, for firms seeking a cloud-native answer, the above can form the kernel of a workable solution when combined with additional work on identifying the data that needs masking and on reporting, dashboarding, and auditing.
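
    To make the Glue piece of that pipeline concrete, the sketch below shows what a field-level masking job might look like as a PySpark script running on Glue. The catalog database, table, column names, and output bucket are assumptions made for illustration; they are not taken from any AWS reference implementation.

        # Hypothetical AWS Glue job (PySpark): mask selected columns after a DMS task
        # lands data in a staging catalog table, then write the masked copy to S3.
        import sys
        from awsglue.context import GlueContext
        from awsglue.job import Job
        from awsglue.utils import getResolvedOptions
        from pyspark.context import SparkContext
        from pyspark.sql.functions import col, lit, sha2

        args = getResolvedOptions(sys.argv, ["JOB_NAME"])
        glue_context = GlueContext(SparkContext.getOrCreate())
        job = Job(glue_context)
        job.init(args["JOB_NAME"], args)

        # Read the staging table that DMS populated (assumed catalog names).
        dyf = glue_context.create_dynamic_frame.from_catalog(
            database="staging_db", table_name="customers"
        )
        df = dyf.toDF()

        # Deterministically hash join keys to preserve referential integrity;
        # redact free-text fields that tests do not need to read.
        masked = (
            df.withColumn("customer_id", sha2(col("customer_id").cast("string"), 256))
              .withColumn("email", sha2(col("email"), 256))
              .withColumn("notes", lit("***REDACTED***"))
        )

        # Write the masked data set to the test-environment location (assumed path).
        masked.write.mode("overwrite").parquet("s3://example-masked-test-data/customers/")
        job.commit()

    Because the hash is deterministic, joins between tables that share the key column still work in the masked copy, which is what keeps downstream test environments usable.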

    Buy a solution from Data Masking Vendors

    If the firm is less concerned about using cloud-native services, there are several commercial products that offer data masking in various forms and meet many of these requirements. These include IRI FieldShield, Oracle Data Masking, Okera Active Data Access Platform, IBM InfoSphere Optim Data Privacy, Protegrity, Informatica, SQL Server Data Masking, CA Test Data Manager, Compuware Test Data Privacy, Imperva Data Masking, Dataguise, and Delphix. Several of these vendors have some form of existing partnership with cloud service providers. In our view, the best masking solution for the use case under consideration is the one offered by Delphix.

    Buy: Data Masking with Delphix

    This option leverages one of the commercial data masking providers to build data masking capability on AWS. Delphix offers a masking solution on the AWS Marketplace. One benefit of a vendor solution like Delphix is that it can be deployed on-premises as well as in the public cloud, which allows customers to run all masking workloads on-premises and ensure that no unmasked data is ever present in AWS. Some AWS services (such as Storage Gateway) can be run on-premises, but Glue and CodeCommit/CloudFormation cannot.

    Database Virtualization

    One of the reasons Delphix is appealing is the integration between its masking solution and its “database virtualization” products. Delphix virtualization lets users provision “virtual databases” by exposing a filesystem/storage layer to a database engine (e.g., Oracle) that contains a “virtual” copy of the files/objects constituting the database. It tracks changes at the filesystem block level, which reduces the duplication of data across multiple virtual databases by sharing common blocks. Delphix has also built a rich set of APIs to support CI/CD and self-provisioning of databases.

    Delphix’s virtualized databases offer several functions more commonly associated with modern version control systems such as git, including versioning, rollback, tagging, and low-cost branch creation coupled with the ability to revert to a point along the version tree. These functions are distinctive in that they bring source-control concepts to relational databases, vastly improving the ability of CI/CD pipelines to work with them. The result is that users can deliver on-demand, masked data to their on-demand, extensible public cloud environments.

    A reference architecture for a chained Delphix implementation utilizing both virtualization and masking would look like this:

    Conclusion

    For an organization with data of any value, masking data in lower environments (dev, test) is an absolute must. Masking such data also makes the task of migrating dev and test workloads to public clouds far easier and less risky. To do this efficiently, organizations should build an automated data masking pipeline to provision and mask data. This pipeline should support data in various forms, including files and relational databases. Should the build/buy decision tend toward purchase, there are several data masking products that can provide many of the core masking and profiling functions such a masking pipeline would need, and our experience has led us to choose Delphix.


    Featured Insight

    Consolidated Audit Trail (CAT) Resurrects the Age-Old Question: ‘Build vs. Buy?’

    Author

    Alex Rabaev
    Head of Regulatory Reporting Practice
    Alex.Rabaev@RiskFocus.com

    Well, like any other complicated problem, there isn’t a ‘one size fits all’ answer. Multiple variables will be at the heart of your decision-making, including your firm’s scope, long-term strategy, ongoing maintenance, and more. Hopefully, the considerations below will give your firm some guidance toward making the right strategic decision.

    The case for working with a dependable vendor:

    Price: Your budget will likely be the #1 driver. The cost associated with building an in-house solution will likely far exceed that of an off-the-shelf product. Depending on your size, it may be economical to give up the convenience of a proprietary build in favor of an out-of-the-box solution.

    Time: Conforming to expected regulatory timelines is critical, both from a reputational standpoint and to avoid a potential regulatory fine or action. The possibility of slippage is reason enough to consider a vendor solution.

    Industry knowledge: Subject-matter expertise is not replaceable, but where in-house proficiency is lacking, utilizing a vendor may be the optimal way to go to market. A reputable vendor will ensure the solution aligns with the actual rule and the regulator’s expectations.

    Scale: Over time, the vendor will receive continuous feedback on its solution, and economies of scale mean that the solution will continue to improve and serve your broader needs.

    Reasons to consider an internal solution:

    Accountability: Don’t confuse a ‘service offering’ that meets your needs with your obligations. Using a given solution that seems to ‘work’ does not alleviate your overall responsibility for the accuracy of the reporting. ‘Safety in numbers’ will not hold up when an external audit is conducted.

    Ongoing maintenance: Going with a vendor is never truly ‘plug and play’. You will still have to implement and deploy the solution, and the level of involvement will depend on the size and needs of your firm.

    Limitations: You may find yourself locked into a solution that aims to address broad needs but cannot be customized to your specific business, and over time this can generate unwanted limitations that are difficult to overcome.

    Cost: The up-front price may be attractive, but consider longevity and ongoing dependability. The price point shouldn’t be judged with a ‘go-to-market’ mentality alone.

    In Conclusion

    Overall, an in-house build is hard to replace, but it may be more practical to consider outside solutions. The decision is never one-dimensional and should never be made purely in the moment, as it will transcend scope and time. Your decision should balance practical short-term considerations with long-term strategy and vision.
