For the last matchup in our database bracketology series, we'll talk about a specific use case that pits the newly favored distributed database against an old favorite: the traditional relational database (RDBMS). In case you're just tuning in, check out our first post to learn more about database bracketology and see our previous matchups.
Financial services organizations must balance dynamic, demanding regulatory and compliance requirements with end users' desire for more information at their fingertips, a combination that puts them in a tough spot when it comes to data management. More and more data enters their systems, but legacy database environments make it nearly impossible to manage unstructured data, let alone keep it secure.
In addition, because of the sheer amount of data that financial services organizations collect, analytics and business intelligence continue to move to the forefront of their operations. Real-time information, readily available within the organization, lets them make rapid decisions, some driven by regulatory requirements and others by competitive market dynamics.
This financial services customer had specific requirements for how they wanted to handle their data:
Able to handle large amounts of data. Depending on the size of the organization, thousands — if not millions — of transactions occur every single day. Handling and processing large amounts of data needs to be the basis of any database solution that they leverage moving forward.
Process information at fast speeds. Because of the dynamic regulatory requirements for financial services organizations, as well as the large amount of transaction data that they capture, it is imperative that they are able to process data quickly. It gives them the opportunity to course correct if, for instance, they aren’t meeting a specific requirement.
Provide insights into business and market requirements. Storing data is only one side of the coin for financial services organizations; the ability to dig in and analyze it is just as imperative. Large amounts of data provide essential insight into performance against key industry trends, regulatory and financial requirements, and even competitors.
Secure with low risk. Because of the sensitivity of the data these organizations handle, security tops the list of needs: the data must adhere to the mandates set forth by various regulatory and compliance bodies. Minimizing risk becomes a primary driver for any internal infrastructure.
In this case, I recommended that this organization first take a look at Apache Hadoop. Distributed data platforms like Hadoop have recently become favorites in the data world, largely because they can store and process large amounts of data, a requirement for any organization in today's business world.
However, for the best results, a hybrid approach combining Hadoop and a traditional database would serve this organization well. Because financial services organizations deal with so much data, both external and internal, Hadoop offers the ability to store and process large, complicated data sets.
On the other hand, the traditional database model provides fast data analysis for smaller data sets. For example, quick access and analysis of transactional data can give financial services organizations essential insight into services use, helping them make data-driven decisions to meet the dynamic market.
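To make the hybrid idea concrete, here is a minimal sketch of a routing layer that sends large analytical scans to a Hadoop-based tier and small transactional lookups to the relational tier. The `HybridQueryRouter` class, the row-count threshold, and the Hadoop client parameter are illustrative assumptions, not the customer's actual architecture; an in-memory SQLite database stands in for the RDBMS.

```python
# Hypothetical sketch of a hybrid data architecture: route big scans to a
# Hadoop tier, point lookups to the relational tier. All names here are
# illustrative, not a real product API.

import sqlite3


class HybridQueryRouter:
    """Routes queries to the appropriate backend based on workload size."""

    # Assumed threshold (rows scanned) above which the batch/Hadoop tier wins.
    BATCH_THRESHOLD = 1_000_000

    def __init__(self, rdbms_conn, hadoop_client=None):
        self.rdbms = rdbms_conn       # fast access for small transactional queries
        self.hadoop = hadoop_client   # large historical/analytical scans

    def choose_backend(self, estimated_rows):
        """Pick a tier: 'hadoop' for big scans, 'rdbms' for point queries."""
        return "hadoop" if estimated_rows >= self.BATCH_THRESHOLD else "rdbms"

    def lookup_transaction(self, txn_id):
        """Point lookup of a single transaction stays on the RDBMS."""
        cur = self.rdbms.execute(
            "SELECT txn_id, amount FROM transactions WHERE txn_id = ?", (txn_id,)
        )
        return cur.fetchone()


# Demo with an in-memory SQLite database standing in for the RDBMS tier.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (txn_id INTEGER PRIMARY KEY, amount REAL)")
conn.execute("INSERT INTO transactions VALUES (1, 250.00), (2, 99.95)")

router = HybridQueryRouter(conn)
print(router.choose_backend(50))            # small point query -> 'rdbms'
print(router.choose_backend(250_000_000))   # year-end full scan -> 'hadoop'
print(router.lookup_transaction(2))         # -> (2, 99.95)
```

The design choice mirrors the paragraph above: the relational side answers fast, narrow questions about recent transactions, while anything that would scan months of history gets pushed to the batch tier.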
A combination of both can be challenging to monitor and manage. Leveraging a database performance monitoring tool like SelectStar provides real-time analysis of your database functionality. In turn, you reduce the complexity of the hybrid database environment and gain essential visibility into how your databases are performing. After all, your data is only as good as its infrastructure.