Building a Health-Data Platform
I have had the opportunity to work on several complex and challenging projects. One of them was a health-data platform designed as a modular monolith, with a demanding set of requirements to fulfill.
As a health-data platform, ensuring the security and compliance of our users’ sensitive information was paramount. To that end, we made it a priority to align with both HIPAA and SOC-2 standards.
Compliance and Security Measures for a Health-Data Platform
SOC-2 Compliance
For SOC-2 compliance, we implemented strict controls to safeguard our users' data. This included multi-factor authentication, such as FIDO tokens, for added security. We also conducted regular security assessments and backups so that data would be protected in the event of a security breach. Additionally, we used SOC-2-compliant cloud providers to host and store sensitive data. Finally, a robust Governance, Risk management, and Compliance (GRC) program was established and regularly reviewed to ensure ongoing adherence to SOC-2 standards.
By securing sensitive data through strict compliance and security controls, we safeguard our users' information and maintain adherence to SOC-2 and HIPAA standards.
HIPAA Compliance
Concerning HIPAA compliance, we employed robust security measures such as encrypting data in transit and at rest. Regular security risk assessments and penetration tests were conducted to identify and address vulnerabilities. We implemented strict access controls and audit logging to ensure that only authorized personnel had access to protected health information (PHI). All vendors and business associates with access to PHI were required to sign Business Associate Agreements (BAAs) and to comply with HIPAA themselves. We also implemented multi-tenancy with data segregation so that each tenant's data was isolated and could only be accessed by authorized personnel. While these measures successfully maintained security and compliance, we faced scalability challenges as the user base and the volume of report-generation requests grew.
Challenges with a Modular-Monolith Architecture
Advantages of Modular-Monolith
As mentioned previously, the initial design of the platform was based on a modular-monolith architecture. The idea behind this pattern is to break the system into smaller, more manageable modules, each responsible for a specific area of functionality. This allows for easier maintenance, better scalability, and improved code reuse.
Challenges with Tight Coupling of Modules
However, in our case, the modules were tightly coupled, meaning they had a high degree of dependency on one another. This is a common mistake when a team lacks experience designing and building large-scale systems. The tight coupling made it challenging to change or update one module without affecting the others, which led to increased maintenance costs and decreased flexibility.
A modular-monolith architecture may offer the promise of easier maintenance and scalability, but without proper upfront planning and understanding of its implications, it can lead to unforeseen scalability challenges and technical debt.
Unforeseen Scalability Challenges - The Consequences of Poor Upfront Planning in Platform Architecture
The root cause of this problem was poor upfront planning and a lack of understanding of the implications of such an architecture. The team responsible for designing and building the platform had not fully considered the scalability and performance requirements the system would need to meet as the user base grew. Nor had they fully assessed the technical limitations of the chosen technologies and how those limitations would affect the system's ability to meet these requirements.
As the user base grew and the requests for report generation and PHI statistics increased, it became clear that this architecture would not be able to handle the scalability demands of the system. In addition, the tight coupling of the modules made it challenging to add new features or scale the system horizontally, and the lack of proper planning and understanding of the implications of the chosen architecture left the system with some technical debt.
Limitations of NoSQL
The primary storage for the platform was MongoDB, a NoSQL solution. While this provided some benefits in terms of scalability and flexibility, it also had its limitations. The main one was limited support for the complex queries and aggregations we needed, which made it challenging to generate the reports our users required. In addition, MongoDB's weak transaction support at the time made it tough to ensure data consistency and integrity. Transactions are a crucial feature for many systems: they allow multiple operations to be executed as a single atomic unit, so the data remains consistent despite failures or errors.
Scalability Challenges and Addressing Technical Debt
As our health-data platform faced increasing scalability challenges, with a rapidly growing user base and growing demand for report generation and protected health information (PHI) statistics, it became clear that the current architecture could no longer meet the system's demands. The tight coupling of modules and lack of proper planning had resulted in a significant amount of technical debt, which had to be repaid.
Implementing CQRS and Event Sourcing
To address these scalability challenges, we implemented the Command Query Responsibility Segregation (CQRS) and Event Sourcing patterns. CQRS's separation of commands from queries and Event Sourcing's storage of state changes as a sequence of events gave us improved scalability and flexibility, along with better support for complex queries and aggregations.
The migration process began by clearly defining the bounded contexts within the system. This involved identifying and isolating distinct areas of the system with their own domain models and business logic, such as user management, data management, and reporting.

Once these contexts were identified, we implemented the CQRS pattern by separating the command and query responsibilities within each bounded context. This decoupling of the command and query sides improved scalability and flexibility.

We then implemented the Event Sourcing pattern by storing all changes to the system's state as a sequence of events. This allowed us to replay past events to restore the system to a previous state, which is helpful for auditing and debugging, and to quickly generate complex reports and statistics based on historical data. Finally, as each bounded context was clearly separated, it was easier to scale each service independently.
The technologies at the core of our new solution are the magical trio of Go, Kafka, and Postgres.
Challenges and Benefits of the Migration to CQRS and Event Sourcing
The migration to CQRS and Event Sourcing was not without challenges, but the benefits were undeniable. The improved scalability and flexibility from separating commands and queries, and the ability to quickly generate complex reports and statistics from historical data, made the effort worthwhile. In addition, the architecture's clear separation of concerns and the ability to scale each service independently have enabled us to keep up with the system's growing demands. Unicorns and fairies, right? Not so fast!
Conclusion: The Journey to Success
In conclusion, the journey toward success was long and arduous, but the outcome was worth it. The initial development of the monolith may have taken over a year, but it was a necessary step to lay the foundation for the project's next phase. Identifying and correcting mistakes, responding to requests, and pivoting the business strategy allowed us to take a step back and re-evaluate our approach, making it more effective and efficient.

Separating the contexts and acquiring the skills needed to evaluate the proposed changes took nearly half a year with a team of four experienced architects, one of whom was a former DevOps professional, but it resulted in a more robust and scalable solution. Implementing the new solution also required significant resources, which we managed and allocated carefully to ensure the project's success. We faced many challenges, but we persevered and ultimately overcame them. Proper planning, disciplined execution, and a never-give-up attitude resulted in a system that met and exceeded the demands of its users.
These materials helped me along the way, so they might benefit you as well:
- “Event-Driven Architecture” by Ben Stopford
- “Building Microservices: Designing Fine-Grained Systems” by Sam Newman
- “Event Sourcing” by Martin Fowler (article)
🎨 Crafting software is an art, and our canvas is simplicity. We believe in creating solutions that are not only elegant in design but also robust and tested to withstand the test of time. Our approach is to provide a solution that meets stakeholders’ requirements and ensures long-term maintainability and scalability. Our ultimate aim is to deliver efficient, effective, and adaptable software to the ever-evolving needs of businesses without succumbing to the allure of unnecessary complexity.
If that is what you seek, then contact us at contact@decantera.dev or via our site decantera.dev . 🚀