August 31, 2021 — As clinical trials become more complex and data rich, clinical samples move across a growing number of physical and virtual locations, and data is delivered in an expanding array of file formats (report).
Biospecimens are analyzed using a variety of assay technologies, each generating its own set of reportables, quality control metrics and data/file formats. Data is delivered through multiple, disconnected pipelines (Figure 1).
This complexity creates obstacles for many functional groups within sponsor organizations:
- Clinical/Biomarker Operations: need visibility into site-level performance, need to know where every sample is in its lifecycle (transit, processing, storage) at any given point in time, and need to identify gaps in sample collections
- Translational Research: need early visibility into sample quality and into project data availability to inform study decisions; for example, they need to know how many participants have both pre- and post-treatment samples in order to generate data forecasts
- Data Management: need to quickly identify discrepancies and manage queries
- Office of the CIO: needs visibility into the entire data ecosystem to ensure that significant investments in biomarker assay lab data are realized
Disparate, Disconnected Information Streams Breed Data Chaos
Figure 1. Each of the distributed physical locations playing a role in a clinical trial has its own underlying source system. Results data and ancillary information (such as images) are shared with the sponsor, but typically via disparate file formats and data pipelines.
Exploratory biomarker data can provide critical operational insights and can enable the generation of scientific insights that justify significant lab/translational budgets. To best facilitate this insight generation, sponsors should consider implementing solutions that enable collaboration across teams.
Many data management platforms, including in-house solutions, have been able to aggregate individual vendor-provided datasets into a single data warehouse (e.g., on-premises or cloud storage solutions). However, these data streams typically remain fragmented, in need of standardization, harmonization, and integration, and the resulting data structures remain opaque and unconducive to cross-functional collaboration.
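To make the standardization step concrete, here is a minimal sketch (not QuartzBio's implementation) of what harmonizing vendor result files can look like: each vendor names the same fields differently, so a per-vendor column mapping translates them into one shared schema before the tables are stacked. The vendor names, column names, and mapping table below are all hypothetical.

```python
import pandas as pd

# Hypothetical per-vendor column mappings into a shared standard schema.
COLUMN_MAPS = {
    "vendor_a": {"SubjID": "subject_id", "Analyte": "analyte", "Result": "value"},
    "vendor_b": {"patient": "subject_id", "marker": "analyte", "conc_ng_ml": "value"},
}

def harmonize(frames: dict) -> pd.DataFrame:
    """Rename each vendor's columns to the standard schema and stack the tables."""
    standardized = []
    for vendor, df in frames.items():
        out = df.rename(columns=COLUMN_MAPS[vendor])[["subject_id", "analyte", "value"]]
        out["source"] = vendor  # retain provenance for traceability and queries
        standardized.append(out)
    return pd.concat(standardized, ignore_index=True)

# Two toy vendor deliveries reporting the same analyte under different headers.
vendor_a = pd.DataFrame({"SubjID": ["001"], "Analyte": ["IL-6"], "Result": [4.2]})
vendor_b = pd.DataFrame({"patient": ["002"], "marker": ["IL-6"], "conc_ng_ml": [3.1]})
combined = harmonize({"vendor_a": vendor_a, "vendor_b": vendor_b})
```

A production pipeline would of course also reconcile units, controlled vocabularies, and quality control flags, but the core pattern is the same: map each source into one schema while preserving provenance.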
Watch our on-demand webinar for a demonstration of QuartzBio’s Biomarker Data Management & Integration platform, which centralizes, standardizes, and harmonizes clinical, pharmacokinetic (PK), and (exploratory) biomarker data from any source, in any format, into a complete, integrated translational data hub. Integrated within the hub is the capability to flexibly query, explore, visualize, and share the data.
QuartzBio’s solution is available to sponsors via either a web-based user interface or API integration with existing systems, delivering cross-functional value to organizations (Figure 2):
Figure 2. Integrating data from exploratory biomarker, PK, and clinical data sources in a translational hub enables teams to interrogate trends and reach insights faster.
By creating a centralized, harmonized single source of truth, QuartzBio gives translational, operational, data management, and information technology teams the power of flexible, collaborative data access, exploration, and reporting.
Drug developers will be compelled to further leverage data management strategies and tactics — including real-time interim assessment, risk-based approaches, automation, and augmented analytics — to support scientific and operating decisions.
—Ken Getz, Director, Tufts Center for the Study of Drug Development