
Is it time to rethink how your Life Science Data Dictionaries are managed?

Updated: Aug 13, 2021



While those working closely with product data at a quality and compliance level are quite rightly preoccupied with the evolving rigours of regulatory adherence, the pan-organisational aspects of data quality management are, in many cases, being missed.


And the trouble is that, as long as teams are operating within traditional departmental silos and job roles/remits, the scope for real process innovation will be stymied. Everyone is well aware now that the future of regulatory activity and of internal content management will be data-driven. In other words, it will be much more dynamic and more efficiently coordinated - with an ever lighter touch as structured authoring, for single-use documents and product labelling, takes over from manual content creation.


The problem is that companies are still a long way from being able to manage data and content along a single continuum – one that transcends departmental or application-specific requirements.


Even now, much of the source content - from which any future master data source will be developed and honed - exists in duplicate or overlapping form, in function-specific document management systems, databases and/or spreadsheets. Cleaning all of that up, and making it reliably reusable on an enterprise-wide basis, is beyond the capability or capacity of a single existing team.


This is probably the single biggest challenge facing companies as they approach the reality - and broader opportunity - of EU IDMP compliance and adoption of its target operating model (the submission of standardised data as the primary means of regulatory information exchange).


Unlocking the wider value of departmental data


As they move to regulator-owned data sources such as EMA substance, product, organisation and referential (SPOR) data - extending to IDMP in due course – businesses must adapt their own content and processes so they can exchange information with these external resources.


The ultimate business value, though, will come from using SPOR and similar referentials right across the organisation, not just within isolated systems. Internal data terms, ISO values and values sourced from SPOR, MedDRA and others need to be managed holistically. Users must be able to select values that are meaningful at a business or functional level, without compromising the ability to map to synonyms, aggregate terms, historical values or translations that can be used in cross-functional and regulatory outputs.
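As a rough illustration of what managing values "holistically" can mean in practice, the sketch below shows one way a single dictionary term might carry a business-facing label alongside its synonyms, translations and external-vocabulary mappings. It is a minimal Python sketch with invented class names and placeholder codes - not real SPOR or MedDRA identifiers, and not the data model of any particular product.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class DictionaryTerm:
    """One managed value in an enterprise data dictionary.

    The business-facing label is what users pick in day-to-day work;
    the mappings let that same selection resolve to whatever code a
    given external vocabulary or regulatory output expects.
    """
    internal_code: str                                        # stable internal key
    business_label: str                                       # value users see and select
    external_codes: Dict[str, str] = field(default_factory=dict)  # vocabulary name -> code
    synonyms: List[str] = field(default_factory=list)         # alternative wordings accepted on input
    translations: Dict[str, str] = field(default_factory=dict)    # language code -> translated label
    superseded_by: Optional[str] = None                       # current term, if this one is historical

    def code_for(self, vocabulary: str) -> Optional[str]:
        """Return the identifier to use when exchanging data with a given vocabulary."""
        return self.external_codes.get(vocabulary)

# Illustrative only: the codes below are invented placeholders, not real referential values.
tablet = DictionaryTerm(
    internal_code="DOSE_FORM_TABLET",
    business_label="Tablet",
    external_codes={"SPOR_RMS": "RMS-PLACEHOLDER-001", "LOCAL_ERP": "TAB"},
    synonyms=["Tablets", "Tab"],
    translations={"de": "Tablette", "fr": "Comprimé"},
)

print(tablet.code_for("SPOR_RMS"))  # identifier used in regulatory exchange
```

The point of a structure like this is that the user only ever chooses "Tablet"; the dictionary, not the user, decides which code travels with it into each downstream output.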


The good news is that modern regulatory information management (RIM) systems - at least those that have kept pace with the latest requirements - are designed to support standardised, searchable information in the context in which it is needed. Properly structured RIM systems organise information by global event, market-specific activities and submission assembly, for instance – holding contextual/process detail about content at each of those levels. This makes it much easier to track what's going on.
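To make that layering concrete, here is a minimal, hypothetical sketch of such a hierarchy. The class and field names are illustrative assumptions, not the data model of CARA or any specific RIM product.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Submission:
    sequence: str                                      # e.g. an eCTD sequence identifier
    status: str                                        # "planned", "compiled", "submitted", ...
    contents: List[str] = field(default_factory=list)  # references to documents/data included

@dataclass
class MarketActivity:
    market: str                                        # country or region the activity targets
    procedure: str                                      # e.g. "variation", "renewal"
    submissions: List[Submission] = field(default_factory=list)

@dataclass
class GlobalEvent:
    name: str                                          # e.g. "Manufacturing site change"
    products_affected: List[str] = field(default_factory=list)
    activities: List[MarketActivity] = field(default_factory=list)

def pending_markets(event: GlobalEvent) -> List[str]:
    """Because context lives at each level, a question such as 'where is this
    change still outstanding?' becomes a simple traversal of the hierarchy."""
    return [
        activity.market
        for activity in event.activities
        if not any(s.status == "submitted" for s in activity.submissions)
    ]
```

However the levels are named in a given system, the principle is the same: each layer holds its own context, so status and history can be answered from the data rather than reconstructed from documents.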


So the mechanism for building an advanced, detailed, continuously evolving source of compliant product and manufacturing truth is within relatively easy reach - even if the scale of the migration task required to consolidate and organise all existing information in an enterprise-wide RIM, ready for IDMP/SPOR compliance and a whole host of other future use cases, feels daunting.


Appointing a data quality team


The part of the transition that needs more thought is who the data owners and custodians will be.


Who will be responsible for keeping data clean, consistent and trustworthy - as the definitive record of a product’s make-up, of its regulatory/market status, and of the processes and data editing that have led up to that point?


Although an enterprise-level master data source suggests a diffusion of responsibility out to all functions that contribute to or touch that information (Trials Management, Regulatory Affairs, Manufacturing/R&D, etc), there is a danger of having too many hands involved – or none, as people deflect accountability in the certainty that data vetting and policing must be someone else’s job.


The answer is likely to be that a new team needs to be assembled, with the specific remit of assessing, consolidating, standardising and maintaining a central source of data for multi-purpose use by everyone who needs it.


This team could sit within the Quality organisation, or float somewhere above that and adjacent functions. The important point is that this data management or data quality team is empowered to advise and be proactive in reviewing the various contributing data sources, checking their veracity and authority, and improving and maintaining their quality, completeness and currency.


Budget allocation/planning will need to be given similar consideration, so that a single department is not expected to carry the cost of a resource with such broad eventual application. There is even benefit in considering a Chief Data Officer if such a post doesn’t already exist in your organisation.


The broader the application, the broader the benefits


Absolutely, there is an uphill task ahead to bring all of this into line. But, from a business perspective, the chance to share and work with the same data, as dynamic vocabularies that support multiple applications and functions of the organisation – from manufacturing to purchasing and sales – promises to be truly transformational in improving productivity, reducing operational overheads, and shortening time to market.


Life Science companies already sit on a wealth of rich data. The opportunity now - from a business and not just a regulatory/risk management perspective – is to ensure all of that information is the best it can be, readily available, and used to maximum potential by every part of the company.


And that’s definitely something Generis and our CARA Life Science Platform can help make possible.

- David Warner
