Process Models are Leading the Way in Bioprocess Development


The BioProcess International West conference, held in San Francisco, CA, from March 19-22, 2018, underscored where the bioprocess industry stands for biotech companies looking to accelerate and intensify their manufacturing processes. The most consistent theme to emerge was the importance of establishing data-driven process models across each of the functional areas within process development.

Speed to market continues to be a focal point for young and established companies alike. Bioprocess development groups are changing their methodologies to accomplish three goals: (i) accelerate First-In-Human trials through precision technology transfer, (ii) adopt continuous processing technologies, where applicable, to reduce upfront CapEx and better match supply with demand, and (iii) reduce process risks early to avoid scale-up challenges later. PD teams are using data-driven process models to accomplish each of these objectives.

 

Accelerate First-In-Human Trials Through Precision Technology Transfer

The key milestone for bioprocess development is reaching the First-In-Human (FIH) clinical stage. The FIH stage requires that many due-diligence steps around safety, dosing, and delivery have been completed. It also implies that a drug candidate can be produced reliably (though not necessarily efficiently) in sufficient quantities to support these early-stage activities (pre-clinical and Phase 1 clinical trials). It's only after FIH that the drug's potential for success starts to take form. Reducing the time to reach this stage may be expensive, but it de-risks the overall program.

Achieving an accelerated timeline, however, takes significant planning and effort. Javier Femenia, a Sr. Scientist in Process Development at BioMarin, discussed his team's need to de-risk their processes in the earliest stages of development. BioMarin's objective is to reach First-In-Human trials for new drug candidates within 1-2 years. Nearly all of their drug candidates, however, require new and sometimes novel unit operations. This represents a significant challenge, as there is no established set of processes and activities from which they can draw for these new candidates. Previous models of product development would have required researchers to generate lab data to assess each risk individually, a process that often took years and many millions of dollars. To meet their goal of FIH in 1-2 years, BioMarin needed to innovate. They now bridge the information gap with process models generated from early-stage, small-scale data, along with data gleaned from previous drug programs with similar dynamics. Where they lack experience, they leverage published data and published process models.
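To make the idea concrete, here is a minimal sketch of how small-scale data from a new program might be pooled with data borrowed from a prior program to fit a simple predictive process model. The column names, parameter ranges, and linear model form are illustrative assumptions, not BioMarin's actual methodology.

```python
# Illustrative sketch only: pool hypothetical small-scale runs for a new
# candidate with data from a prior program to fit a simple process model.
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical early-stage, small-scale runs for the new candidate
small_scale = pd.DataFrame({
    "temperature_C":  [36.5, 37.0, 36.5, 37.0],
    "feed_rate_mL_h": [2.0, 2.0, 3.0, 3.0],
    "titer_g_L":      [1.1, 1.3, 1.5, 1.8],
})

# Hypothetical data from a previous program with similar dynamics
prior_program = pd.DataFrame({
    "temperature_C":  [36.0, 36.5, 37.0, 37.5],
    "feed_rate_mL_h": [1.5, 2.5, 3.5, 2.0],
    "titer_g_L":      [0.9, 1.4, 1.9, 1.2],
})

# Pool both sources to cover gaps where the new program lacks its own data
training = pd.concat([small_scale, prior_program], ignore_index=True)

model = LinearRegression()
model.fit(training[["temperature_C", "feed_rate_mL_h"]], training["titer_g_L"])

# Predict expected titer at a proposed set point before committing lab time
proposed = pd.DataFrame({"temperature_C": [36.8], "feed_rate_mL_h": [2.8]})
print(model.predict(proposed))  # predicted titer (g/L)
```

Real process models span many more parameters and nonlinear effects, but the pattern of filling a new program's data gaps with prior-program and published data is the same.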

 

Adopt Continuous Processing Technologies Where Feasible to Reduce CapEx and Match Supply & Demand

Production volume has become harder for drug manufacturers to predict. Orphan drugs (which, by definition, require low production volumes) represent nearly 50% of all drug candidates. Biosimilars and generics are entering the market, reducing demand for blockbuster drugs. Therapeutic breakthroughs are emerging from T-cell and gene therapy methods, each requiring tailored, sometimes patient-specific, manufacturing and delivery methods. To support these innovations, the industry is investigating a broader set of methods for producing this widening range of products. Several options have emerged across the industry, including continuous perfusion processing, flexible skid-mounted systems, and single-use technologies. This “intensification” of bioprocessing is a central concern for the biotech industry and was the theme of many talks given at BPI West.

Daryl Powers, Associate Director of Upstream Process Development at Sanofi, shared a data-driven comparison between a fed-batch and a continuous perfusion process for the same monoclonal antibody drug candidate. The results show subtle yet material differences between the two methods. Overall, the data indicate that unstable molecules are better suited to continuous upstream processing, while more stable molecules are best served by fed-batch.
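As a rough illustration of why the comparison matters, volumetric productivity for the two modes can be framed in a few lines of arithmetic. The numbers below are placeholders chosen only to show the calculation; they are not the data Sanofi presented.

```python
# Back-of-envelope comparison of volumetric productivity for fed-batch vs.
# continuous perfusion. All inputs are illustrative placeholders.

def fed_batch_productivity(final_titer_g_L: float, run_days: float) -> float:
    """Average grams of product per liter of reactor volume per day."""
    return final_titer_g_L / run_days

def perfusion_productivity(harvest_titer_g_L: float,
                           vessel_volumes_per_day: float) -> float:
    """Grams of product per liter of reactor volume per day at steady state."""
    return harvest_titer_g_L * vessel_volumes_per_day

# Hypothetical example: 5 g/L over a 14-day fed-batch run vs. a perfusion
# culture harvesting 1 vessel volume per day at 1 g/L.
print(fed_batch_productivity(5.0, 14))   # ~0.36 g/L/day
print(perfusion_productivity(1.0, 1.0))  # 1.0 g/L/day
```

Productivity is only one axis of the decision, of course; molecule stability, facility fit, and CapEx all factor into which mode wins for a given candidate.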

 

Reduce Process Risk Early

Re-evaluating process development steps and cutting timelines to the bone tend to exacerbate the already challenging task of developing a new process. Accelerated timelines can only be reached by reducing risk very early in the development cycle. In this sense, early-stage process development (say, cell line development) must refocus its activities on delivering a mature process (i.e., a final cell line) that does not need to be revisited in later stages. The payoff is avoiding iterative development steps and many costly challenges in tech transfer and scale-up.

Heather Oakes at Lonza Pharma & Biotech showed the benefits of Protein Sequence Variant analysis for identifying potential risks to the quality of clinical drug candidates. Mutations can arise during cell line development, and their effects can be exacerbated by variations in the fermentation environment (media variability, over-expression of undesired products, etc.). It is imperative to screen for DNA mutations early in the upstream development process to de-risk their impact on later-stage development. The remaining challenge, however, is linking the genetic sequences of screened cell lines to downstream process development results, providing an analytical basis for identifying which mutations cause the observed performance differences.
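A minimal sketch of that linkage might look like the following: flag clones whose protein sequence departs from a reference, then join those flags to downstream results by clone ID. The sequences, clone IDs, and yields here are invented for illustration and are not Lonza's data or workflow.

```python
# Illustrative sketch: flag sequence variants per clone, then join with
# hypothetical downstream performance data for the same clones.
import pandas as pd

reference = "MKTAYIAKQR"  # hypothetical reference protein sequence

clones = pd.DataFrame({
    "clone_id": ["A1", "B2", "C3"],
    "sequence": ["MKTAYIAKQR", "MKTAYVAKQR", "MKTAYIAKQR"],
})

def variant_positions(seq: str, ref: str) -> list:
    """Return 1-based positions where the clone differs from the reference."""
    return [i + 1 for i, (a, b) in enumerate(zip(seq, ref)) if a != b]

clones["variants"] = clones["sequence"].apply(variant_positions, ref=reference)
clones["has_variant"] = clones["variants"].apply(bool)

# Hypothetical downstream process development results for the same clones
downstream = pd.DataFrame({
    "clone_id": ["A1", "B2", "C3"],
    "step_yield_pct": [82, 64, 80],
})

# Joining the two tables is what lets you ask whether observed performance
# differences track with specific sequence variants.
print(clones.merge(downstream, on="clone_id"))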

 

A New Competitive Field

The crucible of competition and innovation is driving process development groups to invest in the speed afforded by data-driven models for risk identification and mitigation. The desired reward is shorter development timelines. At the heart of these techniques is high-quality data that can be used to validate that a model is trustworthy and robust. Ubiquitous access to transparent, reliable data has become the definitive differentiator for every life science company.

Many of Riffyn’s customers have struggled to create robust data-driven models and decision-making processes, not because the required data are not collected, but because they are not readily accessible. Data are often collected in silos, such as flexible yet unstructured ELNs, or structured yet overly rigid LIMS. When these systems fall short, data end up in spreadsheets. The goal of most of our customers is to gather insights across all of these systems, yet the connections between them are unclear at best and impossible to decipher at worst. The result is a lot of manual effort spent searching for related datasets, followed by a lot of copy/pasting, data cleaning, and organizing before a single analysis can be run. At Riffyn, our goal is to eliminate the effort involved in pulling data together. We have created a unique process-based approach that allows us to automate data contextualization and aggregation: as soon as data are put into the Riffyn SDE, they are contextualized and joined with all relevant upstream and downstream datasets. The result is data that can be used for analyses such as process models and machine learning algorithms within minutes, or even seconds, of collection.
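For readers unfamiliar with what that contextualization involves, the sketch below shows the manual version of the problem: joining exports from an ELN, a LIMS, and a spreadsheet on a shared sample ID so that upstream conditions sit next to downstream outcomes. The tables and column names are hypothetical, and this is not the Riffyn SDE's API; it is simply the kind of joining the SDE is described as automating.

```python
# Illustrative sketch of manual data contextualization across silos.
import pandas as pd

# Upstream conditions exported from an ELN
eln = pd.DataFrame({
    "sample_id": ["S01", "S02", "S03"],
    "temperature_C": [36.5, 37.0, 36.5],
    "ph": [7.0, 6.9, 7.1],
})

# Analytical results pulled from a LIMS for the same samples
lims = pd.DataFrame({
    "sample_id": ["S01", "S02", "S03"],
    "titer_g_L": [1.2, 1.6, 1.4],
})

# Downstream purification results living in a spreadsheet
spreadsheet = pd.DataFrame({
    "sample_id": ["S01", "S02", "S03"],
    "step_yield_pct": [78, 85, 81],
})

# Contextualization, at its simplest, is joining these silos on a shared key
# so upstream conditions and downstream outcomes land in one table for modeling.
joined = eln.merge(lims, on="sample_id").merge(spreadsheet, on="sample_id")
print(joined)
```

In practice the keys, schemas, and units rarely line up this cleanly across systems, which is exactly why the manual version of this work consumes so much analyst time.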

Douglas Williams