Could response-based dose be the future of patient care?

An FDA report showed that most drugs are effective in only 25-62% of patients. That probably makes you wonder why they are approved and prescribed at all. The answer is not simple. Many approved drugs have a range of therapeutic doses that can be prescribed, which allows clinicians to adjust the dose based on patient response and toxicity.

This variation in efficacy is another argument in support of precision medicine, where patients are assessed based on their medical history, genetic background and other information in order to receive the optimal dose and the most appropriate treatment for their condition.

What are the disadvantages of current clinical trial designs when it comes to dose individualisation?

Some small clinical trials in specific indications, such as blood pressure, blood sugar, pain, seizures and coagulation, can use dose-titration models to identify the best dose for each patient. Large clinical trials, however, often use a fixed dose.

Statistical models built on clinical trial data show that in large fixed-dose trials the response rate can vary between 20% and 80%, while trials with individually adjusted doses achieve much higher response rates.
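This gap can be illustrated with a toy simulation, a sketch under purely hypothetical assumptions (the EC50 distribution, doses and response threshold below are invented for illustration): each simulated patient has a different half-effective dose, and titration lets more of them reach a response than any single fixed dose can.

```python
import random

random.seed(42)

def simulate(n_patients=10_000, fixed_dose=50.0, target_effect=0.5):
    """Toy model: a patient's effect at a dose is dose / (dose + EC50),
    where the half-effective dose EC50 varies between patients."""
    fixed_responders = 0
    titrated_responders = 0
    for _ in range(n_patients):
        ec50 = random.lognormvariate(mu=4.0, sigma=1.0)  # hypothetical variability
        # Fixed dose: responder only if the effect crosses the target.
        if fixed_dose / (fixed_dose + ec50) >= target_effect:
            fixed_responders += 1
        # Titration: escalate 25 -> 50 -> 100 -> 200 until response or cap.
        for dose in (25.0, 50.0, 100.0, 200.0):
            if dose / (dose + ec50) >= target_effect:
                titrated_responders += 1
                break
    return fixed_responders / n_patients, titrated_responders / n_patients

fixed_rate, titrated_rate = simulate()
print(f"fixed-dose response rate: {fixed_rate:.0%}")
print(f"titrated response rate:   {titrated_rate:.0%}")
```

With these invented parameters the fixed dose helps under half of the patients while titration helps most of them, mirroring the 20-80% spread the models describe.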

Why can't we simply use individualised doses instead of fixed doses?

While having the option to adjust doses according to individual response would no doubt be the best solution to the problem, several complications prevent this.

  • For example, if a drug is eliminated by the kidneys and the patient has impaired renal function, they should be treated with a lower dose to reduce toxicity. While this sounds like a logical decision, the risk of giving the patient a sub-optimal dose should also be considered.
  • Titration is not possible in oncology trials, because the calculated dose is the maximum tolerated dose, selected to maximise the effect.
  • Dose reduction is not an option for HIV drugs because of the high risk of drug resistance.
  • Drugs for slowly progressing diseases that require extended treatment also cannot be assessed adequately for efficacy.
  • Acute conditions that require one-off treatment cannot use dose titration.
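For the renal example in the first bullet, renal function is often estimated with the Cockcroft-Gault formula. A minimal sketch follows (the formula is standard; the dose-adjustment bands are hypothetical, since real cut-offs are drug-specific):

```python
def creatinine_clearance(age, weight_kg, serum_creatinine_mg_dl, female=False):
    """Cockcroft-Gault estimate of creatinine clearance (mL/min)."""
    crcl = ((140 - age) * weight_kg) / (72 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

def adjusted_dose(standard_dose_mg, crcl):
    """Hypothetical renal adjustment bands; real cut-offs are drug-specific."""
    if crcl >= 60:
        return standard_dose_mg            # normal renal function
    if crcl >= 30:
        return standard_dose_mg * 0.5      # moderate impairment
    return standard_dose_mg * 0.25         # severe impairment

crcl = creatinine_clearance(age=70, weight_kg=60,
                            serum_creatinine_mg_dl=1.8, female=True)
print(f"CrCl = {crcl:.0f} mL/min, adjusted dose: {adjusted_dose(100, crcl):.0f} mg")
```

Even with such a rule in place, the bullet's caveat stands: a mechanically reduced dose may be sub-optimal for the individual patient.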

There are several considerations for cases where dose titration is possible:

  1. The condition has to be stable to allow efficacy assessment.
  2. The drug should rapidly reach pharmacokinetic and pharmacodynamic steady state; otherwise the clinical trial will be very long, given that patients have to take more than one dose.
  3. Efficacy and toxicity should be quantifiable and reasonably stable once steady state is reached.
  4. The response to the drug should have a quick onset and offset, to avoid long washout periods.
  5. There should be an upper limit of dose-escalation to ensure patient safety.
  6. The placebo response must be subtracted: patients may expect a better response at higher doses, which could inflate the apparent efficacy of the placebo arm.
  7. The treatment duration will be longer, because patients need more than one dose, and this increases the risk of drop-out.
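Considerations 1-5 can be sketched as a simple titration loop; the dose schedule, target and Emax-style response below are hypothetical illustrations, not a trial protocol:

```python
def titrate(observe_response, doses=(10, 20, 40, 80), target=0.5):
    """Escalate through a fixed dose schedule until the observed response
    meets the target or the safety cap (last dose) is reached.
    `observe_response(dose)` stands in for an efficacy assessment made
    once steady state has been reached."""
    for dose in doses:
        if observe_response(dose) >= target:
            return dose, True     # effective dose found
    return doses[-1], False       # capped without reaching the target

# Hypothetical patient whose response follows an Emax curve with EC50 = 30.
dose, responded = titrate(lambda d: d / (d + 30))
print(dose, responded)
```

The upper limit of the schedule implements consideration 5: escalation stops at the cap even if the target response is never reached.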

While dose titration has its challenges, it has a future in precision medicine and is a logical option when assessing patients’ treatment.


The remarkable therapeutic potential of response-based dose individualisation in drug trials and patient care

Published on 1 Aug 2019

Author: Olga Peycheva, Director at Solutions OP Ltd. 
Olga has been working in clinical research since 2005 and has extensive experience in Eastern and Western Europe

Adopting orphan drugs in different therapeutic areas

What happens if a newly developed drug fails in the tested indication?

Very often such drugs are abandoned if the developers believe they cannot be used for other indications or therapeutic areas. In such cases these drugs are referred to as ‘orphaned drugs’.

Where does the term ‘orphan drug’ come from?

The focus of drug development is shifting towards diseases that affect a smaller proportion of the population, also known as rare or ‘orphan’ diseases. In the USA a disease is considered ‘orphan’ if it affects fewer than 200,000 people, or roughly 1 per 1,500 people. The term ‘orphan drug’ refers to drugs used to treat orphan diseases, and it derives from legislation such as the Orphan Drug Act of 1983.

Not surprisingly, oncology is viewed as one of the major therapeutic areas where orphan drugs are used, because more and more evidence suggests that cancer is a collection of orphan diseases.

Vicus Therapeutics has developed a model which allows adoption of such orphan drugs for new cancer indications.

Step 1: Hierarchical Network Algorithm (HiNET) – an algorithm that models the disease by evaluating tissue energetics, homeostatic control and biochemical pathways.

Step 2: Drug Selection – this step uses a database containing information on off-patent drugs, their targets, human efficacy data in similar diseases, potential adverse events and pharmacokinetic profiles.

Step 3: Due to the complexity of cancer, a single drug can rarely be used on its own, so the model creates potential treatment regimens. The suggested regimens are then evaluated for their potential safety and efficacy.
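Step 2 could be sketched, very loosely, as a filter over drug records. Everything below (drug names, fields, criteria) is a hypothetical stand-in for what a curated off-patent drug database would actually contain:

```python
# Hypothetical drug records; a real system would query a curated database.
candidates = [
    {"name": "drug_a", "off_patent": True,  "target": "mTOR",
     "efficacy_evidence": True,  "severe_ae": False},
    {"name": "drug_b", "off_patent": True,  "target": "VEGF",
     "efficacy_evidence": False, "severe_ae": False},
    {"name": "drug_c", "off_patent": False, "target": "EGFR",
     "efficacy_evidence": True,  "severe_ae": False},
    {"name": "drug_d", "off_patent": True,  "target": "AMPK",
     "efficacy_evidence": True,  "severe_ae": True},
]

def select(drugs, pathway_targets):
    """Keep off-patent drugs that hit a pathway flagged by the disease model,
    have human efficacy data in similar diseases, and no severe adverse events."""
    return [d["name"] for d in drugs
            if d["off_patent"]
            and d["target"] in pathway_targets
            and d["efficacy_evidence"]
            and not d["severe_ae"]]

print(select(candidates, pathway_targets={"mTOR", "AMPK"}))  # ['drug_a']
```

The surviving candidates would then feed into Step 3, where combinations are assembled into regimens and assessed for safety and efficacy.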

Using such models to repurpose orphan drugs is a novel and smart way of speeding up the drug development process and identifying new therapies for rare diseases, which in many cases have no treatment options.


Adopting orphan drugs: developing multidrug regimens using generic drugs

Published on 4 July 2019


Benefits and Challenges of Developing Companion Diagnostics

Personalised medicine is a very popular topic in healthcare, seen as a path towards better and more cost-effective treatment of patients. However, we are not there yet, for various reasons. One of the major ones is the lack of diagnostic tests that would equip clinicians with the right tools to make the right medical decisions.

In this review we will discuss the benefits and challenges of developing diagnostic tests, the so-called companion diagnostic devices, which provide information regarding the safety and efficacy of a corresponding therapeutic product.

What could be the benefits of companion diagnostics?

  • They help identify patients who will respond better to a therapy, have fewer side effects and enjoy a better quality of life;
  • They allow clinicians to identify the best treatment options for their patients, saving time and money on unnecessary procedures and treatments;
  • They help regulatory agencies better understand and evaluate the safety and efficacy of new therapies;
  • They allow drug development timelines to be shortened and costs to be reduced.

These are some of the benefits of having powerful diagnostics to help make decisions regarding patients’ treatment.

What kind of challenges are there for developing companion diagnostics?

  • In cases where there is limited information about the disease or the mechanism of drug action, developing companion diagnostics can be a serious challenge;
  • Often drug development and diagnostics development follow their own pathways, which means that by the time the drug reaches phase 3 the diagnostic may not be ready to be used to select the correct patient population;
  • The reliability of the test is also important: a high prevalence of false positive or false negative results will compromise its use;
  • Current healthcare systems often do not reimburse the full cost of such tests, which discourages their use;
  • Intellectual property laws treat companion diagnostics differently and often allow the creation and marketing of cheaper but unvalidated tests;
  • Companion diagnostics are regulated differently than medicinal products, which means there can be a significant delay before they reach the market.
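The reliability point deserves a number: by Bayes' rule, even a test with high sensitivity and specificity produces mostly false positives when the marker it detects is rare. A short sketch with illustrative figures:

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Positive and negative predictive value via Bayes' rule."""
    tp = sensitivity * prevalence            # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# A seemingly accurate test (95% sensitivity and specificity) applied to
# a marker carried by only 2% of patients (illustrative numbers):
ppv, npv = predictive_values(0.95, 0.95, 0.02)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")
```

With these figures, under a third of positive results are true positives, which is exactly why test reliability must be matched to the prevalence of the marker before a diagnostic can guide treatment.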

While there are definitely challenges in developing companion tests, they are the future of precision medicine, so we hope to see changes that provide greater support for developing and testing companion diagnostics.


Developing companion diagnostics for delivering personalised medicine: opportunities and challenges

Published on 3 June 2019


Can we predict adverse reactions?

One of the big challenges in drug development is our limited ability to identify potential adverse reactions associated with new therapeutics. Pharmacogenomics provides a great opportunity to understand the mechanisms of action of drugs and to predict not just their adverse reactions but also their efficacy. Like all opportunities, though, it has its limitations. In this review we will discuss the use of pharmacogenomics in drug development and in predicting adverse reactions.

But let’s start with what an adverse reaction is and why it matters in drug development. Adverse drug reactions range from expected (and unexpected) toxicities and therapeutic failures to rare, severe reactions. Monitoring and preventing these reactions is a top priority in drug development. How well we can predict adverse reactions depends on various factors: where the nature of the studied drug is known, some adverse reactions can be anticipated; similarly, if the metabolic pathway of the drug is known, certain adverse reactions can be expected.

How could pharmacogenomics contribute to monitoring drug safety?

For example, codeine is activated to morphine by the liver enzyme CYP2D6; if a patient carries multiple copies of the active CYP2D6 gene, they may be exposed to higher doses of morphine. If, on the other hand, the enzyme has low activity, patients will have lower levels of the active drug. The same enzyme is responsible for activating one of the cancer therapeutics, tamoxifen, and patients with low enzyme activity could be exposed to lower levels of its active form. So why is this important? Patients with highly active CYP2D6 could be at risk of overdose when taking codeine, while cancer patients who do not metabolise tamoxifen well will have lower levels of active drug, which could affect their treatment.
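As a sketch of how such genotype information might feed into prescribing decisions, the snippet below maps a CYP2D6 activity score to a metaboliser phenotype. The bands loosely follow published activity-score conventions, but this is a simplified illustration, not a clinical tool:

```python
def cyp2d6_phenotype(activity_score):
    """Approximate CYP2D6 phenotype from an activity score
    (sum of per-allele scores); simplified banding."""
    if activity_score == 0:
        return "poor metaboliser"          # e.g. two non-functional alleles
    if activity_score < 1.25:
        return "intermediate metaboliser"
    if activity_score <= 2.25:
        return "normal metaboliser"
    return "ultrarapid metaboliser"        # e.g. gene duplications

# Illustrative prescribing notes keyed to phenotype (hypothetical wording).
CODEINE_GUIDANCE = {
    "ultrarapid metaboliser": "avoid codeine: risk of morphine toxicity",
    "poor metaboliser": "avoid codeine: likely lack of analgesia",
}

phenotype = cyp2d6_phenotype(3.0)          # multiple active gene copies
print(phenotype, "->", CODEINE_GUIDANCE.get(phenotype, "standard label dosing"))
```

A patient with duplicated active alleles lands in the ultrarapid band, matching the overdose risk described above.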

In some cases the consequences are quite drastic: for example, data from clinical trials in patients with metastatic colorectal cancer show that if the tumour cells carry an activating mutation in the KRAS gene, the anti-cancer drugs cetuximab and panitumumab have no effect.

All these examples show the importance of genetic information when treating different medical conditions, which is why some clinical trials collect genetic markers for analysis.

What are the challenges in using pharmacogenetics information?

  • There is still limited data for many drugs: sometimes this is a result of patent protections; in other cases there is simply a lack of data, or the drug's mechanism of action or metabolic pathway is unknown.
  • Where patients have very limited treatment options, there is an ethical dilemma if they have to be excluded from treatment because of an unfavourable genetic profile.
  • Genetic testing is expensive and adds cost to patients' treatment.
  • Collecting information after drug approval is outside drug developers' control.

While there are challenges in using pharmacogenomic methods to identify adverse reactions, they will have their place in the future of drug development.


Pharmacogenomic strategies in drug safety

Published on 1 May 2019


Can AI help speed up drug development?

Artificial intelligence (AI) and machine learning are exciting new tools that we hope will help drug development. AI here means the analysis of very large data sets with statistical machine-learning methods. Put simply, AI could be used in drug development to analyse big data sets in order to identify potential new drugs.

But before we discuss the challenges and opportunities that AI could provide, let's explore the current drug development process a bit more. According to the statistics, an average of 16 new molecular entities were launched per year between 1950 and 2014. The slow drug development process and all its challenges cause costs to rise by 8% per year. Some of the difficulties: little may be known about the biological mechanisms of a particular disease; the potential drug target may be difficult to reach, or hitting it may cause other complications; and some molecules belong to classes that cannot be used as drugs for various reasons. Another factor is publication bias and patent limitations, which prevent scientists working in drug development from having adequate information about potential molecules. The tendency of pharma companies to outsource parts of the drug development process to CROs is another challenge, because it restricts scientists' access to important information.
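As a quick check on what 8% annual growth implies, compounding at that rate doubles costs in roughly nine years:

```python
import math

# Doubling time at 8% annual growth: solve 1.08**t = 2 for t.
doubling_years = math.log(2) / math.log(1.08)
print(f"costs double roughly every {doubling_years:.1f} years")
```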

While artificial intelligence and big data bring exciting new opportunities, they also carry their own risks.

  • One of the biggest risks of creating a big database holding all the information in medicinal chemistry is that we may end up with `everyone doing the same thing`;
  • Identifying new compounds is a very complex process;
  • AI has so far been tested mainly on training data;
  • Lack of interpretability: the model does not show which parameters it took into account to produce the final result;
  • There is no established method to assess how adequate the final results are, which could potentially increase costs;
  • There is a risk of algorithmic bias.

All these risks show that in order to use artificial intelligence you need large data sets for training, as well as an established method to analyse and interpret the final results. However, this should not discourage the use of AI in drug development, because there are great opportunities ahead.


Can we accelerate medicinal chemistry by augmenting the chemist with Big Data and artificial intelligence?

Published on 1 April 2019


3D printing and its potential application in drug development

Three-dimensional printing (3D printing) is gaining popularity in areas from manufacturing to medicine, and is considered by some to mark the beginning of a new industrial revolution. Not surprisingly, 3D printing is also gaining popularity in drug development, although its potential has not been fully explored. Currently 3D printing is being investigated as a method for producing solid oral dosage forms (such as tablets and capsules) and for its role in developing personalised medicines. However, like all new technologies, 3D printing has its advantages and challenges.

What types of 3D printing technologies are used?

  • Vat photopolymerisation is a process that utilises a light source (e.g., laser) to selectively cure a vat of liquid photopolymer, transforming it into a solid object. Examples of such are stereolithography (SLA), digital light processing (DLP), and continuous liquid interface production (CLIP) technologies;
  • Binder jetting (BJ) revolves around the selective binding of solid powder particles by spraying a liquid agent;
  • Powder bed fusion is a selective thermal process that involves the fusion of powder particles by the application of a laser or other heat source. It includes selective laser sintering (SLS), multijet fusion (MJF), direct metal laser sintering/selective laser melting (DMLS/SLM), and electron beam melting (EBM);
  • Material jetting is a selective technique in which liquid droplets of materials are deposited on a surface. These droplets spontaneously solidify [known as drop-on-demand (DOD)] or can be cured or fused using an ultraviolet (UV) light [known as material jetting (MJ)] or a heat source [known as nanoparticle jetting (NPJ)];
  • Direct energy deposition is a process that selectively deposits a form of focused thermal energy (e.g., laser) directly onto powder particles, causing them to melt and fuse. It involves two technologies: laser engineering net shape (LENS) and electron beam additive manufacturing (EBAM);
  • Sheet lamination comprises the bonding of materials in the form of sheets (e.g., cut paper, plastic or metal) to fabricate 3D objects. It is often known as laminated object manufacturing (LOM) or ultrasonic additive manufacturing (UAM);
  • Material extrusion is a technology that involves the selective dispensing of material in a semisolid form. This technology is further subdivided into fused deposition modelling (FDM), which utilises thermoplastics, and semisolid extrusion (SSE), which utilises gels and pastes.

Advantages of 3D printing in drug development

  • Drug research is an extensive and expensive process that requires a sophisticated supply chain. 3D printing can reduce the costs of the clinical research phase by allowing small or ‘one-off’ batches of formulations or drugs to be produced. This is especially important at the early stages: drug discovery, pre-clinical studies and first-in-human (FIH) studies. For example, chemists from the University of Glasgow have successfully produced ibuprofen, and another team has synthesised baclofen.
  • The method could be very successful at producing, on a small scale, molecules that normally have high cost or poor stability.
  • 3D printing could also enable drugs to be produced on a small scale in remote locations that could not otherwise support the process.
  • 3D printing has been used in pre-clinical drug discovery to produce 3D-printed animal and human tissues, which can then be used to study drug toxicity and metabolism. For example, a team from Harvard University was able to 3D print the first cardiac microphysiological device, and a number of organs, such as the stomach, pancreas and small intestine, have also been 3D printed, opening new opportunities for in vitro drug testing and reducing the need for animal models.
  • 3D printing could speed up the drug manufacturing process on a small scale.
  • 3D printing does not require serious modifications or major labour input.
  • 3D printing is fast: it can take an average of 6.5 minutes to produce a small object that would take from 3.5 up to 11.5 hours with conventional methods.

Challenges in 3D printing

  • The biggest challenge is the high price of 3D printers, which can range from £1,500 to £4 million.
  • Another big issue is potential toxicity due to the presence of unreacted monomers.
  • The final product of 3D printing can have poor mechanical properties, reflected in its friability and hardness values.
  • Another potential risk is drug degradation during the process, because of the high temperatures used during printing.

While current 3D printing technologies have their limitations, the field is still at an early phase of development, and some of these challenges will probably be overcome in the future. 3D printing is definitely an exciting field with potential applications in drug development.


Reshaping drug development using 3D printing

Published on 1 March 2019

