First patient dosed with COVI-VAC, an intranasal COVID-19 vaccine candidate

The first patient has been dosed in the Phase I clinical trial of COVI-VAC, a single-dose, intranasal, live attenuated vaccine against SARS-CoV-2, the virus that causes COVID-19.

The randomised, double-blind, placebo-controlled dose-escalation study will evaluate the safety and tolerability of COVI-VAC at multiple dose levels in a total of 48 volunteers. In addition, the study will evaluate the vaccine’s ability to provoke an immune response, assessed by measuring neutralising antibodies, mucosal immunity in the airway and cellular immunity.

Codagenix, the company developing the COVI-VAC vaccine candidate, expects to report initial data from the study by mid-2021 and, pending the Phase I results, to begin advanced clinical testing in mid-2021. The trial is being conducted by hVIVO in London, UK.

“Dosing the first patient in the Phase I trial is an important milestone in the development of COVI-VAC, which we believe has significant advantages over other vaccines against COVID-19,” said Dr J. Robert Coleman, Chief Executive Officer of Codagenix. “Importantly, as a live attenuated vaccine, COVI-VAC has the potential to provide a broader immune response in comparison to other COVID-19 vaccines that target only a portion of the virus. This could prove critical as new variants of SARS-CoV-2 have begun to emerge.”

“Given the impressive efficacy signals from the vaccines that have already received Emergency Use Authorization in the US, it is tempting to take your foot off the gas if you are developing a different vaccine construct for this terrible virus,” said Charlie Petty, principal at Adjuvant Capital and Codagenix board member. “It may sound cliché, but the reality of this pandemic is that none of us is safe until all of us are safe – globally – and that will require billions of vaccines that can be easily delivered and administered. We are optimistic that COVI-VAC can play an important role in achieving equitable access to protection from SARS-CoV-2, and the Serum Institute of India is the ideal partner to achieve our large-scale distribution ambitions.”

COVI-VAC was developed with Codagenix’s Synthetic Attenuated Virus Engineering (SAVE) platform, which uses synthetic biology to re-code the genes of viruses into safe and stable vaccines. COVI-VAC is designed to deliver a safe, live attenuated version of SARS-CoV-2 that may induce a more robust immune response and long-lasting cellular immunity against SARS-CoV-2 compared to other vaccines against the virus.

Additionally, the company states that COVI-VAC has the potential to address several key logistical challenges to immunisation against SARS-CoV-2 at a global scale: it requires minimal training to administer; it needs neither a needle and syringe nor ultra-low temperature freezers; and it can be manufactured at large scale using technologies already in place at most global manufacturing facilities.

The Serum Institute of India is manufacturing COVI-VAC.

EMA receives authorisation application for AstraZeneca COVID-19 vaccine

The European Medicines Agency (EMA) has announced that it has received an application for conditional marketing authorisation (CMA) for the COVID-19 vaccine developed by AstraZeneca and Oxford University.

According to the EMA, the assessment of the vaccine will proceed under an accelerated timeline. An opinion on the marketing authorisation could be issued by 29 January during the meeting of EMA’s scientific committee for human medicines (CHMP), provided that the data submitted on the quality, safety and efficacy of the vaccine are sufficiently robust and complete and that any additional information required to complete the assessment is promptly submitted.

The EMA says it has already reviewed some data on the vaccine during a rolling review. During this phase, the agency assessed data from laboratory studies, data on the vaccine’s quality and some evidence on safety and efficacy from a pooled analysis of interim clinical data from four ongoing clinical trials in the UK, Brazil and South Africa. Additional scientific information on issues related to quality, safety and efficacy of the vaccine was also provided by the company at the request of CHMP and is currently being assessed.

The Oxford-AstraZeneca COVID-19 vaccine uses an adenovirus that has been modified to contain the gene for making the SARS-CoV-2 Spike (S) protein. The adenovirus itself cannot reproduce and does not cause disease.

Once administered, the vaccine delivers the gene for the SARS-CoV-2 S protein into cells in the body. The cells will use the gene to produce the S protein. The person’s immune system will treat this S protein as foreign and produce natural defences − antibodies and T cells − against it. If the vaccinated person comes into contact with SARS-CoV-2 later, the immune system will recognise the virus and be prepared to attack it.

If the EMA concludes that the benefits of the vaccine outweigh its risks in protecting against COVID‑19, it will recommend granting a CMA. The European Commission (EC) will then fast-track its decision-making process with a view to granting a CMA valid in all EU and EEA Member States within days.

Amgen sets target of achieving carbon neutrality by 2027

Amgen has announced the launch of a new seven-year environmental sustainability plan, which includes a commitment to achieve carbon neutrality, while also reducing water use by 40 percent and waste disposed of by 75 percent.

“As a science-based company with a mission to serve patients, we understand the profound impact that climate change is having on human health around the world,” said Robert Bradway, Chairman and Chief Executive Officer at Amgen. “Our new commitments expand on our previous achievements and drive Amgen’s continued leadership on environmental sustainability that will benefit our patients, staff, shareholders and communities.”

According to the company, projects implemented since 2007 have resulted in a 33 percent reduction in carbon emissions, a 30 percent reduction in water use and a 28 percent reduction in waste.

Amgen is set to invest more than $200 million to achieve these 2027 environmental commitments. It expects the investment to make the business not just more environmentally sustainable but also more flexible and productive, with the resulting efficiencies reducing operating costs over the same period.

The company says it will focus on the use of innovative technologies to significantly reduce carbon emissions from Amgen-owned operations, as well as on sourcing renewable energy. For example, the business’ newest biomanufacturing plant in Singapore generates 70 percent less carbon than traditional biomanufacturing facilities. The company has built a second such plant in Rhode Island.

“Our approach to reducing Amgen’s environmental footprint focuses on driving innovation across our business operations, increasing the efficiency of existing processes and integrating purchased or on-site renewable energy. We will implement sustainable practices in the areas of research, process development, manufacturing, transportation and distribution, sourcing and products and packaging,” the company said. 

Where carbon emissions cannot be eliminated from its operations, Amgen will invest in sustainability projects that sequester or avoid greenhouse gas emissions. In addition, the company will engage with its suppliers to assist and encourage carbon reductions throughout its value chain.

New Year, New You Sweepstakes Official Rules

Quest Presents: New Year New You SWEEPSTAKES Official Rules NO PURCHASE NECESSARY.  A PURCHASE WILL NOT IMPROVE YOUR CHANCE OF WINNING. PROMOTION DESCRIPTION:  The “Quest Presents: New Year New You Giveaway” Sweepstakes (the “Sweepstakes”) begins on or about January 13, 2020 at 12:01 a.m. Pacific Time (“PT”) and ends on […]

RNA Vaccines And Their Lipids

So now that people (not enough of them!) are getting vaccinated in the US with the Pfizer/BioNTech and Moderna mRNA vaccines, let’s talk about some more details of what is in those injections and what happens once the shot is given. The workings of an mRNA vaccine touch on a lot of different cellular processes and a lot of drug-delivery issues, so we can Talk Corona while also talking drug discovery, biology, and chemistry at the same time. I want to start off by recommending this piece by Bert Hubert on the workings of the Pfizer/BioNTech vaccine – Bert goes into a lot of detail that I’m going to run through rather quickly in the next few paragraphs, and you’re probably going to have a better shot at understanding it from him than from me!

One theme that will show up many times in this post is that these vaccines were not invented from scratch. There’s a long list of things that had to be worked on in order for the field to be in the shape it was in at the beginning of 2020, and that’s why things ran so quickly. “RNA as a therapeutic agent” is an idea that has had billions of dollars of work poured into it over the last twenty or thirty years, so when you hear about these vaccines as something new, remember that’s only for certain definitions of “new”.

As all the world knows, these vaccines are based on messenger RNA (mRNA). That, of course, is the type that’s produced in a living cell by reading off a given stretch of DNA and assembling the matching RNA, after which it goes off on its own to be fed into a ribosome, which will assemble proteins based on its code, reading it off three letters – one “codon” – at a time. So messenger RNA has its feet in both worlds, if it had feet: it’s down there in the nucleus being put together next to an exposed and unwound strand of DNA, but afterwards it’s also present right in the middle of the ribosome machinery, as amino acids get brought in and spliced together into a growing protein strand. Genetic information gets turned into proteins (there’s the Central Dogma of molecular biology for you), and mRNA is how that happens.
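
Just to make that reading-off concrete, here’s a toy sketch of the whole flow in a few lines of Python – grossly simplified, with a codon table truncated to the handful of entries it uses, and none of the capping, tailing or splicing machinery we’ll get to below:

```python
# Toy sketch of the central dogma: DNA -> mRNA -> protein.
# Heavily simplified; real cells add caps, tails and splicing (see below).

CODON_TABLE = {  # truncated genetic code, one-letter amino acid names
    "AUG": "M",  # start codon (methionine)
    "AAA": "K",  # lysine -- why a translated poly-A tail would read as poly-Lys
    "GGC": "G", "UUU": "F",
    "UAA": None, "UAG": None, "UGA": None,  # stop codons
}

def transcribe(dna_template: str) -> str:
    """Read off a DNA template strand into mRNA (complementary bases, T -> U)."""
    pairs = {"A": "U", "T": "A", "G": "C", "C": "G"}
    return "".join(pairs[base] for base in dna_template)

def translate(mrna: str) -> str:
    """Read the mRNA three letters (one codon) at a time, as a ribosome would."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3])
        if residue is None:  # stop codon (or absent from this toy table)
            break
        protein.append(residue)
    return "".join(protein)

mrna = transcribe("TACTTTCCGAAAATT")
print(mrna, "->", translate(mrna))  # AUGAAAGGCUUUUAA -> MKGF
```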

Now, the specialists in the room will appreciate the huge number of details that go into both those processes. The concepts are pretty straightforward (read off DNA to make mRNA, read off mRNA to make protein), but the execution is something else again. It’s worth going into those in a little detail to explain why the mRNAs in the vaccines look the way that they do, and why designing a good one is a lot harder than it looks.

As a new mRNA strand is generated by the action of the RNA polymerase II machinery on a stretch of DNA, it gets a “cap” attached to the end that’s coming out from the DNA (the “5-prime” end), a special nucleotide (7-methylguanosine) that’s used just for that purpose. But don’t get the idea that the new mRNA strand is just waving in the nucleoplasmic breeze – at all points, the developing mRNA is associated with a whole mound of specialized RNA-binding proteins that keep it from balling up on itself like a long strand of packing tape, which is what it would certainly end up doing otherwise.

So the 5-prime end is capped, and then the other one (the “3-prime” end) undergoes some processing of its own. It has a certain number of residues scissored right back off, and then a stretch of “poly-A” (one adenosine residue after another) is added on – these processes are done by another big complex of enzymatic and scaffolding proteins working on that end of the molecule. By the time that’s finished, an mRNA can have a couple of hundred A residues tailing off its 3-prime end. This doesn’t get turned into protein, though – otherwise every protein that gets made would come out of the ribosome with a long tail of lysines on it, since the “AAA” codon under other circumstances means “Lys” to the translation machinery.

Then there’s another key step. In most organisms, the DNA doesn’t just read off the uninterrupted code for a whole protein. It has interruptions of other stretches of code (“introns”), and at this point those are clipped out and the actual mRNAs are spliced together by assembling their pieces (the “exons”) into their final form. That may seem like a rather weird process if you haven’t run into it, and it certainly was a surprise when it was discovered back in the late 1970s. This is done by yet another Death-Star-sized mass of proteins, the “spliceosome”, and it provides opportunities for “splice variants” along the way that will produce different proteins when a ribosome gets ahold of them. And that’s a big reason why we have a lot more different proteins in our bodies than we have different genes: many of them can be mixed-and-matched into these different variants back at the mRNA level.

I mentioned the poly-A tail, but there are also key regions at both the 5-prime and the 3-prime ends of an mRNA strand that don’t get translated into protein. These contain important regulatory information for how that translation should go. There are “start” and “stop” codons whose main job is to convey instructions to the ribosomes rather than simply to specify an amino acid. The “leader” sequence at the beginning of the mRNA, and sections at the other end as well, can have profound effects on how readily it gets taken up by any given ribosome and how efficiently it moves through. Ribosomes themselves have at least two ways to feed an mRNA into their protein-making machinery: the normal cap-dependent route, which requires a “capped” mRNA, and entry through an “internal ribosome entry site” (IRES), which doesn’t care about the cap – and the use of these is also mediated by the untranslated RNA regions. It goes on and on! The last 30 or 40 years of biology have seen these details brought to light through vast amounts of effort in the lab (and similarly vast amounts of staring out windows trying to sort out mentally what’s going on), and that process is nowhere near at an end.

I’ve rambled on about all this to bring us back to the mRNA vaccines. You can see from that quick tour of the machinery that it would be a bit too hopeful just to produce a plain stretch of RNA that codes for the viral Spike protein and expect that to work right off the bat. No, you’re going to have to optimize both ends of it so that ribosomes are enthusiastic about it and zip right down the strand producing that Spike for you. (And remember, the vaccines we have are also producing a variation of the Spike that is stabilized in its prefusion shape, the better to have antibodies recognizing that, so you’re not even coding for the “native” Spike from the very beginning.)

And as you’ll know if you’ve read that article from Bert Hubert that I linked to at the beginning, the mRNA vaccines also feature a good deal more such engineering. The three-letter codons for amino acids have some redundancies in them, but not all of those are processed with the same alacrity. Ones that are heavier in C and G residues seem to be run through more efficiently, so the sequences are biased that way. There are also the modified bases like pseudouridine/1-methylpseudouridine that get read off at the ribosome like their native cousin (in this case, good ol’ uridine, U) but make the mRNA strand both more stable and less likely to set off an immune response against itself. So the sequences in the vaccines have human fingerprints all over them – see Bert’s article for more.
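
If you want to see the GC-biasing idea in miniature, here’s a sketch of it (with a synonym table truncated to a few amino acids, and GC content as the only criterion – the real vaccine sequences were optimized with far more care than this):

```python
# Minimal sketch of GC-biased codon choice: for each residue, pick the
# synonymous codon richest in G and C. Truncated synonym table for
# illustration; real optimization weighs much more than GC content.

SYNONYMS = {
    "K": ["AAA", "AAG"],                               # lysine
    "F": ["UUU", "UUC"],                               # phenylalanine
    "L": ["UUA", "UUG", "CUU", "CUC", "CUA", "CUG"],   # leucine
    "G": ["GGU", "GGC", "GGA", "GGG"],                 # glycine
}

def gc_count(codon: str) -> int:
    return sum(base in "GC" for base in codon)

def gc_bias(protein: str) -> str:
    """Re-code a protein sequence, preferring the most GC-rich codon."""
    return "".join(max(SYNONYMS[aa], key=gc_count) for aa in protein)

print(gc_bias("KFLG"))  # AAGUUCCUCGGC -- same protein, more G/C throughout
```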

But all that engineering availeth one not if the mRNA doesn’t make it to the cells and inside the cells. And that takes us to the formulations, which are another essential part of the whole mRNA vaccine story. Cell and molecular biologists tend to think of RNA molecules in general as pretty fragile things, and that reputation has been earned. They’re intrinsically less stable than the corresponding DNA molecules, and the odds are further stacked against them in the body by our own immune system’s defenses against foreign RNAs from pathogens like the current coronavirus. Just for starters, there are plenty of RNase enzymes out there ready to tear any wandering RNAs to bits – the body can use circulating RNA molecules as signals, but these things are under tight control. So if you just inject a naked RNA sequence into someone’s blood, it’ll get stripped down to nothing before it’s traveled very far.

What are your alternatives for a more suitably clothed RNA? Well, as mentioned earlier, mRNA vaccines are not a new idea, nor is the idea of therapeutic RNA in general (remember siRNA?). So there’s been a lot of work over the years to find suitable carriers (see this 2016 review for an overview). It was not obvious which of these possibilities (lipids, carrier proteins, synthetic polymers, and more) would work out, of course. The only way to find out was (and is) to spend the time, spend the money, and go run the experiments. One thing that many of these ideas have in common, though, is carrier molecules with numerous positive charges on them, because RNA (and DNA) have lots of negatively-charged phosphate groups, and the two match up to form a stable complex. Results from those experiments have tended to elevate the idea of lipid nanoparticles as a carrier, because they can help out in two ways simultaneously: they protect the mRNA construct itself as it travels through the bloodstream, and they seem to help it cross cell membranes and get from the blood into its destination. That’s not something you can just assume is going to happen on its own.
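
One way formulators put a number on that charge matching is the N/P ratio – ionizable amine nitrogens on the carrier versus phosphates on the RNA backbone, one per nucleotide. Here’s a back-of-the-envelope version, with illustrative numbers rather than anyone’s actual recipe:

```python
# Back-of-the-envelope N/P ratio: ionizable amine nitrogens (N) on the
# carrier lipid vs. phosphate groups (P) on the RNA backbone, one phosphate
# per nucleotide. All numbers below are illustrative, not a real formulation.

def np_ratio(mol_lipid: float, amines_per_lipid: int,
             mol_rna: float, rna_length_nt: int) -> float:
    n = mol_lipid * amines_per_lipid  # moles of ionizable nitrogens
    p = mol_rna * rna_length_nt       # moles of backbone phosphates
    return n / p

# Hypothetical batch: a ~4300-nt mRNA and a lipid with one ionizable amine.
ratio = np_ratio(mol_lipid=6.0e-6, amines_per_lipid=1,
                 mol_rna=2.3e-10, rna_length_nt=4300)
print(f"N/P ratio ~ {ratio:.1f}")  # ~6.1, i.e. an excess of positive charge
```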

That point deserves a quick elaboration, because one thing that you have likely noticed is that there’s been a lot more work during this pandemic on RNA vaccines as opposed to DNA ones, even though DNA has that stability advantage mentioned above. There are several reasons for that, but one big one is that an RNA payload just has to get into the cell to encounter its site of action (the ribosomes, which are all over the place). A DNA therapeutic, though, has to get into the nucleus to do anything, and that’s yet another membrane to cross (and one with its own set of properties and gatekeepers). There’s also the possibility for a DNA species to get mistakenly incorporated into a cell’s own genome, which for a vaccine you don’t want (as opposed to a gene therapy), and using RNA completely takes that off the table, but the “just get into the cytosol” advantage is a real one, too.

So what are these lipid formulations like? They’ve been investigated for many years themselves, because these sorts of carrier properties could of course be useful for a lot of other therapeutic agents besides RNA. Here’s a short article at STAT about them. There are a lot of variations on the lipid idea, but they all tend to involve a sort of spherical bubble of lipid – generally a bilayer, as with our own cell membranes, because lipid molecules just naturally stack up like this, with greasy interior layers and the polar “head” groups facing the solvent on the outside. In this case, the hydrophilic head will tend to incorporate some sort of positively charged group (as mentioned above). The payload rides in the aqueous interior of the particle, safe and secure as it drifts along. The cell membrane is largely made of a phospholipid bilayer, with the outside hydrophilic part being negatively charged, so these positively charged nanoparticles have all the more reason to stick to them.

When that happens, it appears that endocytosis kicks in, the general process of importing larger particles into a cell. There are several varieties of endocytosis, but they tend to end up with the external particle emerging on the other side of the cell membrane wrapped in a new endosomal vesicle of its own (can’t be too careful, from a cellular perspective). A well-chosen lipid nanoparticle formulation can actually help the RNA payload escape such an endosomal compartment and finally make it into the cytosol itself, ready for action.

Now we get into a forest of picky details. There is also no way to be sure from first principles which of the many, many, many possible lipid nanoformulations is going to work out the best for carrying therapeutic mRNAs. Small amounts of various other lipid species present in the bilayer can affect their properties a great deal, so you have a lot of experimentation to do and lessons to learn, and years of work have already been spent on just that sort of thing. For example, one broad lesson has been that nanoparticles formed from lipids that have permanently charged head groups (like quaternary amines) don’t seem to perform as well as ones made from amines that are charged by having ionizable H atoms on them. You don’t want to have to discover all this on your own at the same time you’re working out the details of the RNA construct, so therapeutic development has almost invariably been through partnerships.

The Pfizer/BioNTech vaccine uses lipid nanoparticles developed by the Canadian company Acuitas, who have (under one name or another!) been working in this area for over a decade now, trying out countless variations on various lipid combinations. Back then, it was mostly for siRNA delivery, but the lessons learned from that work have been invaluable for mRNA vaccine delivery. Meanwhile, Moderna has been involved in a vigorous and long-running patent dispute with a smaller company called Arbutus, who have also been investigating lipid nanoparticle formulations and whose technology Moderna once licensed. Arbutus has been claiming that Moderna’s research programs (and indeed their now-launched vaccine) avail themselves of Arbutus’ intellectual property, while Moderna (naturally) disputes this with equal vigor. I Am Not a Patent Attorney, and a damn good thing, too, so I have no useful opinion about who’s in the right. If Arbutus has a case, I would expect them to eventually get a judgement giving them some royalties off the Moderna vaccine, but my only solid prediction is that a number of lawyers will have steady employment thanks to this issue for some time to come.

A closer look at the Pfizer/BioNTech vaccine shows that it has four lipid components, two of which appear to be proprietary to Acuitas. One of these is ALC-0315, an ionizable lipid whose tertiary amines pick up a positive charge under physiological conditions – ionizable rather than permanently charged quaternary groups, for the reasons mentioned above – and the other is ALC-0159, a PEGylated lipid. The other two lipids are 1,2-distearoyl-sn-glycero-3-phosphocholine (DSPC), which is a well-known phosphatidylcholine lipid (as evidenced by the number of references in that link), and cholesterol, which is rather better-known still. These four components are of course present in a specific ratio, which I would rather not try to exfoliate out of the patent filings. But that should give you some idea of what’s in a formulation like this and what the lipids themselves look like. The physical process by which you reliably prepare such nanoparticles is another thing that needs experimentation, of course, but they’re cranking out the vials as we speak.

So that’s a look under the hood, and as promised, there’s a lot in there. It’s all the more remarkable that these therapeutics came together as quickly as they did, but if it had not been for the years of prep work in all of these areas, we would still be waiting!

FDA monitoring impact of SARS-CoV-2 mutations

The US Food and Drug Administration (FDA) has announced that it is monitoring the potential impact of SARS-CoV-2 viral mutations, including the variant from the UK known as B.1.1.7, on authorised SARS-CoV-2 molecular tests. The agency has alerted clinical laboratory staff and healthcare providers that false negative results can occur with any molecular test for the detection of SARS-CoV-2 if a mutation occurs in the part of the virus’s genome assessed by that test.

The FDA says it is taking additional actions to ensure authorised tests remain accurate by working with test developers and conducting ongoing data analysis to evaluate all currently authorised molecular tests. The FDA believes the risk that these mutations will impact overall testing accuracy is low.

According to the FDA, it has been monitoring SARS-CoV-2 viral mutations and the potential impact on testing throughout the pandemic. The presence of SARS-CoV-2 genetic variants in a patient sample can potentially change the performance of a SARS-CoV-2 test. However, tests that rely on the detection of multiple regions of the genome may be less impacted by genetic variation in the SARS-CoV-2 genome than tests that rely on detection of only a single region. 
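
The logic behind that multi-region point is simple probability: assuming (purely for illustration) that each genomic target fails independently with the same probability when a variant arises, the test as a whole only returns a variant-driven false negative if every one of its targets is knocked out.

```python
# Why multi-target molecular tests are more robust to viral mutations:
# with independent per-target failure (an illustrative assumption), the
# whole test misses the virus only if *every* targeted region is hit.

def false_negative_risk(p_target_fails: float, n_targets: int) -> float:
    return p_target_fails ** n_targets

for n in (1, 2, 3):
    print(f"{n} target(s): {false_negative_risk(0.05, n):.4%}")
# 1 target(s): 5.0000%
# 2 target(s): 0.2500%
# 3 target(s): 0.0125%
```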

“The FDA will continue to monitor SARS-CoV-2 genetic viral variants to ensure authorised tests continue to provide accurate results for patients,” said FDA Commissioner Dr Stephen Hahn. “While these efforts continue, we are working with authorised test developers and reviewing incoming data to ensure that healthcare providers and clinical staff can quickly and accurately diagnose patients infected with SARS-CoV-2, including those with emerging genetic variants. At this time, we believe the data suggests that the currently authorised COVID-19 vaccines may still be effective against this strain.” 

Three currently authorised molecular tests – the Mesa Biotech Accula, TaqPath COVID-19 Combo Kit and Linea COVID-19 Assay Kit – may be impacted by genetic variants of SARS-CoV-2, the FDA says, but the impact does not appear to be significant. Importantly, the detection pattern that appears with the TaqPath and Linea diagnostic tests when certain genetic variants are present may help with early identification of new variants in patients. Since the recently identified B.1.1.7 variant has been associated with an increased risk of transmission, early identification of this variant in patients may help reduce further spread of infection.

The FDA has reminded clinical laboratory staff and healthcare providers about the risk of false negative results with all laboratory tests, including molecular tests. Laboratories should expect some false results to occur even when very accurate SARS-CoV-2 tests are used. The announcement also provides important information and recommendations for clinical laboratory staff and healthcare providers who use molecular tests for the detection of SARS-CoV-2.

4 Predictions for Diabetes Management and Technology in 2021

Jordan Messler, MD, SFHM, FACP, Executive Director, Clinical Practice, Glytec

The COVID-19 pandemic continues to put an unprecedented amount of strain on the entire healthcare sector, and the industry has responded by accelerating the ways innovation is developed and adopted. Health leaders have faced a generational challenge and the result has been the rapid deployment of technology to address dire patient needs. Health systems, physicians, and frontline staff pulled together in remarkable fashion to implement these emerging technologies that enhance the ways patients access and receive care. 

The global health crisis also illuminated how much worse outcomes are for patients with underlying conditions and comorbidities, and providers now have a deeper understanding of how these impact overall health. Diabetes and poor glycemic management have emerged as crucial underlying conditions that have negatively affected patient outcomes during the pandemic. While poor glycemic control has long been associated with worse outcomes, higher readmission rates, and increased cost of care, mounting research has shown that uncontrolled glycemia has led to higher mortality rates for hospitalized patients regardless of a pre-existing diabetes diagnosis. It is critically important that we are able to take the lessons learned from this pandemic and apply them in the year ahead.

Here are four key innovations in diabetes care and technology that will emerge as a result of a tumultuous year where the industry faced many challenges but also learned so much more.

1. The Timeline for FDA Clearance of Hospital CGMs Just Got Halved

Before COVID, FDA clearance for the use of Continuous Glucose Monitors (CGMs) in the hospital seemed at least five years away. The pandemic fueled trial adoption of this powerful technology through an emergency allowance that helped limit providers’ exposure to patients and conserve PPE while maintaining proper glucose management. In 2021, I believe we’ll see a wealth of data and retrospective studies from these real-world implementations that will help accelerate the timeline for official FDA clearance of in-hospital CGM use. This reality may now come to fruition in just 2-3 years instead of the 4-6 that many predicted not long ago.

2. Pandemic Sparks Improvement in Real-Time Patient Dashboards 

Electronic Health Records are full of data, but information overload and inefficient usage often lead to underutilized intelligence across departments. The pandemic sparked a change, forcing hospitals to create COVID-19 dashboards that offered real-time views of infected patients, their location, and treatment plans. In 2021, I expect to see hospitals expanding these dashboards to optimize data use and track significant disease states, including heart failure, pneumonia, diabetes, and more. 

3. Blood Glucose Becomes the Next Vital Sign 

Today, body temperature, pulse, respiration rate, and blood pressure are the four main vital signs healthcare providers monitor. As research continues to confirm the impact blood sugar has on patient mortality – especially for those hospitalized with COVID-19 – and as the use of glucose telemetry systems rises, providers will expand their vitals checklist to include glycemic monitoring. Eventually, I believe blood glucose levels will be added as the fifth main vital sign.

4. Advances in Consumer Diabetes Tech Drive Hospitals to Do Better

Diabetes is the only chronic disease where a patient is in charge of their own care, and people living with diabetes assumed even more responsibility in 2020 as the pandemic dissuaded them from routine provider visits. From CGMs and mobile applications to smart insulin pens and the prospect of closed-loop systems, consumer diabetes technology has rapidly outpaced what providers use in the inpatient setting. Today, many health systems still rely on homegrown algorithms, finger pricks, and paper protocols to manage diabetes in the inpatient setting. In 2021, I expect patients will demand better care in the hospital because of the tech they use at home, creating a groundswell of change.

The COVID-19 pandemic has proven just how quickly the healthcare industry can identify and execute innovative methods to improve patient outcomes. Much like the industry as a whole, diabetes care and management has accelerated the pace at which it accepts and utilizes emerging technology. Yet insulin has been in use for nearly a century, and each year there are still 80,000 amputations because of uncontrolled blood glucose, 60,000 people who lose kidney function and need dialysis, and 5,000 people who are blinded because of their diabetes.

Seizing the lessons we’ve learned this year will help to force an important inflection point in glycemic and diabetes care. Key stakeholders will use this window of opportunity to reduce provider burden, decrease patient length of stay, and lower the overall cost of diabetes on the healthcare industry, and ultimately consumers. These four innovations are an ideal place to start. 


About Jordan Messler, MD

Jordan Messler, MD, SFHM, FACP is the Executive Director, Clinical Practice with Glytec. He trained in internal medicine at Emory University in Atlanta and subsequently served as an academic hospitalist at Emory University for several years after residency. He is the former medical director for the Morton Plant Hospitalist group in Clearwater, Florida (serving BayCare Health), where he continues to work as a hospitalist. He is the current physician editor for the Society of Hospital Medicine’s (SHM) blog, The Hospital Leader.


Rapid microbiology testing market set to be worth over $6 billion in 2026

New research has predicted that the global rapid microbiology testing market will grow to be worth $6.48 billion in 2026, up from $3.45 billion in 2018. According to the report, the market will register a compound annual growth rate (CAGR) of 8.2 percent over the forecast period of 2019-2026.
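
Those figures are internally consistent, as a quick compounding check over the eight-year forecast window shows:

```python
# Sanity check of the report's figures: $3.45bn compounded at 8.2% per
# year over the eight-year 2018-2026 window.
start_value, cagr, years = 3.45, 0.082, 8
print(f"${start_value * (1 + cagr) ** years:.2f}bn")  # $6.48bn
```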

The researchers, from Data Bridge Market Research, say that the increase in the global rapid microbiology testing market is partly due to the rising incidence of infectious diseases, as well as the increasing food safety concerns. According to the researchers, rapid microbiology testing is the technology that allows the user to get microbiology test results faster when compared with traditional methods. 

The researchers highlight several key developments in the market, including Abbott’s October 2017 acquisition of Alere, which made it the market leader. They say that this allowed Abbott to gain leadership in the $5.5 billion point-of-care segment, broaden its diagnostics footprint and enhance access to diagnostics channels. Another development listed is AdvanDx’s acquisition of OpGen in July 2014, which included a family of US Food and Drug Administration (FDA)-cleared and CE-marked rapid molecular tests.

“The global rapid microbiology testing market is highly fragmented and the major players have used various strategies such as new product launches, expansions, agreements, joint ventures, partnerships, acquisitions and others to increase their footprints in this market,” write the authors of the report. 

Some of the major market competitors listed in the report include Abbott, Charles River, Danaher, Sartorius Group, Thermo Fisher Scientific and Shimadzu Corporation. The report also includes market shares for the rapid microbiology testing market globally and for Europe, North America, Asia Pacific, South America and the Middle East and Africa.

Collaboration to investigate nasal drug delivery using nanoparticle technology

A new collaboration has been established to enhance the nasal drug delivery of Parkinson’s therapies using nanoparticle technology.

Nanoform Finland and Herantis Pharma announced that they have signed a letter of intent to work together on the latter’s CDNF and xCDNF therapies. 

The planned, non-exclusive collaboration is intended to assess the utility of Nanoform’s platform technology for biologic drugs. The technology, recently launched following the filing of a provisional patent application with the US Patent Office, enables production of biological nanoparticles as small as 50nm.

Subject to finalising definitive agreements, Nanoform will conduct two proof-of-concept studies on the CDNF and xCDNF molecules, leveraging its in-house formulation expertise. The goal of the planned collaboration is to increase the probability of success for enhanced blood-brain-barrier (BBB) penetration via the nasal drug delivery route for CDNF and xCDNF.

Nanoform says that it is committed to supporting Herantis in the development of these programmes and has undertaken to invest, subject to certain customary conditions, €1,600,000 in a planned immediate directed share issue by Herantis.

“We are delighted to support Herantis Pharma in their development programmes in CDNF and latest generation xCDNF molecules. Completing this deal validates the strong market interest in and potential value that Nanoform’s nanoparticle platform technologies can add to pharmaceutical development programmes and to the patient,” said Professor Edward Hæggström, Chief Executive Officer (CEO) of Nanoform.

“We look forward to working together to enhance and enable superior formulations of the pioneering new drugs we have developed. Nanoform’s technologies show much promise for enhanced drug delivery applications in this complex and challenging field. It is our hope that this will open up new possibilities for improving the lives of patients with Parkinson’s and other related diseases. We value the opportunity to enter into collaboration with Nanoform and look forward to what the future brings,” said Dr Craig Cook, CEO of Herantis Pharma.

THCB Gang, Episode 37, Jan 7 – LIVE 1pm PT / 4pm ET

Episode 37 of “The THCB Gang” will be live-streamed at 1pm PT / 4pm ET on Thursday, Jan 7.

Matthew Holt (@boltyboy) will be joined by regulars: data & privacy expert Deven McGraw (@Healthprivacy), patient entrepreneur extraordinaire Robin Farmanfarmaian (@RobinFF3) and consultant/author Rosemarie Day (@Rosemarie_Day1). Balancing them out will be the Y-chromosome owners: futurists Ian Morrison (@seccurve) and Jeff Goldsmith, and THCB regular Kim Bellard (@Kimbbellard).

Other than the mob riot in the Capitol, the Georgia senate race, most of the world on COVID lockdown, the vaccines, the new Administration, and wishing each other a happy new year, there’s little to talk about…

If you’d rather listen, the audio is preserved as a weekly podcast available on our iTunes & Spotify channels.

New technique to grow crystals from nanoscale droplets developed

Researchers have grown small crystals from nanoscale encapsulated droplets. According to the scientists, their innovative method, involving the use of inert oils to control evaporative solvent loss, has the potential to enhance the drug development pipeline.

The study was conducted at Newcastle University and Durham University, both UK, in collaboration with SPT Labtech.

Through the use of this new method, called Encapsulated Nanodroplet Crystallisation (ENaCt), the researchers have shown that hundreds of crystallisation experiments can be set up within a few minutes. Each automated experiment involves a few micrograms of molecular analyte dissolved in a few nanolitres of organic solvent, allowing hundreds of unique experiments to be run with ease. Concentration of these nanodroplets results in the growth of the desired high-quality single crystals, suitable for modern X-ray diffraction analysis.
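
To get a sense of the scales involved: a few micrograms in a few nanolitres is already a highly concentrated solution, which is part of what makes crystal growth from such tiny volumes feasible. A rough calculation, with assumed example numbers rather than figures from the paper:

```python
# Rough scale check for nanodroplet crystallisation: a few micrograms of
# analyte in a few nanolitres of solvent is a very concentrated solution.
# Example numbers are assumed for illustration, not taken from the paper.

mass_ug = 2.0      # micrograms of molecular analyte
volume_nl = 20.0   # nanolitres of organic solvent

mg = mass_ug / 1000.0          # convert to milligrams
ml = volume_nl * 1e-6          # convert to millilitres
print(f"{mg / ml:.0f} mg/mL")  # 100 mg/mL, before any evaporative concentration
```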

Lead researcher Dr Michael Hall, Newcastle University, said: “We have developed a nanoscale crystallisation technique for organic-soluble small molecules, using high-throughput liquid-handling robotics to undertake multiple crystallisation experiments simultaneously with minimal sample requirements and high success rates. This new method has the potential to have far-reaching impact within the molecular sciences and beyond. Fundamental research will benefit from highly detailed characterisation of new molecules, such as natural products or complex synthetic molecules, by X-ray crystallography, whilst the development of new drugs by the pharmaceutical industry will be accelerated through rapid access to characterised crystalline forms of new active pharmaceutical ingredients.”

The researchers say that understanding these new crystalline forms, known as polymorphs, is essential to the successful generation of new pharmaceutical agents and drugs. The ability to investigate these forms quickly and on a vast scale, while minimising the amount of analyte required, could be a key breakthrough enabled by the new ENaCt protocol.

Dr Mike Probert, Newcastle University, commented: “…this new approach to crystallisation has the ability to transform the scientific landscape for the analysis of small molecules, not only in the drug discovery and delivery areas but also in the more general understanding of the crystalline solid state.”

The study was published in Chem.

European Commission approves first treatment for peanut allergy

The European Commission (EC) has approved PALFORZIA® [defatted powder of Arachis hypogaea L., semen (peanuts)] for the treatment of peanut allergy, making it the first treatment for the condition. Produced by Aimmune Therapeutics, a Nestlé Health Science Company, PALFORZIA is indicated in patients aged four to 17 years with a confirmed diagnosis of peanut allergy in conjunction with a peanut-avoidant diet and may be continued in patients 18 years of age and older.

PALFORZIA is a complex biologic drug used with a structured dosing approach that builds on a century of oral immunotherapy (OIT) research. With OIT, the specific allergenic proteins are ingested initially in very small quantities, followed by incrementally increasing amounts, which over time can result in the ability to mitigate allergic reactions to the allergen. PALFORZIA is a pharmaceutical-grade OIT for peanut allergy with a well-defined allergen profile to assure the consistency of every dose, from 0.5mg (equivalent to 1/600th of a peanut) to 300mg.

“Today’s approval is a historic moment for the millions of people living with potentially life-threatening peanut allergy and we are proud to bring PALFORZIA to patients in the EU who, until now, have not had an approved therapeutic option,” said Andrew Oxtoby, President and Chief Executive Officer of Aimmune Therapeutics. “We are grateful for the efforts of the peanut allergy community who contributed to the development programme. Now we turn our efforts toward working with health authorities to ensure access of this first-of-kind treatment for those children with peanut allergy for whom our product is appropriate as we prepare to launch in Germany and the UK in May 2021.” 

The approval was based on a package of data including two pivotal Phase III clinical trials, PALISADE and ARTEMIS, which together tested the treatment in 671 participants with peanut allergy in North America and Europe. In both studies, PALFORZIA treatment resulted in a significant increase in the amount of peanut protein tolerated compared to placebo. Participants underwent an initial dose-escalation period of 20 to 40 weeks, starting at 3mg, until the 300mg dose was reached. Participants then underwent six months (PALISADE) or three months (ARTEMIS) of maintenance immunotherapy with 300mg PALFORZIA or placebo until the end of the study.

“Results from landmark Phase III clinical trials have shown more than half of patients treated with PALFORZIA were able to tolerate the equivalent of seven to eight peanut kernels after up to nine months of treatment. These compelling data highlight its potential to mitigate against severe allergic reactions, including anaphylaxis in the event of unintended exposure to peanut protein,” said Professor George du Toit, study investigator for the PALISADE and ARTEMIS trials. “Today’s announcement is a very important step and means that we are closer than ever before to being able to provide an approved treatment for patients with peanut allergy.”

Microbial surface recovery superior for vinyl than stainless steel

A new study has shown that microbial surface recovery was superior for vinyl when compared with stainless steel, two surfaces common to pharmaceutical facilities.

According to Tim Sandle, the author of the paper, viable environmental monitoring methods in healthcare and pharmaceutical settings remain primarily culture based, with one example being the contact plate. However, although this is a commonly used method, there are some sampling aspects that remain under-researched.

The paper highlights that the factors affecting surface recovery relate to microbial adhesion, the type of surface, the sampling method, and the time and pressure applied. The study investigated whether method improvements – in the form of applicators designed to control time and pressure – can improve the recovery of microorganisms from surfaces, or at least aid the consistency of sampling practice. To examine the effect of time on microbial recovery when a consistent pressure is applied, the organism Staphylococcus aureus was studied on stainless steel and vinyl using an applicator that controls both time and pressure: the pressure was fixed while the sampling time was varied.

They found that surface recovery was superior for vinyl compared with stainless steel. For both surface types, a 20 second sampling time was shown to lead to a better recovery compared to a 10 second sampling time (with a 30 second sampling time not leading to a significant improvement to the microbial surface recovery).

According to the scientists, the data from the study also revealed the typical proportion of microbial numbers that a contact plate can consistently recover from a surface. This proportion was found to be relatively low, at no more than 25 percent.
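
That figure implies a simple rule of thumb when reading contact-plate results: treat the plate count as a floor, and divide by the recovery fraction for a crude point estimate of the true surface bioburden. A minimal sketch, with illustrative numbers:

```python
# If a contact plate recovers at most ~25% of the organisms on a surface,
# the plate count is a lower bound; dividing by the recovery fraction gives
# a crude point estimate of the true bioburden. Illustrative numbers only.

def estimated_surface_count(plate_cfu: int, recovery_fraction: float = 0.25) -> float:
    return plate_cfu / recovery_fraction

print(estimated_surface_count(5))  # 5 CFU on the plate -> ~20 CFU on the surface
```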

“Microbiologists must remain cognizant that the levels of organisms recovered from a cleanroom surface are likely to be an underestimation of the organisms attached to the surface,” write the researchers in their paper. 

The study was published in EJPPS.

Variants and Vaccines

Well, here I am with the first “In the Pipeline” post of 2021, and damn it all, I’m right back to the stuff I was writing about last time. I still expect this year to be the time when we beat back the coronavirus pandemic, and (as a minor side effect for me) to be the year when I can spend more time blogging about things other than viruses and vaccines. But that time is not yet.

No, definitely not. There are a lot of things happening right on top of each other at the moment, and it’s impossible to say yet how they’re going to balance out. On the plus side, we have two vaccines approved in the US, and other countries are starting to use the Oxford/AstraZeneca vaccine or one of the Chinese vaccines. And we have more promising candidates that will be reporting very soon (J&J and Novavax). The minus side is that we’re going to need those very much, because manufacturing and distribution constraints are very real problems. We can argue (a lot) about those, their extents, who’s at fault or not, and all the rest, but I think that we can stipulate with no problem that they are indeed constraints. We have to get a large number of people vaccinated in a short period of time – the largest in the shortest – and as it stands right now neither of those numbers is anywhere near what we need.

I have every expectation that the pace of vaccination will pick up. But the other factor at work is the new coronavirus variant. Since I wrote that post, it’s become even more clear that yes, B.1.1.7 is indeed more infectious. The data from the UK are no longer consistent with its numbers being due to any sort of statistical accident, and it’s now been reported in numerous countries and several US states. At this point, it seems likely that it may follow the same pattern in those areas – and in the US – that it did in the United Kingdom, spreading more rapidly until it becomes the dominant strain in these populations.

That’s not good. Reports so far don’t show B.1.1.7 leading to more severe infections, but spreading the same disease we have now more quickly is still one of the last things we need. The latest data would seem to point to increased viral load in the upper respiratory tract as a big part of the problem – people are presumably shedding more infectious particles more quickly, which would certainly do it. There are many people talking about the cellular entry part of the infection process and whether B.1.1.7 is better at that, but I’m still reading up on the details. That could well be what leads to the increased viral load, but there are other possibilities, too. We’re going to know more about the details, and soon – a huge amount of work is going on in real time – but the increased R for this variant seems hard to refute.
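
To see why even a modest boost in transmissibility matters so much, consider a toy branching-process calculation (illustrative parameters, not fitted estimates): cases multiply by R once per generation interval, so a higher-R variant pulls away from the resident strain remarkably fast.

```python
# Toy branching-process illustration of why a more transmissible variant
# matters: cases multiply by R each generation (~5 days for SARS-CoV-2).
# Parameters are illustrative, not fitted epidemiological estimates.

def cases_after(days: int, r: float, initial: float = 100.0,
                generation_days: float = 5.0) -> float:
    return initial * r ** (days / generation_days)

for r in (1.10, 1.10 * 1.5):  # resident strain vs. ~50% more transmissible
    print(f"R={r:.2f}: {cases_after(60, r):,.0f} cases after 60 days")
# R=1.10: 314 cases after 60 days
# R=1.65: 40,720 cases after 60 days
```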

So it’s the UK that’s in the worst shape with this variant right now, from what we can see, and what are they doing about it? This Helen Branswell piece at Stat will get you up to speed. As many will have heard, there are proposals from the British government to delay the second dose of the existing vaccines in order to get first doses into as many people as possible. It appears that our existing vaccines do indeed protect against B.1.1.7 infection, although more data on that would be welcome, but they sure don’t protect the people that aren’t dosed with them; on that we can all agree.

The delayed-dose idea had been floated before, and I wasn’t exactly an early adopter, but the more contagious version of the virus has made me reconsider. But as I was going on about on Twitter the other day, we have to be clear that this is, in fact, an experiment on the population. It seems likely that delaying these doses will work out OK. But we don’t have much evidence either way. I’m in favor of doing it, but I’m not happy about ending up in that position. I don’t trust immunology to always work the way that I think it should work, but it seems that we have little choice.

And by “we”, I mean all of us. As mentioned, B.1.1.7 is showing up around the world, including areas whose medical capacities are already being strained. The U.S. is very much included – look, for example, at the situation in Southern California. If things go badly, we could be seeing a big wave of this variant across many parts of the country in the next weeks, and it could be spreading much faster than our vaccination program can knock things back down. We have to get ready for that possibility, and there are already proposals here to adopt the delayed-second-dose protocol. Just in the last day or so, in fact, there’s been another proposal to use 50µg doses of the Moderna vaccine instead of the 100µg doses authorized in the EUA. Moncef Slaoui pointed out that the data submitted by Moderna show that the two doses produce similar immune responses in the 18-55 age group.

That’s another one that you can say will probably work, but there are things to worry about, both in the Moderna dosage idea and the general delay-the-second-dose plan. I’ve been watching some very competent people argue these points both ways: here’s Florian Krammer with the possibility that these ideas could end up generating more resistant variants of the coronavirus. The Stat article linked above has similar worries from Paul Bieniasz at Rockefeller and Isabella Eckerle in Geneva, along with other experts who still think it’s the right way to go. But the worries are not just scaremongering from randos online or anonymous bureaucrats who don’t want to fill out more forms; it’s a real possibility, and its chances have to be weighed against the effects of the greater spread of the existing variant with slower vaccination schedules. Both of these could lead to very bad outcomes. Not dosing more people could exacerbate the problem of regions getting overwhelmed with the more contagious variant, with needless deaths due to the loss of hospital capacity. But if we spread out such vaccinations too much and manage to generate another variant that partially or even completely escapes the existing vaccine response, we will be in even worse shape.

I do not know how to make this decision. I really don’t. We have degrees of harm, probabilities of harm, logistics, timing, public health capabilities, politics and more to consider, and not a lot of time in which to consider them. Anyone who uses the phrase “no-brainer” to describe this call should be dropped from your list of people to take advice from. This is the opposite: it’s a decision that all our brainpower may still not be sufficient to make clear. But we’re going to have to make it anyway.

Concerns raised over India’s approval of COVID-19 vaccine with incomplete data

Following India’s approval of the internally developed Covaxin COVID-19 vaccine, experts have raised concerns over whether the jab should have been authorised without sufficient publication of its safety and efficacy data. The vaccine has been given permission to be used “in clinical trial mode… specifically in the context of infection by mutant strains.” 

According to BBC News, Covaxin was granted Restricted Emergency Use (REU) approval in India at the same time as the Oxford-AstraZeneca COVID-19 vaccine, AZD1222. However, the only human data have been published in pre-prints, meaning that no peer-reviewed Phase I or Phase II results have been released. Furthermore, the Phase III trial of the vaccine, produced by Bharat Biotech, is incomplete.

The All India Drug Action Network (AIDAN) has said that it is “shocked” and that there are “intense concerns arising from the absence of the efficacy data”, as well as a lack of transparency that would “raise more questions than answers and likely will not reinforce faith in our scientific decision making bodies.” AIDAN has requested that the Drugs Controller General of India (DCGI), VG Somani, reconsider the recommendation to use Covaxin.

However, Indian Prime Minister Narendra Modi has hailed the vaccine as a “game changer”, and Somani said that Covaxin was “safe and provides a robust immune response.”

Both the Covaxin and Oxford-AstraZeneca vaccines can be stored at normal refrigeration temperatures. 

In an interview with the Associated Press, Adar Poonawalla, whose Serum Institute of India (SII) is manufacturing the Oxford-AstraZeneca vaccine, said this vaccine was given emergency authorisation on the condition that it would not be exported outside India. Poonawalla also said that his company was prohibited from selling the inoculation on the private market.

USV Pvt. Ltd- Walk-Ins for Multiple Positions On 2nd January 2021

Job Description

Walk-Ins for Multiple Positions -Production / QA / QC / Microbiology / Warehouse / Engineering / Technology Transfer @ USV Pvt. Ltd

Walk-In at USV Pvt. Ltd., Daman, on 02 January 2021 from 10:00 AM to 03:00 PM. Venue: Plot no 16, 17, Mahatma Gandhi Udhyog Nagar, Near Dhabel checkpost, Daman.

Note: Candidates should be dressed in formal clothes.

If unable to attend the interview, candidates can share their CV, mentioning the department name and position applied for in the mail subject line. Please forward your CV to [email protected], [email protected], [email protected].

•Technician-Production
Criteria
Qualification: 10th+ ITI
Experience: 1-4 years in Regulatory Company
Age: 21-26 years
With good skills on Compression and Coating machines operations.

•Technician-Engineering
Criteria
Qualification:  ITI/ Diploma
Experience: 3-7 years in Regulatory Company
Age: 20-26 years
Should have good Knowledge of plant maintenance.

•Sr. Officer-Production
Criteria
Qualification:  B.Pharm/ M.Pharm/ B.Sc/ M.Sc
Experience: 4-8 years in Regulatory Company
Age: 22-30 years
Should have good Knowledge of manufacturing process.

•Executive & Sr. Executive (IPQA & Qualification) -Quality Assurance
Criteria
Qualification:  B.Pharm/ M.Pharm/ B.Sc/ M.Sc
Experience: 4-8 years in Regulatory Company
Age: 22-30 years
Should have good Knowledge of IPQA

•Executive & Sr. Officer-Quality Control & Microbiology
Criteria
Qualification:  B.Pharm/ M.Pharm/ B.Sc/ M.Sc
Experience: 4-8 years in Regulatory Company
Age: 22-30 years
Should have Knowledge of analytical analysis.

•Technician/ Sr. Technician-Warehouse
Criteria
Qualification:  B.Pharm/ M.Pharm/ B.Sc/ M.Sc
Experience: 2-4 years in Regulatory Company
Age: 22-30 years
Should have good knowledge of dispensing and warehouse processes.

•Sr. Officer-Technology Transfer
Criteria
Qualification:  B.Pharm/ M.Pharm
Experience: 3-6 years in Regulatory Company
Age: 22-30 years
Should have knowledge of technology transfer.

For any queries, contact 0260-6636232 / 6636229.


Vaccine Roundup, Late December

There’s been a lot of news, so it’s time to survey the vaccine landscape. For this post, I’m only going to cover the big players that are either deep into human trials or have actually been rolling out vaccines to the general population – another post to come will go further down the list. But that still leaves us with plenty to talk about. The situation is. . .well, I’m going with “chaotic”, overused though it is.

I don’t have separate categories for the Pfizer/BioNTech and Moderna vaccines this time, since they’re already under EUA here in the US and people are being vaccinated as we speak. That rollout is worth a longer discussion, but it’s as much politics as it is medicine. Vice President Pence’s stated goal earlier this month of having 20 million people vaccinated by the end of the year is totally out of reach, though, and I believe that he has now altered that to having 20 million doses shipped (and I’m not even sure about that). The CDC says that vaccinated numbers should start rising steeply, and I certainly hope that’s the case.

Oxford/AstraZeneca: As the world knows, this adenovirus vector vaccine has been a messy one. I think that both partners need to take responsibility for some real mistakes in the trial execution and further mistakes in their announcements since the data became available. But I haven’t seen any sign of that (although I would be even happier than usual to be corrected on that point).

Last night, the UK authorities approved this vaccine for distribution there. Of special interest is the intent to give as many people as possible a first shot, without holding back supplies for the second round. I think that this is simultaneously the correct decision for them to make and also very bad news. It appears that the coronavirus variant first reported there is indeed more contagious: Trevor Bedford is convinced, and we have early data that would seem to only make sense if the R for this form is indeed higher. One mechanism for that may be higher viral load developing in patients more quickly, making them presumably more infectious (via shedding more viral particles). That said, it also appears (so far) that the course of disease with this variant is not actually worse than the other strains, but it’s not any better, either. And with higher transmission, that’s bad enough. (Note that the WHO believes that the South Africa variant is spreading quickly as well).
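To put rough numbers on why higher transmissibility is so worrying even without any change in severity, here’s a toy Python sketch. The generation interval and the R values below are my own assumptions for illustration, not estimates fitted to the UK data:

# Toy illustration (assumed numbers, not fitted to UK data): with a
# generation interval of g days, cases grow roughly as R**(t/g), so even
# a modest bump in R compounds quickly.
g = 5.0  # assumed generation interval, in days

for R in (1.1, 1.4):
    growth = R ** (28 / g)  # multiplicative growth over four weeks
    print(f"R = {R}: cases multiply about {growth:.1f}x in four weeks")

With those made-up numbers, four weeks at R = 1.1 gives roughly a 1.7-fold increase in cases, while R = 1.4 gives more than a six-fold increase. That compounding is the whole problem.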

That situation in the UK appears to be one of the biggest factors driving the approval and rollout, and I see their point: this vaccine is indeed better than nothing, one shot for more people is likely to be better than two-shots-for-some, and it looks like they’re going to need all the help they can get. But “better than nothing” is a rough place to be. So what do we know about the efficacy of a single shot of the Oxford/AZ vaccine, and about the effect of waiting for a second one?

All I can say is that attempts to answer those questions land you immediately in a confusing mess. It’s a mess made worse by AstraZeneca, whose CEO has made statements about the vaccine’s efficacy that are not (so far) backed up by actual numbers. If you’d like me to name a major drug company that’s going to come out of this pandemic looking worse, it’s them. Anyway, as you’ll recall, initially there was a hint that a lower first dose followed by a standard second dose might be more protective overall (although I don’t think the evidence for that is very strong at all, considering the statistical spread in the data). But now there’s a report that increased efficacy might be driven by an even longer wait between the two doses. I don’t find that evidence very compelling, either (we’re getting into some pretty small subgroups by this point, and that is always a dangerous area to draw conclusions from). And if you’re going to leave people walking around with a half dose at first, or a full dose but with a longer wait for the second one, it makes the question above even more crucial: how protective is one dose?

We do not know. We don’t know for this vaccine, nor for the Pfizer/BioNTech one, nor for Moderna’s. No studies have been designed to find that out, so all we can do is guess based on what we’ve seen with the interval between doses in the two-dose studies. That’s been encouraging with the two mRNA vaccines, but remember: we don’t know how they hold up over a longer period, because no one was left without a second dose for that long. It’s certainly possible that without the second booster, the protection seen after one shot starts to wane. We do not know. And we know even less about the Oxford/AZ vaccine’s behavior under these conditions. Giving as many people in the UK as possible a single dose of that vaccine with a longer wait until the booster is a gamble, and you wouldn’t want to do it that way if the alternatives weren’t even worse. It’s the right move, unfortunately, and it’s a damned shame it’s come to this.

The US trial of this vaccine was paused for weeks, of course, while adverse events were investigated. It’s basically fully enrolled now, and the data will include many more elderly patients than have been investigated to date. I would assume that our current terrible infection rates will allow this trial to move along rather quickly, but I have no estimate of when we might see it report.

J&J: data on the one-dose clinical trial of this adenovirus vector candidate should be coming very soon indeed. It’s going to be of great interest, given the results from the Oxford/AZ effort, and given the deliberate one-dose protocol. The company has a two-dose trial underway as well, but we won’t be seeing data on that one until later.

CanSino: the company is said to be submitting data on its adenovirus vector (Ad5) vaccine to Mexico shortly, presumably for regulatory approval. Trials have been underway there, as well as in Pakistan, Chile, and other countries. No efficacy or safety data have been reported publicly, however.

Gamaleya Research Institute: this two-adenovirus-vector two-dose vaccine has made some news as well. Earlier this month, a press release from the GRI said that the vaccine was 91% effective, based on a trial with over 17,000 vaccinated patients and over 5600 controls. The release also says that a full paper is in the works, to be published in a leading journal, and I very much look forward to that. It appears that the vaccine is now being shipped to Belarus, Argentina and Hungary, but Reuters reports that the Argentina shipment is for only the first dose, which is the easier of the two different adenovirus vectors to manufacture. Nothing on the other countries as yet, but the Hungarian shipment was quite small (6,000 doses), which tells you that it’s more in press-release territory anyway. It’s unclear what’s going on – Reuters had a source saying that the Argentine shipment was excess production from the manufacture of the first shot, and that they’re still catching up on the second. I have seen no reliable figures on the protection offered by just that first shot – the director of the GRI has said, though, that immunity from the first shot lasts only 3 to 4 months.

Meanwhile, the earlier reported collaboration between GRI and AstraZeneca seems to be real – a clinical trial has been registered. I’m quite curious to see how this is going to go, and whether it will produce results in time to make any sort of impact.

Sinovac: Word has just come in the last couple of days from a trial in Turkey of this inactivated virus vaccine. Turkish officials said that it was 91% effective, but we have no numbers to back that up yet. What we do know is that this was based on a rather small trial (752 people vaccinated, 570 in the control group), so the confidence interval on that number is surely going to be large. Sinovac, for its part, seems to have said nothing yet. I’m glad to see that this vaccine seems to be working, but you would really want to see a lot more data on both efficacy and safety.
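To see why a trial of this size can’t pin the number down, here’s a back-of-the-envelope Python sketch. The case split below is invented purely for illustration (no breakdown has been published), and the interval is a simple normal approximation on the log relative risk:

# Vaccine efficacy (VE = 1 - relative risk) with a Wald-style 95% CI.
# Case counts are hypothetical -- chosen only to reproduce a ~91% estimate.
import math

def efficacy_with_ci(cases_vax, n_vax, cases_ctrl, n_ctrl, z=1.96):
    rr = (cases_vax / n_vax) / (cases_ctrl / n_ctrl)
    se = math.sqrt(1/cases_vax - 1/n_vax + 1/cases_ctrl - 1/n_ctrl)
    lo = 1 - math.exp(math.log(rr) + z * se)  # lower bound of VE
    hi = 1 - math.exp(math.log(rr) - z * se)  # upper bound of VE
    return 1 - rr, lo, hi

ve, lo, hi = efficacy_with_ci(3, 752, 26, 570)
print(f"VE = {ve:.0%}, 95% CI roughly {lo:.0%} to {hi:.0%}")

Even with a split chosen to land right on 91%, the interval runs from about 71% to 97%. That’s the kind of spread you get with so few participants, and it’s why the actual numbers matter.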

SinoPharm/Beijing Institute: this inactivated-virus vaccine candidate has just reported data in The Lancet from its Phase 1/2 trials (safety and immunogenicity). And they have now announced that interim analysis of Phase 3 data show 79% efficacy, but with no actual numbers yet. Note that this is the same one that UAE officials announced an 86% efficacy for, but (as far as I can see) SinoPharm has still made no comment on that. Everyone would very much like to have a more complete look at the data, but there is no word on when that will be forthcoming. We don’t know how many people were in these trials, the inclusion or diagnostic criteria used, nor do we have any safety data at all. So this could be encouraging, but I myself would rather stay home and wait for something with more numbers behind it, rather than take a vaccine on this basis. More on this as more data appear.

Novavax: this should be the next trial we hear about after J&J reports, and a lot of people are waiting to see how this recombinant-protein candidate works out. These will be results from a trial in the UK – a US Phase 3 just launched this week. This one has much less rigorous storage requirements and is generally easier to manufacture, and it could be a big contributor if things work out.

Recipe: Peruvian Roast Chicken

Here’s another one that we make every so often around stately Lowe Manor. When we had it last week, I just had to roast it in the oven – normally I’d put it on the rotisserie on the grill outside, but weather conditions did not allow it. Like many such recipes, this one is all about the marinade. You’ll need oil, lemon or lime juice, soy sauce, garlic, salt, black pepper, paprika, cumin, oregano, and sugar in this case (along with around a four-pound whole chicken), but you’ll find a number of variations around those ingredients and their ratios if you look around at other recipes. But this one comes out pretty much like pollo a la brasa when I have it at a Peruvian restaurant, which is close enough for me. I’m also including a recipe for a “green sauce” like you’ll often see at those places – if you’re making that, you’ll need some mayonnaise, sour cream, oil, fresh cilantro/coriander, garlic, lime juice, salt, pepper, and some source of pepper heat (jalapeños or other hot green chilis, bottled green pepper sauce, etc.)

A small food processor of some sort will come in handy with the marinade, but you don’t have to use one. Either way, combine 3 tablespoons (45 mL) oil (I generally use olive oil for this), 1/4 cup (60 mL) fresh lime or lemon juice, four good-sized cloves of garlic (finely chopped if you’re not using a food processor, otherwise just tossed in with the rest), 1 or 2 teaspoons ground black pepper (about 3g), a tablespoon of kosher salt (13g if it’s Morton’s kosher, or weigh out other varieties of salt accordingly, because they sure do vary in density), one tablespoon (6g) of ground cumin (fresh is best if you can grind some from the seeds), one tablespoon (also about 6g) of paprika (it really does have a taste as well as a color, if you’re wondering – “Pride of Szeged” is a pretty solid supermarket brand of it), one teaspoon (1g) dried oregano, and 2 teaspoons (8g) sugar.

Mix all these up vigorously, by hand or machine, to produce a thick, intensely aromatic red/brown marinade. You can treat the chicken with it in a (nonreactive) bowl or in a big plastic bag. But whatever you use, I recommend getting as much of the marinade under the skin of the whole chicken as you can manage (breast, thigh, wherever you can work it in without tearing the skin itself, making sure to keep the marinade stirred up to get the solid spice residue in there). Put some in the chicken cavity and pour the rest over it, and let it stand, with occasional repositioning, for several hours. Overnight in the refrigerator is not out of line.

Roast the chicken at that point in whatever way you usually do – I’m a 400 degree (F) oven guy myself or (as mentioned) an outdoor rotisserie if available, which is how the Peruvians tend to do it. You can either use a meat thermometer or use the “wiggle the leg” method to check for doneness, but I assume that a whole chicken will take at least an hour at that temperature and probably some more. You might need to tent it with some aluminum foil if it starts getting too brown on the surface – this recipe tends to do that, so don’t be fooled by the color into thinking that the whole thing must be done, because it may not be.

Now for the green sauce. If you’re making that in some quantity, a small food processor or something of the sort will again be useful – it’ll blend everything right up, but if you’re doing it by hand, just finely chop the garlic, cilantro, and green peppers, if you’re using them. You’ll need 1/2 cup mayonnaise (115 grams), 1/4 cup sour cream (60g), two cloves of garlic, the juice of one small-to-medium lime, about a teaspoon of table salt (6g), two tablespoons (30 mL) of olive oil, and about a cup of fresh cilantro. I’m told that the latter would weigh about 16 grams, but I’m sure that’s an approximation – well, actually, the weight (whatever it is!) is exact and the volume measure is the approximation, depending on how you pack it, but I hope that gives some idea. And as for peppers, this is a matter of taste. Two or three jalapeños should do it for this quantity, and you can decide how much of the seeds to include for heat. You can use other green chilis as you have available, but you’ll have to judge the heat on your own – another option is green chili sauce of some sort, of course, and no one will be the wiser if you use something red like Tabasco or Sriracha (the green of the cilantro will conquer all). The authentic ingredient would be yellow Peruvian chili peppers (aji amarillo), which can be pretty lively. But you’ll have to add any of these according to taste. Another ingredient often found in this sauce would be a couple of tablespoons of a grated hard salty cheese like cotija or Parmesan – I didn’t use this myself, but it’s probably closer to the source with it in there. Even closer to the source would be this same sort of recipe made with a Peruvian herb called huacatay instead of cilantro – sometimes you’ll see these served side by side with chicken or other dishes in a restaurant, and there are plenty of other Peruvian sauces where those came from.

The picture below is our kitchen table, though, with a chicken prepared as above, some of the green sauce, some fresh red onion-cilantro-lime juice relish, homemade French fries, and some choclo al comino. That was made by quickly boiling a frozen bag of Peruvian corn (choclo) and serving it with butter, freshly ground cumin, and lime juice. I have two college-aged kids in the house at the moment, so nothing was left of any of this (and there’s more food not in the picture!). It’s like keeping Great Danes, although I don’t know what Great Danes think about Peruvian corn.

3 Common Missteps for Manufacturers to Avoid When Securing Medical Devices

Dan Lyon, Sr. Principal Security Consultant at Synopsys

Over the last several years of my career, I’ve had the opportunity to work with a variety of global medical device manufacturers. Recently, I have also started working with some newer organizations that are not yet global in scale. A theme I have discovered is that these organizations haven’t yet established true ownership of security, even though there is increasing regulatory pressure for building security in, evidenced by updated FDA guidance and recognition of international standards such as IEC 62443, UL 2900, and AAMI TIR 57.

Organizations are certainly aware of that regulatory pressure. What many are not yet aware of, however, is how the software security industry has evolved over the last 20 years and come to realize the importance of building security into the software that powers its offerings and overall business. For any medical device manufacturer, it only makes sense to learn from where the industry has been and use that knowledge to start an initiative addressing security for its medical products and systems.

In my experience, many manufacturers start with only a vague understanding of what security is and how to achieve it, primarily informed by what not to do through sensationalized media headlines.  Independent security researchers and media exposure are a fundamental part of the security industry, yet they often neglect to address organizational support to build security into devices. 

Just like safety and reliability, building security into devices requires a mix of people, processes, and technology applied by an organization to achieve the appropriate security goals of a system.  This requires an organizational structure that can bring about the needed changes in processes and skillsets to create the right technological solutions.  Without the proper organizational structure that owns and drives these changes, security will be a piecemeal effort at best.  Any piecemeal effort is doomed to fail because security is a systems problem.  The organization needs to set itself up to address systems problems through the development organization and processes used to create products.

How can organizations structure themselves to address the security problem proactively? First and foremost, look at any security-mature organization and you will notice that responsibility is clearly established at the leadership level, with security being the sole responsibility of key roles such as the CISO. The CISO tends to have broad responsibility for the entire organization, however, and products are but one of many concerns.

To address this, an emerging trend among medical device manufacturers is the creation of a new role for a Product Security Officer or Product Security Group, whose sole purpose is to help guide the product development processes and tools to adopt secure-by-design principles.

Avoiding common security missteps in securing medical devices

There are three common missteps I often see when organizations set up a new initiative around security.  These are things we often end up discussing with manufacturers to help them drive faster and more effective security programs.

1. A lack of responsibility and accountability within the organization.  

Too often I see organizations that do not have a product security function at all—either no one is thinking about security or security is supposed to be addressed by everyone.  When security is everybody’s responsibility, then no one owns it.  Such an organizational structure leads to basic security needs being left out of the development process.

2. Making security a part-time responsibility

It isn’t a good practice for organizations to assign security responsibility as a part-time job to someone who has other large responsibilities, such as quality or regulatory.  Having a single person responsible for security who is also responsible for additional aspects like project management or product quality is insufficient.  This type of organizational structure leads to security needs taking a backseat to items such as project cost, schedule, or performance.  This lowering of priority leads to increased risks for the organization with respect to regulatory approval or media exposure. In the worst case, it may also lead to increased safety risks.

Both of these approaches suffer from the lack of a clear line of ownership, responsibility, and priority. Product security is broad, complex, and different enough that it needs dedicated resources that focus all their time on security. Much like building a safety program that drives the organization to compliance with appropriate standards such as ISO 14971, medical device manufacturers need to build that same organizational capability for security.

3. Attempting to solve medical device product security through the corporate IT function.  

Organizations will often start out by assigning someone whose background is not product development, but rather information technology. This structure causes a lot of friction between the IT security group and the product development group, because neither understands the other very well. The solutions IT professionals are used to do not always apply well to medical devices. Likewise, the development organization struggles to identify and incorporate the true security needs from the IT security function.

Again, taking safety as an example, one would not assign product safety responsibility to a person or group with no background in building devices for patient care.  Addressing problems requires new and different skill sets.  There are two ways in which organizations grow their capability in this manner.  First is by hiring in resources with a security and product development background.  While sometimes possible, this mix of skills is very rare, and organizations have learned that the next best approach is to take an engineer already familiar with product development and teach them security.  While this approach can take time, it can also be a rewarding career path for the right resource.

Organizations all start their security journey in different places. There are significant challenges with building organizational capability and culture change.  Avoiding these three common missteps will cut years off the timeline it takes to build that new organizational capability. 

We have seen these missteps many times through the Building Security In Maturity Model (BSIMM), a study, now in its 11th iteration, that examines how real-world organizations execute their software security strategies. Any medical device manufacturer interested in security needs to be familiar with BSIMM, in addition to the regulatory environment and the most up-to-date medical device security standards.

Optimised formulation of gentamicin could reduce hearing loss

A novel method of purifying gentamicin, a widely used antibiotic, reduces the risk that it will cause deafness, according to a new study. The new formulation to prevent hearing loss was developed at Stanford Medicine, US.

Gentamicin is used in US hospitals to treat a variety of bacterial infections. It is a popular drug in developing countries because it is highly effective and inexpensive. However, researchers estimate that up to 20 percent of patients who are treated with it experience some degree of irreversible hearing loss. The researchers found a relatively inexpensive way to reformulate the drug, which belongs to a class of antibiotics called aminoglycosides, to be safer. 

“When a drug causes hearing loss, it is devastating and it is especially disturbing when it happens to a young child, as they rely on hearing to acquire speech,” said Professor Alan Cheng, co-senior author of the study. 

The gentamicin used in hospitals today is a mixture of five different subtypes of the antibiotic grown together, and the mixture also includes as much as 10 percent impurities. Using methods such as high-performance liquid chromatography (HPLC) and nuclear magnetic resonance (NMR) spectroscopy, the researchers worked out how to chemically separate each of the subtypes so they could be tested individually.

Once the researchers had established methods of separating the different components of the mixture, they tested the various subtypes of gentamicin individually on inner-ear tissues from animals. They identified the least toxic subtype as C2b and the most toxic as sisomicin; both showed antimicrobial potency comparable to that of the mixture as a whole. The researchers also found that removing impurities from the mixture reduced its toxicity to ear tissue.

“What this study shows is that the formulation that is currently in a hospital bottle of gentamicin is not optimised,” said co-senior author Dr Anthony Ricci. “If we just use the subtype that is less toxic or change the formulation of this bottle, we can make the drug much less ototoxic.” 

The researchers are also working on plans to create a new aminoglycoside that could further reduce the risk of hearing loss, Ricci said. They have discovered that the inner-ear toxicity of the various subtypes highly correlates with the way they bind to the ion channels that open to the inner ear.

“This discovery lays the groundwork for the discovery of safer antibiotic alternatives and future drug development,” he said.

The findings were published in the Proceedings of the National Academy of Sciences.

The post Optimised formulation of gentamicin could reduce hearing loss appeared first on European Pharmaceutical Review.

Health in 2 Point 00 — with no video!

By MATTHEW HOLT (without JESS DAMASSA!)

Due to @jessdamassa being lost in America and my totally crap internet in the Sierras, there is no #HealthIn2Point00 this week.

So I’m going to write out a few things we would have said:

1. @OscarHealth raises another $140m and files to IPO. SPAC or no SPAC, a bunch of these startup health plans are going to try to get out the door while the window is open! 420,000 members ain’t a lot–I mean there are 5-6 Medicaid plans bigger than that in CA alone! I still predict someone big buys them but whether pre- or post crash I don’t know.

2. @LyraHealth is raising another $175m (apparently). That’s the 3rd trip to the well THIS YEAR! Mental health is sexy these days. Just how many online mental health cos can make it? I think Lyra needs to use these $$ for automated self-service tech, cos psychiatrists don’t scale, and they currently sell themselves as having a better network than anyone else.

3. @kyruus buys @HealthSparq (from @Cambia). No $$ announced. Unclear why a company that makes $$ routing patients to doctors within systems (& prevents “leakage”) needs a transparency tool that explains who’s charging what. But maybe an overall pivot to serving health plans?

4. @h1insights raises $58m (total is over $70m). It’s a database of doctors sold to drug companies to help them better target their marketing. Good to know that in the new world of health tech, helping big pharma push pills is a reliable way to make bank.

OK, so that’s what I would normally have covered in 2 mins on #HealthIn2Point00. Yes, it’s much better with @jessdamassa on video and running the show while poking fun at me. Hopefully the internet works next week! #MerryChristmas2020

How did we get the COVID-19 vaccine so fast?

In short, because we had years and years of previous science to build on, particularly work related to another coronavirus, the Middle East Respiratory Syndrome (MERS) virus. As This American Life reports on the speed of vaccine development:

That’s because the spike on this coronavirus that would cause the pandemic, it was very, very similar to the one on the MERS coronavirus, the one they had been studying.
David Kestenbaum: It’s often noted that these vaccines came together really quickly.
But it seems like the reason they came together really quickly is because of all this work that went on for years before.
Jason Mclellan: Yeah, I think that’s right. And yeah, it definitely is. Just–
David Kestenbaum: It’s not like it just happened in 30 days. It happened in 10 years.
Jason Mclellan: Yeah, I saw a nice graphic online showing that if SARS-COV-2 had emerged 10 years ago, we’d be nowhere this far along to having a vaccine.

The report also notes that while some politicians have taken to calling COVID-19 the “kung flu” or the “China virus” in a manner pejorative towards China, some of the lead researchers who developed the vaccine were immigrants from China, including scientists like Nianshuang Wang.

The entire story is interesting and worth reading or listening to.

Holiday Break

I’ll be taking a holiday break, with intermittent blogging. I hope to put in a recipe or two (as often happens around here this time of year), and I’ll certainly pop up if we have some big news. Otherwise, from now until January 2nd I’ll be on my end-of-the-year schedule.

2020 has been. . .well, adjectives fail me. Or rather, too many of them are trying to get through the door at the same time. 2021 is going to have to be more sane, right? Reversion to the mean and so on? Anyway, best wishes to everyone celebrating at this time, and with any luck the next time you hear from me over the next few days it’ll be with cooking suggestions rather than coronavirus information. Deal?

The New Mutations

OK, time to write about the topic that’s been the talk of the coronavirus world the last day or two: the new strain that has been detected in the UK. I’ll go ahead and put the bottom line right here, and then go into the details: I’m not sounding any alarm bells, but this does bear watching. The signal/noise on this story started out rather low (as has been the case with all the breaking news during the pandemic), but it’s improved. Problem is, the most important questions aren’t going to come into any kind of solid focus for another week or two at best.

I can recommend Kai Kupferschmidt’s article here at Science for background. The short form is that earlier this month, UK authorities noticed that there was an upswing in reported SARS-CoV-2 cases in southeast England, and that sequencing was showing that these seemed to be tied to a new variant of the virus itself. (As Kai’s article notes, there’s a feature of the PCR testing that made it easier to pick this out). There have been cases in several other countries as well.

If you’re counting back to the original Wuhan type, this one has piled up 17 mutations, which is an unusually high number, for sure. Let’s take a look at those in the context of the whole virus’ sequence – below is the “Events” view from Nextstrain.org, which shows you how many reports they’ve had of mutations at particular amino acids.

Virus aficionados will know immediately what they’re looking at here, but if that diagram looks odd to you, here’s how to read it. The vertical-black-line part represents the number of reported mutations in that particular three-letter “codon”, which is the unit of DNA/RNA that codes for one amino acid in the resulting protein. That tallest line, for example, represents 70 mutations that have turned up in a particular codon that codes for an amino acid in the viral ORF1a protein. You can see the associated protein by the color of the bar underneath it and its label. That bar is the whole coronavirus genome, laid out in terms of its various proteins. The (in)famous Spike protein is the third biggest, third in line: you go through the ORF1a protein, then ORF1b, and then you have S for Spike (the second green region). And you can see that there are plenty of reported mutations there – more than (say) in the ORF1b protein right before it, which is relatively quiet. So it’s not like we haven’t been seeing Spike mutations; they’re happening all the time.

Now, these lines in the chart are single changes, but as the virus rolls along, it piles mutations on top of mutations. The order in which these occur lets you put together a “tree” diagram of the branching as it took place, and Nextstrain is a great place to see those in all their glory. When you look at the 17-mutation strain that’s of concern in England, several things stand out, as analyzed in this preprint. For one thing, many of these mutations are ones that lead to a different amino acid when that particular codon is read off. The three-letter code has some redundancies in it; some of the changes you can make will lead to the same amino acid in the end, but quite a few of the changes in this latest strain are real change-the-protein ones – there are 14 of those and three that lead to deletions, along with six silent ones that don’t change anything in the end. This strain is in Clade 20B on the Nextstrain charts, and there are charts here that will show you the relationships in detail.
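If the codon talk is new to you, here’s a minimal Python sketch (assuming Biopython is installed) of how the same kind of single-nucleotide change can be silent or protein-altering. The codon pairs at the bottom are illustrative choices of mine:

# Silent vs. protein-changing codon mutations, using Biopython's
# translation of the standard genetic code.
from Bio.Seq import Seq

def classify(ref_codon: str, mut_codon: str) -> str:
    ref_aa = str(Seq(ref_codon).translate())
    mut_aa = str(Seq(mut_codon).translate())
    if ref_aa == mut_aa:
        return f"silent ({ref_aa} either way)"
    return f"protein-changing ({ref_aa} -> {mut_aa})"

print(classify("CTA", "CTG"))  # third-position change, still leucine: silent
print(classify("AAT", "TAT"))  # asparagine -> tyrosine: an N-to-Y change

Much of the redundancy sits in the third position of the codon, which is why so many mutations end up silent.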

There are three of these amino acid changes and one deletion in the ORF1a/ORF1b regions; two deletions and six amino acid changes in the Spike protein; three more mutations in ORF8; and two changes in the nucleocapsid (N) protein. Three of the Spike mutations have already been described as having significance for its function: N501Y is at one of the key receptor-binding-domain amino acids at the very tip of the Spike, the part that contacts the human ACE2 protein. (Here are some new data that should make us a bit happier, though: this mutation does not seem to be associated with loss of neutralization when exposed to human antibodies.) One of the deletions (69-70del) has shown up several times in other SARS-CoV-2 sequences (including a mink-related outbreak in Denmark), and is speculated to be involved in evading the immune response to some degree. And P681H is right next to the “furin cleavage” site, which is known to be key to the process by which the virus enters human cells.
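A quick aside on the shorthand: labels like N501Y pack the reference amino acid, the position in the protein, and the replacement into one string. This small Python helper is hypothetical (my own regex, not from any standard package), just to make the convention explicit:

# Parse N501Y-style substitution labels into (reference amino acid,
# position, replacement amino acid). Hypothetical helper, illustration only.
import re

SUBSTITUTION = re.compile(r"([A-Z])(\d+)([A-Z])")

def parse(label: str):
    match = SUBSTITUTION.fullmatch(label)
    if match is None:
        raise ValueError(f"not a simple substitution: {label!r}")
    ref, pos, new = match.groups()
    return ref, int(pos), new

for label in ("N501Y", "P681H", "E484K"):
    print(label, "->", parse(label))  # e.g. N501Y -> ('N', 501, 'Y')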

According to that paper last linked above, this list suggests some real selection pressure is at work here, not just random mutational drift and wandering. As mentioned, several of these have shown up independently in the past, but this combination of them is new. The authors speculate that these multiple changes might have occurred, at least some of them, in an individual with a long-term infection, where the virus had a chance to deal with the human immune system for an extended period. It’s a plausible idea, because something different had to happen to allow so many mutations to pile up more or less at once, but it remains unproven. It should be noted that not all of the mutations seen in this strain are known to be trouble: there’s a stop codon mutation in ORF8, and something like it was seen earlier in Singapore in a branch of SARS-CoV-2 that later died out thanks to that country’s stringent control measures. It seemed to be associated with milder infection, though its significance when combined with these other mutations is unknown. But as the Kupferschmidt article details, there are still a lot of explanations in play. The increase of this strain in the UK could certainly be down to higher infectiousness, but we’ve thought that about other strains in the past, and it hasn’t panned out. It’s in the “remains to be seen” category, because the vagaries of person-to-person transmission (and the vagaries of human movement!) can produce apparent patterns that aren’t what they appear to be.

I should note here that there’s another strain in South Africa that is bringing on similar concerns. This one has eight mutations in the Spike protein, three of which (K417N, E484K and N501Y) may have some functional role. You’ll note that N501Y is also in the UK strain, but the E484K is one that people are particularly watching, because that’s in a region that a number of antibodies seem to recognize. The South African one also seems to be moving up when you look at population sequencing data, suggesting that it might also be more transmissible.

But remember, antibody response (whether through infection or through getting vaccinated) is polyclonal: you raise a number of different antibodies that bind in different ways. That’s one of the key features of the adaptive immune system: hitting a new antigen from a number of directions at once. This makes it harder to slip through for a new pathogen variant – which is good, but at the same time, it’s not impossible, either. Thus the attention being paid to these two new strains.

That leads us to the other big factor that people are worried about: increased transmissibility is not good, of course, but what happens after you’ve been infected? Do these strains produce the same sort of disease in humans? This is completely unclear at this point – so far, no one is associating either of these variants with a notably worse clinical course of disease, though. Many infectious pathogens, in fact, gradually evolve versus a given animal host to be more infectious and less virulent over time. Remember, it’s not the job of a virus to make people deathly ill: it’s the job of a virus to make more virus. Overall, that is generally better served by strains that are easier to catch and that don’t rip into their hosts too viciously. But this is a big-picture statistical effect, and not necessarily at work in any given strain that emerges. There could be a new strain that is both more infectious and more harmful once caught, and we have to keep our guard up against such a thing. Even a more transmissible virus that produces the same level of illness would be (of course) a very bad thing.

So I think the amount of scientific attention being paid to these new strains is completely appropriate. The popular press might be another story. It’s important to remember that (as mentioned) we haven’t even established for sure that these strains are in fact more infectious, and that we don’t know a thing yet for sure about what effect they have in humans compared to the other variants. So if you see any headlines about Relentless March of the Supervirus, go read something else, because that stuff is (fortunately) way out ahead of the facts on the ground. These are by no means the last variants like these that we’re going to be seeing, and we need to learn how to cover them in a responsible way.

So what’s coming next, and when will we know more? We will have animal-model data coming soon to tell us something about infectiousness, and there are already studies underway using human antibody mixtures (from infected patients and vaccinated ones) to see if these new strains are any less susceptible to our immune response. This will be a matter of weeks; I wouldn’t expect to see any clarity before then. And that’s also at least the time scale we would need to start confirming the clinical effects in the human population – you can be sure that medical centers around the world will be monitoring patients who have been confirmed with these variants to see if there are any differences. We’ll also want to know how these look in different age cohorts, in people with pre-existing conditions, and so on, but all of this will take irreducible amounts of time to get a meaningful picture.

My speculations are worth what you’re paying for them. But I think the odds are reasonable that the UK strain, based on what we’re seeing so far, may well be more infectious than the existing ones. I hope I’m wrong about that, and I want to re-emphasize that I very well could be. At the same murky level of clarity, I’m not seeing anything so far that makes me think that it causes a worse form of the disease, and I very much hope that I’m not wrong about that. As for vaccine effects, my money is on the antibody response from the vaccines still being protective – and that’s going to be some of the first hard data that we get, because those are some of the most straightforward experiments to run. Updates as we get them.

EC orders additional 80 million doses of Moderna’s COVID-19 vaccine

Moderna has announced that the European Commission (EC) has exercised its option to purchase an additional 80 million doses of mRNA-1273, the company’s COVID-19 vaccine candidate, bringing its confirmed order commitment to 160 million doses.

The first deliveries of mRNA-1273 to European countries from Moderna’s dedicated European supply chain are expected to commence early in 2021, following regulatory approval by the European Medicines Agency (EMA).

These deliveries are subject to receipt of a positive opinion from the EMA’s Committee for Medicinal Products for Human Use (CHMP) and the EC’s decision regarding the Conditional Marketing Authorisation (CMA) for the vaccine. The CHMP meeting is planned for 6 January 2021.

mRNA-1273 is an mRNA vaccine against COVID-19 encoding for a prefusion stabilised form of the Spike (S) protein, which was co-developed by Moderna and investigators from the US National Institute of Allergy and Infectious Diseases’ (NIAID) Vaccine Research Center. 

“We appreciate the confidence in Moderna and mRNA-1273, our COVID-19 vaccine candidate, demonstrated by today’s increased supply agreement with the EC,” said Stéphane Bancel, Chief Executive Officer of Moderna. “As we shift our focus now to prepare for the delivery of our vaccine candidate, pending a positive opinion from the EMA and other regulators, we remain committed to working with governments and partners globally to address this pandemic.”

Moderna has confirmed supply agreements with committed orders totalling more than 470 million doses (a quick tally follows the list):

  • United States: 200 million doses with option for an additional 300 million doses
  • EU: 160 million doses
  • Japan: 50 million doses
  • Canada: 40 million doses with option for an additional 16 million doses
  • Switzerland: 7.5 million doses
  • UK: 7 million doses
  • Israel: 6 million doses.
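For the record, those committed orders do add up as claimed; a quick tally in Python (options excluded):

# Committed orders listed above, in millions of doses (options excluded).
orders = {"US": 200, "EU": 160, "Japan": 50, "Canada": 40,
          "Switzerland": 7.5, "UK": 7, "Israel": 6}
print(f"Total: {sum(orders.values())} million doses")  # 470.5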

The post EC orders additional 80 million doses of Moderna’s COVID-19 vaccine appeared first on European Pharmaceutical Review.

Antibody-Dependent Enhancement

I’ve had several questions about antibody-dependent enhancement, which has always been a worry as the coronavirus vaccines have been developed. I figured it might be worth a look at just what we know about it, why one might be worried, and why (on the other hand) one might be hopeful.

The simple definition of ADE is “raising antibodies that don’t protect, but actually make a viral infection even worse”. And obviously, that’s the opposite of what you want. Remember that there are “neutralizing” antibodies as opposed to non-neutralizing ones – a neutralizing antibody, as the name implies, binds to its target in a way that shuts its function down. That’s generally done by blocking the “business end” of a given protein target, smothering the binding surface that it would need to do its usual job. For the coronavirus, a straightforward example of a neutralizing antibody would be one that binds to the tip of the Spike protein, the receptor-binding domain (RBD) that is the part that recognizes and binds to the human ACE2 protein on a cell surface. Block that thoroughly enough, and it would seem that you have blocked the virus’s ability to infect your cells.

There are other ways, as this blog post earlier this year makes clear (or tries to!). You don’t have to just completely cap the end of the Spike protein to shut it down – as it turns out, you can bind an antibody further down the Spike and have it be neutralizing, just so long as it interferes with the structural changes that the Spike protein needs to undergo when it starts binding to a human cell membrane. There’s a whole subunit of the Spike (S2) that is involved in the membrane-fusion step, so throwing a wrench into that will work for you, too. Proteins make all sorts of adjustments as they fit to each other: this part has to shift down and over, that bond has to rotate, and some of those adjustments may be non-negotiable (and thus targets for blocking the whole process).

So there are plenty of ways to get neutralizing antibodies, with various ways of binding to the Spike protein and in binding to other coronavirus proteins as well. But there are also plenty of ways to get non-neutralizing antibodies, ones that stick to some part of the coronavirus particle without really inconveniencing it much. That’s obviously useless, and antibody-dependent enhancement takes things down another notch, from useless to outright harmful. With ADE, the binding of such an antibody actually assists the virus, by (for example) actually making it easier for the virus to get taken up through the outer membranes of human cells. Another possibility is that an antibody that would be neutralizing if present in sufficient amounts can actually enhance infection in lower dilutions, which has been seen with influenza antibodies and other viruses as well. This seems to be through aggregation of viral particles, although other factors might be at work.

You really don’t want ADE through any of these mechanisms – bad things happen. Dengue fever is a classic example, because it infects humans through four distinct serotypes. If you are infected with one of these and raise a successful immune response, you may well be at increased risk of serious infection with one of the other serotypes. The neutralizing antibodies for one of the types are often not neutralizing for the others, but instead allow that cell-antibody-receptor mechanism to kick in (easier infection of human monocytes), known as “extrinsic ADE”. There’s also an “intrinsic ADE” seen with dengue, which leads to greater viral replication inside infected monocyte cells before they burst and release their contents. The mechanisms for that are still being worked out, but seem to involve suppression of cytokine pathways.

ADE has been seen with HIV infection (where it may be mediated by one of the complement pathways, which kicks in after an antibody binds to its target), with Ebola (where a completely different complement-driven mechanism seems to be operating), with coxsackievirus (and other picornaviruses), and in many others. It should be noted that inappropriate complement activation can cause troubles of its own, which can contribute to the severity of ADE-driven disease – this is particularly noticeable in respiratory viruses (influenza and others) and their effects in the lungs.

Evolutionarily, you’d figure that developing such things would be under positive selection pressure: higher organisms are constantly fighting off viral infections by raising antibodies to them, so something that causes this to backfire would probably be an advantage for any virus that hits on such a mechanism. So ADE is not some weirdo exception in viral infections, unfortunately – it’s pretty widespread.

And in the same way that viral infections can involve ADE, so can the antibody responses raised by vaccines. There was an inactivated-virus vaccine tried in the 1960s against RSV (respiratory syncytial virus) that in human trials actually caused infants to come down with worse cases of RSV. This effect has been duplicated with RSV in cell cultures and in primate models, and one hypothesis has been that (as with the extrinsic ADE of dengue) the exposed regions of the antibodies bound to the viral particles bind in turn to receptors on human cell surfaces, and allow them to be taken into the cell more directly. A 1960s inactivated measles vaccine candidate showed similar effects.

Here’s a recent paper taking all this into the context of the current pandemic. And since this post up until now has been rather gloomy, you’ll be glad to hear that the news starts to improve at this point. For one thing, the current coronavirus does not seem to productively infect macrophages, which are by far the main target for that antibody-receptor-uptake ADE mechanism. The related MERS coronavirus was able to do this, but not SARS-CoV-2, fortunately. So the two mechanisms seen in (for example) dengue do not seem to be as much of a worry. The complement-driven stuff is still on the table, though, and indeed matches up well with the “cytokine storm” lung damage seen in severe patients.

But as that new paper says, thus far “No definitive role for ADE in human coronavirus diseases has been established.” That may be a bit surprising, if you’ve been seeing worried stories about antibody-dependent enhancement over the last few months. That doesn’t mean that ADE can’t be operating, of course, just that we don’t have the solid evidence that it is. Another surprise in that line: there’s been a lot of talk about a possible protective effect of prior infection with the other respiratory coronaviruses. Well, there’s a flip side: antibodies raised against those could potentially make SARS-CoV-2 infection worse through ADE, if they’re non-neutralizing.

Now, there have been worries about ADE with coronavirus vaccines as well. This is another case where having all the work done against the 2003 SARS epidemic has paid big dividends this year. Some of the earlier attempts at a SARS vaccine showed ADE effects in mouse models, and further work showed that this seemed to be linked not so much to the antibody response as to the T cell response. Specifically, a “Th2” heavy response (as opposed to more Th1 or a balance between the two), was linked to lung pathology. Those are subdivisions of the CD4+ T cells, based on which cytokines they produce, and these results alerted everyone to keep an eye out for that. Mouse immunogenicity studies with the current vaccine candidates did not show these effects.

In primate models, there were reports on the earlier SARS front like this one: four different peptides as vaccine candidates, three of which seemed to generate protective responses and one of which made things worse. But that also reminded everyone to watch carefully, and it has to be noted that the primate models for the current SARS-CoV-2 vaccines showed no signs of this, either. Not all of the earlier SARS work in primates showed problems, either: these two studies went well, with no ADE signs. But immunology being what it is, one has to watch carefully as you move into humans, and the clinical trials that we have been seeing read out have been alert to these possibilities. So far, so good.

This has been why we’ve seen so many vaccines taking care to put the Spike protein into its “prefusion” conformation. The worry has been that if antibodies are generated to it after it’s had a chance to bind to human cells, that gives you a better chance for nonneutralizing ones (and thus potentially a better chance for ADE). And you’ll have noticed the emphasis on neutralizing antibody titers along the way as well – that would have been there anyway, but a high proportion of outright neutralizing antibodies is also a safeguard against antibody-driven enhancement of disease.

At this point, I would say that the main worry for any ADE effects would be if the coronavirus mutates to the point that the antibodies generated by the current vaccines become non-neutralizing. And honestly, I don’t see that happening (it certainly doesn’t seem to have happened yet). Targeting the Spike protein is another big benefit that we got from the earlier SARS work, which suggested that (for example) targeting the Nucleocapsid (N) protein was riskier. With the Spike, you put the virus in an evolutionary tight spot: evading the antibodies while trying not to lose the ability to bind to the human ACE2 protein. So far, that looks like too narrow a path for the virus to stumble through.

Autoantibody Problems

Here’s a preprint from a large team at Yale with a close look at a less-studied aspect of coronavirus infection. It’s been well established by now that a feature of severe cases is a misfiring immune response (the “cytokine storm”, etc.), and one reason that fatality rates have been going down for hospitalized cases is better management of this problem. But the details are still being worked out – and since we’re talking immunology, there are a lot of details.

And it looks like one of those details, potentially a very important one, is a striking correlation with autoantibodies. Those are antibodies to a person’s own proteins – the sort of friendly fire that you see in autoimmune diseases of all sorts (acute and chronic). This work features a new assay (Rapid Extracellular Antigen Profiling, REAP) that screens serum against a library of 2,770 extracellular and secreted human proteins displayed on yeast cells, providing a high-throughput method to check a patient’s own serum for antibodies to any of them. 194 subjects (Yale patients and healthcare workers) were screened, with a wide range of disease severity, as compared to 30 uninfected controls. The new assay showed good correlation with standard ELISA assays as a reality check.

It appears that the more severe a coronavirus infection a patient has, the better the chances that they show a wide variety of autoantibodies towards their own cell-surface and secreted proteins (see the figures above). I wrote here about a study that showed that patients with antibodies towards some of their own interferons have a harder clinical course of the disease, and this new paper confirms that work and extends it. A set of patients were examined over time, and it appears that at least 50% of these reactivities were observed early enough in the course of the disease that they may well have been pre-existing. Around 10% of them were seen to increase over time, though, suggesting that the coronavirus infection was bringing on such autoimmune problems. Interestingly, about 15% of the antibody titers seemed to decrease over time, and I’m not sure what to make of that.

The paper goes on to make connections between specific autoantibodies and immune function – for example, some of the ones that target specific proteins on the surfaces of immune cells are associated in patients with decreased numbers of those cells. The team also looked for correlations between antibodies to specific targets (or those associated with specific tissues) and clinical outcomes. It’s a complex thing to untangle, though. If you think about some specific circulating cytokine protein, antibodies to it could help to clear it from the bloodstream more quickly, or to bind to it in a way that keeps it from working (either partially or completely, which seems to be the case for the interferon autoantibodies), or at the other end of the scale, to bind to it in a way that doesn’t interfere so much with its function and could even stabilize its levels in the blood.

But overall, there was no well-defined set of “COVID-19” antibodies that showed up in infected patients but not in controls, and no obvious ways to match up antibody profiles to specific outcomes. Some of that difficulty, though, may be due to the wide variety of responses seen. Instead of broadly obvious trends, what shows up are a great number of individual responses that can add up to real outcomes, but which are very hard to untangle. Immunology!

One of the things that needs to be done, then, is more extensive profiling in the population. I would assume that ideally you’d want to get a good-sized sample of healthy people, profile them for autoantibodies, and then watch over time to see what happens. This isn’t just a coronavirus story at that point. Are there people who have greater susceptibility to various diseases, or to worse outcomes, if they have particular autoimmune fingerprints? Or will it still be a big tangled ball of yarn if you try to track these things down? At the least, I would expect that if there is indeed a population who have some sort of partial failure of immune tolerance and thus show existing high levels of auto-antibodies, then they would be at greater risk of severe coronavirus infection. How many such people are there, and how many of them are currently unrecognized?

Beyond that, there’s the possibility that some of the autoimmune effects are being actually brought on by the infection. We already know about some of the larger, more obvious examples of this sort of thing (such as Guillain-Barré and others), but profiling via an assay like REAP could help to shed more light. There are already several mechanisms known for such tolerance failures, but it’s for sure that there’s a lot more to learn, and I would think that a good-sized longitudinal study might have a lot to tell us. (Of course, I’m not the person who has to go out and get funding for it, so that’s easy for me to say!)

Baricitinib Follow-Up: An AI Prediction for Coronavirus Therapy

I wanted to follow up on something that came up much earlier in the coronavirus pandemic. Back in February, a group at BenevolentAI proposed the kinase inhibitor baricitinib as a possible therapeutic for the coronavirus. They identified this through their company’s machine-learning approach to the medical literature and disease mechanisms, and identified the compound due to its proposed effects on endocytosis. The drug is used in arthritis therapy as a Janus kinase inhibitor, but it’s also known to inhibit adapter-associated kinase 1 (AAK1), which is where the endocytosis comes in (and that’s certainly a candidate for being able to affect viral entry).

Targeting this enzyme and related ones in the numb-associated kinase (NAK) family had been suggested several times over the years as a possible antiviral therapy (the earliest paper I know of is from 2007). So it seems that this idea was going to come up one way or another, AI or not. Some of the coverage at the time was a bit breathless, as is often the case with AI stories that hit the popular press. I had some comments on this at the time, mostly to the effect that this was a good example of literature searching and curation of a useful database, and that there’s nothing wrong with that. If software can help us do that, so much the better – the literature is a gigantic shaggy mound, and we need whatever help we can get in extracting actionable things from it.

Since then, baricitinib and other JAK inhibitors have been tried out in the clinic. In August, a paper from Lilly, BenevolentAI, and other collaborators provided more details. Baricitinib did indeed seem to be effective in cellular models, and a case series of patients treated with it showed some promise. The FDA issued an Emergency Use Authorization for the combination of baricitinib and remdesivir, and now we have the data that led to that decision, published in the NEJM.

At the end of this process, what you see is that the patients getting both drugs recovered a median of one day faster than the ones getting remdesivir alone. Differences in mortality between the two groups showed a trend towards improvement in the dual-treatment group. There was evidence that the combination led to less use of oxygen and mechanical ventilation, and all of these differences seemed to be more pronounced in patients who were in more serious condition at the start of the study. The combination actually produced fewer adverse events than remdesivir alone. These numbers sound pretty similar to dexamethasone, but the paper notes that this trial and the RECOVERY one had different designs and can’t be directly compared. You’d have to run a head-to-head between a remdesivir/baricitinib group and a remdesivir/dexamethasone group to sort that one out, and the good news is that that trial is going to happen.

What we don’t know is whether baricitinib works via the mechanism proposed by the original BenevolentAI paper. Remember, it was first identified for the AAK1 activity; this was before we knew so much about damping down cytokine activity as a needed therapy in the later phases of the disease in some patients. JAK inhibitors already do that, which is why they’re used in arthritis treatment, and were independently proposed for coronavirus therapy for that reason. But it’s worth noting that another JAK inhibitor (ruxolitinib) just failed to improve the recovery of coronavirus patients in a separate trial. It does seem that, of the JAK inhibitors (which all should affect cytokine levels), baricitinib has the best additional activity on the endocytosis-related kinase targets, so the original proposal is definitely still alive.

We’ll see from the dexamethasone comparison trial, though, how useful it is under real world patient care conditions (especially when compared to such an inexpensive drug as dexamethasone). That’s something you can’t predict with any AI in the world – not yet, anyway, and it’s going to be a long time before that’s feasible. We’ll take what we can get.

A Wider Variety of Vaccine Platforms Report

Well, it’s definitely been a Vaccine Week around here, but it’s understandable. And we’ll finish off the week with a look at some types that we haven’t seen report yet. The news is. . .mixed.

First off is a preliminary report on the inactivated virus vaccine from Sinopharm – more specifically, their China National Biotec Group division, and even more specifically CNBG’s Beijing Institute of Biological Products. As a side note, if you find the organizational structure of these Chinese efforts somewhat confusing, come sit right over here next to me. A press release from the United Arab Emirates’ Ministry of Health and Prevention says that a trial there showed 86% efficacy and that there were no serious safety concerns. I would bet that this is a higher figure than most people expected from an inactivated-virus candidate – it’s an older technology that doesn’t always come through (but has certainly also generated some very useful vaccines in the past).

Unfortunately, that’s about all the release has to say – there are few further details. The 86% is said to be for preventing coronavirus infection, but we don’t know if this was measured by the number of patients with symptoms or by counting in asymptomatics via PCR testing. The release also says that the vaccine was 100% effective in preventing moderate or severe cases, which is also very good, of course. It’s a two-shot dosing regimen, and the vaccine itself is said to have no special storage requirements. So there’s a lot of promising stuff here, but there’s a lot to wonder about as well. I know that we’ve had a lot of press releases in this vaccine clinical trial business so far, but most of them have been a bit more informative than this. And although some of the Chinese development efforts have released and published good data sets, others have been very much lacking, and this is one of them.

For example, neither Sinopharm nor the Chinese authorities have had any comment on this announcement. And that’s odd. Indeed, the New York Times reports that one of their reporters got through to Sinopharm only to have them hang up the phone when asked for comment. So I’m not just being hard on this one because it’s from a Chinese company – this is weird by any nation’s standards. If I’m going to give Oxford and AstraZeneca grief for the way they rolled out their recent trial announcement – and I do, because they deserve some – then I’m going to give Sinopharm some for this. If they weren’t ready for a press release from the UAE government, they should say so. Hanging up the phone on people is not the way to build confidence in your organization and its skills. At any rate, the efficacy numbers from this single press release are good news. The UAE has now approved it, and shipments of it are already going to Egypt and Indonesia, among other countries. I would just feel better if I could hear the actual developers of the vaccine say something about it, and with some more numbers attached.

We also had news this morning from the GSK/Sanofi effort, but the news is not good. They have been working on a recombinant protein vaccine with the addition of GSK’s adjuvant, but the companies announced that a look at immunogenicity in Phase I volunteers showed an inadequate response in older patients. The 18-49 year old cohort apparently looked good, but antibody levels in the older vaccinated patients fell short of those seen in convalescent plasma. The companies plan to start up again in the clinic in February with an improved antigen formulation, but this will definitely delay their vaccine to (probably) the end of next year, and that’s assuming that the new version works as hoped.

That’s unfortunate. We need as many solid vaccines as we can get, and I don’t think anyone was really expecting something like this from the Sanofi/GSK team (they certainly weren’t). The two companies have a long track record in this area, but immunology is what it is – these results show that we can take nothing for granted. In fact, together with the lower efficacy seen in the Oxford/AZ candidate, I have to say that this is making the mRNA platforms look stronger all the time. More on this in a separate post next week. Anyway, in the same way that the Oxford/AZ data make me very curious about what we’ll see from J&J’s trial, these results make me similarly ready to see what Novavax will report with their own recombinant protein/adjuvant candidate. Novavax, at least, has already shown reasonable immunogenicity numbers. And in that first link in this post, the team at Science says that they believe that another inactivated-virus candidate (from Sinovac) may soon report on its trial in Brazil, so we’re going to have plenty of comparisons to think about.

Finally, word has come that an unusual vaccine candidate has had to stop development. In my earlier huge vaccine roundup posts, I mentioned the University of Queensland and their “molecular clamp” technique. Here’s more on how that is supposed to work. The idea is to use a trimeric protein that’s from HIV, a part of its gp41 glycoprotein, as a molecular platform to display the antigen proteins from whatever other virus you choose. The Queensland candidate (which was being developed with biotech company CSL) displayed the coronavirus spike protein this way, in what was believed to be exclusively its pre-fusion conformation. The idea looked like it would be readily adaptable to many viral proteins, and easier to realize than some of the other scaffolding ideas like this that have also been looked at.

Unfortunately, the use of this protein fragment also caused antibodies to be raised to it as well as to the Spike protein part, and that led to false-positive HIV tests in the trial participants. It’s important to note that these are indeed false positives – the vaccine used only a piece of one HIV protein, and this has nothing to do with an actual HIV infection, of course. But wide use of such a vaccine would basically blow up the ability to do routine HIV blood screening, and that’s not good. Especially when there are other vaccine candidates out there with no such liability. Work on this one has ceased, and this would appear to throw into doubt any further plans to use the gp41 platform for future vaccines.

Update: there’s some more news. AstraZeneca today says they’re looking at combinations of the Oxford/AZ adenovirus vector vaccine with others. That includes the mRNA candidates and (interestingly) the Russian “Sputnik” vaccine from Gamaleya. Details on that latter one have been very much lacking. I’ve written here about the possibility of people getting more than one type of vaccine, so I’ll be glad to see some controlled data on the mix-and-match approach in general, while keeping in mind that (immunology!) every one of these situations could be different, all the way down to what order the vaccines are given in. To me, this also suggests that AstraZeneca is hedging against the possibility that their candidate may simply not be able to compete as a standalone agent in the eventual coronavirus vaccine landscape. . .

The Latest on Coronavirus Mutations

For people looking for an accessible writeup on the coronavirus mutational landscape, I can recommend this Reuters article that came out today. It has a lot of good information in it, and a lot of very well-made graphics to show what’s going on. Past blog posts on this subject are here, here, here, here, and here.

And what’s going on, of course, is that the virus is mutating. It’s what viruses do. They don’t have a lot of overhead for lots of redundant error-checking machinery (although some have more of that than others), and honestly, total fidelity is not really an evolutionary advantage. A little sloppiness in copying the genetic material gives a more diverse population of viruses, with members that are more likely to be able to meet new threats or take advantage of new opportunities for infection. You would expect, over time, the viruses that can do a better job of that to be more represented. And remember, evolutionary time works differently for viruses than it does for us. They turn around a new generation so quickly and in such huge numbers that it’s as if the fast-forward button is mashed down constantly.

So the mutational background is constant, but it’s important not to make the teleological error of picturing this all as being due to calculation, with the virus outwitting its adversaries. It can look like that, for sure, but it’s just millions of random chances spewing out everywhere – some work, some don’t, and what we see is the residue of some stuff that worked. You’ll see from the Reuters article that strains with a Gly in position 614 (the D614G one and its further offshoots) have become much more prominent. And those do seem to have a bit of an advantage in the binding behavior of the Spike protein – but if you look at some other situations around the world, you can see that other factors are at work. Singapore, for example, showed a lot of low-frequency mutations for a long time, apparently because these were showing up in foreign worker dormitory buildings which were then hit with vigorous quarantine measures. South Korea, for its part, had a lot more “V” family strains for a while due to a single superspreader event, which stood out against the country’s generally strong response. So there are a lot of extraneous factors and sheer accidents mixed into the data landscape.

The good news continues to be that none of the mutations studied so far in the general population seem to be able to evade the antibodies raised by the current vaccines. That doesn’t mean that it can’t happen – and as we start putting selection pressure on the virus by vaccinating people we’ll have to keep a close eye out for anything like that developing. But then we have to consider transmission. If an antibody-evading form of the virus also becomes harder to catch, well, it’s going to be less of a worry. But if we were to start doing a better job at not spreading the virus in general, that would be sort of nice, because that would reduce the chance that any nasty mutated forms get any kind of traction in general. If some sort of supervirus mutation occurs in a single patient who doesn’t then get close enough to other people for it to spread, then it’s a tree falling in a forest that doesn’t make much of a sound.

It’s all a race between several different factors. But here in the US we have so many people infected (and so much transmission going on) that frankly we’re making ourselves vulnerable to any more dangerous mutations that might crop up. In fact, if something like that were to emerge, the odds are better that it would do so here, from what I can see. We’re giving the virus every opportunity to reproduce and for the subsequent viral variations to then go out and try their luck infecting lots of other people. Vaccinating enough people quickly enough would interrupt these processes, and so would doing the sorts of public health measures that we’ve all been hearing about for months. But the first is going to depend on vaccine supplies, logistics, and public acceptance, and the second, well, look around you, si monumentum requiris.

It’s good news that no profoundly worse mutations have spread so far, and that might be a sign that they’re not so easy to come by. But then again, this virus has a relatively short acquaintance with human beings, so I think it would be prudent to assume that there are still a lot of things that we don’t know about its interactions with us. Let’s do everything we can to give it as few chances as possible.

The FDA Weighs Its First Coronavirus Vaccine

Pfizer and BioNTech have a date on Thursday in front of an FDA advisory committee to review their vaccine data, and the briefing document is available for all to read (here’s the FDA’s own document as well). It’s very interesting stuff, and far more information than we’ve had so far.

First off, safety. There continue to be no serious concerns that I can see. There were two deaths in the vaccinated group and three in the placebo group (each about 19,000 people). More people withdrew from the study in the placebo group. There were 18 adverse events characterized as “life-threatening” in the vaccine group after one month of follow-up, but there were 19 of those in the placebo group, and so on (for two months of follow-up, those numbers were 10 in the vaccine group and 11 in the placebo). There were four incidents of Bell’s palsy (temporary facial nerve problems) across the total participants, all four of those in the vaccine group. That’s worth keeping an eye on, but it’s about the number of people you’d expect to show it in a population that size (Bell’s palsy is not extremely rare), and thus can’t really be associated with the vaccine with that number of incidents. The events that were clearly associated with the vaccine were reactogenic ones (injection site pain, soreness, stiffness, fever, headaches, etc.). Older patients tended to have fewer of these, having overall less active immune systems. Word has come this morning that two severe allergic reactions have been seen in the UK, both in National Health Service workers with long personal histories of such reactions; in general, people of that sort are at higher risk with any vaccine. I have not gone through every line of the safety material, but so far I do not see anything in there that would be a problem for Emergency Use Authorization or later full approval.

We also have a great deal of fresh data on the immunogenicity of the vaccine, both antibody response and T-cell responses. I’m not going to try to summarize those for now because (1) everyone mostly cares about the efficacy and (2) once we have more data of this kind with other vaccines we can try to make some sort of head-to-head comparison. In the Phase I studies, this vaccine showed robust neutralizing antibody levels and T-cell responses, and those numbers look to have continued in the larger Phase 2/3 study, as expected.

Now to efficacy. As we know, “overwhelming efficacy” was declared at the first (and only!) interim analysis, with 94 total cases split 90/4 between the placebo group and the vaccinated group. The final analysis shows 170 cases, split 162/8 (confirmed coronavirus infection at least 7 days after the second dose). That’s a VE (vaccine efficacy) of 95%, with the 95% confidence interval on that number running from 90.3% to 97.6%. A new and interesting data set covers what happened after the first dose and before the second. There were 50 cases of coronavirus in the one-dose treatment group, versus 275 in the placebo group. But look at how those 50 cases came on:

What you can see is that the great majority of those 50 cases happened in the first ten to fourteen days or so after the first shot. At that point, the vaccinated group and the controls diverge sharply, and that’s because ten to fourteen days is the time it takes to raise an antibody response after a vaccination. You can see it kicking in, right there in the chart. This vaccine, in fact, already meets the FDA’s threshold of 50% efficacy with just one shot (but has much greater efficacy, of course, with the two-shot regimen). It’s also worth noting that 10 cases of severe infection developed after the first dose, with only 1 of those in the vaccinated group.
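
To make the efficacy arithmetic concrete, here’s a minimal back-of-the-envelope sketch (Python; the helper function is my own, not anything from the briefing documents), under the simplifying assumption of equal person-time in the vaccine and placebo arms:

    # Rough vaccine-efficacy arithmetic from the case splits quoted above,
    # assuming equal person-time in the two arms (a simplification).
    def vaccine_efficacy(cases_vaccine, cases_placebo):
        # VE = 1 - (attack rate, vaccinated) / (attack rate, placebo)
        return 1 - cases_vaccine / cases_placebo

    print(f"Interim analysis:  VE = {vaccine_efficacy(4, 90):.1%}")    # ~95.6%
    print(f"Final analysis:    VE = {vaccine_efficacy(8, 162):.1%}")   # ~95.1%
    print(f"After first dose:  VE = {vaccine_efficacy(50, 275):.1%}")  # ~81.8%

The confidence interval quoted above comes from the trial’s own statistical analysis, which this quick version makes no attempt to reproduce.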

Table 8 in the FDA’s document has a good look at the various subgroups in the vaccinated population, and it’s hard to see any real differences in there. Age, gender, body mass index, ethnicity, risk factors – to my eyes, there’s not much to choose from, and that’s a good thing.

So it looks like the vaccine has a very strong protective effect, and that’s the first thing you’d want to know. But there are many other things we don’t know yet. What, for example, is its effect on transmission of the disease? We should be seeing more on that early next year, but based on these figures, it seems very likely that this vaccination would cut down the transmission rate sharply. But that has to be proven; there are plenty of things that have seemed very likely over the years in drug development that haven’t worked out. Even at this point in the pandemic, we don’t have solid numbers about the correlation between just what sort of viral load a person is carrying and their infectiousness to others. Think about how you’d try to design such a study and you can see why the data are lacking! We also don’t know the effectiveness of the vaccine in preventing asymptomatic infections, because this trial was based on symptoms, not on constant PCR testing of its participants, which is the only way you’d accumulate such data. The stronger knockdown of severe cases makes one think that asymptomatic cases might be similarly decreased, but you could also imagine that you would take what would be “normal” symptomatic cases and knock them back to being real-but-asymptomatic ones. A priori, you can’t be sure what the real situation is. That question is tied up with the transmission one, of course. We also don’t know what the duration of protection is, for the very simple reason that time just has to keep on tickin’, tickin’, into the future (S. Miller et al.) for us to have any data on that. There’s no other way; we can’t estimate these things. That we will certainly have data on – eventually.

The data that Pfizer and BioNTech have presented look like far more than would be needed for an Emergency Use Authorization. I expect the FDA to grant that, and very soon. All the concerns about the effect of one or more EUAs that I wrote about here are still real. But some of them are less troublesome now that both this vaccine and Moderna’s have read out with such high efficacies – I was most worried about the first vaccine candidate showing decent-but-not-great effects, and that fortunately has not happened. But questions about (for example) the Pfizer placebo group switching over to vaccine, about the effects of these EUAs on other vaccine trials, and so on are still with us, and are yet to be resolved.

At any rate, any EUA is going to be accompanied by a large monitoring requirement. We’re going to be collecting a lot more data on this vaccine and others as they move into larger populations, data on both safety and efficacy. Vaccine work has its own particular challenges because of its unique situation: dosing huge numbers of people who aren’t sick yet, and targeting the wildly complicated, wildly variable immune system. If there is some nasty side effect that is literally at the one-in-a-million level, which given immunology is absolutely possible, we certainly would not expect to see it in a 38,000-person trial (huge though that is by clinical trial standards). But we certainly would see it after dosing a few hundred million people.
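
To put a number on that, here’s a quick binomial sketch (the one-in-a-million rate is the hypothetical figure from the paragraph above, and independent events are assumed):

    # Probability of seeing at least one case of a one-in-a-million adverse
    # event in a trial-sized group versus a population-scale rollout.
    p = 1e-6  # assumed per-person event rate
    for n in (38_000, 300_000_000):
        prob = 1 - (1 - p) ** n
        print(f"n = {n:>11,}: P(at least one event) = {prob:.1%}")
    # ~3.7% for a 38,000-person trial; a near-certainty across 300M doses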

Remember, though, what an EUA is for. That word “emergency” is there for a reason: this authorization is for something extremely serious for which there is no available alternative. That’s exactly the situation we find ourselves in, on both counts, and I think that the risk/benefit ratio is clearly, overwhelmingly in favor. Let’s do it.

Gene Therapy, Absolutely and For Real

This weekend brought some really significant news in the long-running effort to use gene therapy to treat human disease. As most readers will have heard, Bluebird and a Vertex/CRISPR effort both published papers in the NEJM addressing sickle-cell anemia and beta-thalassemia.

These diseases have long been linked when it comes to gene therapy ideas, because both of them have defects in the hemoglobin protein as their cause. And it’s long been thought that both could be treated by getting adults to re-express the fetal hemoglobin protein – it’s on a different gene entirely, and thus does not have any of the genetic problems that affect the adult hemoglobin gene. The normal course of events is for babies to stop expressing the fetal form and switch over to “regular” hemoglobin, and it’s been worked out that a particular transcription factor called BCL11a is a key player in that transcriptional repression of the fetal hemoglobin gene. That plays right into the usual way that we tend to think about therapeutic possibilities: whether it’s enzymes, receptors, or expression of whole proteins, we have a lot more tools to mess things up and interrupt processes than we have to make them run faster or better. So the possibility of interrupting BCL11a’s function has been a tempting one for many years.

It’s hard to do by traditional means, though. (Full disclosure: I have, at different times in my career, been involved with such efforts, but none have ever come near the clinic.) Transcription factors are notoriously hard to get a handle on with small molecule therapeutics, and many unsuccessful runs have been taken at BCL11a ligands to try to interrupt its functions in one way or another. My general impression is that the protein doesn’t much care about recognizing small-molecule ligands (and it’s far from the only one in that category, for sure). You’d think that if you ran a few hundred thousand (or a few million) various molecules past any given protein, you’d find a few of them that bind to it, but that assumption is too optimistic for most transcription factors. You’re also going to have a hard row to hoe (to use an old Arkansas expression) if you try to break up their interactions with their DNA binding sites: a significant amount of capital has gone down the chute trying to get that to work, with (as far as I can tell) not much to show for it.

There’s another complication: BCL11a has a lot of other functions. Every protein has a lot of other functions, but for transcription factors, the issue can be especially fraught. If you had a small molecule that really did interfere with its activity, what would happen if you just took a stiff dose of it? Probably a number of things, including some interesting (and not necessarily welcome) surprises. There have been a number of ideas about how to get around this problem, but a problem it is.

So it’s on to biological mechanisms. The Bluebird team reports on using RNA interference to do the job – they get cells to express a short hairpin RNA that shuts down production of BCL11a protein, with some microRNA work to target this to the right cell lines. And the Vertex/CRISPR team, naturally, uses CRISPR itself to go in and inactivate the BCL11a gene directly. Both approaches take (and have to take) a similar pathway, which is difficult and expensive, but still the best shot at such therapies that we have. You want the fetal hemoglobin expressed in red blood cells, naturally, and red blood cells come from CD34+ stem cells in the bone marrow. Even if you haven’t thought about this, you might see where it’s going: you take a bone marrow sample, isolate these cells, and then do your genetic manipulation to them ex vivo. Once you’ve got a population of appropriately re-engineered cells ready to go, you go kill off the bone marrow in the patient and put the reworked cells back in, so they’re the only source there for red blood cells at all. A bone marrow transplant, in other words – a pretty grueling process, but definitely not as grueling as having some sort of blood-cell-driven cancer (where the therapy uses compatible donor cells from someone else without such a problem), or as having full-on sickle cell disease or transfusion-dependent thalassemia.

You can also see how this is a perfect setup for gene therapy: there’s a defined population of cells that you need to treat, which are available in a specific tissue via a well-worked-out procedure. The problem you’re trying to correct is extremely well understood – in fact, it was the first disease ever characterized (by Linus Pauling in 1949) as purely due to a genetic defect. And the patient’s own tissue is vulnerable to chemotherapy agents that will wipe out the existing cell population, in another well-worked-out protocol, giving the newly reworked cells an open landscape to expand in. You have the chance for a clean swap on a defined target, which is quite rare. In too many other cases the problem turns out to involve a fuzzy mass of genetic factors and environmental ones, none of which by themselves account for the disease symptoms, or the tissue doesn’t allow you to isolate the defective cells easily or doesn’t allow you to clear them out to make room for any new ones you might generate, and so on.

Both the Vertex/CRISPR and Bluebird techniques seem to work – and in fact, to work very well. There are now people walking around, many months after these treatments, who were severely ill but now appear to be cured. That’s not a word we get to use very often. They are producing enough fetal hemoglobin, more than enough to make their symptoms completely disappear – no attacks, no transfusions, just normal life. And so far there have been no side effects due to the altered stem cells. An earlier Bluebird strategy (involving addition of a gene for a modified adult hemoglobin) also seems to be holding up.

These are revolutionary proofs of concept, but at the same time, they are not going to change the course of these diseases in the world – not right now, anyway. Bone marrow transplantation is of course an expensive, complex process that can only be done in places with advanced medical facilities. But what we’ve established is that anything that can cause fetal hemoglobin to be expressed should indeed cure these diseases – that idea has been de-risked. As has the general idea of doing such genetic alteration in defined adult tissues (either RNA interference or CRISPR). From here, we try to make these things easier, cheaper and more general, to come up with new ways of realizing these same goals now that we know that they do what we hoped that they would. This work is already underway – new ways to target the affected cell populations rather than flat-out chemotherapy assault, new ways to deliver the genetically altered cells (or to produce them “on site” in the patients), ways to make the switchover between the two more gradual, and so on. There are a lot of possible ways, and we now know where we’re going.

Get Ready for False Side Effects

We’re in the beginning of the vaccine endgame now: regulatory approval and actual distribution/rollout into the population. The data for the Pfizer/BioNTech and Moderna vaccines continue to look good (here’s a new report on the longevity of immune response after the Moderna one), with the J&J and Novavax efforts still to report. The AZ/Oxford candidate is more of a puzzle, thanks to some very poor communication about their clinical work (which suffered from some fundamental problems itself).

Now we have to get people to take them. Surveys continue to show a good number of people who are (at the very least) in the “why don’t you take it first” category. I tend to think that as vaccine dosing becomes reality that more people will get in line for a shot, but that remains to be seen. I wanted to highlight something that we’ll all need to keep in mind, though.

Bob Wachter of UCSF had a very good thread on Twitter about vaccine rollouts the other day, and one of the good points he made was this one. We’re talking about treating very, very large populations, which means that you’re going to see the usual run of mortality and morbidity that you see across large samples. Specifically, if you take 10 million people and just wave your hand back and forth over their upper arms, in the next two months you would expect to see about 4,000 heart attacks. About 4,000 strokes. Over 9,000 new diagnoses of cancer. And about 14,000 of that ten million will die, out of usual all-causes mortality. No one would notice. That’s how many people die and get sick anyway.
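
As a rough sketch of the arithmetic behind those figures (the annual rates below are my own illustrative assumptions, picked to land near the numbers in Wachter’s thread; they are not official statistics):

    # Expected background events in a large population over a short window,
    # with no vaccine involved at all.
    population = 10_000_000
    window_years = 2 / 12  # two months

    annual_rates = {  # events per person per year (illustrative assumptions)
        "heart attacks":       0.0024,
        "strokes":             0.0024,
        "cancer diagnoses":    0.0055,
        "deaths (all causes)": 0.0085,
    }

    for event, rate in annual_rates.items():
        expected = population * rate * window_years
        print(f"{event}: ~{expected:,.0f} expected over two months")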

But if you took those ten million people and gave them a new vaccine instead, there’s a real danger that those heart attacks, cancer diagnoses, and deaths will be attributed to the vaccine. I mean, if you reach a large enough population, you are literally going to have cases where someone gets the vaccine and drops dead the next day (just as they would have if they *didn’t* get the vaccine). It could prove difficult to convince that person’s friends and relatives of that lack of connection, though. Post hoc ergo propter hoc is one of the most powerful fallacies of human logic, and we’re not going to get rid of it any time soon. Especially when it comes to vaccines. The best we can do, I think, is to try to get the word out in advance. Let people know that such things are going to happen, because people get sick and die constantly in this world. The key will be whether they are getting sick or dying at a noticeably higher rate once they have been vaccinated.

No such safety signals have appeared for the first vaccines to roll out (Moderna and Pfizer/BioNTech). In fact, we should be seeing the exact opposite effects on mortality and morbidity as more and more people get vaccinated. The excess-death figures so far in the coronavirus pandemic have been appalling (well over 300,000 in the US), and I certainly think mass vaccination is the most powerful method we have to knock that back down to normal.

That’s going to be harder to do, though, if we get screaming headlines about people falling over due to heart attacks after getting their vaccine shots. Be braced.

Insights+ Key Biosimilars Events of November 2020

Biosimilars are developed to be highly similar versions of approved biologics in terms of safety, purity, and potency. Biosimilars are expected to be a cost-effective alternative to high-priced branded biologics, offering significant and much-needed cost savings to both payers and patients. Hence, providers are increasingly likely to adopt biosimilars, which possess therapeutic properties similar to their reference biologics. During the month of November, Formycon and Bioeq reported the first patient dosing in the P-III study of FYB202, while Prestige collaborated with Teva to commercialize Tuznue. Our team at PharmaShots has summarized 21 key events of the biosimilar space from November 2020.

Celltrion Presented Results of CT-P17 (biosimilar, adalimumab) in P-III Study for RA at ACR 2020

Published: Nov 03, 2020

Product: CT-P17 (biosimilar, adalimumab)

  • The P-III study involves assessing CT-P17 (40mg, q2w) vs reference adalimumab for up to 24wks. in 648 patients with active moderate-to-severe RA despite MTX treatment
  • Results demonstrated that CT-P17 has equivalent efficacy to reference adalimumab, with an ACR20 of 82.7% for both. 2EPs included ACR20/50/70 response rates, mean DAS28, CDAI, SDAI & EULAR (CRP) response. Ctrough of adalimumab was higher for CT-P17, and lower in the ADA-positive subgroup than in the ADA-negative subgroup in both treatment groups; the safety profiles were comparable
  • Additionally, comparable PK and safety data were presented for CT-P17 in comparison with EU-approved & US-licensed adalimumab in 312 healthy subjects. Celltrion also presented PK and safety data for two delivery methods for CT-P17, the auto-injector (AI) and pre-filled syringe (PFS)

Formycon Reported BLA Resubmission Strategy for FYB201 (biosimilar, ranibizumab)

Published: Nov 06, 2020

Product: FYB201 (biosimilar, ranibizumab)

  • Formycon & Bioeq reported that the BLA resubmission strategy for FYB201 (biosimilar referencing Lucentis) has been adjusted
  • With the revised submission strategy, the companies expect a simplification of the approval procedure. The modified submission dossier is anticipated to be filed with the US FDA in H1’21
  • The adjustment of the regulatory strategy while optimizing the commercial supply chain is not expected to have any impact on the timing of the anticipated launch of FYB201 in the US & EU

Formycon and Bioeq Reported First Patient Dosing in P-III Study of FYB202 (biosimilar, ustekinumab)

Published: Nov 09, 2020

Product: FYB202 (biosimilar, ustekinumab)

  • The focus of the P-III study is to demonstrate the comparability of FYB202 & the reference product Stelara in terms of efficacy, safety & immunogenicity in patients with moderate to severe psoriasis vulgaris
  • FYB202 is being developed as part of a JV b/w Aristo Pharma & Formycon, along with Bioeq. Bioeq is responsible for the clinical studies, which were developed in close cooperation with the US FDA & the EMA
  • Ustekinumab is a mAb targeting the cytokines IL-12 & IL-23. Stelara is used to treat various severe inflammatory conditions such as mod. to sev. psoriasis, CD & UC

Alvotech and Cipla Collaborated to Ensure Access to Biosimilars in South Africa

Published: Nov 09, 2020

Product: Biosimilar

  • Alvotech and Cipla entered an exclusive partnership to provide patients with better access to high quality and cost-effective biosimilar medicines in South Africa
  • Alvotech will be responsible for the development and supply of the products and Cipla will be responsible for the registration and commercialization
  • The biosimilar portfolio will include five biosimilars – two for oncology and three for treating auto-immune diseases

Genentech Filed Complaint Against Centus Over Proposed Bevacizumab Biosimilar

Published: Nov 13, 2020

Product: Proposed Bevacizumab Biosimilar

  • Genentech filed a complaint in the Eastern District of Texas alleging that the proposed biosimilar to Avastin (bevacizumab) product infringes 10 US patents
  • Genentech alleges that Centus and partners failed to disclose sufficient information about the proposed biosimilar to enable Genentech to do a sufficient analysis of potential patent infringements
  • Centus has a BLA under review with the FDA for the bevacizumab biosimilar candidate FKB238, and the company has filed a notice of intent to commercialize the agent

Prestige Signed an Exclusive Agreement with Teva to Commercialize Tuznue (biosimilar, trastuzumab) in Israel

Published: Nov 11, 2020

Product: Tuznue (biosimilar, trastuzumab)

  • Teva gets an exclusive right to commercialize Tuznue in Israel, leveraging its marketing capabilities and experience in bringing pharmaceutical products to market, and will be responsible for local registration, sales, and marketing in Israel
  • Prestige will assume responsibility for product registration with the EMA and commercial supply of Tuznue from its manufacturing facilities in Osong, Korea
  • Tuznue is a biosimilar referencing Roche’s Herceptin (trastuzumab), used to treat HER2-overexpressing BC & m-gastric adenocarcinoma. Additionally, the EMA has accepted an MAA for Tuznue based on the global clinical trial results

Samsung Bioepis Initiated P-I Study of SB16 Proposed Biosimilar to Prolia (denosumab)

Published: Nov 11, 2020

Product: SB16 proposed biosimilar to Prolia

  • The P-I study assesses the PK/PD, safety, and tolerability of SB16 (denosumab) vs Prolia in 168 healthy male volunteers for osteoporosis. It will be a 3-arm study, dosing participants with SB16 or with either EU- or US-sourced Prolia
  • The proposed biosimilar references Amgen’s Prolia which was approved in 2010 for osteoporosis with a high risk of fracture
  • With the initiation, Samsung Bioepis continues to advance its biosimilar portfolio covering immunology, oncology, ophthalmology, and hematology

Henlius Reported First Patient Dosing in P-I Study of HLX14 (denosumab, biosimilar)

Published: Nov 11, 2020

Product: HLX14 (denosumab, biosimilar)

  • The first patient has been dosed in a P-I study of HLX14, conducted in 2 parts: Part 1 is a pilot study assessing PK/PD, safety, tolerability & immunogenicity of HLX14 vs EU-sourced denosumab (SC) in healthy male volunteers
  • Part 2 is a four-arm study assessing the bioequivalence of HLX14 vs US-, EU-, and CN-sourced denosumab. The study also evaluates PD, safety, tolerability, and immunogenicity between HLX14 and the reference drug
  • Results from the P-I study will provide a reference for the dosing scheme in subsequent clinical studies of HLX14

Xbrane Reported Patient Enrollment Completion in P-III XPLORE Study of Xlucane (biosimilar, ranibizumab)

Published: Nov 11, 2020

Product: Xlucane (biosimilar, ranibizumab)

  • Xbrane reported that the last patient has been enrolled into the P-III XPLORE study assessing Xlucane vs Lucentis in 580 patients with wet AMD
  • The company will conduct an interim read-out from the XPLORE study when the last patient has reached 6mos. of their treatment schedule. Top-line data are expected to be communicated mid-2021
  • Filing of the MAA/BLA with the EMA and the US FDA is expected to take place mid-2021. With an expected 12mos. regulatory process upon filing, approvals in the EU and the US are anticipated mid-2022, allowing for the launch of Xlucane

Henlius Reported the NMPA’s Acceptance of HLX15 (biosimilar, daratumumab) to Treat Multiple Myeloma

Published: Nov 16, 2020

Product: HLX15 (biosimilar, daratumumab)

  • The NMPA has accepted the IND for HLX15 for use in the treatment of multiple myeloma. HLX15 is Henlius’ second self-developed product targeting blood cancers
  • The company evaluated the biosimilar in head-to-head clinical studies demonstrating that HLX15 is highly similar to its reference daratumumab, with similar safety profiles
  • The company developed HLX15 in accordance with the technical guidelines for the development and evaluation of biosimilar drugs and the EMA guideline on similar biological medicinal products

Samsung Bioepis Presented Results of SB11 Proposed Biosimilar to Lucentis in P-III Study at the AAO 2020 Virtual

Published: Nov 16, 2020

Product: SB11 proposed biosimilar to Lucentis

  • The P-III study assessed SB11 vs reference ranibizumab in monthly injections (0.5 mg) in 705 patients with nAMD randomized 1:1; 634 patients continued to receive treatment up to 48wks.
  • One-year results from the P-III study demonstrated equivalence between SB11 and reference ranibizumab in patients with nAMD
  • The study met its 1EPs, i.e. change from baseline in BCVA @8wks. and CST @4wks. The EMA accepted the MAA of SB11 for review in Oct’2020

Samsung Bioepis and Biogen Reported the FDA’s Acceptance of BLA for SB11 Proposed Biosimilar to Lucentis

Published: Nov 18, 2020

Product: SB11 proposed biosimilar to Lucentis

  • The US FDA has accepted for review the BLA of SB11, a proposed biosimilar referencing Lucentis (ranibizumab)
  • The EMA has accepted for review the MAA of SB11 in Oct’2020. If approved, SB11 will add to the biosimilars portfolio developed under the collaboration of Samsung Bioepis and Biogen including Benepali, Imraldi & Flixabi
  • In Nov’2019, Samsung Bioepis entered into a commercialization agreement with Biogen for 2 ophthalmology biosimilar candidates, SB11 (ranibizumab) & SB15 (aflibercept) in the US, Canada, Europe, Japan & Australia. Ranibizumab is an anti-VEGF therapy for retinal vascular disorders

The US FDA Drafted New Guidance for Biosimilarity and Interchangeability

Published: Nov 19, 2020

Product: Biosimilar

  • The FDA has released a draft guidance for industry entitled “Biosimilarity and Interchangeability: Additional Draft Q&As on Biosimilar Development and the BPCI Act”
  • The draft guidance is intended to inform prospective applicants and facilitate the development of proposed biosimilars and proposed interchangeable products, as well as describe FDA’s interpretation of statutory requirements added by the BPCI Act
  • The draft guidance is to be published in the Federal Register on Nov 20, 2020

Alvotech Reported the US FDA and EMA’s Acceptance of AVT02 Proposed Biosimilar to Humira (adalimumab)

Published: Nov 20, 2020

Product: AVT02, a proposed biosimilar to Humira

  • The US FDA has accepted the BLA of AVT02 for review and is expected to decide on the filing in Sept’2021, while the EMA has accepted for review an MAA for AVT02, with an EMA decision anticipated in Q4’21
  • The filings were based on the AVT02-GL-101 & AVT02-GL-301 studies demonstrating a high degree of similarity b/w AVT02 and the reference products. The AVT02-GL-101 study met its 1EPs of PK similarity, while the latter study confirmed the efficacy and safety of AVT02 in patients with mod. to sev. chronic psoriasis
  • AVT02 is a proposed biosimilar to the reference product Humira (adalimumab) with high concentration (100mg/mL) dosage forms

Henlius Presented Results of HLX04 (biosimilar, bevacizumab) in P-III Study at ESMO Asia 2020

Published: Nov 20, 2020

Product: HLX04 (biosimilar, bevacizumab)

  • The P-III HLX04-mCRC03 study involves assessing the efficacy, safety and immunogenicity of HLX04 vs reference bevacizumab (7.5 mg/kg, q3w or 5 mg/kg, q2w) + CT (Xelox or mFOLFOX6) as a 1L treatment in patients with mCRC randomized 1:1
  • Result: PFSR36wk (46.4% vs 50.7%); no significant difference b/w the treatment groups in 2EPs including OS, PFS, ORR, TTR and DoR; safety and immunogenicity profiles were similar b/w HLX04 and the reference
  • The NMPA has accepted the NDA for HLX04. Additionally, Henlius has submitted a patent for a new formulation of HLX04 with potentially better safety and stability, designed for ophthalmic use

Samsung Biologics and AstraZeneca Dissolved Rituximab Alliance

Published: Nov 20, 2020

Product: SAIT101 (biosimilar, rituximab)

  • Samsung Biologics and AstraZeneca had decided to suspend long-running research and development activities by a jointly owned subsidiary, Archigen Biotech, which was solely engaged in development of SAIT101 (biosimilar, rituximab)
  • Samsung halted the P-III study of SAIT101 in Oct’2012 and resumed it in 2014 via Archigen. The P-III study showed a similar therapeutic effect to Rituxan in 315 FL patients, with ORR (66.3% vs 70.6%)
  • The companies decided to stop commercializing SAIT101 and to take steps to liquidate Archigen, as the product lacks commercial viability

The EC Approved Pfizer’s Oncology Supportive Care Biosimilar Nyvepria (biosimilar, pegfilgrastim)

Published: Nov 20, 2020

Product: Nyvepria (biosimilar, pegfilgrastim)

  • The EC has approved Nyvepria, a biosimilar referencing Neulasta, to reduce the duration of neutropenia and the incidence of febrile neutropenia in adult patients treated with cytotoxic CT for malignancy
  • The EC approval is based on data demonstrating a high degree of similarity of Nyvepria to its reference product
  • Pfizer plans to make Nyvepria available to patients in multiple EU countries starting in Q1’21. The EC’s approval follows the US FDA’s approval granted in Jun’2020

Innovent Reported Results of Tyvyt + Byvasda (biosimilar, bevacizumab) in P-III ORIENT-32 Study as 1L Treatment for HCC

Published: Nov 23, 2020

Product: Byvasda (biosimilar, bevacizumab)

  • The P-III ORIENT-32 study involves assessing Tyvyt (sintilimab) + Byvasda vs sorafenib as a 1L treatment in 571 patients with advanced HCC randomized 2:1; the results were released in an oral presentation at the ESMO Asia Virtual Congress 2020
  • Result: reduction in risk of all-cause mortality (43.1%); median OS (not reached vs 10.4 mos.); reduction in risk of progression (43.5%); m-PFS (4.6 vs 2.8 mos.)
  • The improved OS and PFS benefits of the dual regimen were generally consistent across all subgroups and showed an acceptable safety profile with no new safety signals

Innovent’s Sulinno (biosimilar, adalimumab) Received NMPA’s Approval for Polyarticular Juvenile Idiopathic Arthritis

Published: Nov 23, 2020

Product: Sulinno (biosimilar, adalimumab)

  • The NMPA has approved Sulinno for the treatment of pJIA, which is the fourth approved indication of the therapy in China. Earlier, Sulinno was approved for RA, AS, and psoriasis
  • The launch of Sulinno has provided more Chinese patients with high-quality and relatively affordable adalimumab injection, bringing hope and opportunities to more patients
  • Sulinno is a human anti-TNF-α mAb referencing Humira. The clinical results were published in the inaugural issue of The Lancet Rheumatology in 2019

Alvotech and Alvotech & CCHT Signed an Exclusive Commercialization Agreement with Yangtze River for Eight Biosimilars in China

Published: Nov 25, 2020

Product: Biosimilar

  • The companies will collaborate with Yangtze River to commercialize eight biosimilars in China. The initial pipeline contains biosimilar candidates for the treatment of autoimmunity, ophthalmology, and oncology
  • Alvotech and Alvotech & CCHT will be jointly responsible for the development, registration, and supply of biosimilars in China while Yangtze River Pharmaceutical will exclusively commercialize the biosimilars
  • The biosimilars will be manufactured in a new state-of-the-art biopharmaceutical facility, currently being built in Changchun, China, through Alvotech & CCHT. The first phase of the facility is expected to be completed in 2021

Bio-Thera Reported MAA Submission to EMA for BAT1706, a Proposed Biosimilar to Avastin

Published: Nov 26, 2020

Product: BAT1706 (a proposed biosimilar to Avastin)

  • The company has submitted an MAA for BAT1706 to the EMA. Bio-Thera seeks a commercial license for all approved indications of bevacizumab in the EU Member States, Iceland, Norway, and Liechtenstein
  • The submission of the MAA for BAT1706 marks the company’s first ex-China MAA/BLA submission. The BLA of the biosimilar for metastatic carcinoma of the colon or rectum and for NSCLC is under the NMPA’s review
  • The company plans to submit a BLA for BAT1706 to the US FDA by the end of 2020. Bevacizumab is a mAb that targets VEGF, reducing neovascularization and thereby inhibiting tumor growth

Related Post: Insights+ Key Biosimilars Events of October 2020

The post Insights+ Key Biosimilars Events of November 2020 first appeared on PharmaShots.

What is the impact of FDA review time on pharmaceutical R&D investments?

This is the question that a recent paper by Chorniy et al. (2020) attempts to answer. The issue is clearly very relevant, as the UK recently approved the Pfizer/BioNTech vaccine for COVID-19 before the US did. Unlike most studies that attempt to examine the relationship between FDA review time and R&D investment dollars, the authors aim to measure the relationship between FDA review time and the number of drugs in the pipeline. While the former is probably a better measure of R&D investment effort, the latter is what society cares more about.

The dependent variable is the number of drugs in the pipeline for each indication category, and the key independent variable is the natural log of the FDA review time for drug category C. The drug pipeline data come from AdisInsight, and the review times come from the Drugs@FDA database. The regression also controls for whether the drug received priority or orphan status (also from Drugs@FDA), the development cost, and the market size. The development cost is endogenous, so the authors proxy for it using the number of pages in an NDA submission, the number of Phase III clinical trials, and the Phase III trial sample size. The vector of market characteristics includes disease mortality and morbidity (from World Health Organization data by disease), all‐payer drug expenditures (from the Medical Expenditure Panel Survey, MEPS); number of drugs on the market (also from MEPS); and drug prices (from Express Scripts/Medco and Redbook).
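
For illustration, here’s a minimal sketch of the kind of count-data specification the paper describes, run on synthetic data. Everything in it is an assumption of mine – the column names, the Poisson link, and the omission of the instrumental-variables treatment the authors apply to the endogenous development cost:

    # Sketch of a pipeline-count regression on log review time (synthetic data).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 200  # hypothetical disease-category observations
    df = pd.DataFrame({
        "pipeline_count": rng.poisson(8, n),         # drugs in development
        "review_days":    rng.uniform(46, 1827, n),  # FDA review time
        "priority_share": rng.uniform(0, 1, n),      # share with priority review
        "orphan_share":   rng.uniform(0, 0.4, n),    # share with orphan status
        "market_size":    rng.lognormal(3, 1, n),    # e.g. all-payer drug spend
    })

    # Key regressor: the natural log of review time, as in the paper.
    model = smf.poisson(
        "pipeline_count ~ np.log(review_days) + priority_share"
        " + orphan_share + np.log(market_size)",
        data=df,
    ).fit()
    print(model.params)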

The authors find that:

The average FDA review time for drugs approved after 1999 is 466 days, or about 1.3 years, but …it takes anywhere from 46 days (Eloxatin) to 1827 days (Prialt) for a drug to complete the review process that gives a drug a green light to be marketed. Post‐PDUFA, many NDAs were eligible for a special review status. About a half of the drugs in our sample received a priority review status, and about 20% were classified as orphan, on average by disease category.

Using the regression specification described above, they also find that longer review time decreases the number of drugs in the pipeline.

A doubling of the review length is associated with approximately six fewer drugs in the development pipeline in that disease category. This implies that a one‐sixth increase in review length is associated with approximately one fewer drug in development; with a mean review length of 466 days, this implies that each 78 extra days of review are associated with one fewer drug in development.
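
The back-of-the-envelope version of that quoted result, for those who like to check the arithmetic (illustrative only):

    # If doubling the review length (a 100% increase) costs ~6 pipeline drugs,
    # a one-sixth increase costs about one drug -- roughly 466/6 ~ 78 days.
    mean_review_days = 466
    drugs_lost_per_doubling = 6
    days_per_drug_lost = mean_review_days / drugs_lost_per_doubling
    print(f"~{days_per_drug_lost:.0f} extra review days per drug lost")  # ~78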

One challenge of this study is that pipeline decisions are made years in advance. Thus, longer review times may also affect decisions about early-phase drug development, but the data the authors use come from a fairly limited time period, 1999 to 2005. Given that the drug development timeline is typically more than 10 years, this study estimates the impact over a relatively short window. The study also ignores the regulatory process in other countries and its impact on drug development. While the US is the largest pharmaceutical market, the regulatory environment in other countries – particularly in Europe – may affect investment decisions.

Nevertheless, it is clearly logical that additional regulatory burden and delays in time to market affect pharmaceutical firms’ investment decisions, and this study does contribute to that evidence. Budish et al. (2015) find that firms often invest in oncology indications for late-stage disease because the time for trials to read out is much shorter. To expedite the FDA review process, in 1992 Congress passed the Prescription Drug User Fee Act (PDUFA), which allowed the FDA to charge fees to pharmaceutical firms to expedite the review process. These payments fund just under half of all drug reviews.

This study does add to the literature on how pharmaceutical firm R&D responds to incentives. For instance, Acemoglu and Linn (2004) found that potential market size affects the number of new drugs that get to market. Other studies have found that higher profits boost pharmaceutical firm R&D investments, for instance the advent of Medicare Part D (Blume-Kohout and Sood 2013) and changes in patent law (Williams 2017). Perhaps the best-known paper – Dubois et al. 2015 – found an elasticity of innovation with respect to market size of 0.23, suggesting that $2.5 billion of revenue is required to bring a new drug to market based on drug development costs. The paper by Chorniy et al. (2020), despite some limitations, helps add to this literature.

Taking Two Different Vaccines?

We seem to be heading for a world with multiple coronavirus vaccines in it, and right off, I have to say that this is a very good situation. But it has its complications, and one that I know many people have been wondering about is, what if you get two different ones? That could happen in several ways, of course, with the different vaccines themselves, the order in which a person is exposed to them, the total number of vaccinations involved, etc. And honestly, it’s not possible to be completely sure about the answer until this is actually tried (immunology!) But we can look back over previous vaccines and make some educated guesses.

The best outcome is that you get even stronger immunity. That seems to be what happens when people who received the oral (Sabin) polio vaccine were then given the injectable (Salk) form. The first is an attenuated live virus, and the second is a completely inactivated one. The Salk vaccine is better at producing humoral immunity (circulating antibodies), and the Sabin vaccine needs multiple doses to be effective. But the latter is better at producing mucosal immunity in the gut, which has a better chance of interrupting the spread of the disease in children. The choice about which one to use has always been a matter of argument. But the study linked above showed that in children who had already had the oral Sabin vaccine, an injection of the Salk vaccine boosted their intestinal immunity better than another round of the oral vaccine. Again, you wouldn’t necessarily have predicted that – if it had come out that the injected dose didn’t seem to do much for mucosal immunity, it would have been easy to rationalize that as well.

There are other cases where multiple vaccines are available for the same pathogen, and where a mix-and-match approach doesn’t seem to make a difference either way. An example is hepatitis A, where there are several inactivated-virus options. In that case, it appears that the vaccines are basically interchangeable: the booster-shot schedule can be completed any way you like. The same goes for the two monovalent vaccines for hepatitis B, and for the three vaccines that target meningococcus group A, C, W, and Y. (Here’s an overview of vaccine interchangeability).

That said, all of those vaccines in each of those cases are rather similar to each other, and we now have the unusual – very, very unusual – situation of several different vaccine platforms coming into potential use against the same virus at almost the same time. By the spring we may well have two mRNA vaccines (Pfizer/BioNTech and Moderna), two different adenovirus vaccines (Oxford/AZ and J&J), and a recombinant protein vaccine (Novavax). We don’t have efficacy data on the J&J and Novavax candidates yet (numbers are on the way), and we can argue about the data for Oxford/AZ, but it’s certainly possible that all of them will be out there simultaneously. Putting one of these on top of the other is a step into the unknown.

And there are examples of vaccines for the same pathogen having some interference. Several vaccines for bacterial diseases are in the "conjugate vaccine" category: they have a bacterial polysaccharide fused to a carrier protein, which can give a more useful immune response than just dosing the polysaccharide by itself. For pneumococcal vaccines, both types – plain polysaccharide and conjugate – are given (with a different range of immune response to cover a variety of bacterial serotypes). It's been found that if you give a pneumococcal polysaccharide vaccine (PPSV) followed by a pneumococcal conjugate vaccine (PCV), there are lower antibody responses for some serotypes targeted by the conjugate vaccine than there are if you give them in the opposite order. So the rule in this area is to give both for maximum protection, but to always give the conjugate vaccine first. Another tricky part is the use of the same sorts of carrier proteins in different vaccines – you could imagine a situation where an immune response against the carrier protein causes a later vaccine to be less effective.

That last problem is similar to what we're talking about with the immune response to adenovirus vectors and booster-shot dosing regimens with the same vaccine. But the Oxford/AZ vaccine is a chimpanzee adenovirus and the J&J one is Ad26, so that's a different situation, and I have no idea what would happen if you mixed those two. (The Russian vaccine is, in fact, a mixture of two different adenovirus vectors, one in the original shot and one in the booster.) I also don't know what happens if you take both an mRNA vaccine and one of the other types.

Overall, though, I would tend to think that it would work out. All of the coronavirus vaccines we’re talking about target the Spike protein, after all, and they are, by different means, presumably raising a pretty similar suite of antibodies (with perhaps more differences in T-cell response, which remains to be seen in detail). So the chances are that the immune response will be similar (as with the hepatitis vaccines) or perhaps even a bit better (as with mixing the polio vaccines), rather than worse. But we haven’t proven anything like that in the clinic yet, and educated guesses will only take you so far. I would assume that there will be people who end up taking both types, for all sorts of reasons, and I hope that we collect as much data from those cases as we can.

Protein Folding, 2020

Every two years there’s a big challenge competition in predicting protein folding. That is. . .well, a hard problem. Protein chains have (in theory) an incomprehensibly large number of possible folded states, but many actual proteins just manage to arrange themselves properly either alone or with a few judicious bumps from chaperones. It’s been clear for many decades that there are many energetic factors in play that allow them to accomplish these magic tricks, which are a bit like watching piles of hinged lumber spontaneously restack themselves into functional boats, wagons, and treehouses. But knowing that amide bond angles, pi-stacking interactions, hydrogen bonding, hydrophobic surfaces, steric clashes, and all the rest are all important, while a good start, is a long way from being able to calculate them and assess their relative importance for any given case.

The CASP (Critical Assessment of protein Structure Prediction) contests have been run since 1994. I wrote about the 2018 one here, with particular attention to the Google-backed AlphaFold effort. Now the 2020 CASP results are in, and AlphaFold seems to have improved its standing even more. There are several divisions to the competition: “regular targets”, where the teams are given the plain amino acid sequence of proteins whose structures have been determined (but not publicly released), multimeric targets (for protein complexes), refinement targets (where teams try to refine an existing structural model to make it fit the experimental data better) and contact predictions. AlphaFold made their push this year in what is always the largest and most contested of these, the regular targets group.

This year's press release is rather different from the others. It announces, basically, that an AI-based solution to the problem has been found, in the form of the latest AlphaFold version. Out of a list of 100 or so proteins in the free-modeling challenge, it predicted the structures of two-thirds of them to a level of accuracy that would be within the range of experimental error. Again, these are single proteins (not the multimeric complexes or the other categories, where AlphaFold did not participate), but that is really a substantial achievement. Their 2018 results were good (and better than anyone had achieved in previous CASP rounds), but these are much better still. Here are the results in that regular targets category, and you can see that the AlphaFold team blew everyone else out of the water (that tall bar on the far left).
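
For context on how that accuracy gets scored: CASP compares predicted and experimental structures with distance-based metrics such as GDT_TS. The post doesn't go into the math, so here is a minimal sketch of a GDT_TS-style calculation, assuming the two structures are already optimally superposed (the real evaluation searches over superpositions, which this skips):

```python
import numpy as np

# Hypothetical sketch of a GDT_TS-style accuracy score, the kind of metric
# CASP uses to compare a predicted structure against the experimental one.
# Assumes the predicted and experimental C-alpha coordinates are matched
# residue-for-residue and already superposed.

def gdt_ts(pred: np.ndarray, ref: np.ndarray) -> float:
    """pred, ref: (N, 3) arrays of matched C-alpha coordinates (angstroms).
    Returns 100 * the mean fraction of residues within 1, 2, 4, and 8 A."""
    dists = np.linalg.norm(pred - ref, axis=1)
    return 100.0 * np.mean([(dists <= c).mean() for c in (1.0, 2.0, 4.0, 8.0)])

# Toy usage: a prediction off by ~1.5 A everywhere scores 75, since three of
# the four distance cutoffs are satisfied for every residue.
ref = np.random.rand(100, 3) * 30
pred = ref + 1.5 / np.sqrt(3)  # shift each coordinate so each distance is 1.5 A
print(gdt_ts(pred, ref))       # 75.0
```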

I'm impressed. We're not up to "guaranteed protein structure for whatever you put in", but getting that level of structural accuracy on that many varied proteins is something that has just never been done before. I will be very interested to hear from the AlphaFold people about what improvements they feel were most important. As it is, such computations tend to use a variety of techniques: straight-out calculation of those energetic factors mentioned above (when necessary), along with searching for similarities to known protein sequences and structures to get a leg up. Improved methods to run such "prior art" searches reliably are a big area as well; they are nontrivial.

So some of the improvement is due to the ever-increasing number of protein structures that we have solved experimentally, and the improved application of that data to new protein sequences. Some of it is due to better ways to search through and apply the lessons from those previous structures (and better ways to be sure that you’re picking the right lessons to learn!) And some of it is due to the sheer increases in computational power that we have at our disposal, of course, although it has to be noted that you cannot just compute your way out of problems like this one if you don’t have some solid ideas about where you’re going and how you’re going to find a path forward.

It’s not that we have completely achieved a fundamental understanding of all the energetic processes and tradeoffs in folding any given protein. While we’re closer to that than ever before, we also have shortcuts that allow us to table those fundamental problems and arrive at a solution by analogy to things we already know that proteins do (whatever their reasons might be for doing it!) And that means that the accuracy of such calculations is only going to improve as we continue to solve more protein structures (and to improve the tools for using them). Decades ago, people probably expected eventual progress in the protein folding problem to come more from the fundamental-understanding side, but AI programs can be extraordinarily good at the “Hey, you know what, I’ve seen something kind of like that before” approach, and the results speak for themselves.

X-ray and NMR protein structures are continuing to flow into the databases, of course. And I would expect the recent improvements in cryo-electron microscopy to add plenty of material for such efforts. Cryo-EM will also add a lot of multimeric protein complexes to that particular data pile. That will be the next big challenge, one with huge relevance to the way that proteins tend to perform their functions inside living cells. Onward!

Philips Unveils Vendor-Neutral Radiology Operations Command Center, Automated Radiology Workflow Suite

What You Should Know:

– Philips unveils a new multimodality virtual imaging command center that enables real-time, remote collaboration between technologists, radiologists, and imaging operations teams across multiple sites via private, secure telepresence capabilities, broadening access to imaging expertise.

– In addition, Philips debuts its AI-enabled, automated Radiology Workflow Suite at RSNA 2020 to advance precision diagnosis into clear care pathways and predictable outcomes.


Philips today announced the commercial launch of the industry’s first vendor-neutral, multimodality, radiology operations command center to add secure, digital, virtual scanner access to existing imaging installs across multiple systems and sites.  Making its debut at the Radiological Society of North America (RSNA) virtual Annual Meeting (Nov 29 – Dec 5, 2020), Philips Radiology Operations Command Center enables virtualized imaging operations via a private, secure, and auditable telepresence platform. Philips is the first company to market a radiology command center that can integrate with existing technologies and systems outside Philips.

Vendor-Neutral Radiology Operations Command Center

As a multimodality (MR and CT), vendor-neutral digital hub, the Radiology Operations Command Center connects imaging experts at a central command center with technologists and onsite staff in locations across an entire enterprise for real-time, over-the-shoulder collaboration and support. Powered by Philips' proprietary, patented operational performance management technology, Radiology Operations Command Center enables remote access to scanners across an imaging network. Its remote scanner connections are compatible with older imaging platforms, allowing customers to operationalize a 'Hub and Spoke' model for imaging within their current install base. Radiology Operations Command Center enables multiple use cases such as virtual imaging assistance, virtual on-demand cross-training, and remote adjustment of imaging protocols for greater standardization.

Expanding access to imaging services to enable expert care

With Philips Radiology Operations Command Center, radiology departments can expand access to imaging across more locations and during more convenient hours to help meet patient demand, while increasing capacity and throughput within existing labor resource limits. Radiology Operations Command Center's telepresence capabilities give imaging centers access to imaging experts, allowing them to expand their capabilities to provide complex procedures and specialty services such as virtual colonoscopy, cardiac CT, breast MRI, and prostate MRI. This allows imaging providers to improve imaging exam quality to help reduce or eliminate negative patient experiences such as procedure recalls or repeats, while also helping to improve talent retention within their organization by reducing the burden on their staff.

“We understand the pain points our customers face in terms of staff variability, training levels, and the need to maintain standardization of both the imaging quality and the patient experience – pain points that have been further accentuated by the COVID-19 pandemic,” said Kees Wesdorp, Chief Business Leader, Precision Diagnosis at Philips. “As an integral component of the Philips Radiology Workflow Suite being featured at RSNA this year, Philips Radiology Operations Command Center enables our customers to virtualize imaging by setting up a unique model of operations to seamlessly extend their expert talent across all sites. By working with advanced imaging partners like RadNet, we see the virtualization of imaging as a game changer in radiology workflow.”

AI-Enabled, Automated Radiology Workflow Suite

In addition, Philips announced its participation in the RSNA 2020 virtual event, featuring its Radiology Workflow Suite of end-to-end solutions to drive operational and clinical efficiency through the digitalization, integration, and virtualization of radiology. At RSNA 2020, Philips will showcase a coordinated suite of offerings for the first time, introducing key solutions that come together to enhance the entire radiology workflow to address the most pressing operational challenges across diagnostic and interventional radiology.

The Philips Radiology Workflow Suite helps drive clinical and operational efficiency across all phases of the diagnostic enterprise, including:

Scheduling and preparation – Patients anxious about a potentially serious diagnosis can receive support even before arriving for their exams, with personalized instructions and reminders delivered via SMS-based communications from Philips Patient Management, making its debut at RSNA 2020.

Image acquisition – Technologists under pressure to achieve a first-time-right scan can now be supported virtually by remote specialists through the Radiology Operations Command Center. Philips' Collaboration Live, available on premium Philips ultrasound systems, also connects technologists with colleagues and specialists whenever and wherever required. To streamline patient setup, MR SmartWorkflow reduces and simplifies the steps needed in a conventional MR exam workflow, using technology to automate where possible. And the Radiology Imaging Suite provides the technologist with a common imaging platform and more streamlined workflows by integrating patient information and advanced visualization and analysis into one easy-to-view console.

Image and data interpretation – Radiologists confronted with increasing numbers of images to read can now receive a prioritized worklist from the AI-enabled Workflow Orchestrator, along with an intuitive summary of advanced visualization and analysis from various systems presented in a single view. IntelliSpace Portal Advanced Visualization also connects patient data across departments to create interoperability for greater clinical intelligence and analysis, supported by AI tools such as an algorithm for the detection of COVID-19 lesions.

Reporting and results communication – To streamline reporting, the Interactive Multimedia Reporting module of Philips' Clinical Collaboration Platform, with embedded voice recognition capability, helps radiologists cut turnaround time by entirely eliminating the need for typing and entry of patient or clinical context. Exam data can be inserted directly into reports, allowing radiologists to quickly review and approve final reports while adding clinical context for referring physicians.

Shared decision-making, pathway selection and treatment – For referring physicians who need to offer confident recommendations on a patient’s care pathway, Philips’ Oncology Collaborator integrates radiology, genomics, lab, treatment, and other data into a single view so radiologists and oncologists, together with the extended clinical team, can see the whole patient profile at a glance and decide on a treatment pathway efficiently and collaboratively.

Outcomes and follow-up care – Radiology administrators are empowered to help patients keep to their treatment plans while improving overall operational efficiency with the real-time performance metrics and follow-up patient tracking provided by Philips Operational Informatics. The Philips Patient Portal empowers patients to access and share their information and access their results between facilities, physicians, specialists, and other healthcare providers.

“Precision across the diagnostic enterprise is more important today than ever before. To help meet our customers’ greatest challenges, we are pivoting away from standalone products to an integrated systems and solutions approach focused on data and intelligence to drive operational efficiency in an automated way for informed practice management and continuous improvement,” said Kees Wesdorp, Chief Business Leader, Precision Diagnosis at Philips. “Visitors to our Philips RSNA virtual site will experience Philips smart connected systems with solutions that optimize radiology workflows, generate insights from integrated diagnostics leading to clear care pathways and predictable outcomes for every patient.”

Oxford/AZ Vaccine Efficacy Data

As of this morning, we have a first look at the Oxford/AstraZeneca vaccine’s efficacy in clinical trials via press releases from both organizations. The number in the headlines says about 70% efficacy, but there’s more to the story.

Here's the landscape so far: we have results from Pfizer and from Moderna, both of them developing mRNA-based coronavirus vaccines, and both showing efficacy in the 90 to 95% range. The Oxford effort is a different platform, though, with key similarities and key differences. It relies on another virus (a chimpanzee-derived adenovirus) that has had its original DNA genetic payload removed and substituted with the appropriate DNA to produce the full-length Spike protein of the coronavirus. In this construct, the original viral "leader sequence" at the beginning of this DNA has been replaced with another, the leader sequence found for the human tissue plasminogen activator (tPA) protein, because this gave better expression and a better immune response. These adenovirus particles can't replicate – they don't have the DNA to express the proteins needed to do that. But they do have all the viral machinery needed to infect a human patient's cells and force them to express the coronavirus Spike protein, which will set off an immune response that should provide protection against later exposure to the real coronavirus.

So both the adenovirus vector and the mRNA vaccines hijack the protein expression capabilities of a vaccinated person's own cells, making them produce SARS-CoV-2 Spike protein constructs and thus setting off the immune system. The Pfizer and Moderna mRNA vaccines we've seen so far actually express a form of the Spike protein that has a couple of proline residues mutated to make it more stable, whereas the Oxford/AZ vaccine is using the straight wild-type Spike sequence – there's one difference. Another big one is of course that the Oxford/AZ vaccine is using a completely different virus to deliver a DNA sequence, whereas the mRNA vaccines are skipping ahead to a later stage in protein production and slipping messenger RNA directly into the cells.

What was announced today is that they have quite different results for two different dosing regimens. This interim analysis was run when 131 cases had been accrued across trials in the UK, Brazil, and South Africa, covering about 24,000 trial participants (treatment and control groups). In the treatment group, 8,895 participants received two full doses of the vaccine, spaced one month apart, and 2,741 patients got a half dose at first, followed by a full dose a month later. And the efficacy rates for these two dosing regimens were very different: 62% for the two-full-dose group and 90% for the half/full group. I do not see a breakdown of how those 131 cases partitioned across the two groups, but the overall N has to be higher for the first, doesn't it? But that's a significant split in efficacy.
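
For readers wondering how those percentages are computed: vaccine efficacy is one minus the ratio of attack rates between the vaccinated and control arms. Here's a back-of-the-envelope sketch (my own, with an invented control-arm attack rate, and assuming control arms matched to the treatment arms in size) of roughly how the cases might split:

```python
# Back-of-envelope sketch (not from the release): vaccine efficacy is
#   VE = 1 - (attack rate in vaccinated) / (attack rate in controls).
# Given VE and arm sizes, estimate how cases split between arms.
# Assumes equal-sized control arms and equal exposure -- both simplifications,
# and the 1% attack rate below is purely illustrative.

def cases_for_ve(ve: float, n_vax: int, control_attack_rate: float):
    vax_cases = n_vax * control_attack_rate * (1 - ve)
    ctrl_cases = n_vax * control_attack_rate
    return vax_cases, ctrl_cases

for label, n, ve in [("full/full", 8895, 0.62), ("half/full", 2741, 0.90)]:
    v, c = cases_for_ve(ve, n, 0.01)
    print(f"{label}: ~{v:.0f} vaccinated cases vs ~{c:.0f} control cases")
# full/full: ~34 vaccinated cases vs ~89 control cases
# half/full: ~3 vaccinated cases vs ~27 control cases
```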

Why might that be? My own wild guess is that perhaps the two-full-dose protocol raised too many antibodies to the adenovirus vector itself, and made the second dose less effective. This has always been a concern with the viral-vector idea. It is, in fact, why this effort is using a chimpanzee adenovirus – because humans haven’t been exposed to it yet. Earlier work in this field kicked off with more common human-infective adenoviruses (particularly Ad5), but there are significant numbers of people in most global populations who have already had that viral infection and have immune memory for it. Dosing people with an Ad5 vector would then run into patients whose immune systems slap down the vaccine before it has a chance to work. That’s not the case for a chimpanzee-infecting form, naturally (few if any people have ever been exposed to that one!) but the two-dose regime may have run into just that problem. Immunology being what it is, though, there are surely other explanations, but that’s the one that occurs to me.

Now, I've seen people speculating this morning that these numbers may be better than they look, because they believe that these trials monitored patients by PCR tests rather than by symptoms. If that were the case, then yes, that's a finer net than the Pfizer and Moderna trials used, and it would certainly affect the efficacy readouts. But I don't think it is: looking at the published trial protocol, the cases are defined as "SARS-CoV-2 RT-PCR-positive symptomatic illness", and the patients have to show symptoms of the disease (see Table 13). So I don't think we can explain the lower efficacy by saying that they were finding asymptomatic people as well: the trial excludes asymptomatic people from its endpoint definition. The rate of asymptomatic cases in the treatment and control groups will be determined in these trials (see section 8.5.2.1 of the protocol), but those aren't the numbers we're seeing today.

So from an efficacy standpoint, the choice is clear: if this vaccine is going to be deployed, the half-dose/full-dose regimen is the obvious choice, since otherwise you do the same amount of work dosing your population, use up more vaccine doing it, and get notably worse results. How about from the safety side of things? The Oxford release says just that "No serious safety events related to the vaccine have been identified", and the AZ one says "No serious safety events related to the vaccine have been confirmed". I would have preferred to hear more about local and systemic reactions, as we did with the Pfizer and Moderna releases, but that seems to be it. Readers will recall that a participant in the UK trial developed transverse myelitis, and that the trial was stopped in the US for about a month. (Note: the US trial is the two-full-dose version.)

Overall, I would have to think that Oxford and AZ are disappointed with the results from the two-full-dose regimen and will be actively trying to track down the reason for the better performance of the half/full dosing, which one would expect to be the way the vaccine is eventually used. How many of the other trials that are being run are using that protocol, one wonders? This could still be an effective weapon in the pandemic, but the stories are starting to differentiate: Pfizer (very effective, tough distribution and storage), Moderna (very effective, easier distribution/storage than Pfizer, but perhaps stronger safety reactions), and now Oxford/AZ (widely varying efficacy depending on dosing, easier distribution/storage, safety details TBD). The next vaccine effort to report efficacy will be J&J, another adenovirus vector, and this time with a one-shot dose. The landscape is starting to fill in a bit!

How much extra COVID-19 risk would you be willing to accept to open the economy?

That is the basic question that Reed et al. (2020) attempt to answer in their recent paper. The approach they use is as follows:

We designed a discrete-choice experiment to administer 10 choice questions to each respondent representing experimentally controlled pairs of scenarios defined by when nonessential businesses could reopen (May, July, or October 2020), cumulative percentage of Americans contracting coronavirus disease 2019 (COVID-19) through 2020 (2% to 20%), time for economic recovery (2 to 5 years), and the percentage of US households falling below the poverty threshold (16% to 25%)…

Applying this methodology, the survey collected information from about 6000 US adults in May 2020. Using a latent class analysis, they found that four classes fit the data best.

The largest class (36%) represented COVID-19 risk-minimizers, reluctant to accept any increases in COVID-19 risks. About 26% were waiters, strongly preferring to delay reopening nonessential businesses, independent of COVID-19 risk levels. Another 25% represented recovery-supporters, primarily concerned about the time required for economic recovery. This group would accept COVID-19 risks as high as 16% (95% CI: 13%-19%) to shorten economic recovery from 3 to 2 years. The final openers class prioritized lifting social distancing restrictions, accepting COVID-19 risks greater than 20% to open in May rather than July or October. Political affiliation, race, household income, and employment status were all associated with class membership (P < 0.01).
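
For those curious about the machinery: discrete-choice experiments like this are typically analyzed with logit-style models, in which each scenario's attributes are weighted into a utility and choice probabilities follow a softmax over the scenarios in a question; the latent class analysis then estimates a separate set of weights per class. A minimal sketch with made-up coefficients (not the paper's estimates):

```python
import math

# Minimal conditional-logit sketch of how a discrete-choice experiment is
# scored. The coefficients below are invented for illustration -- the paper's
# latent class model estimates a distinct set per class.

def choice_prob(utilities):
    """Logit (softmax) choice probabilities over the scenarios in one question."""
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

def utility(reopen_delay_months, covid_risk_pct, recovery_years, poverty_pct,
            b=(-0.05, -0.10, -0.30, -0.08)):  # hypothetical attribute weights
    return (b[0] * reopen_delay_months + b[1] * covid_risk_pct
            + b[2] * recovery_years + b[3] * poverty_pct)

# Scenario A: reopen in May (0 mo delay), 10% risk, 3-yr recovery, 18% poverty
# Scenario B: reopen in Oct (5 mo delay),  4% risk, 4-yr recovery, 20% poverty
print(choice_prob([utility(0, 10, 3, 18), utility(5, 4, 4, 20)]))
# ~[0.53, 0.47]: this hypothetical respondent weakly prefers the fast reopening.
```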

Do read the whole article.

More Vaccine Data, in Advance of More Efficacy

I know that it's been a run of vaccine posts around here, but the numbers just keep on coming. Today we have two more papers to look at, both published in The Lancet.

The first is from the Sinovac inactivated virus effort (the CoronaVac vaccine). In this one, the virus is grown in Vero cell culture, harvested, and inactivated by treatment with beta-propiolactone. As mentioned in previous vaccine roundups, this is an older technique. Its advantage is that it's a well-worked-out technology; its disadvantage is that it's known to produce less active vaccines overall.

This paper covers just safety, tolerability, and immunogenicity in the human trials; we don't have actual efficacy data yet (although you'd figure that's coming soon). And overall, it looks like the vaccine works, but not as strongly as some of the other data we've seen. This trial looked at 3-microgram and 6-microgram doses of the inactivated virus preparation (with an aluminum hydroxide adjuvant), in two rounds spaced either 14 or 28 days apart. The antibody response in the Phase I trial was not impressive – even after the second dose, only about 80% of the patients seroconverted with neutralizing antibodies. But the Phase II trial was more like it, with 95% to 99% of people seroconverting. The paper says:

The immune response in the phase 2 study was substantially higher than in the phase 1 study, which might be due to the difference in preparation process of vaccine batches used in phase 1 and 2 resulting in a higher proportion of intact spike protein on the purified inactivated SARS-CoV-2 virions in the vaccine used in phase 2 than that used in phase 1.

That's interesting, to have changed things in mid-stream like that, but perhaps it was the unimpressive results the first time through that prompted it? But even with that change, the antibody titers seen after the second dose were (in all patients) lower than those seen in a panel of 117 recovered coronavirus patients. There is no detailed T-cell data, but the paper does mention that ELISpot assays "provided no clear evidence that the vaccine induced T-cell responses". So that's the big question here: is all this enough? It might be, but it might not (or might not be enough compared to other vaccine options). We're not going to know until we see actual efficacy numbers from the trials that are going on in Brazil, Indonesia, and Turkey. The Brazilian trial, you will have heard, was suddenly halted last week on orders from Brazil's turbulent president. What this will do to the statistics or timeliness of the overall results is not clear.

The other paper is from the AstraZeneca/Oxford adenovirus vector effort. It’s a look at safety and immunogenicity in a wider spectrum of patients than has been reported so far, and the main news from it is that older patients appear to respond very similarly to the younger ones, both in antibody titers and T cells. What’s more, the vaccine actually appears to be better tolerated in the older patients (both in local reactions at the site of injection and systemically). So that’s good news. The actual antibody and T-cell numbers are similar to the earlier report, and would appear to be what you’d need for an effective vaccine, but we’ll again have to wait for the real numbers. Those shouldn’t be long in coming – BioCentury has a graphic of the current advanced-trial vaccine landscape here, and they (and others) expect to hear from both the AZ/Oxford team and J&J’s single-dose trial in December. At that point, we will have a completely unprecedented look at the landscape: large nearly-simultaneous data sets for two different mRNA vaccines and two different adenovirus vector ones, all directed to the same pathogen, and neither technology ever having advanced into humans like this. Let’s hope we never see the like again, because you would only do it this way when your back is against the wall.

Vaccine Possibilities

Now that we’re seeing that coronavirus vaccines are indeed possible (and are on their way), let’s talk about the remaining unanswered questions and the things that we will be getting more data on. Here are some of the big issues – it’ll be good to see this stuff coming into focus. I’ll put these into the form of questions (think of it as a tribute to the late Alex Trebek, whom I was glad to help remember in this article). Each one will have a summary answer at the end of the section, if you just want to skip to that part.

How long will the vaccine protection last?

This one can't be answered with total confidence in any other way than by just waiting and watching. But we will be able to give a meaningful answer well before that, fortunately. Here, just out in the last couple of days, is the most long-term and comprehensive look yet at the duration of immunity in recovered coronavirus patients. In fact, it appears to be the largest and most detailed study of post-viral-infection immunity in the entire medical literature (!) It's from a multi-center team at the La Jolla Institute for Immunology, UCSD, and Mt. Sinai, and it looks at 185 patients who had a range of infection experiences, from asymptomatic to severe. 38 of the subjects provided longitudinal blood samples across six months.

We've already seen from the convalescent plasma comparison samples in the various vaccine Phase I trials that the antibody response to coronavirus infection can be quite variable, and that was the case in this study as well. That gives you wide error bars when you try to calculate half-lives, and it's not even clear what kind of decay curve the antibody levels will best fit to (it might well be different in different patients). But one figure to take home is that 90% of the subjects were still seropositive for neutralizing antibodies at the 6 to 8 month time points. The authors point out that in primate studies, even low titers (>1:20) of such neutralizing antibodies were still largely protective, so if humans work similarly, that's a good sign. An even better sign, though, is the numbers for memory B cells, which are the long-term antibody producers that help to provide immunological memory. B cells specific to the Spike and to the nucleocapsid coronavirus proteins actually increased over a five-month period post-symptom-onset, thus with no apparent half-life at all. These had interesting variations in antibody type (by the end of the period, they were strongly IgG, the others having dropped off), but as the paper notes, we really don't have many viral infection profiles in humans to compare these results to. B-cell memory overall, though, looks to be long-lasting, and is expected by these results to stretch into years. For what it's worth, there are patients who survived the 1918 influenza pandemic who had B cells that still responded with fresh neutralizing antibodies after over 90 years, so they can be rather hardy.
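
For what a half-life calculation here actually involves: the usual move is to fit an exponential decay to the longitudinal titers on a log scale and convert the fitted slope into a half-life. A minimal sketch with invented data (the real analysis has to contend with the wide patient-to-patient variation noted above):

```python
import numpy as np

# Minimal sketch of the half-life calculation the study grapples with:
# fit exponential decay, titer(t) = titer0 * exp(-k * t), via a log-linear
# least-squares fit, then half-life = ln(2) / k. The titers below are invented.

months = np.array([1, 2, 3, 4, 5, 6], dtype=float)
titers = np.array([800, 640, 500, 410, 330, 270], dtype=float)

slope, log_t0 = np.polyfit(months, np.log(titers), 1)  # slope = -k
half_life = np.log(2) / -slope
print(f"fitted half-life: {half_life:.1f} months")  # ~3.2 months for this toy data

# The wide patient-to-patient variation in the study means fits like this
# carry large error bars, and the true decay may not be single-exponential.
```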

What about the other immune (and immune memory) component, T cells? The news there is good as well. CD4+ and CD8+ memory T cells appear to have half-lives of at least five or six months in these patients, and helper T cells (crucial for those memory B cells to respond later on) were completely stable over the entire period studied. Again, there are very few viral infection studies to compare this one to, but these numbers look consistent with long-term protection via reactivated immune memory.

Looking over the whole set of patients, it was clear that the immune system’s famously individual character was on full display here. That heterogeneity could well be the reason that there are real cases of re-infection, although it still seems to be rare. Different components of the immune response (both in antibodies and T cells) varied widely among patients, and these differences only became more pronounced over time. Nevertheless, at the five-month time point in a measure of five components of immune response and memory, 96% of patients were still positive on at least three of them (the categories were IgG antibodies against the Spike receptor-binding domain (RBD), IgA antibodies against the same Spike RBD, memory B cells aimed at the RBD, total SARS-CoV-2-specific CD8+ T cells, and total SARS-CoV-2-specific CD4+ T cells).

Bottom line: Taken together, this study, several others over the past few months, and this recent work all paint a consistent picture of a strong, normal, lasting immune response in the great majority of patients. Add in the results we’re seeing from the two vaccines that have reported interim data so far, and I think that the prospects for lasting immunity from vaccination are also very good. Remember, the early vaccine data suggested antibody responses at least as strong as those found in naturally infected cases. There seems (so far) every reason to think that vaccine-based immunity will be as good or better than that conferred by actual coronavirus infection. I very much look forward to more data to shore up this conclusion, but that’s how it looks to me at the moment.

How effective are these vaccines? Will they provide total protection or not?

We’re just starting to get numbers on this, and we are definitely going to know more as the various trials read out interim data and then reach their conclusions. So far, though, the efficacies we’re seeing have been more than I had really hoped for. I thought that they would work, and I didn’t think that meant just the FDA’s floor of 50% efficacy, but I sure didn’t have the nerve to predict that the first two readouts would be 95% (Pfizer just reported their final readout this morning). I can’t overemphasize how good that news is, especially when you compare it to some earlier worries that a useful coronavirus vaccine might not even be possible at all. Cross that one off the list!

Those efficacy numbers, though, are measured for symptomatic coronavirus cases. The vaccine trial participants are not being pulled in at regular intervals for testing to see if they’ve gone positive-though-asymptomatic. We may get controlled data of that sort eventually, but for now, we know from the Moderna trial that the few people who came down with symptoms at all had very mild cases. The antibody levels that we’re seeing would argue for a low probability of having a significant number of vaccinated people walking around asymptomatically shedding coronavirus, and for anyone who does to be shedding a lot less of it for a shorter period of time.

From a public health standpoint, that's what you need. Epidemics are a matter of probabilities, and you can lower the chances of spread for a virus like this in any number of ways. They surely vary in efficacy, but they include keeping your distance from other people and avoiding crowding in general, wearing masks, avoiding indoor situations with people that you haven't been exposed to (such as going to the grocery store when it's not so crowded), minimizing the time you spend in any higher-risk situation (getting those groceries in an organized fashion and getting back outside), and more. The fewer people there are around shedding infectious particles, the better (obviously), but the worst case for a weakly effective vaccine might be that it could actually raise that number for a while by creating more asymptomatic cases rather than having the infection make people aware that they need to stay the hell inside. But I don't think we're going to see that. I think that the efficacy levels we're seeing are indeed going to be epidemic-breaking if we can get sufficient numbers of people vaccinated. Right now we're up around the efficacy of the measles vaccine, which is very effective against a virus that is far more infectious than SARS-CoV-2. . .if enough people take it. (Believe it, if the current coronavirus were as infectious as measles is, we would be hosed.)
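
The standard back-of-envelope for "epidemic-breaking" coverage is the critical vaccination fraction, (1 - 1/R0)/VE. A quick sketch using rough, commonly cited R0 values (my assumptions, not numbers from this post) shows why a measles-level virus would leave so little room for error:

```python
# Standard herd-immunity arithmetic (the R0 values are rough assumptions):
# the critical vaccination fraction for a vaccine with efficacy VE against a
# pathogen with basic reproduction number R0 is p_c = (1 - 1/R0) / VE.

def critical_coverage(r0: float, ve: float) -> float:
    return (1 - 1 / r0) / ve

# SARS-CoV-2 R0 is often estimated around 2.5-3; measles around 12-18.
print(f"SARS-CoV-2-like (R0=3, VE=0.95): {critical_coverage(3, 0.95):.0%}")   # ~70%
print(f"measles-like (R0=15, VE=0.95): {critical_coverage(15, 0.95):.0%}")    # ~98%
# ~70% coverage vs ~98%: an equally effective vaccine against a measles-level
# virus would leave almost no room for people declining it.
```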

Bottom line: the results we have so far indicate that these vaccines will indeed provide strong protection in the great majority of patients. The number of asymptomatic cases among the vaccinated population will be a harder number to pin down, but I believe that we should be in good enough shape there as well, based on antibody levels in the primate studies and what we’re seeing in humans.

What about coronavirus mutations? Will the virus move out from under the vaccine’s targeting?

The SARS-CoV-2 virus has indeed been throwing off mutations, but all viruses do. They replicate quickly, and errors pile up. Fortunately, though, none of these have proven to be a problem so far. There's been a lot of talk about the D614G mutation being more infectious, but the difficulty of proving that shows that it's certainly not way more infectious, if it is at all. And it doesn't seem to have a noticeable effect on disease severity – so far, no mutation has.

The recent news from Denmark about a multi-residue mutant (“Cluster 5”) that might be less susceptible to the antibodies raised by the current vaccines is a real concern, but the news there, thus far, is also reassuring. The vaccine efficacy warning might be true, but it was also based on a small amount of preliminary data. And the Cluster-5 variant has not been detected since September, which suggests that (if anything) this combination of mutations actually might make the virus less likely to spread. From what we’ve been seeing with the Spike protein, evading the current antibodies looks like it’s going to be difficult to do while retaining infectiousness at the same time. We already know from a Pfizer analysis that many of the common mutations are just as susceptible to neutralizing antibodies raised by their vaccine.

I know that many people are wondering about the similarity to influenza, and to the yearly (and not always incredibly effective) flu vaccines. Flu viruses, though, change their proteins far more easily and thoroughly than the coronavirus does, which is why we need a new vaccine every year to start with. SARS-CoV-2 doesn't have anything like that mix-and-match mechanism, and it's a damn good thing.

Bottom line: the coronavirus can’t undergo the wholesale changes that we see with the influenza viruses. And the mutations we’re seeing so far appear to still be under the umbrella of the antibody protection we’ll be raising with vaccination, which argues that it’s difficult to escape it.

What about efficacy in different groups of people? Where will the vaccines work the best, and where might there be gaps?

This is another area that is definitely going to come into better focus as the current trials go on. For the moment, we know that the results we have seen so far come from participants in a range of ages and ethnic backgrounds. There's not much expectation that things will vary much (if at all) across the latter, although it's always good to know that for sure, and not least so you can point to hard evidence that it's so. Age, though, can definitely be a factor. Older people are quite likely more susceptible to coronavirus infection in the first place, and are absolutely, positively at higher risk of severe disease or death if they do get infected. The immune response changes with aging, and it is very reasonable to wonder if the response to vaccination changes in a meaningful way, too.

But as mentioned above, we have more data from the Pfizer vaccine effort just this morning. The overall efficacy was 95%, and the efficacy in patients 65 and older was all the way down to 94%. This is excellent news. No numbers yet for people with pre-existing conditions and risk factors, but I'm definitely encouraged by what we're seeing so far.

Bottom line: our first look at efficacy in older patients is very good indeed, and that’s the most significant high-risk patient subgroup taken care of right off the top.

How safe are these vaccines? What do we know about side effects?

As mentioned in the Moderna write-up here the other day, that team saw around 10% of their vaccinated cohort come down with noticeable side effects such as muscle and joint pain, fatigue, pain at the injection site, etc. These were Grade 3 events – basically, enough to send you to bed, but definitely not enough to send you to the hospital – but they were short-lived. For reference, those numbers seem to be very close to those for the current Shingrix vaccine against shingles, from GSK (thanks to their butt-kicking adjuvant mixture of a Salmonella lipopolysaccharide and a natural product from a South American tree). It’s a reasonable trade for coronavirus protection, as far as I’m concerned. And my reading of the Pfizer announcement today makes me think that their side effect profile is even a bit milder. They have fatigue in 3.8% of their patients, and all the other side effects come in lower.

What about lower-incidence side effects? Well, 30,000 patients is a pretty big sample, but on the other hand, the immune system is as idiosyncratic as it can be. There may well be people out there who will have much worse reactions to these vaccines. If you have a literal one-in-a-million event, you're simply not going to see it in a trial this size, or actually in any trial at all – these are about as big as clinical trial numbers ever get. At that point, you'd be looking at such a hypothetical bad outcome in about two or three hundred people if we gave the shot to every single person in the US. And the public health calculation that's made every time a vaccine is approved is that this is a worthwhile tradeoff. Let's be honest: if we could instantly vaccinate every person in the country and in doing so killed 200 people on the spot, that is an excellent trade against a disease that has killed far more Americans than that every single day since the last week of March. Yesterday's death toll was over 1500 people, and the numbers are climbing.
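
That one-in-a-million point is just binomial arithmetic. A quick sketch (the population figure and event rate are illustrative assumptions on my part):

```python
# Quick binomial arithmetic behind the "one in a million" point.

p = 1e-6            # hypothetical rate of a very rare severe reaction
n_trial = 15_000    # roughly the vaccinated arm of a 30,000-person trial
n_us = 330_000_000  # approximate US population (an assumption here)

p_seen_in_trial = 1 - (1 - p) ** n_trial
expected_us_cases = p * n_us

print(f"chance the trial sees even one such event: {p_seen_in_trial:.1%}")      # ~1.5%
print(f"expected cases if everyone in the US were dosed: ~{expected_us_cases:.0f}")  # ~330
```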

How about long-term problems, then? These are possible with vaccines, but rare. And unfortunately, there is truly no way to know about them without actually experiencing that long term. We simply don’t know enough immunology to do it any other way. Given the track record over the last century of vaccination, though, this seems to be another deal worth making.

Bottom line: immediate safety looks good so far. Rare side effects and long-term ones are still possible, but based on what we've seen with other vaccines, they do not look to be anywhere near significant compared to the pandemic we have in front of us.

OK, what about the rollout? Who’s getting these things first? When does everyone else get a chance to line up?

Harder questions to answer – there are a lot of variables. Pfizer and Moderna both say that they can make in the range of 20 million doses by the end of the year, but what we don't know is (1) when the FDA will grant Emergency Use Authorization, (2) how many of these doses can be distributed and how that's going to happen, (3) what the number of doses available right now might be, (4) how the ramp-ups of both production and distribution are going to be coupled in the coming months, (5) what's going to show up with the other vaccine candidates in testing, and so on.

The person in charge of the "Operation Warp Speed" logistics is Gen. Gustave Perna, who has been in charge of the Army's Materiel Command (just the sort of background you'd want for an effort this size, I think). We know that manufacturing has already been underway on an "at risk" basis, and it looks like those bets are paying off, given the clinical results. Here's the rollout strategy that has been announced so far, and it certainly seems sound from what I know about these things. It does leave some questions open, such as what groups are in the initial queue. You would have to think that health care workers would be at the top of the list – these people are risking their health and their lives as they deal with a constant stream of infectious patients, and losing them to illness or death has a severe impact on our ability to deal with the situation.

That situation, it has to be said, is going to be getting worse. It’s been getting worse for weeks, and it looks like it’s going to keep doing that for several weeks more even if we do everything right. And let’s be honest: as a country, as a population, we’re not doing everything right. There are a lot of people taking sensible precautions, but others are letting their guard down when they shouldn’t, and there are of course other people who never put their guards up in the first place and seem to have little intention of doing so. The map says “uncontrolled spread” across most of the US, and they ain’t lying. These vaccines are coming at extraordinary, record-breaking speed, but not fast enough for us to avoid what looks sure to be a 2,000-deaths-a-day situation. Take the worst air crashes in aviation history, and imagine three, four, five, six of them a day. All day Monday. All day Tuesday. No letup. Every single day of the week and all weekend long, a hideous no-survivors crash every few hours. That’s what we’re experiencing right now in terms of the sheer number of deaths.

Bottom line: the very first people to get these new vaccines will almost surely be health care workers, starting some time in December. The rollout after that has too many variables to usefully predict, but it's going to be the biggest thing of its type ever attempted, in people-per-unit-time. And yes, I think it's going to work, and not a minute too soon.

AliveCor Lands $65M to Advance Remote Cardiology Platform Worldwide

What You Should Know:

– Today AliveCor, the leader in AI-based, personal ECG devices and cardiology platforms, announced it has closed $65 million in Series E funding to accelerate its growth.

– To date, AliveCor's FDA-cleared Kardia devices have recorded more than 85 million ECGs. Since the pandemic began, nearly 15 million ECGs were taken, an increase of 70% year-over-year.

AliveCor, a Mountain View, CA-based provider of AI-based personal ECG technology and enterprise cardiology solutions, today announced its $65 million Series E financing led by existing investors OMRON, Khosla Ventures, WP Global Partners, Qualcomm Ventures and Bold Capital Partners.

Remote Cardiology Solutions

Founded in 2010, AliveCor, Inc. is transforming cardiological care using deep learning. The FDA-cleared KardiaMobile device is the most clinically validated personal ECG solution in the world. KardiaMobile provides instant detection of atrial fibrillation, bradycardia, tachycardia, and normal heart rhythm in an ECG. Kardia is the first AI-enabled platform to aid patients and clinicians in the early detection of atrial fibrillation, the most common arrhythmia and one associated with a highly elevated risk of stroke. AliveCor's enterprise platform allows third-party providers to manage their patients' and customers' heart conditions simply and profitably, using state-of-the-art tools that provide easy front-end and back-end integration to AliveCor technologies.

Recent Milestones/Expansion Plans

To date, AliveCor products have served more than one million customers around the world and recorded more than 85 million ECGs. This vast data set gives the company a meaningful advantage in building new AI-based services to drive a new age of advanced and improved cardiological care. AliveCor believes that comprehensive services coupled with AI-powered diagnostics will have an ongoing impact on cost, quality, and most importantly on responsiveness: resolving false positives and improving response time in medical emergencies.

AliveCor plans to use the funding to accelerate growth of its remote cardiology platform both domestically and around the world. The company's AI-powered ECG determinations will be augmented with telehealth services, as well as with detection and condition management services for providers and institutions. The enhanced partnership with OMRON will also position the company to include hypertension management within its service portfolio.

"We are grateful for the continued confidence of our investors," said Priya Abani, CEO of AliveCor. "This financing speaks to the transformative power our technology brings to the healthcare system. We remain positioned to fulfill our vision of delivering AI-based, remote cardiological services for the vast majority of cases when cardiac patients are not in front of their doctor."

Moderna’s Vaccine Efficacy Readout

As expected, we have more vaccine news this morning. And the news is good. Moderna reports that their own mRNA candidate is >94% effective (point estimate), with 95 total cases in the trial to date, split 90/5 between the control group and the vaccinated group. 11 of those were severe infections: all 11 in the controls and zero in the vaccine patients. Of the 95 total cases, 15 were in participants 65 years and older, but there’s no word on the split between controls and the vaccine arm there. All of these points are at 14 days past the second dose of the vaccine, which is going to be a standard time point for all the trials (except the J&J one-dose candidate, of course).
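
That >94% point estimate falls straight out of the 90/5 case split: with vaccine and placebo arms of roughly equal size (as in this trial), efficacy is one minus the ratio of cases. A minimal sketch of the arithmetic:

```python
# How the >94% point estimate falls out of the 90/5 case split, assuming the
# vaccine and placebo arms are about the same size:
#   VE = 1 - (cases in vaccine arm) / (cases in placebo arm)

def efficacy(vaccine_cases: int, placebo_cases: int) -> float:
    return 1 - vaccine_cases / placebo_cases

print(f"{efficacy(5, 90):.1%}")  # 94.4%
```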

The safety readout looks like what we were expecting as well: the Grade 3 events were fatigue in 9.7% of patients, myalgia (muscle pain) in 8.9%, arthralgia (joint pain) in 5.2%, headache in 4.5%, and just “pain” in 4.1%. I would assume that there is overlap in these categories. The company says that these were “generally short lived”. The FDA’s guidance on event reporting would class these as “significant, prevents daily activity”, but not requiring hospitalization. So my read now with the data we have is that up to 10% of people taking the shot will spend the next day or so in bed, feeling like they’ve been hit with a really bad flu. That’s not enjoyable, but I will definitely make that trade in exchange for coronavirus immunity (see below). More data are being collected, of course, so we’ll get better reads on both safety and efficacy as the trial goes on, as will be the case with the Pfizer candidate and the others as well. We have to make sure (as much as we can) that there aren’t worse effects poking up out of those Grade 3 events, but so far, so good.

The second press release from the company today is also significant: Moderna says that new stability testing shows that their vaccine remains stable for up to six months under standard freezer conditions, up to 30 days under standard refrigeration conditions, and up to 12 hours at room temperature. There’s no dilution or further handling at the point of administration. This is much more like what you want to see, as compared to the more demanding storage conditions that seem to be needed for the Pfizer candidate. This is how a lot of medicine (and food, for that matter) is already distributed and stored – our infrastructure is a lot more prepared for this.

So we're already starting to see some differentiation between the candidates, with likely more to come. We'll see if there's any statistical daylight in efficacy between the Pfizer and Moderna candidates as more cases accrue (I have no idea if that'll be the case or not). Likewise with safety. But we already have a difference in shipping and storage, and it's in Moderna's favor. As mentioned before here, there are several other categories that could differentiate all the vaccine candidates: point efficacy (as we have now, 14 days after the second dose), effect on severity of disease when it does occur, duration of efficacy (which we'll need time for, and there's no other way), overall safety (which also needs big numbers and will sharpen with longer time points), and whatever differences in all these categories may show up in different patient populations. Those will take time to emerge, too, most likely.

But make no mistake: right now the vaccine news is very good indeed. Effective ones are coming, and what I said when the Pfizer results came out applies even more now, because this good news is coming against a stark background. The coronavirus statistics here in the US are very, very bad, with cases, hospitalizations, and deaths all rising. Many areas of the country are facing ICU capacity shortages as we head into these rising numbers, and in the coming weeks a lot of people are going to die. It's never been more important for people to take action against the pandemic: isolation as much as possible, mask wearing, avoiding indoor groups, and all that stuff that we already know about but that apparently too few people are following through on. The curves from Europe have been accelerating at a similarly alarming rate, but take a look: their case numbers are starting to turn back down again, and there's no reason we can't do that here. And we're not going to be doing all this forever; I really think that the vaccine results we're seeing mean that the end of all this is finally in sight. We have to make it through to getting our population vaccinated. Hang on.

Medtronic Launches Smart Insulin Pen Integrated with Real-Time CGM Data

What You Should Know:

– Medtronic, the global leader in medical technology, announced the launch of InPen integrated with real-time Guardian Connect Continuous Glucose Monitoring (CGM) data.

– InPen is the first and only FDA-cleared smart insulin pen on the market for people on multiple daily injections (MDI).

– The combined solution now provides real-time glucose readings alongside insulin dose information, giving users everything they need to manage their diabetes in one view.

– The InPen app will continue to display information from other currently compatible CGM systems on a three-hour delay.

– The integration of real-time CGM data into the smart insulin pen app is a result of the addition of Companion Medical's InPen to the Medtronic portfolio, as of September 2020.

THCB Gang Episode 32 LIVE 1PM PT/4PM ET

Episode 32 of “The THCB Gang” will be live-streamed on Thursday, November 12th. Tune in below!

Matthew Holt (@boltyboy) will be joined by some of our regulars: WTF Health host Jessica DaMassa (@jessdamassa), radiologist Saurabh Jha (@RogueRad), MD-turned-entrepreneur Jean-Luc Neptune (@jeanlucneptune), communications leader Jennifer Benz (@jenbenz), and me, THCB's Editor-in-Chief (@zoykskhan), plus guest Jeff Goldsmith, President of Health Futures, Inc and National Advisor, Navigant Healthcare. The conversation will follow the post-election frenzy around the COVID-19 response, the ACA, and what a Democratic president means for the United States in terms of health care.

If you'd rather listen to the episode, the audio is preserved as a weekly podcast available on our iTunes & Spotify channels — Zoya Khan, producer

HST Pathways and Casetabs Merge to Form ASC Practice Management Powerhouse

What You Should Know:

– HST Pathways and Casetabs announce a merger that combines Casetabs' surgical communication platform with the industry leader in ASC practice management.

–  The combined company is supported by a majority investment led by private equity company Bain Capital Tech Opportunities with a minority investment from Nexxus Holdings.


HST Pathways ("HST") and Casetabs, two leading providers of innovative, cloud-based software for ambulatory surgery centers ("ASCs") across the U.S., today announced a strategic merger of their businesses that will offer customers a flexible and secure set of technology solutions and enhanced products and services. The combination is supported by a majority investment led by Bain Capital Tech Opportunities with a minority investment from Nexxus Holdings. Financial terms of the private transaction were not disclosed.

In 2019, HST and Casetabs began bi-directional integration of their products. A revenue-sharing agreement was launched in January 2020, when Casetabs' Schedule Sharing application became an integrated feature of HST's practice management software.


Merger to Help Redefine the ASC Industry

The combined company will be led by seasoned executives from both HST and Casetabs, who will leverage their in-depth understanding of the ASC industry, strong client relationships, and insight into the market to align strategic goals and maximize opportunities for growth.

With the support and resources provided by growth investors with significant experience in healthcare technology, HST and Casetabs will accelerate the pipeline of ASC information management products and services to enable streamlined service, greater capabilities, and enhanced expertise for customers. HST and Casetabs offer practice management software, physician office scheduling, care coordination, revenue cycle optimization, enterprise supply chain management, case costing, patient engagement and communication, an electronic health record system, and analytics.

“The investment structure and partnership with Casetabs opens up new growth and synergies to create additional benefits for customers,” said Tom Hui, Founder and CEO of HST. “We already share over 400 customers, which will provide a strong foundation to build further success in the marketplace. The combined talent of both companies will broaden and deepen our ability to deliver new products and continue to be a customer-centric services organization. Together, we continue to be thought leaders in the ASC market and introduce innovations that help our customers be successful today and in the future. This investment is a reflection of our commitment and confidence in our ability to grow together with Casetabs as leaders in the health information technology space.”

Innovaccer, Surescripts Integrate to Leverage Medication Data for Patients

What You Should Know:

– Innovaccer partners with Surescripts to power its data activation platform with the most comprehensive medication data.

– The partnership will enable the company to conduct smart medication reconciliation and ensure that patients are complying with their care protocols.


Innovaccer, Inc., a San Francisco, CA-based healthcare technology company, announced its partnership with Surescripts, the nation’s leading health information network, to leverage the industry’s most comprehensive medication data. This partnership will enable Innovaccer to enhance its medication adherence capabilities, powered by its FHIR-enabled Data Activation Platform.

Integration Provides Access to Medication Data for Specific Patient Populations

The integration of Surescripts with Innovaccer’s data platform will strengthen their ability to identify and triage at-risk patient populations and drive better care coordination. With access to integrated data on 314 million patients through Surescripts’ nationwide health information network, the company will enhance its analytics and care management capabilities. 

The partnership will enable Innovaccer to leverage Surescripts Medication History for Populations to confidently pinpoint and close care gaps in patient data that is refreshed daily. Additionally, it will empower them to highlight cases of medication non-adherence and potential abuse. This capability will allow Innovaccer’s provider clients to measure medication metrics for Centers for Medicare & Medicaid Services (CMS) reimbursement and avoid penalties.

Addressing Medication Adherence Pain Points

Together, Surescripts and Innovaccer will address the major pain points in medication adherence for patients and healthcare organizations. With the insights provided by Surescripts medication data, Innovaccer will assist physicians and care teams in driving better care management by creating personalized care plans. With this information integrated on the data platform, users can obtain a whole view of the patient in a single click.

“With access to medication information for specific patient populations, providers in value-based care arrangements can help manage cost-effective care and optimize clinical interventions for patients at risk of medication non-adherence,” explained Ryan Hess, Vice President of Innovation at Surescripts. “Our nationwide network delivers a more complete and accurate electronic picture of patients’ medication history for better informed, more efficient and safer care decisions.”

How Yale-New Haven Uses IPA in Revenue Cycle to Tackle Inefficiency

As the pandemic heads toward a second year with no further financial stimulus guaranteed, hospitals and health systems are seeking ways to reduce costs and improve revenue cycle performance. Intelligent process automation (IPA) is an emerging solution designed to optimize operations and increase productivity through a combination of process modeling, process automation, and artificial intelligence. 

IPA in the revenue cycle enables healthcare organizations to shift manual, repetitive work to automated processes that improve efficiency, accuracy, and financial outcomes. These benefits are particularly important in the healthcare revenue cycle where a maze of confusing payer requirements, redundant workflows and siloed administrative functions push up operational costs and departmental overhead. 

Connecticut’s leading healthcare system, Yale-New Haven Health, is breaking the pattern of costly revenue cycle operations—one function at a time. This article explores how the health system’s 1,200-employee Corporate Business Services organization uses IPA in the revenue cycle to tackle inefficiency. 

Revenue Cycle Automation at Yale-New Haven Health

Yale-New Haven Health began using IPA to streamline revenue cycle operations in 2019. The organization first analyzed all their high-volume, repetitive tasks that required no human intervention until there was an exception in the case or workflow. Their assessment process involved four steps:

– Evaluate each revenue cycle function for high levels of repetitive, redundant tasks, or work overlaps.

– Step back and perform process mapping. Look at EHR and other existing vendors to ensure efficient uses of all current application capabilities. Implement any capabilities not currently being used. 

– Identify any remaining gaps and determine if revenue cycle automation using an IPA platform could fill the gaps for that specific revenue cycle function. 

– Work with internal staff and IPA vendors to create a comprehensive physical map of the entire process, new workflow changes, and a timeline for implementation.  

In addition to choosing the right revenue cycle process to automate, it is critical to re-engineer those functions to achieve the greatest impact and value to the healthcare organization. “We needed to use all of our existing systems before bringing in new revenue cycle automation,” says Melisa Brereton-Esposito, Director, Systems, Training and Development, Corporate Business Services at Yale-New Haven Health. “We first focused on cash reconciliation and posting, which provided a valuable learning experience for future projects.” 

The four-step approach takes time, but yields dramatic results in cost reduction and staff adoption. “If our team doesn’t use the recommended assessment process, the introduction of IPA is of little value,” adds Brereton-Esposito.

Overcoming Adoption Challenges

Initially, there was general distrust among staff regarding how automation would improve or replace their manual work. Concerned about job security, many were reluctant to turn over tasks to the computer. Revenue cycle staff tend to be long-term employees who are cautious by nature. Brereton-Esposito’s department implemented three managerial guidelines with regard to staffing: 

– Keep staff whose jobs are replaced by technology—never let them go based on automation.

– Reassign and retrain to jobs that require more analytical thinking. Encourage staff to focus on the next “better” job.

– Redistribute staff or wait for attrition in areas that have been automated. 

Example of a task currently automated: Correspondence Workflow—Applies to mail that comes into the revenue cycle department, centralized across five hospitals. 

Before automation: All letters are received from a lockbox in random order in batches. Staff are assigned to read, sort, and process the letters into different work queues in the EMR system, such as explanation of benefits (EOB) documents, financial assistance applications, and approval and denial letters. This is a highly manual effort, and delays in this process may sometimes lead to missing time-sensitive correspondence from the payors and other external entities.

After automation: The technology uses OCR and machine learning to categorize each piece of correspondence based on its content and then moves it to the correct person or place. For all types of letters, the system sorts each item and sends it to the right category. The technology is expected to handle approximately 70% of the mail automatically and send the remaining 30% to a human in the loop. Those percentages should improve with ongoing testing, validation, tracking, and work on the exceptions in incoming correspondence.
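To make the human-in-the-loop pattern concrete, here is a minimal sketch of confidence-gated routing in Python. It is illustrative only: the names (classify, WORK_QUEUES) and the threshold value are assumptions, not details of Yale-New Haven’s actual system, which runs on a full OCR and machine-learning pipeline.

```python
# Hedged sketch of confidence-gated correspondence routing: auto-route a
# letter when the classifier is confident, otherwise queue it for a human.
# All names and the threshold value are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, Tuple

CONFIDENCE_THRESHOLD = 0.85  # tuned so that roughly 70% of mail auto-routes

WORK_QUEUES = {"EOB", "financial_assistance", "approval", "denial"}

@dataclass
class RoutingDecision:
    queue: str       # destination work queue
    automated: bool  # True if no human review was needed

def route(letter_text: str,
          classify: Callable[[str], Tuple[str, float]]) -> RoutingDecision:
    """Route one OCR'd letter, falling back to human review on low confidence."""
    label, confidence = classify(letter_text)  # e.g. ("denial", 0.92)
    if label in WORK_QUEUES and confidence >= CONFIDENCE_THRESHOLD:
        return RoutingDecision(queue=label, automated=True)
    return RoutingDecision(queue="human_review", automated=False)
```

The threshold is the tunable knob here: raising it sends more mail to human reviewers but mis-routes fewer letters, and the 70/30 split quoted above is simply where that dial currently sits.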

Checklist for Evaluating Solutions

Automation platforms should use a combination of AI tools along with RPA (robotic process automation) to enable automated workflows, specifically processes like document classification. Solution providers with an enterprise approach and multi-tenant automation technology platforms can help with long-term organizational goals.

Organizations should look for vendors with knowledge and experience of healthcare processes and with deep technology capabilities beyond RPA, such as the ability to handle large amounts of structured and unstructured data, to drive automation. Evaluate vendors beyond the point solution: consider how the automation platform can scale across various functions, and the vendor’s ability to partner with you to maximize value.

Finally, these systems learn as they go. Vendors should have the ability to scale with reusable components and continuous learning for enterprise-wide automation. 

Feedback and Outcomes

Achieving positive outcomes with revenue cycle automation depends on staff trust in the technology and new processes. Partnering with a reputable IPA vendor allows management to build trust with the staff and get staff involved in the process. Accuracy is one of the key determinants of success and must be measured consistently, since intelligent systems learn and improve over time. When staff and leadership agree that the implementation has been successful, they can rely on IPA to address the next costly and inefficient revenue cycle function.


About Albert Porco


Albert Porco serves as Chief Solutions Architect at Cognitive Health Technologies. Albert has served as CIO for several New York metropolitan area hospitals and health systems. Prior to joining Cognitive Health Technologies, he also served as the Chief Technology Officer for the New York Department of Health. He can be reached at [email protected]

Vaccine Efficacy Data!

Earlier this morning, Pfizer and BioNTech announced the first controlled efficacy data for a coronavirus vaccine. And the news is good.

You may recall that these vaccine trials are set up to get to a defined number of coronavirus cases overall, at which time the various monitoring committees lock the door and unblind the data to have a look at how things are going. Pfizer’s original plan (as mentioned in that post) was the most aggressive of them all – they were planning to take their first look once they hit 32 cases. But one of the things we learned from this morning’s press release is that the company and the FDA changed that, dropping the 32-case read in favor of a 62-case read. By the time they finished those negotiations, though, the number of cases had reached 94, so we actually have a much more statistically robust look than we would have otherwise. And the split between placebo-group patients and vaccine-arm patients is consistent with greater than 90% efficacy. That number will come into better focus, but I hope that we can continue to take 90% as the lower bound.
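The press release did not give the exact split, but the arithmetic behind that efficacy figure is easy to reproduce. Here is a back-of-the-envelope sketch (assuming roughly equal-sized vaccine and placebo arms, which is how the trial was designed):

```python
# Rough vaccine-efficacy arithmetic: with equal-sized arms,
# VE is approximately 1 - (vaccine-arm cases / placebo-arm cases).
# The exact split of the 94 cases was not released; this loop just shows
# which hypothetical splits are consistent with >90% efficacy.
total_cases = 94
for vaccine_cases in range(13):
    placebo_cases = total_cases - vaccine_cases
    ve = 1 - vaccine_cases / placebo_cases
    print(f"{vaccine_cases:2d} vaccine vs {placebo_cases} placebo -> VE = {ve:.1%}")
# VE stays above 90% only if about 8 or fewer of the 94 cases
# occurred in the vaccinated group.
```

In other words, for the efficacy to come in above 90%, only a handful of the 94 cases can have been in the vaccine arm.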

The final analysis of the trial is set for a 164-case level, and that should be reached sooner than we might have thought. The higher number of cases in this current readout is surely because the coronavirus pandemic is itself at the “uncontrolled spread” stage in much of the Northern hemisphere. Remember the worries about whether companies would have to move their trials around to find places where the disease was still spreading? Seems like a long time ago, and as things have worked out you can (unfortunately) run your vaccine trial pretty much anywhere you like.

More details: these numbers are from 7 days after the second dose of the vaccine (28 days after the first dose overall). Going forward, they plan to collect data at 14 days after the second dose to make the statistics more comparable to the other ongoing vaccine trials. Pfizer/BioNTech say that the protection looks like it should last at least a year – no numbers on that yet, but it can only be based on neutralizing antibody titers and/or T-cell levels and their change over time. The only way to get better numbers on that is to wait and collect better numbers; there is absolutely no way to tell without waiting to see. But if we’re already out to about a year’s protection, that’s very good to hear. And this is the time to mention that nothing has (so far) been set off on the safety side, “no serious concerns”. But the only way to collect longer-term safety data is to keep watching over that longer term as well. I suspect that we’re still going to see plenty of fevers and very sore arms after injection (with this and the other vaccines), and I say bring them on, then.

What does this mean for the pandemic vaccine effort in general? The first big take-away is that coronavirus vaccines can work. I have already said many times (here and in interviews) that I thought that this would be the case, but now we finally have proof. The worst “oh-God-no-vaccine” case is now disposed of. And since all of the vaccines are targeting the same Spike protein, it is highly likely that they are all going to work. There may well be differences between them, in safety, level of efficacy across different patient groups, and duration, but since all of them have shown robust antibody responses in Phase I trials, I think we can now connect those dots and say that we can expect positive data from all of them.

Having the Pfizer/BioNTech candidate read out first has some other implications. One that may (or may not!) get lost in the excitement is that Pfizer explicitly and publicly did not take US government funding for this effort. In fact, they said at the time that working in the “Operation Warp Speed” framework would likely slow them down, and it looks like they can make a case for that! Not all the victory laps that people are going to try to take for this will be justified (and wasn’t that ever the case, right?)

But the other thing to keep in mind is that this candidate has (so it appears) by far the most challenging distribution of all of them. The Pfizer/BioNTech candidate, last we heard, needs -80C storage, and that is not available down at your local pharmacy. Pfizer has been rounding up as many ultracold freezers (and as much dry ice production) as they can, but there seems little doubt that this is going to be a tough one. I know that the press release talks about getting 1.3 billion doses of this vaccine during 2021, but actually getting 1.3 billion doses out there is going to take an extraordinary effort, because you’re getting into some regions where such relatively high-tech storage and handling becomes far more difficult. Population density is as big a factor as electricity and transport infrastructure. With demanding storage requirements, the more people that are within a short distance of a Big Really Cold Freezer, the better. And the more trucks (etc.) that you have to send down isolated roads to find the spread-out patients, the worse. That’s always the case, but if you’re rushing against dry-ice-pack deadlines the situation is more fraught.

What are the regulatory consequences (as discussed here)? The press release says that the companies expect to reach the required safety data (two months after dosing) in the third week of November, and that they plan to file for an Emergency Use Authorization (EUA) after that. The complex rollout of this vaccine itself (see below) may mean that such an EUA, if granted, will not have as much of an immediate effect on the other trials, but this is still an open question, and there seems to be a good deal of debate at the FDA itself about how to handle things. I would assume that any EUA would be directed at the very highest-need populations; we’re not going to see a country-wide effort to vaccinate the population until sometime in 2021.

Remember as well that the efficacy levels we’re seeing this morning are after the second dose. If everyone magically got the vaccine this morning, we still wouldn’t all be able to breathe easy for nearly a month. And we are not all getting it this morning, for sure. It’s obvious that the pandemic is ripping through a long list of countries right now – cases are rising steeply, hospitalizations right behind, and although we’re better at treating the patients than we were, deaths are inevitably going up as well. This is all going to get worse before it gets better – but the good news here is that it really is going to get better. Keeping your head down, wearing masks and keeping your distance is more valuable than ever, because there’s an actual finish line in sight that you have to reach. Hang on for the vaccine – for the vaccines – because they really are coming.

We’re going to beat this. We’re starting to beat it right now. An extraordinary, unprecedented burst of biomedical research – huge amounts of brainpower, effort, money and resources – has come through for the world.

Aducanumab at the FDA

So it’s finally time for Biogen to sit down with an FDA advisory committee to look at their proposed Alzheimer’s therapy, the anti-amyloid antibody aducanumab. I last wrote about it here, back in December, and you know what? I haven’t changed my mind a bit, because (1) no new data have emerged (none were expected) and (2) I have not had a change of heart about the existing numbers. You can read the post and the links in it for more, but to put it briefly: I do not think that Biogen has demonstrated efficacy. I think that they have enough of a hint to run a better confirmatory trial, should they so desire, but they do not. They desire to go to the FDA, get the drug approved, and begin printing money.

And I would be all for a drug company printing money if they had a drug that could really alter the course of Alzheimer’s disease, but (once again) Biogen has not, in my opinion, demonstrated that they have anything like that. And I am definitely not all for a drug company printing money for something that really doesn’t do anyone any good. Because everyone knows what’s going to happen if aducanumab is approved: the pent-up demand for something, anything to treat Alzheimer’s is immense. Has been immense forever. There are a lot of people who have a family member with the disease, and they will demand treatment with the new drug that the FDA has approved to fight the disease. Who could blame them?

The FDA has released its briefing documents for the advisory committee meeting, and they took a lot of people by surprise (I was one of them). A key document is here, and a key section of it starts around internally numbered page 147 of the PDF. That’s the FDA clinical reviewer’s commentary on the efficacy data in the trial, and let’s say that he’s a lot more convinced than I am. Here’s a writeup on it at Stat, from Damian Garde and Adam Feuerstein.

There are two studies under discussion: 301 and 302. Broadly speaking, 301 (also known as ENGAGE) was negative, and 302 (also known as EMERGE) was positive, and the problem has been deciding which result to believe and accounting for why they’re different. There’s also a problem in that the data are incomplete; Biogen itself stopped work in the clinic when a futility analysis suggested that the antibody was not working, but revived their hopes with a post hoc analysis. (Here’s my writeup on this at the time). But the FDA clinical reviewer is very persuaded by the positive results, and believes that the negative trial was skewed by a population of fast-deteriorating patients. As I recall, Biogen’s rationale was that the negative trial looked better if you went for the subset of patients who got sufficient exposure; I don’t think they had advanced the fast-deteriorating-cohort explanation at the time, although I’d be glad to be corrected about that. Those rapid progressors seem to have mostly been in the high-dose arm of 301, and to be fair, that is where the greatest problem with the results showed up. And I mean “problem” in the sense of “worse than placebo”. But with the new analysis, the clinical reviewer is unbothered by 301, adds its good parts into 302, and concludes that the data are “robust and exceptionally persuasive”.

This is what goosed Biogen’s stock price the other day, as well it might. But even at the time, people were starting to notice the comments of the FDA’s statistical reviewer, which start on internally numbered page 247 of the same PDF. “Inconsistency on many levels summarizes the final clinical efficacy data from these trials,” is how things start off there, and the review goes on to note (among other things) the missing data at week 78 (due to the trial halt, and the time point that had shown the most evidence of benefit). “It is not justifiable to search 301 for patients who are most similar to 302”, it goes on to say, “because that may have selection bias and gives the impression that 302 is right and 301 is wrong, for which there is no justification not relying on post-hoc analyses”. Statements like that, and there are many, give one the strong impression that this reviewer had read at least a draft of the clinical reviewer’s work. Biogen, the review goes on to note, is advancing that post-hoc “rapid progressor” theory, but the reviewer says that after the completion of the study is too late to try to address that – and besides, an effective drug would be unlikely to fail so completely just due to rapid progressors in what was considered a study in people early in the disease.

Overall, the statistical reviewer says that the negative 301 study “should not be discounted without some extremely compelling reason (which there is not)”. After working through a number of the issues in the trial, the conclusion is that “. . .the totality of the data does not seem to support the efficacy of the high dose. . .the reviewer believes that there is no compelling substantial evidence of treatment effect or disease slowing, and that another study is needed. . .” The figures that follow in the PDF are worth a look too, if you’re into this sort of thing. They get into a great deal more detail than I have time or space for here in this post, but they constitute a sustained attack on the hypothesis that aducanumab has shown efficacy, and on the interpretability of the data presented for that case.

The fact that Biogen stock jumped on the release of this document tells you most of what you need to know about the stock market (and about investing in biotech stocks in particular). You wonder how many people even got as far as the statistical review section before hitting the big green Buy button. Now, I have no idea of the internal politics of this FDA review – and since the place is staffed by humans, it will have politics. But this looks like a deeply divided briefing document – frankly, I can’t recall seeing one that was more at odds with itself. I don’t know what the advisory committee will make of all this, either, although it has to be noted that one of the AdComm members, David Knopman of the Mayo Clinic, has been recused by the FDA and will not sit. He was involved in the Biogen trials, which is indeed a conflict of interest – but he emerged from the experience as a critic of the drug who thinks that another trial is the only way to answer the questions about it.

I’ve been worried about this sort of situation in Alzheimer’s for years, and I’m not the only one. I really wish that we weren’t fighting out a big approval decision under such conditions, but here we are. I have no idea of what’s going to happen in the advisory committee, although the loss of Dr. Knopman increases the chance of a favorable vote. And beyond that, I have even less of an idea of what’s going to happen with the eventual approval decision. I don’t think that the FDA should approve aducanumab as it stands, in the same way that I don’t think that they should approve any drug for which the evidence is this equivocal. But the agency approved Exondys, so who knows what they’re capable of? We’ll find out. Lucky us.

And if they do, we’ll find out how many physicians will prescribe it, and how many insurance companies will pay for it. It would be something to see both of these groups hold the line, but I fear that the pressures will be just too great. Biogen is counting on just that, and I’m not happy about it.

Shodhana Laboratories – Walk-In Interviews for Freshers, 4th to 7th November 2020

Walk-In Interviews for B.Pharm/ M.Pharm/ M.Sc/ Any Degree Freshers @ Shodhana Laboratories Limited

Job Description

We are happy to provide an Opportunity for Freshers

Qualification: Any degree – B.Pharmacy, M.Pharmacy, M.Sc, or any other graduation (male candidates only).
Contact: 7659095552 (WhatsApp).
If you are interested in Production only, you can attend the interview (male candidates only).
Email: [email protected]
Apply only if you are interested.
Venue & Time Details: Walk-in on 4th to 7th November 2020 at Shodhana Laboratories Limited, Plot No 24, 25, 26, IDA, Phase 1, Jeedimetla, Hyderabad – 500055, Telangana, India. Contact: 7659095552, 8885575417, 8096007111.


The Scientific Literature’s Own Pandemic

One side effect of the coronavirus has been an explosion of lower-quality publications in the scientific literature. This has come in several forms, some more excusable than others. In the former category are the papers that were rushed out earlier this year, observational studies that sometimes investigated possible therapies as well. These were often done under great pressure of time and resources, so it’s understandable that they had many possible confounding variables and were also statistically underpowered. These were from the “some data beats no data” era of coronavirus clinical reports, and these papers have been superseded by larger, better-controlled ones (as their authors surely fully expected). Some of those early observations have held up, and some of them haven’t.

The less excusable stuff has some subtypes of its own. There are the people who have thrown a colorful coronavirus tarp across their existing work to increase its chances of publication and/or funding, for one. This is an old and not particularly honorable scientific tradition, but no one’s surprised by it (and I hope that no one’s impressed, either). Beyond that, unfortunately, are people publishing stuff that would probably never appear at all if it didn’t have the currently fashionable lipstick and rouge applied to it.

You can find many examples of this at literature watchdog sites such as Retraction Watch and For Better Science. For example, here’s a paper on the mental health effect of the pandemic that’s so useful that the authors published it three times in nearly identical form. And there’s been a flood of deeply unimpressive work on Vitamin D, which will make it even harder to figure out if there’s anything worthwhile in the idea to start with. And Retraction Watch has been keeping a list of Covid-19-related retractions and expressions of concern, which will surely grow ever longer. Of course, there are plenty of papers out there (in this field and others) that haven’t been retracted but sure look as if they should be. For example, this thing, which just recently appeared in Science of the Total Environment, an Elsevier journal that I’d never heard of. That’s no particular distinction – Elsevier has a lot of journals that no one has ever heard of, and quite a few that people wish that they never had heard of, either. The title of the paper really says it all: “Can Traditional Chinese Medicine provide insights into controlling the COVID-19 pandemic: Serpentinization-induced lithospheric long-wavelength magnetic anomalies in Proterozoic bedrocks in a weakened geomagnetic field mediate the aberrant transformation of biogenic molecules in COVID-19 via magnetic catalysis”

That title is quite a ride. You have the unpromising TCM beginning, but then there’s a completely unexpected slide into geology, with a vertigo-inducing snapback at the end into biology via “magnetic catalysis”. Reading the paper itself does not resolve these feelings. It’s full of statements like “The discovery of the chiral-induced spin selectivity effect suggests that a resonant external magnetic field could alter the spin state of electrons in biogenic molecules and result in the magnetic catalysis of aberrant molecules and disease”, in which the verb “suggests” is doing an Olympic powerlift, and the weird and alarming “neither the SARS-CoV-2 infection nor the inflammatory reaction per se is the principal mediator of severe disease and mortality”. It’s the “serpentinization-induced resonant long-wavelength magnetic anomalies” that “induce the magnetic catalysis of iron oxides-silicate-like minerals (i.e., iron oxides, hyaline) from biogenic molecules and SARS-CoV-2 from endogenous viral elements in the genome”, you see. I realize that that last part might be hard to parse (I think they believe that viral particles are being produced endogenously?), but perhaps my house is built over the wrong kind of rock deposits or something. At any rate, one conclusion of the paper is that Nephrite-Jade amulets are appropriate personal protective measures against the pandemic, a recommendation that is completely in line with best practices from the Neolithic Hemudu-Majiabang culture in China, and who should know better, I ask you.

I think this is a load of tripe, personally. Old, smelly, unsaleable tripe, the sort that George Orwell’s landlady was unsuccessfully trying to unload onto customers in The Road to Wigan Pier. I find the scientific rationales unconvincing and hand-waving, the attempt to dethrone the germ theory of disease quixotic at best, and the recommendation to wear jade amulets to protect from disease to be flat-out bizarre. The Retraction Watch post linked above has some e-mail correspondence with the editors of the journal and the authors of the paper itself, and those don’t make me any happier, either.

Extraordinary claims, after all, need extraordinary evidence, and this paper just doesn’t bring anything close to what would be needed for its conclusions. For its premises. It features a lot of wild leaps between unrelated phenomena, while all the while claiming that all of these steps are perfectly reasonable and well-precedented. If you don’t know anything about biology, about geology, or about physics it probably sounds impressive. But its peer review and publication do no credit to anyone involved – not the editors, nor the journal, nor Elsevier, nor the authors, and definitely not the University of Pittsburgh’s Graduate School of Public Health, where it originated.

Longevity

Wouldn’t it be great if we could find a treatment to improve longevity? Not just a treatment for a specific disease, but one that could dramatically improve longevity for a large percentage of humankind. While there is a lot of longevity research going on and much progress has been made, there are still some challenges. Milan Cvitkovic writes about these challenges in his blog post “(How) should we pursue human longevity?” One issue is regulatory.

Before it can conquer death, a longevity treatment will have to conquer the U.S. FDA’s clinical trial process. The FDA doesn’t consider aging to be a medical indication (a.k.a. a valid reason for treatment). This means longevity companies have to choose an existing age-related indication (e.g. Alzheimer’s) to demonstrate efficacy of their treatment on. How to do this well is a key consideration for any longevity biotech.
More optimistically, regulatory changes like dual-track clinical trials or eliminating phase 3 trials altogether might entail a massive acceleration in longevity therapeutic development. Outsourcing or decentralizing clinical trials is also an exciting option. Science 37 is one company working on this.

Increased transparency and a ‘fail fast’ mentality would also be helpful.

Between 25% and 50% of clinical trial results remain unpublished even several years after completion. This isn’t surprising: why publish your clinical data when there’s zero gain to you and even a slim chance you can repurpose the asset in the future? But this means a potential downside of moving longevity efforts from research to commercial therapeutic development is that the field will learn less overall.
A related problem is that because liquidity events can happen far before convincing clinical data, biotechs are incentivized to push their risky studies until as late as possible (ideally after the employees get their money) and to gussy them up to look better than they really are. A fail-fast-and-publish-honestly mentality would of course be better for longevity overall.

Nintil’s very helpful “Longevity FAQ” post is also worth reading.

Down to the Atoms

I wanted to mention something that was reported a week or so ago, and that may sound a bit exotic or obscure if you’re not a structural biologist. But it’s yet another sign of a revolution in our ability to get structures of biomolecules (and others) that we never would have before, and the effects over the coming years are going to be profound.

I’m talking about the improvement in cryo-electron microscopy to get down to atomic resolution. I’ve spoken about cryo-EM several times before on the blog (most recently here and also elsewhere), since over the lifetime of the blog itself the technique has made huge advances. Now two papers in Nature report the latest, which takes the quality of cryo-EM data to a level that twenty years ago not many people would have ever expected to see.

The early 2010s brought a “resolution revolution” that isn’t over yet. It’s been a combination of several techniques and improvements in both hardware and software. Cryo-EM can be computationally intensive, so even if the hardware had been available in (say) 1985, we wouldn’t have been able to extract the information out of it that we can now (at least not in any useful amount of time!) But it’s not all just better processors and better software; the newer hardware is no small part of the story. We have better electron sources, better ways to keep the energies of the beam within a very narrow range and very precisely aimed, better detectors at the back end of the sample, new noise-reduction techniques in general – all of these and more have combined with all that software power to produce something amazing.

If you’re wondering what the big deal is, it might be summed up as “you don’t always have to grow crystals any more”. X-ray crystallography has been a huge technique ever since its beginnings early in the 20th century, and has advanced over the decades for some of the same reasons that cryo-EM has more recently: better X-ray sources (both brighter and tighter), better detectors, and better algorithms and hardware to run them on. Modern crystallography would floor the folks who were doing huge amounts of work to generate every single structure back in the 1950s. But you’ve still needed crystals, high-quality ones, to get the technique to work. And growing them is witchcraft. Really, it is. As I’ve said many times, all you have to do is walk into a protein crystallography lab and see the stacks of multiwell plates everywhere, full of dozens, hundreds, thousands of attempts to grow good crystals by varying the concentrations, varying the buffers, varying the protein constructs, varying the long, long list of salts and counterions and additives that people have tried over the years to coax the proteins into somehow lining up and forming orderly ranks.

As anyone who’s done the slightest bit of work in the field will attest, even once you see beautiful chunky crystals forming – and not those fluffy threads, those usually don’t work out – you’re nowhere near out of the woods. The first big question is “Is that actually your protein?” because those salts and additives can form nice crystals of their own, too, and the second big question is “Does it diffract?” Very decorative-looking crystals can turn out to be crap when the X-ray beam hits them: what you want is showers of well-defined diffraction spots, varying in sweeps of beautiful patterns as the orientation changes between the crystal and the beam, and stretching way out to the edges of your detector (the further out they go, the higher the resolution you will likely be able to wring out of the eventual data set). What you get, too often, is splorts and splotches, smeary gorp that starts out ugly and gets uglier as you watch. Robert Palmer warned us years ago that a pretty face don’t mean no pretty heart, and every crystallographer knows that he was right. Why yes, I have tried growing crystals and collecting data on them personally: did I give myself away?

But what if you didn’t have to do this? Cryo-EM doesn’t need crystals. You take your protein of interest and spread a solution of it out on a surface, which after a few more steps gives you a scattering of individual protein particles lying there every which way. And you hit them individually with your fancy electron beam and collect the results on your fancy detector, with all that data going into some very fancy software indeed. That software can, if all goes well, sort the particles you collected on into categories – edgewise, tilted, right down the middle, lying on the long side, and so on – and build a model of what the protein must look like that could have generated the data. Used to be, this process gave you fuzzballs, but they’ve been getting relentlessly sharper.
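As a toy illustration of just that sorting-and-averaging step (and only that step; real packages also handle particle alignment, CTF correction, and the full 3D reconstruction), here is a sketch in which simulated noisy “particle images” are clustered by view and then averaged, so that the underlying signal emerges from the noise:

```python
# Toy sketch of cryo-EM class averaging: cluster noisy particle images by
# their (hidden) view, then average within each cluster to boost the
# signal-to-noise ratio. Purely illustrative; real single-particle
# pipelines are far more involved.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Three hidden "views" stand in for projections of a protein.
views = [rng.normal(size=(32, 32)) for _ in range(3)]

# 300 particle images: each is one view buried in heavy noise.
images = np.array([views[i % 3] + rng.normal(scale=2.0, size=(32, 32))
                   for i in range(300)])

# Cluster the flattened images into presumed orientation classes.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    images.reshape(len(images), -1))

# Averaging within a class cancels the noise and recovers the view.
for k in range(3):
    avg = images[labels == k].mean(axis=0)
    best = max(np.corrcoef(avg.ravel(), v.ravel())[0, 1] for v in views)
    print(f"class {k}: correlation of class average with best view = {best:.2f}")
```

A single raw image here correlates with its true view at only about 0.45; the class averages come out far higher, which is the whole trick.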

And now we’re down to individual atoms, which is what a high-quality X-ray structure has always been able to deliver. As the resolution of these techniques improves, the pictures become startling. An aromatic ring, for example, like a phenylalanine side chain on a protein, is a fuzzy blob with lousy data. Better data show you that it’s a flattened blob, and you can start to see how it’s tilted in three dimensions. Produce higher-quality stuff, and that aromatic ring stops looking like a hexagonal lump of wood and starts becoming a hexagonal doughnut: you’re seeing the open space in the center of the six-membered ring. That level is very nice resolution for a protein structure, but the small-molecule crystallographers can push on to even more. Higher resolution still, and that doughnut turns into a ring made from six ball bearings: the individual carbon atoms. Beyond that, you can even start to see ghostly electron density between them, which is the shared electrons of the chemical bonds themselves. You can see the improvement in the real-world structures below, from this review article. The ball-and-stick stuff is the model, and the meshwork is what you actually get:

Note that the highest-resolution structures there are rare and difficult. But these two new cryo-EM papers have a well-behaved protein (apo-ferritin) resolved to just over 1.2 Å, and a membrane protein (a GABA receptor) at 1.7 Å. That’s really damned good, and the exciting thing is that the technology is still improving. This overview at Nature suggests that we may be getting near the end of improving the electron beams and the like, but that there’s plenty of room left in sample preparation and data analysis. This will allow us to see more and more detail on progressively harder-to-handle samples.

Consider cryo-electron tomography, where you apply these techniques to proteins in the cells themselves. This is a pretty intense technique, but it’s already providing insights into protein structure and function that we couldn’t get any other way. This stuff would have been considered impossible not all that long ago, and I’m very happy to be able to see it happening for real.

The Latest Antibody Data From Lilly and From Regeneron

We have a new paper in the NEJM from the Eli Lilly effort on monoclonal antibodies against the coronavirus. And there’s no reason not to be up front about it: it’s disappointing.

This is the BLAZE-1 trial (mentioned in this recent post), which is studying non-hospitalized patients. It was in another trial (ACTIV-3) that this antibody was found to be ineffective in hospitalized patients, but the numbers from this trial are not all that impressive, either. I’ll be adding my voice to the chorus – here’s Jason Mast at Endpoints with a roundup of the reactions so far. The trial (which is still continuing) is evaluating single infusions of 700mg, 2.8g, or 7g of the monoclonal in people with mild-to-moderate infection. The primary endpoint is change in viral load at Day 11 after dosing, and what we’re seeing here are the results from an interim analysis in early September.

Here’s the problem: only the middle dose reached statistical significance on that endpoint. That’s not great from a dose-response standpoint – as usual, the easiest result to understand is seeing the effects increase with increasing dose. There are most certainly “U-shaped” dose responses, with the U facing either up or down, but with three doses it’s hard to know what that U is like or why it’s shaped the way it is. You can bet that Lilly would rather not have seen the results come out the way that they did. One problem was that by Day 11 even the placebo standard-of-care group had a trend towards viral clearance, but the bigger problem is that the antibody wasn’t able to distinguish itself very well from that.
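To see why that pattern is awkward, here is an illustrative sketch of the per-arm analysis, with entirely made-up numbers (the real BLAZE-1 data are in the paper): each dose arm is compared separately against placebo on the change in log10 viral load, so a middle dose can clear the significance bar while the doses on either side of it do not.

```python
# Illustrative only: per-arm comparison of a viral-load endpoint against
# placebo. The effect sizes are invented to mimic a non-monotonic result;
# they are not the BLAZE-1 numbers.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
placebo = rng.normal(-3.5, 1.5, 150)  # placebo also trends toward clearance
arms = {"700 mg": rng.normal(-3.6, 1.5, 100),
        "2.8 g": rng.normal(-4.1, 1.5, 100),
        "7 g": rng.normal(-3.7, 1.5, 100)}

for dose, values in arms.items():
    t, p = stats.ttest_ind(values, placebo)
    diff = values.mean() - placebo.mean()
    print(f"{dose}: difference vs placebo = {diff:+.2f} log10, p = {p:.3f}")
```

With effect sizes like these, only the 2.8 g arm is likely to reach p < 0.05, which is roughly the shape of the result Lilly reported.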

And that’s especially true since the company has asked the FDA for an Emergency Use Authorization for the 700mg dose, arguing that the viral load data – which was the primary endpoint, remember – isn’t the most important thing to consider. But that immediately makes you want to ask them why they made it the focus of the study, and makes this argument sound (unavoidably) like ex post facto special pleading. You also have to wonder what dose the FDA might approve for an EUA, if the FDA is minded to give them one at all. The effective dose in the trial was obviously four times as large as what Lilly is asking for.

When you look at other outcomes, there are trends for symptoms clearing a bit faster, a trend for slightly fewer people to be hospitalized at Day 29, and so on. But the antibody therapy (at any dose) does not distinguish itself much. I really hoped for better, but (all together now) this is why we run the trials. Sometimes we find things that work better than we expected, and sometimes we find a perfectly reasonable idea doesn’t seem to do as much as anyone would have predicted. I think the best hope for a really compelling use for this antibody is in the ongoing prophylaxis trial, but after seeing these data I wonder how that one’s going to read out, too.

Regeneron has also released more data on their antibody cocktail (although in another press release and not in a journal manuscript). This is another non-hospitalized-patient study, in about 800 people. They’re not seeing much difference between their two doses (2.4g and 8g) either, so they’re asking for an EUA for the lower one, naturally. The numbers overall look more interesting than the Lilly ones (although it has to be said that these are different trials with different enrollments, etc., so you can’t necessarily just line things up head-to-head). That said, the Regeneron therapy does show significant results for reducing viral load and for follow-ons like reducing medical visits. The best results were in the most at-risk patients, whether that’s defined as higher viral load, pre-existing risk factors, or weak antibody response. That last one was the group highlighted in Regeneron’s earlier look at their data, you may recall. But the Lilly data also look best in these cohorts – that, at least, is the result you’d have expected.

Does this mean that an EUA would feature a recommendation to use either of these therapies preferentially in high-risk patients? Or a call to get an antibody readout before starting therapy, prioritizing those people who aren’t mounting as good a response? Or is there going to be one at all?

My guess is that yes, we’ll see such a move. But how much difference it makes, that’s another question. As Matthew Herper details in this column at Stat, we have not prioritized production of these antibodies the way that we have vaccines, so there are (at the moment) only about 50,000 doses of the Regeneron cocktail available. A look at the map will suggest that the entire stockpile, even if given immediate authorization and Star Trek transporter-style distribution, would vanish into the national epidemic like a rock thrown into a pond. You’d have to look very hard to see its effects at all. We had around 80,000 new cases yesterday, and that number is going up. 50,000 doses of an antibody that seems to help cut down on hospital visits is not going to make much of a dent in that situation.

Rehash: The Health Assurance SPAC

Not so long ago (August) Jessica DaMassa and I ran a THCB Bookclub interview with Hemant Taneja & Stephen Klasko about their new book UnHealthcare. And, just because, their friend Glen Tullman sat in…

Fast forward to this week and the three of them plus a cast of characters from General Catalyst & Livongo (Jenny Schneider, Lee Shapiro) have put $500m of their Livongo winnings into a SPAC. The book is based on the idea of Health Assurance and so is the SPAC. So if you are interested in figuring out what they are up to and what they might do or buy, here’s the interview – Matthew Holt

Click Chemotherapy

So here’s an ambitious idea that’s about to get a hearing in human clinical trials. A startup called Shasqi is using click chemistry as a drug delivery method, and they have a new manuscript on the idea here at ChemRxiv.

The idea is this: you produce a modified version of a hyaluronate biopolymer, decorated with aryl-tetrazine functional groups. The tetrazine/cyclooctyne reaction has been exploited many times (here’s an example) since it was introduced by Joseph Fox’s group at Delaware some years ago. Under the banner of click chemistry, as introduced by Barry Sharpless, the idea is that the two components have little or no reactivity except for each other, but will react if they merely come into proximity under the right conditions.

So you take the tetrazine-laced biopolymer and inject it into a tumor site, where it is expected to largely sit there. You then inject a chemotherapy agent that has been modified to contain a cyclooctyne group – and in this system, that modification is through an ester/carbonate/carbamate linkage to a cyclooctynol group. That does several things for you, ideally: if optimized, it makes the chemotherapy agent less effective until that ester is cleaved, so larger doses of it can be given with a better safety profile. And when it encounters the tetrazine-containing polymer, it does the cycloaddition click reaction which then cleaves the ester and releases the free drug at the site of the tumor.

It’s a pretty slick idea. Does it work? The team published a proof-of-concept in a mouse model in 2014 (open access link) using a radioligand, and it really did seem to localize the agent at the site of injection when it was delivered a few hours after the hyaluronate polymer. In the latest paper, they prepared cyclooctynol versions of the well-known chemotherapy drugs doxorubicin, paclitaxel, etoposide, and gemcitabine. This was not trivial chemistry, as a look through the paper will show, but they were able to make examples of all four modifications.

Then came the evaluations. The modified versions of doxorubicin and etoposide were indeed less cytotoxic than the parent compounds, but unfortunately the other two were not. Helpfully, the modified doxorubicin also had better solubility than the parent compound. It also had better plasma stability than the modified etoposide, so that made it the obvious choice to proceed with. And it does indeed react with the tetrazine hyaluronate polymer in vitro, releasing doxorubicin itself.

What about in vivo, though? In a mouse model, the modified dox compound could be dosed up to 10x the maximum tolerated dose of doxorubicin itself, so that part checked out. And the pharmacokinetics checked out as well: when mice were injected with the modified doxorubicin and then given an injection of the biopolymer, blood levels of the former compound dropped quickly (by over 2000-fold), with an increase in free doxorubicin at the same time. This phenomenon repeated over multiple daily injections of the modified doxorubicin, albeit with slight decreases in capture over time. These are presumably due to gradually diminished tetrazine sites on the polymer and/or its degradation in vivo, but the effects were still significant all the way through. Exposure to plain dox was significantly lower in peripheral tissue compared to a normal dosing protocol, as were its adverse effects in sensitive tissues such as cardiac muscle.

And here is another preprint in which this system is extended to mouse xenograft models, with effects both on the injected tumor and distal ones. So the idea, up to this point, appears worth trying out. The company has started dosing patients in a Phase I trial in people with various solid tumors who are ineligible for standard-of-care treatment. That’s a tough population to show good effects in, but that will make it all the more interesting if they can deliver. There are several places where things could go wrong, of course. The modified polymer and/or the modified doxorubicin could turn out not to be well tolerated in patients. The pharmacokinetics of the click-capture mechanism could be looser or less dramatic than they were in the mouse model (and this could vary from patient to patient and across different sorts of malignancies). And finally, there’s going to be only so much that doxorubicin itself can do for some of these cases. It’s not an infallible tumor destroyer – we don’t have too many of those – and the current trial will (at best) get the most out of doxorubicin treatment that it has to give.

But if the idea is sound, there are plenty of other applications for it, with more labor on the med-chem synthetic front to produce more modified molecules. Different compounds, cocktail treatment regimes, combinations of click-labeled drugs, even – there will be a lot to investigate, if it looks worthwhile. Let’s see if it does!

Will California Produce and Distribute Biosimilars and Generic Drugs?

As AJMC reports, Governor Gavin Newsom signed into law legislation that will allow California to produce and distribute its own line of biosimilars, biosimilar insulins, and generic drugs. My thoughts on SB-852 are covered in an interview in Radar on Specialty Pharmacy.

Shafrin tells AIS Health that there are two key reasons for manufacturers to contract with California. First, the drugs will be available without rebates. For this reason, “manufacturers could lower their list price but not lose much funds by simply cutting out the middleman — the pharmacy benefit managers — and avoiding having to pay rebates.”
Second, he notes that “California is a large market, and it could be the case that these ‘Made in California’ drugs would have preferred status among California payers. The bill notes that the state needs to consult with key state purchasers including Public Employees’ Retirement System, the State Department of Health Care Services, the California Health Benefit Exchange (Covered California), the State Department of Public Health, the Department of General Services, and the Department of Corrections and Rehabilitation. Getting access to all these large California payers could be lucrative if the reimbursement price is reasonable.”

Do read the whole thing (subscription required).

More Antibody Data

Unfortunately, we’re getting a dose these days of “That’s why you run clinical trials”. Word came Monday evening (Peter Loftus in the WSJ, and a Lilly statement) that the ACTIV-3 trial being run by the NIH has shown a lack of efficacy for the Eli Lilly/AbCellera anti-coronavirus antibody (bamlanivimab, LY-CoV555) when combined with remdesivir in hospitalized patients. This is the trial that had been paused earlier this month for safety concerns, but now no more patients will be enrolled at all. A thorough review of the data showed that there were, in fact, no differences in safety between the arms of the trial with and without the antibody – but there was also no difference in efficacy.

Lilly’s own statement notes that their own BLAZE-1 trial continues. That one is studying patients who are not hospitalized, with this antibody as a monotherapy and in combination with another anti-coronavirus antibody candidate (etesevimab, LY-CoV016). The company provided some data earlier this month on this trial, and indeed has said that they’re asking for an Emergency Use Authorization for high-risk patients on the basis of the numbers so far. There’s also the NIH ACTIV-2 trial, which continues to study the antibody in mild-to-moderate disease, and Lilly’s BLAZE-2, which is looking at using it for prophylaxis in patients and staff at long-term care facilities. It would be a coherent story if these trials show positive data – that way we could say that this therapy needs to be given early, and is of little use after someone has already been hospitalized. That makes a lot of sense, too – but you know what? A lot of things make sense that still aren’t real. We have to get the clinical data to be sure.

As if to underline this point, we’ve also recently had a controlled trial of convalescent plasma report its results (Damian Garde at Stat, and the article in the BMJ). Recall that the earlier Mayo Clinic data on this had no control group, making it frankly impossible to say how much benefit the plasma treatment might have had. But the new PLACID trial was a study of 464 patients across India, randomized versus a control arm, and it found no benefit to the plasma treatment at all. One objection is that many of these patients may have been dosed too late as well – it seems many were already developing their own antibodies – and it’s possible that if there is a benefit, it will only show up with earlier treatment. But we don’t know that yet. Right now, convalescent plasma treatment is, at the very least, not looking spectacular, and could even be more or less useless, which would be quite disappointing.

You’ll notice a theme here, to which we can add the recent remdesivir data from the SOLIDARITY trial. Earlier data had suggested that the drug has some positive effects (although not dramatic ones) but the larger data set didn’t bear that out. What these things do is set bounds for what you can expect for a given therapy: the level of response in these cases means that even if there is some benefit, it’s going to be limited (or, at the most optimistic, limited in these particular populations of patients). A Wonder Drug would have given a stronger signal, and none of these are wonder drugs. The question is whether there’s any signal left after more waves of well-controlled data come in – and if so, what patients can get what benefits there are.

A month ago, for example, Regeneron said that their own monoclonal antibody therapy showed some signs of benefit, particularly in patients who had not mounted their own immune response. But if that’s not going to be the best-responding group, who is? Perhaps this study ran into the same problem as the PLACID trial above, and that could mean that monoclonals won’t add that much to people who are already producing their own antibodies. But if the Regeneron antibody cocktail worked wonders, we would have seen something more impressive. It might be that prophylaxis is where these therapies will show their worth (can’t dose much earlier than before a person has the disease at all!) I very much hope so. I will admit to having expected better from the mAbs in general, though, and even from convalescent plasma.

Prophylaxis frankly might be an easier public health decision than one that relies on very early treatment. After all, most cases of coronavirus resolve, and most people don’t end up in the hospital. When do you make the call for an expensive bolus of monoclonal antibodies? Your best shot is probably to catch the highest-risk patients as early as you can, and hope that it’s still early enough. But if such treatment can protect high-risk workers (sort of like a temporary vaccine), then you can far more easily pick out the people who could benefit.

Of course, right now we’re heading back into high case loads in many parts of the country, with some (like Utah) already beginning to hit the wall of hospital capacity. Deploying mAbs under these conditions is going to be more complex than ever. Let’s hope that we get some more clinical data soon to guide these decisions. There really does look to be a gap before any vaccine rollout where the antibodies could help. The key word being “could”. Might. Maybe. Clarity has been in short supply in 2020, and here we are calling for more of it.

Medtronic’s Abre Venous Stent Receives the US FDA’s Approval to Treat Venous Outflow Obstruction

Shots:

  • The approval is based on the ABRE clinical study, which assessed the Abre stent in 200 patients with iliofemoral venous outflow obstruction across the spectrum of deep venous obstruction, including those with post-thrombotic syndrome, non-thrombotic iliac vein lesions (NIVL) and those who presented with acute deep vein thrombosis (aDVT). The study also included a challenging patient population, 44% of whom required stents that extended below the inguinal ligament into the common femoral vein (CFV)
  • The study met its 1EP of safety with a 2% rate of major adverse events (MAEs) within 30 days and its 1EP of efficacy with an overall primary patency rate of 88.0%; no stent fractures and no stent migrations were reported in the study
  • The Abre venous self-expanding stent system is indicated for use in the iliofemoral veins in patients with symptomatic iliofemoral venous outflow obstruction; it received CE Mark approval in April 2017

Click here to read the full press release/article | Ref: Medtronic | Image: elEconomista.es

The post Medtronic’s Abre Venous Stent Receives the US FDA’s Approval to Treat Venous Outflow Obstruction first appeared on PharmaShots.

Are Medicinal Chemists Taking It Too Easy?

I was speaking to a university audience the other day (over Zoom, of course) and as I often do I mentioned the studies that have looked at what kinds of reactions medicinal chemists actually use. The cliché is that we spend most of our time doing things like metal-catalyzed couplings and amide formation, and well, there’s a reason that got to be such a cliché, because there’s a lot of truth in that.

At the same time, there’s some evidence that innovative drug molecules come with innovative structures, more often than you’d expect by chance. It’s for sure that some of the hottest research areas right now (such as bifunctional protein degraders) can produce some rather off-the-beaten-track structures. So how do we reconcile these? Can we be making innovative drugs using a bunch of boring reactions?

This new paper (open access) says that yes, we sure can. The authors (from AstraZeneca) first note that about a third of all the reactions in AZ’s electronic notebooks are amide couplings, which sounds about right. They assembled two random sets of 10,000 compounds that had been made and screened in at least two assays, with one of them featuring amide formation and the other with it specifically excluded. These sets (Amide Formation and Other Reactions) were then evaluated by various techniques to roughly measure structural complexity, diversity, and novelty, and in addition the targets that they had hit in past AZ screens were examined.

And as it happens, the Amide Formation set had similar, but slightly higher complexity than the Other Reactions set. The two sets were virtually identical in lipophilicity and percent of saturated carbon atoms, but the Amide Formation set was slightly higher in molecular weight and in the number of chiral centers. As for molecular diversity, two different measurements broadly agreed: the Other Reactions set covered more diversity space, but the two sets also had significant non-overlapping regions. That is, the Amide Formation set was not just contained inside the larger diversity space carved out by the Other Reactions set, but had space all its own as well. And there was no real difference in novelty between the two sets, as measured by the number of structures that already occurred in databases such as ChEMBL. And when historical assay behavior was examined, the Amide Formation set had more active compounds in it, while the Other Reactions set covered a slightly wider range of assays. But the two sets had a large overlap in the actual targets covered, so there was, in the end, not a significant difference between the two in “target space”.
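If you want a feel for how such set-versus-set comparisons can be run, here is a minimal sketch using RDKit (my own illustration – the paper’s actual descriptor suite and methodology are more elaborate, and the SMILES strings below are toy placeholders, not compounds from the paper):

```python
# A rough sketch of comparing two compound sets on simple diversity and
# complexity proxies. Requires RDKit; the SMILES are placeholders.
from itertools import combinations
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem, Lipinski

def profile(smiles_list):
    mols = [Chem.MolFromSmiles(s) for s in smiles_list]
    fps = [AllChem.GetMorganFingerprintAsBitVect(m, 2, nBits=2048) for m in mols]
    # Mean pairwise Tanimoto distance as a crude diversity measure
    dists = [1 - DataStructs.TanimotoSimilarity(a, b)
             for a, b in combinations(fps, 2)]
    # Fraction of sp3 carbons as a crude saturation/complexity measure
    fsp3 = sum(Lipinski.FractionCSP3(m) for m in mols) / len(mols)
    return sum(dists) / len(dists), fsp3

amide_set = ["CC(=O)Nc1ccccc1", "O=C(NC1CC1)c1ccncc1", "CC(C)NC(=O)CO"]
other_set = ["c1ccc(-c2ccncc2)cc1", "OC1CCN(Cc2ccccc2)C1", "CCOc1ccccc1"]
print("amide set (diversity, Fsp3):", profile(amide_set))
print("other set (diversity, Fsp3):", profile(other_set))
```

Scaled up to ten thousand compounds per set, numbers like these are what let you say whether one reaction class is actually boxing you into a smaller corner of chemical space.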

The authors suggest that one reason that so simple a reaction as amide formation can hold its own (versus so many other possibilities) is that there are more and more unique amines available for such reactions. They looked through the ELNs for one-step amide couplings that made compounds for testing and examined the amines involved. On average, 8,000 different amines were used each year for such reactions, and every year about 2,000 of them were new. The authors:

In practice, building-block availability is one of the main determining factors. If the desired building blocks are unavailable, the chemist is faced with the decision whether to invest in new route development, or to make analogs with established routes, or to avoid making the target molecule at all. Given the uncertain nature of drug design, investing more time and resources in making a compound does not guarantee improved molecular quality. . .

. . .In medicinal chemistry, we have now reached a state where millions of building blocks have previously been engineered and can now be used in molecular design and synthesis. In addition to the increase in the number of new amines, boronic acids have been another fast-expanding reagent class since the introduction of the Suzuki coupling method

That really has been a change over my career. There are just so many more neat little functionalized compounds available now; it’s become an entire business of its own. As the paper notes, you even have setups such as Enamine’s REAL compound set, which is a virtual-but-easily-made collection via mixing and matching their available building blocks. That one would come out to well over a billion compounds if someone placed an order for the whole collection.

And if we can get our work done via such easy reactions – plenty of experience in doing the reactions, relatively easy purifications, existing scaleup expertise, and so on – then why shouldn’t we? (I should note that the paper under discussion has a lot of good references to past arguments about this issue). That gets to another point I was emphasizing to my university audience: medicinal chemistry is a means to an end. The end, of course, is the discovery of useful drug molecules, and if the synthetic chemistry can (as much as possible) get out of the way of all the other tricky steps in that process, then so much the better.

That’s not to say that we shouldn’t try new reactions or new technologies. Among other things, these can lead to even more new building blocks that can feed into the easy reactions themselves. And God knows, as you develop the SAR of a compound series you may find yourself unavoidably being pushed into difficult chemistry, where you will need all the help you can get and throwing amide couplings and Suzukis at the problem will avail you not. No, we definitely need our skills and our imaginations – but we need them for the times we need them, and when we don’t need them we should speed drug discovery along with the best tools we have for it. To paraphrase Einstein about physical theories, a synthetic route should be as simple as possible, but not any simpler. Getting as much done as you can with the easy methods leaves you more time to tackle the hard stuff. Get flashy only when you have to.

Why doesn’t the US have nationalized health care?

This is one of the main topics explored by the interesting book Remedy and Reaction by Paul Starr. The U.S. has gotten close to national health insurance programs a number of times. One of the first times was in New York after World War I.

The one state where compulsory health insurance came close to passage was New York…In March 1919…the New York State Senate…[passed] a health-insurance bill that had the support of the recently elected Democratic governor Al Smith…

If New York had adopted a health insurance program in 1919, it might have had national ramifications. The unemployment insurance program that Wisconsin adopted in 1932 helped pave the way for federal legislation in 1935. In Canada, the health insurance plan adopted in Saskatchewan in 1946 played a comparable role as a stepping stone toward a national program.

Private health insurance in the US developed out of the need to finance the high cost of hospital services. In the early 20th century, health insurance was most valuable for replacing wages and providing a funeral benefit. As the cost of medical care grew in the late 1920s and 1930s, the value of health insurance to cover medical costs grew.

…in the late 1920s and early 1930s individual hospitals and hospital associations in Texas, California and other states created the first plans for groups of employees to buy insurance for hospital expenses. These plans, which evolved into the Blue Cross system, were run on a non-profit basis and at first covered only a small number of people…

Another reason why nationalized health insurance has not passed is that physicians have generally been opposed to government health insurance.

In the late nineteenth and early twentieth centuries, legislatures enacted progressively stricter licensing laws for doctors, raising requirements for medical education; the effect of those laws was to close many medical schools and reduce the supply of physicians at a time when demand for their services was growing. Under these conditions, doctors were able to increase their fees and incomes (which was one reason why organized professional support for government health programs diminished).

Many government health insurance programs would pay physicians based on global capitation, which would likely drive down demand and wages, particularly for specialist services.

The Vaccine Tightrope

We’re getting closer to having to deal with a number of tricky issues around the first Emergency Use Authorizations (EUAs) for coronavirus vaccines. These have never quite come up in this way before, because (for one thing) EUAs for vaccines are relatively rare events, and (for another) we’ve never had so many simultaneous vaccine trials against the same disease before.

So let’s just stipulate that Somebody (be it Pfizer, Moderna, AstraZeneca, whoever) asks for an EUA before all the other Somebodies, and that this request is granted. I don’t know when this is going to be – December? January? Whenever. Someone is probably going to be first, and it could well happen somewhere around then. The actual date doesn’t matter so much as what happens relative to the event.

At that point, it means that there is a coronavirus vaccine that has been authorized for human use. Immediately we have all the rollout concerns about who gets it, in what order, where, how much vaccine is available and how it’s going to be distributed. We’ve talked a bit about that here, and there seems to be a lot of planning going on around these questions (as there had better be!) But what if you were in the placebo group of the very vaccine trial that led to the EUA? Will this trial be unblinded, or not, and will such participants be given the chance to receive the actual vaccine? What if you’re one of the many thousands of people in the placebo groups for the other vaccine trials, the ones that have not yet been authorized for use?

Steve Usdin goes into these questions here at BioCentury (article is free to read). He notes this recent FDA guidance document that goes into some of these issues:

It is FDA’s expectation that, following submission of an EUA request and issuance of an EUA, a sponsor would continue to collect placebo-controlled data in any ongoing trials for as long as feasible and would also work towards submission of a Biologics License Application (BLA) as soon as possible. FDA’s recommendations regarding the safety and effectiveness data and information outlined below are essential to ensure that clinical development of a COVID-19 vaccine has progressed far enough that issuance of an EUA for the vaccine would not interfere with the ability of an ongoing Phase 3 trial to demonstrate effectiveness of the vaccine. . .

“As long as feasible” is a well-crafted way of putting it. But we need to ask just how long that might be. You can see from the latter part of the quote that one of the considerations for issuing an EUA at all will be whether it might fatally disrupt the ongoing Phase III, and indeed, the FDA goes on to say that they don’t consider a vaccine EUA itself as grounds for unblinding. This leads us to the various clinical trial designs and interim data analyses, and how they fit in with an EUA request. It’s important to understand what the interim data readouts are designed to be able to say, which (translated into English from statistics) is something like “the distribution of coronavirus cases observed so far is not inconsistent with the vaccine having an eventual efficacy at the end of the trial of at least X per cent, at a certain pre-defined confidence level”. This sounds rather less definitive than what I think the general public might imagine the unblinding of clinical trial data looks like, and I should also add that it can take longer than you’d think (once you have the unblinded data) even to be able to say that much. This is not like rubbing off a lottery scratch ticket, that’s for sure. Rarely does a trial read out in a way that you look at the raw data and say “Holy guacamole, I think that stuff kicked butt”, although it should be said that obvious butt-kickings in the other direction are somewhat more common.
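To make that “not inconsistent with” language a bit more concrete, here’s a back-of-the-envelope sketch (my own illustration, not any sponsor’s actual statistical analysis plan) of how a case split translates into an efficacy estimate and confidence interval. The case numbers are hypothetical:

```python
# With 1:1 randomization and roughly equal follow-up, the split of observed
# cases between the vaccine and placebo arms drives the efficacy estimate.
# This is a frequentist toy version; real trial designs differ.
from scipy.stats import binomtest

def interim_efficacy(vaccine_cases, total_cases, level=0.95):
    """Vaccine efficacy and an exact CI from the case split, assuming
    equal person-time at risk in the two arms."""
    ci = binomtest(vaccine_cases, total_cases).proportion_ci(
        confidence_level=level, method="exact")
    to_ve = lambda theta: 1.0 - theta / (1.0 - theta)  # case share -> VE
    point = to_ve(vaccine_cases / total_cases)
    # A low share of cases in the vaccine arm means high efficacy,
    # so the interval bounds swap when transformed.
    return point, to_ve(ci.high), to_ve(ci.low)

# Hypothetical interim look: 8 of 94 total cases in the vaccine arm
point, lo, hi = interim_efficacy(8, 94)
print(f"VE ≈ {point:.0%}, 95% CI ({lo:.0%}, {hi:.0%})")
```

Note how wide that interval still is with fewer than a hundred cases in hand – which is exactly why an interim look is a statement about bounds, not a final answer.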

But I’m not expecting any obvious failures of that sort, to be honest. Clinical trial success rates for vaccines against infectious diseases are (according to these estimates) the absolute best in any therapeutic area in the whole industry. Now, that means that a full one-third of those trials are successful, as opposed to about 3% of the oncology trials (to pick the therapeutic area at the other end of the scale!) But we’re going to do even better than that. Thanks to the SARS and MERS epidemics, we had a real advantage on this coronavirus, in terms of what the most likely antigen might be for a successful vaccine, what to look out for in animal models, and so on. If we get some weirdo pandemic from a less-studied group of viruses, it’s going to be a lot harder, and let’s hope we never get a chance to find out just how much. No, I expect the current vaccines to all work to some degree, and the whole point of running the trials is to figure out what that degree is and how it compares to what we need.

Thus all the statistics, and thus all the hard decisions. We may get into a situation where an interim readout of the data shows that a vaccine may well be working, but that granting an immediate EUA has a real danger of blowing the statistics for the complete trial. That is truly the worst outcome: ending up with something that might be useful, but being unable (despite all the time and money and effort) to say if it really is. We’ve got to avoid that.

But the patients involved in all these trials may have other ideas. Each individual that decides to leave the trial protocol may feel that loss of their own data is not enough to affect the overall result, but if enough people think that way, that result will most certainly suffer – a tragedy of the clinical commons. It’s important to remember that the “tragedy of the commons” doesn’t have to happen every time it could: there are plenty of examples of it being avoided, and we have to make sure that this is going to be another one. The considerations run the other way as well – it may end up being incumbent on the trial organizers, from an ethical perspective, to break the blind and offer the vaccine to all participants. “We should have such problems” is my first reaction to that possibility, but we’ll have to make that call carefully, not ruling out such a decision but not leaping to it, either.

But let’s get back to the question of what happens to the other vaccine trials. In the same way that you can’t force the participants of the emergency-authorized vaccine trial to stay in it, you also can’t force the participants in the other trials not to get the newly authorized one. An additional problem is that I strongly suspect that many participants across all the US trials have a good idea of whether they’re in the placebo group or not. AstraZeneca has been giving a meningitis vaccine in the control group in the UK (and other countries?), but their US trial (and Moderna’s, Pfizer’s, and J&J’s) are using saline control. I’d have to figure that someone who got notable site-of-injection reactions knows they got the real vaccine, while someone who hasn’t (especially after a second injection) figures that they’re in the control group. What’s to keep the latter from going out and getting vaccinated, if another vaccine is available to them?
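Just to illustrate how informative those reactions are, here’s a quick Bayes calculation – the reaction rates are made-up round numbers for illustration, not trial data:

```python
# A toy Bayes sketch of participant self-unblinding: with a saline placebo,
# a strong injection-site reaction is strong evidence of being in the
# vaccine arm. All rates here are assumptions, not measured values.
def p_vaccine_given_reaction(prior=0.5,          # 1:1 randomization
                             p_rxn_vaccine=0.7,  # assumed reactogenicity
                             p_rxn_placebo=0.05):  # assumed for saline
    joint_vaccine = prior * p_rxn_vaccine
    joint_placebo = (1 - prior) * p_rxn_placebo
    return joint_vaccine / (joint_vaccine + joint_placebo)

print(f"P(vaccine | reaction) ≈ {p_vaccine_given_reaction():.0%}")  # ~93%
```

Even with generous uncertainty in those assumed rates, a sore arm after a saline-controlled trial injection is a pretty loud hint.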

And how will all this affect the coming clinical trials for other vaccines? Novavax is an obvious example – they have a recombinant protein vaccine coming along, a modality that hasn’t yet been in human trials against the current coronavirus, and one with potential advantages for storage and distribution. Then you have various companies investigating things like nasally-administered vaccines, which also could have advantages that will only be made clear in large controlled trials. How will anyone run them, if there’s another vaccine (or two, or three) available? One answer, as the BioCentury article goes into (based on the FDA documents) is that you might have to switch over to using the authorized vaccines as the control group and run “non-inferiority” trials instead of placebo-controlled ones. That’s going to be a tricky gear shift, though. Here’s Usdin:

It is far from clear, however, that non-inferiority trials of COVID-19 vaccines would be feasible. Moreover, the difficulty of doing this could presage pressure on FDA to accept external control arms or real-world data as controls. Demonstrating non-inferiority to an effective intervention can require very large trials or the acceptance of large confidence intervals around the results. The placebo-controlled Phase III COVID-19 trials already include at least 30,000 participants, and there will be little acceptance of uncertainty about the efficacy of a vaccine to prevent a life-threatening disease.

These considerations are getting less hypothetical all the time. As noted in the BioCentury article, J&J is already openly asking the FDA and the other vaccine developers to work together and ensure that an EUA and unblinding event doesn’t create a de facto monopoly in the vaccine space. There will be a number of ways to get this wrong, and I’m glad that the issues are at least out there being talked about. Next come actions.

The Machines Rise a Bit More

Here’s a new paper in Nature on computer-generated synthesis of natural products. More formally, you’d call it retrosynthesis, since the thought process in organic chemistry tends to work backwards when you have a particular target that you’re trying to make: “OK, this part could be made from something like this. . .and that, you could make by condensing two pieces sort of like these. . .”

You work back to more accessible starting materials, based on the transformations that you know about or can picture being feasible. For simpler molecules, it’s the kind of thing you ask sophomore students to do on one of the higher-point-value questions at the end of the test. But for larger and more complex ones, it can be a great deal of work. The “decision tree” about what pathways to use to build up a tricky structure can be huge, and the relative advantages of each are not always obvious. Some of the things that we chemists do value, though, are brevity (fewer steps are almost always better), high yields in each step (because even 90% yield per step will whittle your material away surprisingly quickly), use of readily available/inexpensive reagents and materials (especially important to industrial chemists, obviously), reproducibility (no one goes in and tries to reproduce a 35-step total synthesis for the heck of it, but if you ran step 26 fourteen times and it only gave you a decent conversion once, that’s bad form), and what we all call “elegance”.
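At its core, that decision tree is a recursive search: disconnect the target, then disconnect the pieces, until everything is purchasable. Here’s a deliberately tiny sketch of the idea (molecules are just labels here and the “templates” are made up – real programs like Chematica work on actual structures with vastly larger rule sets and scoring functions):

```python
# A toy retrosynthetic search over hypothetical disconnection "templates".
# Each template maps a product label to precursors that could make it.
TEMPLATES = {
    "amide": ("acid", "amine"),
    "acid": ("ester",),
    "biaryl": ("arylboronate", "arylhalide"),
}
STOCK = {"amine", "ester", "arylboronate", "arylhalide"}  # purchasable

def retro(target, depth=5):
    """Return one route (a list of disconnections) back to stock
    materials, or None if the search bottoms out."""
    if target in STOCK:
        return []
    if depth == 0 or target not in TEMPLATES:
        return None
    precursors = TEMPLATES[target]
    route = [(target, precursors)]
    for p in precursors:
        sub_route = retro(p, depth - 1)
        if sub_route is None:
            return None  # a real program would backtrack to other templates
        route.extend(sub_route)
    return route

print(retro("amide"))  # [('amide', ('acid', 'amine')), ('acid', ('ester',))]
```

The hard parts are all hidden in that toy, of course: generating valid disconnections from real structures, scoring millions of branches, and knowing when a step that looks worse now pays off later – which is exactly where the modifications described in the new paper come in.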

“Elegance” is hard to define, but one aspect of it is the “I didn’t see that coming” factor, where parts of a complex molecule are assembled more quickly and surely than you would have pictured, through some nonobvious path. Compression is another aspect, and that can mean something as simple as doing more than one step in the same flask without having to do the whole work-up-the-reaction-isolate-and-purify-the-product thing every single time. Past that, it’s getting the most out of every chemical step, something like “This hydroxy group is what makes the nucleophile come in from this direction in this step, and in the next one it’s also going to be what sets off the rearrangement that fixes two more chiral centers at the same time”. Getting things like that to work ain’t easy. Actually just seeing them in the first place isn’t, either. If you know organic chemistry well enough, the reaction to a great synthesis really does have an aesthetic component (as overplayed as that part is in the writing of some practitioners over the years). It’s like watching a talented writer land the last lines of a poem, managing to finish its point while also suggesting new meanings that apply to the earlier stanzas, and simultaneously making a subtle but deliberate unexpected reference to some other work of art that illuminates the whole poem from a different direction still.

So from all that, you can tell that coming up with such synthetic proposals is (for many organic chemists) something that they see as a unique and central part of their discipline. And that’s why attempts to automate it grate on some people’s nerves. Imagine how Petrarch might have reacted to a brochure for a Sonnet-o-Matic. If your horse is high enough, you can regard such software as an offense against Nature and against your honor, and even if you’re a bit closer to the ground it might occur to you that part of your recognized job is now under some form of assault.

I’ve written about such programs a few times over the years. There are several commercial software packages out there already, and a number of competing approaches. I think it’s fair to say that none of them have taken over the world, but it’s also fair to say that they’re being taken seriously. There are particular advantages to a computational approach to retrosynthesis that are harder to realize with one’s brain: avoiding a thicket of process patents, for example, or not even considering using reagents and starting materials that are not on some particular list. And that’s not even mentioning the difficulties of keeping up with the literature itself – as I’m fond of saying, a retrosynthesis program can learn new chemistry every evening, while most of us can’t keep up that pace.

This latest work is from the people who came up with the Chematica program, and it has some interesting insights into what happened when the authors tried to push the software into more challenging natural product chemical space. There were, they report, many instances where the program knew all the individual steps that could go into such a synthetic route, but still failed to find one. They had to make a number of modifications to make it work more strategically – for example, being willing to admit a step that made things temporarily more complex for a bigger synthetic payoff a step or two later, or looking for opportunities to accomplish more than one chemical step at a time. Not all of these are at that strategic level, I should add – one extension was directly having the software recognize about a hundred useful and well-precedented functional-group interconversions and sequences that had shown up in human-driven total synthesis over the years.

The analogies to chess playing come to mind whether you want them to or not: you’re getting the software to handle the idea of sacrificing a piece to gain better position or better prospects in the end game, or loading it with particular lines of play that have proven useful and forcing it to take those into account. And these analogies work so well because organic synthesis is itself a game, played on a very large board with very complex rules, and with the added complexity of new pieces and moves being discovered from time to time. That’s why we like it so much.

In the end, the authors assembled a set of natural product syntheses from the literature, all of which we can presume to have been worked out by various sorts of humans. And they mixed these with a set (done on broadly similar sorts of molecules) generated by the souped-up version of Chematica. They sent these around to a number of experienced chemists and did a sort of Turing test, asking people if they could tell which routes were from the humans and which were from the machines. You can try the same experiment – start from the beginning of the Supplementary Information file and make your calls. The answer key comes after the syntheses are laid out.

What I can tell you is that no, it appears that the experts couldn’t really tell the difference. And that says something about Chematica, but I fear that it also says something about organic synthesis. None of these syntheses, the known human ones nor the machine-generated ones, are going to trigger a major aesthetic experience for anyone. The natural product structures are fair ones, but they’re generally not complicated enough for something really elegant or surprising to occur. That makes their synthesis, even when performed by humans, a bit more of a mechanical exercise than it would have been at one time. We know a lot more chemistry than we did in R. B. Woodward’s day, and what he often had to invent, we now use as a matter of course. Whole classes of ring systems and functional group combinations have been worked on to the point that we have pretty reasonable ideas of how you might produce them. And while those aren’t always going to work in practice, enough of them will (and there are enough alternatives for the steps that don’t) that the resulting synthesis falls into the “Yeah, sure, why not?” category, rather than “Whoa, look at that”.

No software is yet producing “Whoa, look at that” syntheses. But let’s be honest: most humans aren’t, either. The upper reaches of organic synthesis can still produce such things – and the upper stratum of organic chemists can still produce new and startling routes even to less complex molecules. But seeing machine-generated synthesis coming along in its present form serves to point out not so much that the machines are encroaching onto human territory as that some of the human work has gradually become more mechanical.

The SOLIDARITY Data

OK, we have some more to think about this morning. The large SOLIDARITY trial from the WHO has reported more interim data on its investigation into repurposed drugs for the coronavirus pandemic. And some of this we already knew, but some of it’s a real surprise.

One drug reported on is hydroxychloroquine. This showed no apparent benefit along with a statistical possibility of increased hazard, although the latter was also not proven. This is consistent, as the paper points out, with the results of 27 other trials, and for God’s sake, enough said. Another was the lopinavir/ritonavir combination, and no benefit was found for that, either, which is consistent with two other reported trials. A third intervention was interferon beta-1a, with and without lopinavir, but this also showed no evidence of benefit. The statistics on it do not rule out a small useful effect, but do rule out anything moderately beneficial or above, and it’s worth noting that they don’t rule out small amounts of harm, either. This was either subcutaneous or i.v. administration – there has been a report that administering it to the lungs via a nebulizer could be effective, but that only had 100 patients. There is also another trial underway with s.c. dosing, but after these results it’s hard to be optimistic about that one.

Now we get to remdesivir. In contrast to the report that just came out from the ACTT-1 trial in NEJM, the SOLIDARITY trial found no benefit at all. No benefit in ventilation, in time to recovery, nor in overall mortality. Combining the data from these two reports (and two other smaller controlled remdesivir trials), the authors conclude:

This absolutely excludes the suggestion that Remdesivir can prevent a substantial fraction of all deaths. The confidence interval is comfortably compatible with prevention of a small fraction of all deaths, but is also comfortably compatible with prevention of no deaths.

Recall that the ACTT-1 trial showed some benefit, but not a dramatic one. These two results are probably not as incompatible as they seem, particularly (as the current paper notes) when you adjust for the fact that in ACTT-1 the randomization put slightly fewer patients on high-flow oxygen or ventilation into its treatment group. So in the same way that I noted that the ACTT-1 data came from a larger set of patients than the earlier remdesivir trials, we have to also deal with the fact that the SOLIDARITY data set is larger still. The authors again:

The unpromising overall findings from the regimens tested suffice to refute early hopes, based on smaller or non-randomized studies, that any will substantially reduce inpatient mortality, initiation of ventilation or hospitalisation duration. Narrower confidence intervals would be helpful (particularly for Remdesivir), but the main need is for better treatments.

I think that’s the major take-home. It looks like there is no case at all to be made for some of these therapies, and for remdesivir, it looks like the argument now is “Does it help a bit or just not at all?” The only thing that I’ve seen that really seems to be making a big difference now is dexamethasone, and we may soon (I hope) be adding the monoclonal antibodies to that list. The SOLIDARITY enrollment is still moving along at 2000 patients/month, and will be looking at these and other newer ideas as it continues. And that means abandoning the older ideas, because – as we’re finding out – they’re not much help. It’s time to move on.

Immunity and Re-Infection

For months now, people have been watching closely to see if it’s possible to get re-infected with the coronavirus. It’s taken a while for the signal-to-noise to get better, but by now there’s no doubt that the answer is yes, it’s possible. We’ve just had the first of these in the US, a man in Nevada who was infected twice six weeks apart, with the second round being worse than the first. And in the Netherlands, the first fatality from a reinfection has been reported. All this sounds immediately like bad news, but I’m going to break out the same advice I was handing out yesterday: don’t panic.

Why not? Because from everything we can see, re-infection is a very rare event. The confirmed examples worldwide could possibly be counted on your fingers (depending on whose count you believe) out of at least 38 million total cases. Looking at the Netherlands case, this was an 89-year-old patient with Waldenström’s macroglobulinemia, a type of leukemia that affects two different varieties of B-cells. She was being treated with chemotherapy to impair B-cell production, and was thus immunocompromised, and the second infection occurred two days after her latest round of treatment. Below is an analysis of the sequences of the first virus and the second – if you’d like more information about what a figure like this means and how to read it, see posts here and here.

You’ll note that there are not very many changes, and that all but three of them are nucleotide changes that make no difference to the actual coronavirus proteins. Let’s take a similar look at the two rounds of virus in the case in Nevada:

In this case, the red marks at the bottom are noting the changes versus the reference coronavirus genome. Comparing the two, you can see that these two strains had at least eight differences between them, but they’re both considered part of the “20C” clade of SARS-CoV-2, which is a largely North American family. There are several key things to take home from these sequences.

First off, in each of these cases, the unfortunate patients involved were infected by different variants of the coronavirus than they had the first time around. That’s pretty much what you have to show to be sure that it’s a real re-infection – otherwise you’d always wonder if the virus had just never really been cleared the first time. The first-round versus the second-round sequences show some real differences – not gigantic ones, but real. Second, these two second-round viruses were different from each other as well, so it’s not like some particular new supervirus is stomping around re-infecting people around the world. Third, in neither of these reports was it the widely publicized D614G strain coming back around the second time. That mutation really doesn’t figure into either of these cases at all, so watch out for anyone who’s mixing those stories together.

Fourth – and here’s where we start digging into some details – note that the mutations in both of these new re-infection cases have nothing to do with the Spike protein. There’s no change in the Spike in the Nevada sequences (they both had D614G), and the changes in the Netherlands sequences are conserved ones that don’t lead to changes in the protein in that region. Antibodies don’t care about genetic sequences; they respond to the eventual proteins that are displayed, and from what I can see, the Spike proteins of all of these strains are identical.
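To make the synonymous-change point concrete, here’s a tiny Biopython illustration (toy sequences of my own, not the actual case genomes): two isolates can differ at the nucleotide level while coding for exactly the same protein.

```python
# Toy example: two hypothetical nine-base coding fragments that differ at
# two nucleotide positions, both in the third codon position, so the
# translated protein is identical. Requires Biopython.
from Bio.Seq import Seq

first  = Seq("ATGGCTTTA")   # codons: ATG GCT TTA
second = Seq("ATGGCATTG")   # codons: ATG GCA TTG (two changes)

print(first.translate(), second.translate())  # both print: MAL
```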

That identical Spike protein is important for several reasons. For one, the vaccines under development are all raising antibodies and T-cells to the Spike region. That was identified early on as the most promising antigen, building on the work during the SARS and MERS outbreaks. Note also this new paper, a thorough look at the various antibody fractions in patients who have recovered from coronavirus infection. The authors find that Spike-targeting neutralizing antibodies persist out to the limits of their study (five to seven months) while antibodies to the nucleocapsid region (N), which are also raised in most people by infection, disappear more quickly.

This leads to a hypothesis: perhaps in these two cases of re-infection, the patients did not raise a very robust immune response the first time, raised antibodies mostly to antigens other than the Spike protein, or both. That would account for the second variant being able to slip in under the immunological radar: the antibodies these people used to fight off the first infection were directed towards protein regions that had altered enough to make their recognition less effective.

What about the other confirmed re-infection cases? In the Hong Kong case, there were in fact mutations in the Spike protein, including the D614G, which led some people to wonder if such Spike changes were going to be a general phenomenon and lead to more re-infections. But these two new cases show that it doesn’t have to be that way. Note also that in that case, the first-infecting variant had 58 amino acids missing out at one end (a stop codon mutation in the ORF8 region), which is rather different as well. In the Belgian case, there was one amino acid change in ORF1a in the second variant, three in the Spike (including the D614G), and one in the N protein. The other mutations in the Spike, though, did not match up with the ones in the Hong Kong second variant. And in the Ecuador case, there were nine amino acid changes, but the only one in the Spike region was the D614G.

This is the time to note that a good amount of work has been done on the possible changes in infectivity, etc., of different coronavirus Spike mutants. And we’re not seeing a trend for viral evolution in directions that could increase re-infection or evade the antibodies that are raised by the existing vaccine candidates. The Nevada case, for example, showed that the second variant actually had four amino acids that were back to the original Wuhan strain, rather than being something further and further out on the mutational limb. I’ve been looking through the re-infection papers and comparing the mutations seen with that link earlier in the paragraph, and so far anyway I’m not seeing a correlation between the second-round infections and the Spike changes that were flagged for changes in infectivity or antibody susceptibility. (And even some of those possible antibody-resistant mutants identified in that paper above turned out to be equally susceptible to the antibodies raised by the Pfizer vaccine when that team profiled them).

So the situation, for now, seems to be that yes, re-infection is possible. But it’s also quite rare. There are surely cases that we’ve missed, but it’s clearly not something that is happening much. We’re dealing with the fact that the human immune response is hugely variable from person to person – that’s one of its key features. Different people are going to raise different levels of different populations of different antibodies to a coronavirus infection, and that’s a big reason why the clinical course of disease is so variable. Even in these documented reinfection cases, we don’t know the details about what their first immune responses were like (there was no reason to profile these people in such detail the first time!)

Moving beyond that, I would suspect that vaccination, which raises neutralizing antibodies to the Spike protein, will provide a population that is even less susceptible to re-infection than we have in the wild-type-recovered population now, given that three of the five cases we have details of did not have significant changes in the Spike region at all. Now, we don’t know how long vaccine protection will last, or how variable it will be in a broad population – we’re out there getting those data now – but from what we’re seeing, I think the prospects are good. No panic necessary for now.

Another Vaccine Trial Halt

The first advice is “Don’t panic”. You will have heard that last night J&J announced that their coronavirus vaccine dosing has been paused while they investigate an adverse event in the trial. And while you never like to hear that, considering the size of their effort, this sort of thing is likely to happen even if the vaccine turns out to have no real safety issues. Just this morning, the company’s CEO told analysts on a conference call that they don’t even know yet if the affected patient is in the treatment group or the controls: that’s how early this is. Now, there are definitely ways that this could go that would be concerning, but we’re not there yet.

Update: now it appears that a trial of Eli Lilly’s monoclonal antibody plus remdesivir has also been paused after an adverse event “out of an abundance of caution”. I’m still not panicking.

In the larger picture, this (and the AstraZeneca vaccine halt, which continues here in their US trial) are why we have been developing so many candidates. The central fact of the biopharma industry is that most things we try don’t work. That means that (1) we have to keep trying a lot of things, and (2) since that costs a lot of money, we generally charge a lot for the things that actually do work. I expect the coronavirus vaccine success rate to be higher than the industry average, though, because (thanks to the work on the earlier SARS and MERS outbreaks) we already had a lot of information about how these coronaviruses are organized, how they attack cells, where the best choices might be to produce antigens against them, and some of the potential problems we would need to look out for. All that has been an invaluable leg up.

It will be a good thing if we beat the industry average, though, because that’s around a 90% failure rate from a standing start. Here’s the good news about that: what we’ve been able to see from the Phase I/II data on the various candidates is very encouraging. We can indeed stimulate antibody and T-cell responses, and every single vaccine that has reported data at this level has shown this. The uncertainty is that we don’t know (yet) what responses we need for a given level of protection, how well such vaccines will work across different population groups (ages, risk factors), nor how long such protection will last. The only way to get those numbers is to go out there and get them, which is what’s going on right now.

And the other big uncertainty, which we’re also dealing with now, is safety. Again, the only way to find out about this is to go find out about it – we have no ways to really predict what might happen when a new vaccine is dosed in a large population. Showstopping holy-crap level tox effects are extremely rare (fortunately), but that means that you’re then looking for unlikely adverse events across a large trial population, and trying to figure out what will happen when you expand to a much larger one. It ain’t easy.

So this latest J&J headline has not changed my views. They’re still overall positive. I still think we’re going to have at least one (and likely more than one) useful vaccine in the next few months. It’s just that I have no idea of which ones those will be. And I also think that we will have even better choices in the longer term, once we’ve broken the back of the current pandemic: there are a lot of other interesting candidates that are just getting towards human trials.

But this doesn’t mean that things will go smoothly, which is why I wrote this earlier post. This is what drug development is like all the time; all we’ve done is hit the fast-forward button and put the spotlights on it. I completely understand if it’s nerve-wracking to watch, believe me. Remember, it’s extremely likely that there will be more dips and swerves coming, and it’s a good idea to try to be psychologically ready for them. When they come, it doesn’t mean that everything’s failed. We have too many things going for one piece of news to mean that. I’ll end where I started: don’t panic.

2020 Economics Nobel Prize: Paul Milgrom and Robert Wilson

Congratulations to Paul Milgrom and Robert Wilson for the 2020 Nobel Prize in Economics. Below is an excerpt from the press release.

Using auction theory, researchers try to understand the outcomes of different rules for bidding and final prices, the auction format. The analysis is difficult, because bidders behave strategically, based on the available information. They take into consideration both what they know themselves and what they believe other bidders to know.
Robert Wilson developed the theory for auctions of objects with a common value – a value which is uncertain beforehand but, in the end, is the same for everyone. Examples include the future value of radio frequencies or the volume of minerals in a particular area. Wilson showed why rational bidders tend to place bids below their own best estimate of the common value: they are worried about the winner’s curse – that is, about paying too much and losing out.
Paul Milgrom formulated a more general theory of auctions that not only allows common values, but also private values that vary from bidder to bidder. He analysed the bidding strategies in a number of well-known auction formats, demonstrating that a format will give the seller higher expected revenue when bidders learn more about each other’s estimated values during bidding.
Over time, societies have allocated ever more complex objects among users, such as landing slots and radio frequencies. In response, Milgrom and Wilson invented new formats for auctioning off many interrelated objects simultaneously, on behalf of a seller motivated by broad societal benefit rather than maximal revenue. In 1994, the US authorities first used one of their auction formats to sell radio frequencies to telecom operators. Since then, many other countries have followed suit.
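To see the winner’s curse in action, here’s a toy Monte Carlo simulation (my own sketch, not from the Nobel materials): bidders who naively bid their unbiased estimate of a common value systematically overpay whenever they win, which is exactly why Wilson’s rational bidders shade their bids downward.

```python
# Winner's curse, simulated: each bidder sees the true common value plus
# independent noise; if the highest signal wins and pays its own estimate,
# the winner overpays on average. All parameters are illustrative.
import random

def average_overpayment(n_bidders=5, true_value=100.0, noise_sd=20.0,
                        trials=100_000):
    total = 0.0
    for _ in range(trials):
        signals = [random.gauss(true_value, noise_sd)
                   for _ in range(n_bidders)]
        total += max(signals) - true_value  # naive winner bids their signal
    return total / trials

print(f"Average overpayment: {average_overpayment():.1f}")  # well above 0
```

Adding more bidders makes the naive overpayment worse, not better – the maximum of more noisy signals drifts further above the true value.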

Do the laureates have a health care connection? The answer is ‘yes’! Paul Milgrom’s most cited (non-textbook) paper is a study titled “Multitask Principal-Agent Analyses: Incentive Contracts, Asset Ownership, and Job Design.” The study discusses how to set up contracts when some components of quality can be observed by the principal but some cannot. It is a cautionary tale for payer value-based purchasing arrangements with providers. Paul Milgrom’s co-author on the study, Bengt Holmstrom, actually won the Nobel Prize in 2016.



Hard Data on Remdesivir, and on Hydroxychloroquine

Let’s catch up with some things that (by this point) feel a bit like old news. But it’s important to do it, because (A) the big reason they feel that way is the bizarre world we’ve been living in for the last few months, and (B) the pace of medical discovery is not set to human preferences, anyway. I’m talking about remdesivir and hydroxychloroquine. And yes, I know that I said I wasn’t going to mention the latter one again, but I figured the points being made today are important enough to justify it. I might regret that.

The New England Journal of Medicine today has the final report from the team studying remdesivir in a >1000 patient randomized controlled trial. This group was randomized and divided roughly half-and-half to standard of care plus remdesivir versus standard of care plus placebo, blinded. So this is the most solid look we have at the drug’s efficacy in coronavirus patients. The good news is that the patients receiving the drug had a shorter time to recovery (9 to 11 days at the 95% confidence interval, versus 13 to 18 days with non-remdesivir standard of care). That’s real, but it’s not real dramatic, either, which is what you would realistically expect from a single broad-spectrum antiviral drug. This ain’t sofosbuvir clearing out hepatitis C, and even that one doesn’t do the job by itself.

As for the hardest endpoint of all, mortality by Day 29 for these patients was 11.4% with remdesivir therapy as compared to 15.2% with the controls. So again, that’s a real improvement and very much worth having, but it’s not a Miracle Drug, either. Adverse events were actually lower in the treatment group, which is of course good news. You can see that these were indeed at-risk patients – the overall mortality rate in the general population for coronavirus infections is nowhere near those rates, and it’s a damn good thing it isn’t. The mean age of the patients was 59 years, 64% male, with 50% of them having hypertension, 45% of them obese, and 30% with Type 2 diabetes. So even though they were characterized as mild-to-moderate on enrollment, this was just the sort of group that you’d worry about as a physician.

So remdesivir is indeed a worthwhile drug, especially when given to people in higher-risk patient groups. This confirms the preliminary reports, and (to be honest) is somewhat better than some of the early reads. Not everything gets worse when you look at it closely! But you have to look at it closely and rigorously – there is no substitute and there are no shortcuts.

That point is illustrated by a second paper in the same issue of NEJM, the report from the RECOVERY study on hydroxychloroquine. In this one, 1561 patients got HCQ plus standard of care, versus 3155 who had standard of care without it. There would have been even more treatment patients, but enrollment was closed in early June after an interim analysis showed a strong likelihood of no benefit. Both cohorts were followed thereafter, with 28-day mortality as the primary endpoint.

The HCQ-treated patients did not survive better than those not getting the drug: 27% of them died within 28 days, versus 25% in the standard of care group.

Overall, 59.6% of the HCQ patients were discharged during that 28-day period versus 62.9% of the control group. Meanwhile, 30.7% of the HCQ group ended up on ventilators during that period (none were, at entry into the trial) versus 26.9% of the control patients. Every single one of these trends is in the wrong direction, and every single one of them was seen in all the pre-specified patient subgroups. As for adverse effects, there were numerically more cardiac events in the HCQ group, but the difference was not statistically significant. Note that patients with existing QT prolongation were excluded from the study.
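For a rough feel of what those mortality numbers imply, here’s a quick sanity check, with event counts reconstructed approximately from the reported percentages (the paper’s own analysis used age-adjusted rate ratios, which this simple contingency test is not):

```python
# Approximate check on the RECOVERY mortality comparison. Counts are
# back-calculated from the reported ~27% vs ~25% figures, so treat the
# output as illustrative, not as the trial's actual analysis.
from scipy.stats import chi2_contingency

hcq_deaths, hcq_n = 421, 1561          # ~27% of the HCQ arm
usual_deaths, usual_n = 789, 3155      # ~25% of the usual-care arm

table = [[hcq_deaths, hcq_n - hcq_deaths],
         [usual_deaths, usual_n - usual_deaths]]
chi2, p, dof, _ = chi2_contingency(table)
risk_ratio = (hcq_deaths / hcq_n) / (usual_deaths / usual_n)
print(f"risk ratio ≈ {risk_ratio:.2f}, p ≈ {p:.2f}")
```

The point estimate lands above 1.0 – that is, in the direction of harm, though not significantly so in this crude calculation. Either way, there is no benefit to be found in these numbers.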

I’m not in a mood to be subtle. Hydroxychloroquine treatment for coronavirus does not work. It is not beneficial, and in fact appears to be actively harmful. As far as I’m concerned, administering it to infected patients now constitutes medical malpractice. I have no interest in goalpost-moving efforts to say that they didn’t administer zinc or azithromycin, or they picked the wrong patients or the wrong loading dose or whatever. No. This is special pleading, and it is not backed up by any hard data. None of the countries or regions where HCQ was enthusiastically adopted, with or without the addition of zinc, azithromycin or what have you, have seen discernible benefits. It. Does. Not. Work. Give it up.

CRISPR inventors win Nobel Prize in Chemistry

From the Nobel Prize website:

Emmanuelle Charpentier and Jennifer A. Doudna have discovered one of gene technology’s sharpest tools: the CRISPR/Cas9 genetic scissors. Using these, researchers can change the DNA of animals, plants and microorganisms with extremely high precision. This technology has had a revolutionary impact on the life sciences, is contributing to new cancer therapies and may make the dream of curing inherited diseases come true.

Congratulations to Drs. Charpentier and Doudna!

Cigna’s Evernorth Expands Digital Health Formulary with Addition of Omada MSK by Physera

What You Should Know:

– Cigna Corporation’s health services segment, Evernorth, expands its digital health formulary with the addition of Omada MSK by Physera.

– Omada MSK by Physera joins Omada’s existing formulary programs for diabetes prevention, type 2 diabetes, and hypertension.

Cigna Corporation’s health services segment Evernorth today announced the expansion of its industry-first Digital Health Formulary, adding Omada Health’s program for at-home physical therapy and personalized coaching for chronic pain to the platform. Omada MSK by Physera joins Omada’s existing formulary programs for diabetes prevention, type 2 diabetes, and hypertension.

Evernorth Health Services Background

Evernorth provides a distinct and dedicated platform for the distribution of health solutions geared toward health plans, employers, and government organizations – inclusive of those without Cigna medical insurance. This enables Evernorth to partner deeply with, and provide unmatched and focused support to, these groups, all guided by its unwavering commitment to make health care better.

To date, health plans covering more than 20 million Americans offer solutions from Evernorth’s Digital Health Formulary to their members. Plan sponsors can choose to invest in and deploy individual solutions, or a combination of programs on the platform. In addition to rigorous measurement from physicians, pharmacists, and health research scientists, programs on the formulary are also evaluated by user-experience experts. People using programs on the Digital Health Formulary may also receive support from specialist pharmacists to further optimize the potential health benefits of the chosen solutions.

Availability

The musculoskeletal program will be available to formulary customers beginning in 2021, giving Omada the broadest range of programs on the industry-leading platform. Created in 2019, the Digital Health Formulary features solutions that are evaluated and measured on clinical effectiveness, usability, scalability, and interoperability.


A Nobel for CRISPR

The 2020 Chemistry Nobel has gone to Jennifer Doudna and Emmanuelle Charpentier for the discovery of CRISPR. An award in this area has been expected for some time – it’s obviously worthy – so the main thing people have been waiting for is to see when it would happen and who would be on it. We’ll get to that second part, but let’s start off with a CRISPR explainer. What is it, how does it work and why has everyone been so sure that it’s a lock for a Nobel?

The short answer is that CRISPR is the easiest, most versatile method for gene editing that has yet been discovered. It’s important to note that those discoveries are still coming; the fireworks have not stopped going off by any means. We’ll do “basic CRISPR” first, though, since everything builds off of that. The story of the discovery is actually a very good illustration of how science works – the good, the bad, and the baffling – and with that in mind, I’m going to spend more time on that part.

The acronym is not going to help very much, I fear: it stands for Clustered Regularly Interspaced Short Palindromic Repeats, which refers to some odd features found in the DNA sequences of many single-celled organisms. These were discovered in 1987 by Yoshizumi Ishino and his group at Osaka. They were cloning a completely unrelated bacterial gene and found these weirdo repeated short stretches of DNA all clustered together (but still separated by unrelated sequences), and no one had seen anything quite like them. This clearly wasn’t an accidental feature, but no one really knew what to make of them, either. It wasn’t until 1993 that the story was really picked up again, when J. D. A. van Embden and co-workers in the Netherlands were looking at repetitive DNA sequences as a way to tell different strains of M. tuberculosis bacteria apart, and noted the same sort of odd patterns.

That same year, Francisco Mojica (a grad student at the time) and co-workers at Alicante in Spain reported the same sort of thing in a really unrelated organism, Haloferax mediterranei. That’s an archaeon, one of the weird non-bacterial single-celled creatures that are found in many extreme environments, and Mojica was looking at gene transcription changes in that organism under various high-salt conditions (H. mediterranei is the sort of creature that gets stressed out when the salt concentration gets too low, to give you the idea). He was actually doing that (as this retrospective notes!) because he got totally scooped on his original project with the organism and then set out for the less-studied parts of its genome. Out there in what looked like a non-coding region, he found these same sorts of DNA repeats, 14 of them, each about 30 base pairs long and regularly spaced along the organism’s genome. Bacterial sequencing was pretty strenuous work back then, and these things were first thought to be an artifact of something that had gone wrong. But no, they were real. It was starting to become clear that this stuff (whatever it was) might have broader implications, but no one knew what those were.

Mojica kept digging into the problem: here’s a 2002 note that showed that these features (which he and his co-authors were then calling SRSRs, for Short Regularly Spaced Repeats) were found in dozens of bacteria and archaea and were probably the most widely distributed repeat sequences in prokaryotes in general. When they transcribed such a repeat region, they got an oddly wide array of proteins, which suggested that there was a lot of RNA processing going on downstream (confirmed by this 2002 work from another team on a different archaeon species).

Meanwhile, Ruud Jansen at Utrecht, along with van Embden and others, worked out another piece of the puzzle. The repeats had been noted next to the open reading frames (ORFs) of proteins of unknown function, and Jansen’s group found that the two were in fact always associated with each other across different organisms. That paper suggested the “CRISPR” acronym to clear up the profusion of different terms that were appearing in the literature, and it stuck, as did the term for those “CRISPR-associated” genes: cas. But the origin and function of the repeats and their associated proteins were still a mystery. All of this was still confined to rather specialized microbiology journals, and it was Just One of Those Things that no one had a handle on.

That changed in 2005. Three groups (including Mojica’s) found that those spacers between the repeat elements actually came (in some cases) from bacteriophage sequences or plasmids from other organisms. That rearranged people’s thinking, because it suggested that these weirdo repeat things were somehow involved in infection and defense mechanisms for the bacteria and Archaea themselves. But there were some hair-pulling difficulties along the way to getting the word out, because the discovery itself had been made some time before. Eric Lander’s history of the field (an article not without controversies of its own) illustrates what happened:

Mojica went out to celebrate with colleagues over cognac and returned the next morning to draft a paper. So began an 18-month odyssey of frustration. Recognizing the importance of the discovery, Mojica sent the paper to Nature. In November 2003, the journal rejected the paper without seeking external review; inexplicably, the editor claimed the key idea was already known. In January 2004, the Proceedings of the National Academy of Sciences decided that the paper lacked sufficient “novelty and importance” to justify sending it out to review. Molecular Microbiology and Nucleic Acid Research rejected the paper in turn. By now desperate and afraid of being scooped, Mojica sent the paper to Journal of Molecular Evolution. After 12 more months of review and revision, the paper reporting CRISPR’s likely function finally appeared on February 1, 2005. 

Here’s Mojica’s view on the history, for reference. This sort of thing has happened many a time in the history of science, in case you had any doubts. The other two groups trying to publish such results ran into similar difficulties: Gilles Vergnaud and his co-workers had their paper rejected by four journals in a row, and Alexander Bolotin’s paper lost months with a slow rejection as well. But by 2007, this idea had been nailed down: if you challenged bacteria with various types of virus (phage), repeats showed up in their genomes with spacers between them based on chunks of that phage DNA. And in turn, if you went in and messed with those repeats and spacers, you altered the resistance profile of the bacterium to different phage infections. There was no doubt: this was part of an adaptive immune system in bacteria and Archaea.

And it was one that actually rewrote their genomes in order to work – that was the startling thing. There was some sort of mechanism that chopped up bacteriophage DNA and inserted pieces of it into the bacterial sequence in order to remember it for the next time it might show up. People had thought originally that such a system might work at the RNA level, but here it was operating on the DNA sequence instead. The number of papers in the field was taking off at this point – there was something new under the sun, and the uses for a completely new genome-editing tool were becoming apparent to anyone who spent a few minutes looking out the window and thinking about the possibilities.

Some of those cas proteins, in fact, turned out to be the endonucleases that did the double-stranded DNA cutting needed for these splices to occur. They needed more than one RNA species to guide them in that job, but Jennifer Doudna and Emmanuelle Charpentier re-engineered one (Cas9) from the bacterium S. pyogenes to simplify it. Now it needed just one “guide RNA”, whose sequence determined where in the genome the DNA would be cut. At the same time, Virginijus Šikšnys in Vilnius was working out the same sorts of details. He submitted his own paper to Cell, but it was rejected without review (!) It then spent months in review at PNAS, during which time Doudna and Charpentier’s work appeared in Science, and many are the people who will tell you that had preprint servers been more of a thing back then, his name might be on today’s Nobel citation as well.
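
Since the guide sequence does all of the targeting work in that system, a small illustration may help. Below is a minimal Python sketch of how one might scan a DNA sequence for S. pyogenes Cas9 target sites. The NGG PAM requirement and the cut roughly 3 bp upstream of it are the real biology; the function name and the example sequence are invented for illustration.

```python
# Toy scan for SpCas9 target sites: Cas9 requires an "NGG" PAM and cuts about
# 3 bp upstream of it, and the 20 nt immediately 5' of the PAM (the protospacer)
# is what the guide RNA must match. The example sequence below is made up.

def find_guides(seq, guide_len=20):
    """Return (guide, PAM, cut_index) for every NGG PAM on the + strand."""
    seq = seq.upper()
    hits = []
    for i in range(guide_len, len(seq) - 2):
        if seq[i + 1:i + 3] == "GG":               # any base + GG = NGG PAM at seq[i:i+3]
            guide = seq[i - guide_len:i]           # 20 nt immediately 5' of the PAM
            hits.append((guide, seq[i:i + 3], i - 3))  # blunt cut ~3 bp upstream
    return hits

example = "ATGGCTTACGATCGATTACGGATCCGATCGTAGCTAGCTTAGGAGGCTTA"
for guide, pam, cut in find_guides(example):
    print(f"guide={guide}  PAM={pam}  cut_index={cut}")
```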

This refashioning of the CRISPR system was very significant. The native bacterial system works, but it’s quite complex. The Doudna/Charpentier work opened up its use as a research tool, and molecular biology has never been the same. So there’s a story about the recognition of something strange in bacteria, a story about figuring out what that was and how it might work, and then a story about extending that into something new that could be used in other organisms entirely.

But there’s more – there’s generally more. The first people to get this to work in mammalian and indeed human cells (as opposed to bacteria and the like) were (in basically simultaneous publications) Feng Zhang and co-workers at the Broad Institute and George Church and co-workers at Harvard. Neither of them is on today’s citation, of course, and not everyone is happy about that, either (especially considering that there was a third slot left open). But that slot could also have been filled by Mojica, by Šikšnys, or by. . .well, you pick. Making the jump out of bacterial systems was non-trivial, and there were plenty of people who weren’t sure that it would even be possible. A human genome is a lot more complicated than a bacterial one, and its DNA is packaged and sequestered in totally different ways. Getting a bacterial enzyme into the right place in a human cell nucleus at the right time and in the right concentrations took a lot of work – just getting Cas9 and a guide RNA reliably into cells in the first place took a lot of work. But in the end, it could be done.

All these discoveries lead one to thoughts on the patent situation in this area, but it is too complex for human summary. I mean that nearly literally. In various jurisdictions, there are filings from multiple different institutions and companies (up to maybe eight at once) all fighting it out over the claim language and scope, and I just refuse to try to sort it out in my head. To add to the confusion, new variations and improvements on the technique are emerging constantly, leading a person to wonder what the eventual most valuable IP rights may turn out to be. There have already been a lot of twists and reversals in this area, but I have deliberately decided not to give it space in my head, for fear of crowding out something else.

But the reasons that such patent rights are so valuable, and that this area was Nobel-worthy to start with, are perfectly clear. CRISPR is the slickest way yet found to edit genome sequences in living organisms. You can silence particular genes, you can increase their expression, you can stick completely new things in pretty much wherever you feel like it. You can (in later variations) swap individual nucleotides around with extreme selectivity, and so on – it’s really like having magic powers. There are a lot of different Cas enzymes, and some of them do double-strand DNA breaking, while others do single-strand nicking and all sorts of things. They work with varying degrees of efficiency, selectivity, and fidelity, and the hunt is still very much on for improved versions.

Like any other molecular biology technique, though, CRISPR has hidden limitations that are still being worked out. That’s something to keep in mind when you hear about CRISPR babies, such as the hideously unethical human experiments in China in 2018. We are already trying to use CRISPR techniques to attack inborn genetic diseases such as sickle cell anemia, but think about that one: all those defective red blood cells come from a single tissue (the bone marrow) and we have already worked out techniques to transplant it (and, along the way, to kill off the original tissue in preparation for the new cells). That means that we have a much better chance of doing a clean swap, with cells that have been edited ex vivo and carefully sequenced to make sure that they’re what we think they are – and this on a disease whose genetic profile has been exhaustively studied for decades (and indeed was the first genetic disease ever characterized). The difference between that and stepping in to rewrite human embryos is huge, and we’re not going to be safely leaping that gap for a while yet.

That’s not least because in many cases we’re not sure what to rewrite. Inborn protein errors like sickle cell are clearly the place to start, but in many (most) cases the instructions are not quite so clear. Then you think about the genetic basis for (say) height, and it’s time to look out the window again. No, it’s going to be a while before we start cranking out the designer babies to order.

But where we’re using CRISPR every single day of the week is back in the research labs. It’s an astonishingly useful tool for producing new cell lines and looking at the phenotypes in organisms when you do such selective editing – you can accomplish things that were nearly (or completely) impossible. These new abilities have accelerated molecular biology noticeably, and it’s not like the field was lounging around much before. No, it’s hard to overstate the importance of CRISPR to basic research, and that’s where the clinical breakthroughs are born.

This was, then, one of those fields that has been recognized for years now as Definitely Going to Get a Nobel, No Doubt About It. And that day has come! Congratulations to Jennifer Doudna and Emmanuelle Charpentier, who are very deserving indeed and part of one of the great discoveries of 21st century biology so far.

Q3 2020 Digital Health Funding Breaks $4B: 3 Key Trends to Know

What You Should Know:

– Digital health funding reached a record-breaking $4.0B in Q3 2020 for a total of $9.4B year to date, according to the latest Rock Health quarterly report.

– Twenty-four (24) digital health companies have raised mega deals of $100M or more through Q3 of 2020. The rise in mega deals reflects a trend towards capital concentration in digital health venture investment.

– On-demand healthcare services is the top-funded value proposition, with $2.0B invested across 48 deals through Q3 2020; it is also the value proposition with the most deals.


Digital health funding reached a record-breaking $4.0B in Q3 2020, for a total of $9.4B year to date, making 2020 the largest digital health funding year ever, according to Rock Health, a full-service venture fund dedicated to digital health.

Despite the COVID-19 pandemic, the stock market’s sharp recovery and pandemic-initiated policy and regulation changes are driving large competitive moves and commercialization activity in the digital health sector. The Digital Health Market Insights: Q3 Update report reveals that 24 mega deals are driving the top-line numbers. The average deal size in 2020 is $30.2M, roughly 1.5 times the $19.7M average in 2019.


Impact of COVID-19 Accelerating Digital Health Adoption

Since April, Rock Health reports, the COVID-19 pandemic has accelerated digital health adoption as it attracted interest from consumers, investors, and entrepreneurs. Deal volume through Q3 of 2020 is up nearly 22% compared to all of last year. This activity comes amidst a record stock market rebound and hopes for a vaccine before the end of the year; however, medium-term economic uncertainty still looms. The impending risk of future outbreaks, lockdowns, and the upcoming presidential election all create uncertainty around recovery.


Here are three key trends to know from Rock Health’s latest report:

1. Mega deals are on the rise—particularly in virtual care delivery, R&D enablement, and fitness & wellness

Twenty-four (24) digital health companies have raised mega deals of $100M or more through Q3 of 2020. This already doubles the previous annual record of 12 mega deals set in 2018. These deals account for well over one-third (41%) of total digital health funding so far this year, with connected fitness company Zwift raising the largest round so far—$450M in Series C funding.


Rise of On-Demand Healthcare Services

Rock Health reports that on-demand healthcare services, representing telemedicine services, prescription delivery, and at-home urgent care, is the top-funded value proposition, with $2.0B invested across 48 deals through Q3 2020. 56% of on-demand healthcare services deals were Series B or later, and Series B or later deals represented 87% of funding for on-demand healthcare services startups. The top three funded deals in the on-demand healthcare services category are Alto Pharmacy ($250M), Ro ($200M), and Amwell ($194M).


2. Corporate investors double down on digital health

Sixty-four percent (64%) of this year’s investors have previously made investments in digital health—higher than in any previous year. Institutional venture firms continue to account for the largest share of transactions (62%), with corporate venture capital (CVC) holding steady at 15% of transactions.

– Corporate investors have made 149 investments in digital health across three quarters this year, which already exceeds the previous record of 145 investments across all of 2017.

– Quarterly investments by the four most active CVC groups—providers, technology companies, biopharma, and payers—are all trending upwards over the last 12 months.

– Provider CVCs lead the way with at least 12 investments per quarter in each of the last three quarters.

3. Digital health companies capitalize on the stock market’s sharp recovery and relaxed regulations

IPOs

Several digital health companies went public over the summer or have announced plans to do so:

– Accolade and GoHealth went public in July

– Amwell, Outset Medical and GoodRx went public in September

High-growth D2C platform Hims Inc. has struck a deal to go public by merging with a blank-check company, and MDLive’s CEO announced intentions to take the company public early next year. By comparison, there were only six digital health IPOs in 2019.


M&A Activity

Overall M&A activity is down in 2020 compared to prior years. There have been 63 acquisitions of digital health companies through Q3—on track to fall short of last year’s 113 and the average of 115 across the prior three years.


Rock Health Report Background & Methodology

The report, produced by Sean Day and Elaine Wang with help from Megan Zweig, Bill Evans, Jasmine DeSilva, Claire Egan Doyle, Derek Goshay, and Nina Chiu, sources data from Capital IQ, SEC filings, company websites, Crunchbase, NVCA, press releases, and the Rock Health funding database. Rock Health funding data only includes disclosed U.S. deals over $2M.


Quest and 300 4k Movie Sweepstakes

Quest and 300 4k Movie Sweepstakes Official Rules NO PURCHASE NECESSARY.  A PURCHASE WILL NOT IMPROVE YOUR CHANCE OF WINNING. PROMOTION DESCRIPTION:  The “Quest and 300 4k Movie” Sweepstakes (the “Sweepstakes”) begins on or about October 6, 2020 at 12:01 a.m. Pacific Time (“PT”) and ends on October 12, 2020 […]

The President’s Coronavirus Treatment

I’ve had emails asking me what I think about President Trump’s illness and the course of treatment that he’s under. To be honest, this wasn’t a subject that I really felt like writing about – every time I write anything about Trump here, I regret it – but the reports have been so increasingly odd that I think a discussion is in order.

I have to start off by admitting that the timeline of the president’s coronavirus infection is hopelessly confused. It’s very hard to say how long he’s been infected. But I also have to say that even if we had that figured out, it still might not be much help. Different people can have radically different courses of disease in this pandemic – a fact that’s been made abundantly clear over the last several months. But since the president has several risk factors working against him (age, gender, and BMI), one has to be ready for anything.

So what can we infer from his course of treatment so far? He got Regeneron’s monoclonal antibody cocktail very early, it seems, and at the highest clinical dose (8 grams). There has been no formal publication of Regeneron’s results, not even an unreviewed preprint, and I haven’t blogged about their recent press release (which is all we have). Like all press releases, it tells us some things and leaves some out. The company states that the antibody treatment reduced viral load and alleviated some coronavirus symptoms, particularly in patients who had not mounted a good antibody response of their own at the time of treatment. That all makes sense. But we don’t know if it actually helps with mortality, chances of being hospitalized (or going to an ICU once there), total time hospitalized, and so on. We also have only the earliest safety readouts (which so far don’t look problematic).

I would assume that Regeneron knows more than this, in unpublished form, and that this knowledge was part of their interaction with the president’s physicians when they provided the monoclonals. If I had to bet, I would bet that this would be an appropriate therapy. But betting the president’s health is another thing entirely, and I’m glad that I didn’t have to make that call. And this leaves Regeneron’s executives with even more reason than others to hope that the president recovers well, obviously.

Overall, though, it would seem that if you’re going to give monoclonal antibodies, they would be best given early in the course of the disease, when therapy is still in antiviral mode. The addition of a five-day course of remdesivir to the treatment regimen fits that as well: both of these are designed to lower the amount of virus present and (in theory) keep the disease from progressing to a more severe stage.

That severe stage shows up as an overactive immune response leading to the well-known “cytokine storm”, and potentially big trouble. It really looks like the best therapy we have for that at the moment is dexamethasone. So I found it interesting – and not in a good way – that the president’s medical team had actually put him on dexamethasone, because its mode of action is to damp down the inflammation response. And if a person is still in the early stages of infection, that’s the opposite of what you want to do. There’s a real gear-shift in the treatment of coronavirus patients, when you have to switch from treating the viral infection to treating the immune consequences of the viral infection, and what’s appropriate for one phase of treatment is definitely not appropriate for the other.

So since the Walter Reed physicians are, in fact, very competent, the only conclusion I can draw from this is that the president’s infection is further along than we had thought. They may well be seeing signs of inappropriate over-response to the coronavirus and are trying to knock that down before it gets more serious. Another possibility, I suppose, is a similar over-reaction to pneumonia (which is the only reason I have ever had a short course of dexamethasone myself). OK, then. . .but how on Earth do we square any of that with the physician’s comment yesterday that Trump was doing so well that he might be discharged today? Discharging a 74-year-old man with coronavirus in the middle of remdesivir and dexamethasone therapy makes no sense at all.

But neither did Trump’s motorcade trip around Walter Reed yesterday. The news is full of people talking about what a bad idea that was, and I would especially single out Dr. James Phillips, whose views I agree with completely. The New York Times reports that the president wanted to be discharged yesterday, in fact, and that the limo ride was some sort of compromise. You have to imagine that his doctors are being put in some nearly impossible situations, but there are a lot of things about the president’s illness that simply are not adding up. It’s obvious from the oh-yeah-in-retrospect statements about his oxygen levels dropping on Friday and Saturday that we are not hearing anywhere close to the whole story about his illness. And what we are hearing makes very little sense.

I do not expect things to get any more sensible today, if you’re wondering.

Top 7 Most Prescribed Drugs in the US

Have you ever wondered which drugs are the most prescribed in the US? The whopping number of retail prescriptions filled at pharmacies might shock you. Breaking it down for 2019, 51% of those prescriptions were commercial, 28% were Medicare, and 16% were Medicaid.

Now you know what keeps pharmacists so busy whenever you visit them to get your medicines. Okay, let’s move on.

How many of the most commonly prescribed drugs are familiar to you? Do any names come to mind right now? If not, just check these out…

The 7 Most Commonly Prescribed Drugs in the US

Simvastatin

Simvastatin belongs to the statin class of drugs, which are used to treat high cholesterol. It helps reduce the risk of stroke, heart attack, and death from cardiovascular disease. Patients should know the potential risks of maximum dosages and use it only under the guidance of a medical practitioner.

Some of the most common side effects of Simvastatin include nausea, headache, vomiting, upset stomach, and muscle weakness.

Levothyroxine

Levothyroxine, as the name suggests, treats hypothyroidism. It is a synthetic version of the thyroid hormone T4 (thyroxine).

This drug has consistently ranked among the top three drugs prescribed in the US. It is sold under the generic name levothyroxine sodium and the brand name Synthroid. However, it is not recommended for hypothyroidism during the recovery phase of subacute thyroiditis.

As thyroid hormone levels rise in the body, side effects like increased heart rate, nervousness, chest pain, and excess sweating can occur.

Lisinopril

Securing either the first or second position for years, Lisinopril is one of the most prescribed drugs in the United States. It is an ACE (Angiotensin Converting Enzyme) inhibitor.

Lisinopril is mainly prescribed to treat high blood pressure, and it is also used after heart attacks and for congestive heart failure. Additionally, it can help protect kidney function in patients with diabetes. This generic for Prinivil or Zestril has side effects like dry cough, nausea, dizziness, drowsiness, headache, and sexual dysfunction.

Azithromycin

Is there anyone who hasn’t heard of Azithromycin? It is popular not only in the US but in the global market as well. Azithromycin is a recognized medication for treating ear, throat, and sinus infections.

Besides this, it is also prescribed for pneumonia, bronchitis, and STDs. Some of the common side effects of Azithromycin include diarrhea, vomiting, nervousness, and allergic reactions.

Metformin

More than 30 million Americans have diabetes. Metformin is a highly effective drug for the treatment of type 2 diabetes, and it can be used by adults and children. You will always find it among the most prescribed drugs in the United States.

Some mild side-effects like vomiting, gas, diarrhea, and nausea may occur.

Amlodipine

Amlodipine is the generic for Norvasc. It is a calcium channel blocker mainly used for the treatment of high blood pressure and angina (chest pain). Amlodipine has also been one of the most recognized drugs in the US for years.

Headache, dizziness, weakness, and lower-extremity swelling are among the common side effects of Amlodipine.

Amoxicillin

Amoxicillin is an all-purpose treatment for bacterial infections. Whether the patient suffers from skin infections, urinary tract infections, or bacterial infections of the ears, tonsils, or throat, amoxicillin is very effective.

Buy Any of the Top 7 Drugs Prescribed in the United States

Now that you have viewed the top 7 drugs prescribed in the United States, you know what’s popular. If you have been prescribed any of these medicines, we can get them to you at affordable rates.

So, are you ready to shop for the most prescribed drugs in the US now?

The post Top 7 Most Prescribed Drugs in the US appeared first on Actiza Pharmaceutical.

Thoughts On a New Coronavirus Test (And on Testing)

Word came yesterday that Abbott received an Emergency Use Authorization for a new coronavirus test, one that is faster and cheaper than anything currently out there. The two types of tests that we see in use now are RT-PCR, the nasal-swab test that detects viral RNA, and various antibody tests, which tell you if you have raised an immune response due to past exposure to the virus. This one has features of both, but its main use is more like the RT-PCR test: it will tell you if you are actively infected. It does that by detecting a particular antigen, the nucleocapsid protein (Np) of the coronavirus. It’s a key part of the virus’s structure, and in an actively replicating infection you can be sure that there will be plenty of that one floating around.

The test itself is one of Abbott’s “BinaxNOW” assays, and they have a whole line of these already as tests for malaria, RSV, various bacterial infections, and so on. It’s a lateral flow assay, which will be familiar to anyone who’s seen a pregnancy test, and I explained the general principles of those (as antibody tests) in this post. This new test is a sort of flipped version of what I described there, though. In this case, a nasal swab is taken, and several drops of solvent are used to put that sample onto the beginning of the absorbing strip inside the card. As it soaks up along the length of the strip, the sample will encounter a zone of antibodies that recognize the Np antigen, and these antibodies are also attached to nanoparticles of gold. This gold-antibody-Np complex is carried along in solution further along the strip until it runs into another antibody zone, one that’s immobilized on the solid support and which will bind the gold-antibody-Np complex molecules tightly. That stops them in their tracks and allows the gold nanoparticles to pile up enough to be visible as a pink or purple line. Along the way, the sample has also crossed a zone containing another soluble gold conjugate species as a control, which gets carried along until it runs into another separate zone of immobilized antibodies specific to it. The presence of a pink control line means that the test has been performed correctly; absence of such a control line means that the whole test has been messed up somehow and needs to be run again.

I had described earlier a test that looks for antibodies to the coronavirus by running them past gold-conjugated antigens on the test strip, but this one looks for antigens by running past gold-conjugated antibodies. Developing a test like this involves a lot of work to find the right antibodies, to make sure that they’re attached to the gold nanoparticles in ways that don’t impair their function, to find the right second immobilized set of antibodies that will develop that test line, and to make sure that the control line system is compatible with the test itself. You’ll also need to work on the composition of the test strip and the solvent that’ll be used to take the patient’s sample into it: these need to allow as much of the antibody complex to flow down the strip in a controlled fashion as you can get, and to do so in the same way every time. And finally, you need to validate the assay with a lot of coronavirus patients and controls, to see what your false positive and false negative rates are.

For this assay, those come out to a sensitivity of 97.1% (positive results detected when there should have been a positive) and a specificity of 98.5% (negative results when there should indeed have been a negative). Flipping those around, you’ll see that about 1.5 to 3% of the time, you will tell someone who’s infected that they’re not, or tell someone who’s not infected that they are. That’s about what you can expect for a test that sells for $5 and takes 15 minutes to read out with no special equipment, but such tests (if used properly) can be very valuable. Flipping that around, you can also infer that if used improperly, they can be sources of great confusion.

What’s proper? The FDA’s EUA is for testing people who show up with symptoms to see if they really do have SARS-CoV-2. I think that’s appropriate, because you’re more likely to have a higher percentage of those folks who are really infected. If you tried to deploy this test across a large asymptomatic population with a very low true infection rate – everybody in New Zealand, for example – you would create turmoil. New Zealand’s real infection rate is vanishingly small, but Abbott’s quick $5 test would read out a false-positive You Are Coronavirused for 1.5% of the whole country, never lower, which would be a completely misleading picture that would cause all sorts of needless trouble.

On the other hand, if you’re testing symptomatic people in a community where the virus is already known to be spreading, you can do a huge amount of good. Let’s imagine you test 1000 such coughing, worried patients under conditions where you expect that 10% of them really do have the coronavirus. In the course of testing all thousand, you’ll run those 100 positive folks through, and you’ll correctly tell 97 of them that they need to go isolate themselves immediately, which is a huge win for public health. Three of them, unfortunately, will be told that they’re negative and will go out and do what they do, but that’s surely far fewer than would be out and around without the test. You’ll also run the 900 other people through who actually have a cold or flu or something and not corona, and you’ll tell maybe 13 of them (900 x 0.015) that they’re positive for coronavirus and that they should isolate as well. That’s not great, either, but it’s worth it to get the 97 out of 100 real infectious coronavirus patients off the streets. And meanwhile you’ve correctly told the other 890 people in your original cohort that they do not have coronavirus, which is also a good outcome. But remember, with that 98.5% specificity you’re going to send 15 people out of every thousand you test home to quarantine even if no one really has it at all. If 1% of your sample of 1000 people is truly infected, you’ll probably catch all ten people who are really positive. . .but you’ll also tell 14 or 15 people who don’t have it that they do, crossing over to finding more false positives than there are real ones.
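
If you want to play with those numbers yourself, the arithmetic reduces to a few lines. Here is a small Python sketch of the same back-of-the-envelope calculation; the sensitivity and specificity are Abbott’s reported figures, and the two prevalence scenarios are the ones from the paragraph above.

```python
# Expected outcomes of screening n people, given disease prevalence and a
# test's sensitivity and specificity. Reproduces the rough numbers in the text.

def screening_outcomes(n, prevalence, sensitivity, specificity):
    infected = n * prevalence
    healthy = n - infected
    true_pos = infected * sensitivity         # infected and correctly flagged
    false_neg = infected - true_pos           # infected but told they're negative
    false_pos = healthy * (1 - specificity)   # healthy but told they're positive
    true_neg = healthy - false_pos
    return true_pos, false_neg, false_pos, true_neg

for prev in (0.10, 0.01):                     # the 10% and 1% scenarios above
    tp, fn, fp, tn = screening_outcomes(1000, prev, 0.971, 0.985)
    print(f"prevalence {prev:.0%}: {tp:.1f} true +, {fn:.1f} false -, "
          f"{fp:.1f} false +, {tn:.1f} true -")
```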

And let’s not forget the other really good aspects of this test: it’s cheaper than anything else out there but best of all, it’s fast. The delays in the RT-PCR testing have been killing its usefulness in too many cases – what good is knowing that you tested negative sometime last week, really? Far worse, what good is knowing that you tested positive last week if you didn’t isolate yourself because you weren’t sure if it was the coronavirus or not? But an answer in fifteen minutes, that’s actionable. As long as this test is deployed correctly, it can be very useful.

Addendum: I’m well aware that the CDC seems (controversially) to be changing its testing recommendations in general. This “only test if there are symptoms” guidance seems to apply to RT-PCR testing as well – and turnaround problems aside, that test still has far higher sensitivity and specificity than this new 15-minute one and is far more appropriate for use in a broader, largely asymptomatic population. We need to be addressing the delay problems in RT-PCR, because we need to be doing a lot of those tests – not closing our eyes and whistling a happy tune instead. This appears to me, and to many others, to be political interference from above. What else is one to think when administration officials have suddenly started referring to the pandemic in the past tense? So here’s something I never pictured myself saying: it is my hope that this CDC guidance will be ignored. It’s a hell of a situation to get to, isn’t it?

Should more flexible health care professional licensing continue after COVID-19?

Shirley Svorny and Michael Cannon of the Cato Institute have a long history (see here) of criticizing government licensing of physicians and other health care providers (e.g., nurse practitioners [NPs], nurses). Their proposed alternative has been to allow private entities to certify the competence of practitioners. They claim that private certification would increase competition by allowing an increase in supply, allow for more cross-specialty competition, permit more flexible certification categories, and reduce the chance for incumbent control of government licensing boards that restrict supply. Those who favor government licensing would argue that the government may be able to better uphold quality standards than private-sector entities with a potential profit motive or conflict of interest. To generalize, the pro-certification camp (e.g., Svorny and Cannon) typically is more concerned with cost of care, efficiency, and value; the pro-government-licensing camp typically is more concerned with minimum quality standards independent of cost.

A recent white paper by Svorny and Cannon, however, argues that the COVID-19 pandemic has truly highlighted the cost of this lack of flexibility in government licensing: as the need for care of COVID-19 patients has surged, cross-specialty support has been needed to handle the influx.

State governments have not been entirely passive, however. There has been some loosening of government licensing restrictions during the pandemic by most states. Some examples:

New York expanded scopes of practice to let nurse anesthetists, physician assistants, and specialist assistants practice independently; to let pharmacy technicians help pharmacists compound, prepare, label, and dispense drugs for home infusion providers; and to increase the number of providers who can supervise emergency medical services personnel. Alabama expanded scopes of practice for NPs, nurse midwives, nurse anesthetists, physician assistants, and anesthesia assistants, freeing them to “practice to the full scope of their practice as determined by their education, training, and current national certification(s).” Colorado expanded scopes of practice for a host of health professionals…as well as (unlicensed) nursing students and medical assistants by allowing NPs and nurse anesthetists to delegate tasks to them. States including California, Maryland, and North Dakota allowed pharmacists to order and collect specimens for COVID-19 tests… Most states—including New Jersey and New York—suspended prohibitions on clinicians in other states providing care to their residents, whether in person or via telemedicine, either outright or by way of conditional waivers that require registration or an emergency license. Several states removed barriers to clinicians providing care after they retired or otherwise allowed their licenses to lapse.

Yet, not all states were so flexible and COVID-19 reveals that incumbents still wield significant power.

[In California, the]…state’s Department of Consumer Affairs refused to allow NPs to practice independently and instead increased the number of NPs each physician could supervise

Allowing for a more flexible credentialing program, as Svorny and Cannon propose, would certainly reduce costs and increase patient access to care. If done right, quality may not suffer. If credentialing is implemented in a less robust way, however, there is a risk of quality reductions. Nevertheless, the Cato researchers make some compelling points that allowing health care professionals more flexibility in how they practice medicine would be a net positive for society.

BMS to Acquire Forbius for its AVID200 to Expand its Footprints in Oncology and Fibrosis

Shots:

  • Forbius will receive an upfront payment and milestones, while BMS will acquire Forbius’s TGF-beta program, including the lead investigational asset, AVID200. The transaction is expected to close in Q4’20
  • Prior to closing, Forbius’ non-TGF-beta assets will be transferred to a newly formed private company, which will be retained by Forbius’ existing shareholders
  • AVID200 is a highly potent and isoform-selective TGF-beta inhibitor that neutralizes TGF-beta 1 and -beta 3 with picomolar potency; it is currently being evaluated in a P-I study in oncology and fibrosis

Click here to read the full press release/article | Ref: BMS | Image: Canvas

Vertex’s Kaftrio + Ivacaftor Receive the EC’s Approval to Treat Cystic Fibrosis in People Aged 12 Years and Older

Shots:

  • The EC has granted marketing authorisation to Kaftrio (ivacaftor/tezacaftor/elexacaftor) + ivacaftor (150mg) to treat people with CF aged ≥12yrs. with one F508del mutation and one minimal function mutation (F/MF), or two F508del mutations (F/F), in the CFTR gene
  • The MAA is based on two P-III studies: a 24wks. study in 403 people with one F508del mutation and one minimal function mutation (F/MF) and a 4wks. study in 107 people with two F508del mutations (F/F). The studies demonstrated improvements in lung function (1EP) and in all 2EPs, and the regimen was generally well-tolerated in both studies
  • Kaftrio is designed to increase the quantity and function of the F508del-CFTR protein at the cell surface

Click here to read the full press release/article | Ref: PRNewswire | Image: Bloomberg

Sole-Source, Off-Patent Drugs: Are prices rising out of control?

In the news, we often hear of pharma companies dramatically raising the price of off-patent but sole-sourced drugs. For instance, Martin Shkreli–formerly of Turing Pharmaceuticals–raised the price of Daraprim, an antiparasitic drug, from $13.50 to $750 per pill. Is this a common occurrence, or is the news media blowing it out of proportion?

A recent article by Alpern et al. (2020) in JAMA Network Open finds that large price increases are the exception rather than the rule. The authors use 2008-2018 wholesale acquisition cost (WAC) by drug from First Databank. Using these data, they find that:

Of the 300 drug products and 2242 observations analyzed, the overall inflation-adjusted mean increase in drug prices was 8.8% (95% CI, 7.8%-9.8%) per year. Ninety-five drugs (31.7%) increased by 25% or more during any calendar year, and 66 drugs (22.0%) increased by 50% or more during any calendar year.

An 8.8% price increase may sound a bit steep, but the figures are skewed by a few outliers. Further, a large share of the “big” price increases were for drugs whose WAC was <$2 per pill. If we look at this in absolute terms, we see that the absolute price increases are fairly modest. More than 80% of “large” price increases of 50% or more were actually increases of only $0-$19; the figures are similar when a “large” price increase is defined as a 25% or more price increase. While the mean price increase for drugs with more than a 50% price increase was $137, this figure is greatly skewed by a few outliers, namely Shkreli’s decision to increase Daraprim’s price by 5,300% in a single year. The median absolute price increase among sole-sourced, off-patent drugs with a 50% price increase was only $3.80…or only $2.40 if a “large” price increase is defined as a 25% or more price increase.
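
Since the mean-versus-median gap carries that whole argument, a toy illustration may help. The numbers below are invented (they are not from the paper); they simply show how one Daraprim-sized outlier drags the mean far above the median.

```python
# Mean vs. median absolute price increases: a single extreme outlier
# (hypothetical figures) inflates the mean while the median barely moves.
import statistics

increases = [2.40, 3.10, 3.80, 4.50, 6.00, 12.00, 19.00, 736.50]  # $ per unit, made up
print(f"mean:   ${statistics.mean(increases):,.2f}")    # ~$98, pulled up by the outlier
print(f"median: ${statistics.median(increases):,.2f}")  # ~$5, robust to it
```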

In short, while there have been a few egregious cases of very large price increases for off-patent, sole-sourced medications (e.g., Martin Shkreli), in the vast majority of cases price increases for these treatments have been fairly modest.

Source:

  • Alpern JD, Shahriar AA, Xi M, et al. Characteristics and Price Increases Among Sole-Source, Off-Patent Drugs in the United States, 2008 to 2018. JAMA Netw Open. 2020;3(8):e2013595. doi:10.1001/jamanetworkopen.2020.13595

The US FDA Approves Kyprolis (carfilzomib) + Darzalex (daratumumab) + Dexamethasone in Two Dosing Regimens for R/R Multiple Myeloma

Shots:

  • The US FDA has approved Janssen’s Darzalex + Amgen’s Kyprolis (carfilzomib) and dexamethasone (DKd) in two dosing regimens (70 mg/m2 qw and 56 mg/m2 q2w) for the treatment of adult patients with r/r MM who have received 1-3L therapies
  • The approval of the q2w dosing regimen is based on the P-III CANDOR study assessing DKd (q2w) vs Kd in 466 patients with r/r MM, which met its 1EP of PFS after a median follow-up of 16.9 & 16.3 mos., respectively
  • The inclusion of carfilzomib (qw) as an approved DKd regimen is based on the P-Ib EQUULEUS study assessing Darzalex in combination with multiple treatment regimens. Carfilzomib (qw) was evaluated with a starting dose of 20 mg/m2, which was increased to 70 mg/m2 on Cycle 1, Day 8 and onward

Source 1, Source 2 to read the full press release/article | Ref: Janssen, Amgen | Image: Revlimid

Regeneron Collaborates with Roche to Improve the Global Supply of REGN-COV2 Against COVID-19

Shots:

  • The two global companies will collaborate to develop, manufacture, and distribute REGN-COV2 across the globe. The agreement is expected to increase the supply of REGN-COV2 to at least 3.5 times the current capacity, with the potential for expanding it further
  • Regeneron will lead distribution in the US while Roche will be responsible for distribution outside the US, and both will bear distribution expenses in their designated territories. Each partner will dedicate a certain manufacturing capacity to REGN-COV2 every year, and the collaborators have started the technology transfer process
  • The partners will jointly fund and execute the ongoing P-III prevention and P-I healthy volunteer safety studies, as well as additional global studies to assess the potential of REGN-COV2 against COVID-19. Additionally, Roche will be solely responsible for obtaining the initial EMA approval and for conducting any additional studies required for approval outside the US

Click here to read the full press release/article | Ref: Roche | Image: StraitTimes

Pharma Whatsapp Group

When you join a Pharma Pathway group, you get 250+ vacancies sent directly to your phone.

Group Rules – Welcome to everyone at Pharma Pathway.
Some rules to be followed in this group:
1. People will be removed from the group without any notice if they send irrelevant posts such as good morning or good night messages, forwarded messages, religious pictures, wishes, politics, or cinema. Please co-operate with the group.
2. Avoid making fun of other people’s posts. Unwanted comments will be removed, along with the user, permanently.
3. Kindly do not post links to other groups in this group.
4. Help members of the group positively if they ever need your suggestion regarding any issue, but only if you have knowledge about it.

Nanobodies Against the Coronavirus: Something New

So let’s talk about nanobodies – there’s a coronavirus connection to this, but it’s a good topic in general for several reasons. We begin at the beginning: what the heck is a “nanobody”?

Antibody Structure

The name is derived, rather loosely, from “antibody”. So let’s spend a minute on what antibodies actually look like. What you see at right is the three-dimensional structure of a typical one – you have a ridiculous number of these things circulating in your blood right now, nearly all of them subtly different from each other. The color codes are the two “heavy chains”, in red and blue, and the two “light chains”, in green and yellow. It all adds up to about 150 kilodaltons, a bit on the chunky side as proteins go.

It’s a bit easier to picture this stuff in schematic form, so the next picture shows the general layout of these chains and domains. I’ve retained the same color scheme, but with some added information.

There are the heavy chains and the light chains, as in the protein structure picture, but you’ll notice that the ends of both of those are variable regions, while the rest stay as a constant platform. Those variable ends (the “Fab”, for “fragment antigen binding” regions) are actually the part that recognizes antigens, as you would figure. You never know what sort of antigen you’re going to encounter next, and thus the insanely large collection of different antibodies that all of us are walking around with, produced “on spec” in hopes that one or another of them will happen to recognize what turns up. The two heavy chains are almost always identical in any given antibody, as are the two light chains (the antibody structure is symmetrical). All of this is held together by disulfide bonds between Cys amino acids and some other polar interactions between the constant regions.

I’m leaving out a lot. In any discussion of immunology that runs to less than about 500 pages in 6-point type you’ll be leaving out a lot. For example, antibody proteins have various sugar molecules attached to their surface at key points, and those are really important to their function. What’s more, there are actually five distinct types of heavy chain and two types of light chain – they’re mostly about the same size (a couple of the heavy ones are noticeably heavier) and they all fit into this same arrangement, but you can distinguish them by their amino acid sequences. That “base” at the bottom with the two heavy chains is called the Fc (“fragment, crystallizable”) region, and it binds to various immune cells to regulate function. Meanwhile, the working details of the binding up in the variable Fab regions just get finer-grained the closer you look at them; there are whole careers of work up there. Different classes of antibodies can also have these individual “Y” structures arranged further into pairs, for example, or into cycles with five of them in each unit. And so on.

Camelids and Nanobodies

And with that, we shall now abruptly veer off into talking about camels, llamas, alpacas and their kin, because they have their own variety of antibody. No one knew that they had a different system going until 1989, when a student-run project at the Vrije Universiteit Brussel was trying to come up with a diagnostic test to check camels for trypanosome infection. They discovered that camel antibodies were. . .weird. Some of them were just like the ones above, but about 75% of the camel antibodies (and up to 50% in the New World species like llamas) have no light chains at all. They just have the variable parts of the heavy chain stuck directly onto the “base” constant region. Sharks and their relatives, as it turns out, have something similar going on with a different sort of base region, in what are clearly two different evolutionary events: at least 220 million years ago for the cartilaginous fish and 25 million years ago for the camelids. Both sets of animals seem to work just fine with their proprietary systems – before these discoveries, most immunologists would have said that such modifications would be likely to cripple the antibody response, but not so. That led to thoughts of clipping things down to just that heavy-chain variable chunk to see if those would recognize targets as well (i.e., having just one of those light-blue or light-red pieces in the above schematic by itself).

That they did, and “nanobodies” were born. Not only can they bind with high affinity to all sorts of antigen targets, but they do so via binding modes that have never been observed with real antibodies (which means that they might be able to recognize all those targets in new ways). They’re also much smaller than antibodies per se, which leads to some interesting properties. Nanobodies can have a wide range of stability and half-lives, which is tunable with some experimentation, and they often demonstrate much greater penetration into tissues and many other features. People have been investigating their properties and uses for over 25 years now. The Belgian researchers formed a company (Ablynx) in 2001 that has led the way, thanks to their solid patent positions in the area, but there have been so many twists and turns in the story that the first actual nanobody drugs have appeared just as their early patents have begun to expire (more on this timeline in that earlier link). With some irony in hindsight, some of those early investments in nanobodies were made in order to try to avoid the serious patent-licensing headaches with traditional antibodies.

Coronavirus Nanobodies

There is now a preprint describing a screen for such nanobodies binding to the coronavirus. The team (a large multicenter effort led out of UCSF) had prepared a yeast-displayed library of billions of potential heavy-chain fragments, 21 of which ended up showing strong binding to the coronavirus Spike protein. These fell into two classes: Class I bound directly to the receptor-binding domain (RBD) and competed with ACE2, the receptor on the surface of human cells. Class II, though, didn’t hit the RBD, but instead bound somewhere else and changed the conformation of the RBD so that it can’t recognize ACE2 when it’s available. When these nanobodies were put into an assay that measured the binding of fluorescent-labeled Spike protein to HEK293 cells expressing ACE2, the Class I species were active, but the Class II ones did nothing, weirdly.

Further work yielded cryo-EM structures for two of the best Class I candidates bound to the RBD, but they couldn’t get any such data for a Class II. That was worked out (partially) by another technique, where the complex was exposed to extremely reactive hydroxyl radicals (generated by synchrotron X-ray beams). You then look over the proteins to see what didn’t get eaten by the radicals. Those experiments showed a protected area on the Spike protein well away from the RBD, which is presumably where that particular Class II nanobody was binding.

The team went on to take one of the Class I candidates (designated Nb6) and make dimers and trimers of it, separated by inert Gly-Ser linking chains. The idea was that the Spike protein, which has a three-fold repeated architecture, could be inhibited even more strongly by binding to more than one RBD on its surface. And that proved to be the case: using surface plasmon resonance (SPR) assays, which let you follow on- and off-rates in detail, it became clear that the dimeric and trimeric forms of Nb6 occasionally bound through just one of their nanobody heads, in which case they could fall off again reasonably quickly, but also showed binding with all of their nanobody regions at once, in which case they came off much more slowly. The trimeric form showed subpicomolar affinity for the Spike protein in this assay, although the exact binding constant is so tight that it hasn’t even been quantified.
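
For those not steeped in SPR jargon: the equilibrium binding constant is simply the ratio of those two rates, KD = koff/kon, which is why a dramatically slower off-rate translates directly into tighter binding. Here is a quick sketch with made-up rate constants (not the paper’s actual measurements) just to show the effect of avidity.

```python
# K_D = k_off / k_on: why multivalent (avidity) binding reads out so much tighter.
# All rate constants below are hypothetical, for illustration only.

k_on = 1.0e6                              # association rate, M^-1 s^-1 (assumed)
off_rates = {
    "single nanobody head": 1.0e-3,       # dissociation rate, s^-1 (assumed)
    "all three heads engaged": 1.0e-7,    # far slower once the trimer locks on (assumed)
}
for label, k_off in off_rates.items():
    kd = k_off / k_on
    print(f"{label}: K_D = {kd:.0e} M")   # 1e-09 M = 1 nM; 1e-13 M = 0.1 pM
```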

These various forms were taken into a pseudovirus cell infection assay (that’s where you rig up a harmless virus to use the coronavirus’ infection machinery, Spike and all). Plain Nb6 had an IC50 of 2 micromolar, and another Class I nanobody (Nb11) was almost the same. The best Class II nanobody (Nb3) was 3.9 micromolar. But that trimeric form of Nb6 (Nb6-tri) was 1.2 nanomolar in the assay, a two-thousand-fold improvement. Trimer forms of Nb11 and Nb3 also improved, but not as much. In a test of Vero cell infection with real SARS-CoV-2 coronavirus (done at the Pasteur Institute in France), Nb6-tri prevented viral attack with an IC50 of 160 picomolar, which is truly impressive.

They didn’t stop there, though. A “saturation mutagenesis” experiment around the sequence of Nb6 was then tried, with new rounds of assays, and this yielded a mutant nanobody that was still more potent. You might wonder about trying to make things even better when you started out with billions of nanobody candidates in the first round, but a quick look at the math shows that a couple of billion nanobodies are just a speck compared to the total number of possibilities (around 110 amino acids, with 20 possibilities at each position!) This one was trimerized as before, and the new mNb6-tri, when put into the SPR assay, showed no off-rate at all within the limits of the experiment, putting its binding constant somewhere in the femtomolar range at worst. It comes in with IC50s of 120 picomolar in the pseudovirus assay and about 50 picomolar in the wild-type infection assay, but those are probably at the limit of detection for both. Basically, we don’t actually know how potent this nanobody construct is, because we don’t have assays good enough to read out a number (!)
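
About that quick look at the math: with roughly 110 variable positions and 20 amino acids possible at each, the sequence space is about 20^110, or around 10^143, so even a multi-billion-member library samples a vanishingly small fraction of it. The arithmetic:

```python
# How big is nanobody sequence space compared to a billion-member library?
from math import log10

positions = 110                           # approximate variable-domain length
total_log10 = positions * log10(20)       # log10 of 20**110
library_log10 = log10(2e9)                # a two-billion-member library

print(f"possible sequences: ~10^{total_log10:.0f}")                  # ~10^143
print(f"fraction screened:  ~10^{library_log10 - total_log10:.0f}")  # ~10^-134
```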

Potential Therapeutic?

OK, now things get interesting. The authors tested the stability of mNb6-tri, and found that it can be heated, lyophilized (freeze-dried, basically), and nebulized into an aerosol with no loss of potency. It’s a very stable species that can put up with all sorts of handling and processing. You could certainly inject this material, just as you’d administer monoclonal antibodies. But there are more possibilities. How about formulating it as a nasal spray? Or in a nebulizer, to be breathed into the lungs, or even sprayed out into the room air? How about impregnating filter material with this protein so it pulls coronavirus particles out of the air as they pass through it? The extreme stability of nanobody proteins gives all of these a real shot, and they’re under serious consideration for development. The team says that they’re in discussion with several commercial partners to take this technology into human trials (and presumably medical-device trials, for the filtration idea), and I think that’s an excellent idea. This has real public-health potential, from the looks of it, and could be just the backup that we may need for the existing vaccine programs if they come in less effective than we’d like (or are rolled out more slowly than we’d like!) I hope that the money and resources are rounded up quickly.

 

Automating Safety Case Processing: The Options

The promise of EMR/EHR data is receiving increased attention as clinical researchers and patients continue a decades-long mission to more effectively and rapidly enroll patients in clinical trials, bring new treatment options to market, and improve clinical trial cycle time. Despite vast quantities of available data, its dispersion among dissimilar systems and […]

The post Automating Safety Case Processing: The Options appeared first on PharmaVOICE.

That Kodak Deal

Many people have been wondering what’s going on with the announcement by the Trump administration that Kodak has been contracted to produce pharmaceutical APIs here in the US. Let’s line up some of the public statements about all this first, and then take a closer look. Here’s the press release from Kodak after signing a “Letter of Interest” for a $765 million loan for the deal. They state that:

Once fully operational, Kodak Pharmaceuticals will have the capacity at Eastman Business Park to produce up to 25 percent of active pharmaceutical ingredients used in non-biologic, non-antibacterial, generic pharmaceuticals. . .

The government’s side of this is to be found here, at the US International Development Finance Corporation. Now, you may not have heard of the DFC before, but we’ll get to that in just a bit. Their release says that:

The project would mark the first use of new authority delegated by President Trump’s recent executive order that enables DFC and the U.S. Department of Defense (DOD) to collaborate in support of the domestic response to COVID-19 under the Defense Production Act (DPA). . .Today, Kodak is expanding its traditional product line to support the national response to COVID-19 by bolstering domestic production and supply chains of key strategic resources. . .DFC’s loan will accelerate Kodak’s time to market by supporting startup costs needed to repurpose and expand the company’s existing facilities in Rochester, New York and St. Paul, Minnesota, including by incorporating continuous manufacturing and advanced technology capabilities.

Kodak’s current CEO Jim Continenza was joined at the ceremony by DFC head Adam Boehler, White House Trade Director Peter Navarro, Rear Admiral John Polowczyk (of the White House Supply Chain Task Force) and Deputy Secretary of Defense David Norquist. Said Continenza:

“By leveraging our vast infrastructure, deep expertise in chemicals manufacturing, and heritage of innovation and quality, Kodak will play a critical role in the return of a reliable American pharmaceutical supply chain.”

Kodak’s Chemical History

OK, that lays the foundation, I’d say. What happens when you dig beneath the speeches and the press releases? Kodak’s history with film (and for a time digital) photography is well known. But as for chemicals, George Eastman himself got into the business in the 1920s so he could source his own materials. My maternal grandfather, as it happens, moved to the rebuilt town of Kingsport, Tennessee to be one of his employees. The company started to sell to outside customers, and Tennessee Eastman became a major producer in the fine chemicals business (making, for example, an awful lot of RDX explosive during World War II on government contracts). The Eastman chemicals business, though, was spun off in 1994 because it was seen as a low-margin business compared to film; it is now (by revenue) about ten times bigger than Kodak. When people think of Kodak and fine chemicals, more likely than not they’re actually thinking of Eastman.

And there was a pharma component at one time. Kodak bought Sterling-Winthrop pharmaceuticals in 1988 for $5.1 billion, in a deal that can only be regarded as disastrous. Six years later, they sold the prescription drug business to Sanofi for $1.675 billion and the OTC business to SmithKline Beecham for $2.925 billion. You will immediately notice that half a billion dollars evaporated along the way, and that’s not counting the losses Kodak sustained during the intervening six years. No, it is safe to say that Kodak does not have a glorious pharmaceutical history. Ex-Sterling people were scattered all over the drug industry after this debacle, and the ones I’ve known generally seemed to believe that you could not have hired a gang of saboteurs to do a more thorough job of destruction than what Kodak accomplished.

But that Kodak is not the one we see before us today. They’re not an R&D company any more, for the most part. The company’s stock lost over 99% of its value from 1997 to its 2012 bankruptcy, and they emerged a smaller firm in every way. This 2011 article comparing the fortunes of Kodak and Eastman in the years after the split mentions that Kodak’s research spending that year was all the way down to $321 million – well, it was down to $42 million in 2019. President Trump’s remarks that he had reached “a historic agreement with a great American company” just serve to show how out of touch he is – Kodak is a long way from being even in the Fortune 1000. I mean, this is the outfit that announced in 2018 that they were now a big cryptocurrency player, complete with a Kodak-logoed “KashMiner” Bitcoin-mining device, a harebrained scheme that the SEC put the brakes on rather quickly.

Currently, it’s hard to tell how much of Kodak’s business comes from fine chemical manufacturing itself, although it isn’t much. Their most recent 10-K form breaks it down only as far as “Film and Chemicals” (see Note 15 at that link). That segment comes in at about 13% of revenues, but it also includes industrial film for making printed circuit boards and the good ol’ professional and consumer film businesses. Apparently one customer of the latter accounts for 20% of the revenues of this whole category, so that revenue isn’t coming from chemical sales. And after listing those, the report says that this category “Includes related component businesses: Polyester Film; Solvent Recovery; and Specialty Chemicals”, so chemical production per se is basically the last thing on the list, for what that’s worth. The terms “pharmaceutical”, “drug” and “API” appear nowhere in the filing, nor in their 10-Q for the first quarter of this year. I conclude that the company has basically no business in making pharmaceutical APIs at present. They don’t appear to have any revenues from cryptocurrency either, in case you were wondering.

They do, though, have specialty chemical manufacturing in Rochester, and well-known chem-blogosphere guy Chemjobber has mentioned their presence at trade shows and his knowledge of their business. The people who jumped on the idea of Kodak as a pharma manufacturer because “they’re a camera company” are wrong – as mentioned, it’s not a huge part of their business, but they do have capacity and they are actively engaged in fine chemical manufacturing. There are other reasons – plenty of them – to wonder about this deal, but the basic Kodak-making-chemicals part is not the place to start. Making pharmaceutical ingredients is another sort of business, of course, with a very different regulatory environment, but Kodak can indeed make chemicals. That said, the mention of a manufacturing facility in St. Paul is interesting, since no such operation is listed in the company’s most recent annual report (see Item 2, Properties). The Kodak Polychrome Graphics business has a footprint in Minnesota, but I can’t find anything about chemical manufacturing there. Or not yet?

How Many APIs? And How Much?

So let’s ask a broad question: how many APIs come from foreign suppliers, and in what volume? It’s actually a very difficult question to answer, because there are a lot of suppliers out there, many of whom are (at least at times) middlemen for yet other manufacturers. We saw this in action during the sartan contamination story, when it turned out that material from a single supplier could show up in various places. The companies involved know who they’re buying from, but that could (and does) change as market conditions change, and they don’t have to tell you about it, either. Pharmaceutical supply chains can be rather convoluted, with one ingredient in a given pill being made partly over here and finished off over there, combined with other ingredients from totally different sources, with the whole thing being mixed up and turned into tablets in yet another location. Here’s a good C&E News article on the foreign and domestic API situation, and it features Janet Woodcock basically throwing her hands up in the air:

“We cannot determine with any precision the volume of API that China is actually producing, or the volume of APIs manufactured in China that is entering the U.S. market, either directly or indirectly by incorporation into finished dosages manufactured in China or other parts of the world”

That’s about the size of it. Problem is, when you talk about “25% of the ingredients”, do you mean by number of total APIs? By volume? By revenue, even? It’s just not clear, and that makes a fuzzier situation even fuzzier. I was hoping to get some back-of-the-envelope estimates going, but it’s not possible at present.
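To see how much the choice of denominator matters, here’s a toy computation – every number in it is invented, purely to illustrate the ambiguity:

```python
# Toy illustration (all numbers invented): the "domestic share" of API
# production looks completely different depending on whether you count
# distinct APIs, tons produced, or dollars of revenue.

apis = [
    # (name, domestic?, volume in metric tons, revenue in $M)
    ("api_a", True,    5, 200),   # low-volume, high-value specialty API
    ("api_b", False, 900,  50),   # high-volume, cheap commodity generic
    ("api_c", False,  80,  40),
    ("api_d", True,   15,  10),
]

def domestic_share(metric) -> float:
    total = sum(metric(a) for a in apis)
    domestic = sum(metric(a) for a in apis if a[1])
    return domestic / total

print(f"by count:   {domestic_share(lambda a: 1):.0%}")    # 50%
print(f"by volume:  {domestic_share(lambda a: a[2]):.0%}") # 2%
print(f"by revenue: {domestic_share(lambda a: a[3]):.0%}") # 70%
```

Same four hypothetical compounds, and the “domestic share” swings from 2% to 70% depending on what you count. Any “25%” claim needs a denominator attached before it means anything.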

But let me highlight another factor: a finished API is going to be made from something else, of course. Where do you get that something else, and how far back in the synthesis will you be going? Anyone who knows industrial chemistry will tell you that you’re going to bang into issues of Chinese and Indian supply, especially the former. Let’s take hydroxychloroquine, which Kodak has mentioned specifically, as an illustrative example. I continue to believe that it’s basically useless for coronavirus patients, but it is definitely a needed generic drug (I know someone who just started taking it for lupus, for example). How do you get hydroxychloroquine?

Well, you get it from 4,7-dichloroquinoline; the last synthetic step is an amine displacement. Where do you get 4,7-dichloroquinoline? It’s in a lot of catalogs, but remember, almost all of those are people who are selling you material that they bought from somewhere else – which, as far as I can tell, is somewhere that is not in the United States. There are a lot of places to source chemicals – here’s one web site that will give you the idea. If you click “Manufacturer” and “ton” to narrow things down, you will still see three 4,7-dichloroquinoline sources listed as “United States”: Crescent Chemical, Ivy Fine Chemicals, and Kingchem. But from what I can see, Crescent (out on Long Island) is a distributor for other producers – I can’t see any sign that they have domestic production facilities that will make you tons of dichloroquinoline. The only listing they have for it on their website is in 25-gram bottles from Millipore-Sigma. Ivy (in Cherry Hill, NJ) does list dichloroquinoline, but they also look like they are going to contract that order out to “one of their well-established partners all around the world”, as they say. And while Kingchem has their own facility for that sort of thing, it’s in Liaoning. So you can technically produce hydroxychloroquine right here in the good ol’ USA, but unless you reach further back you’re going to be producing it from starting materials that you buy overseas, and very likely from China.

There are indeed manufacturers of APIs here in the US, don’t get me wrong. In fact, here is the Bulk Pharmaceuticals Task Force, an industry group of API manufacturers that looks into supply chain issues like this. Kodak is not a member, presumably because they don’t really have any API business, as mentioned. So why, if you are talking about giving out huge loans to encourage such manufacturing, would you not turn to companies that are already doing it?

The Kodak Loans

That takes us back to the subject of the Kodak deal itself. There are a number of people who have been digging into this, for example, this article at the stock-analysis site Epsilon Theory (h/t “Diogenes” on Twitter, nom de guerre of a well-known short seller). This is when such bears come in handy – they have an incentive to look at the bad news, and I’ve always thought that is a useful bit of seasoning to have in the broader world of stock promotion.

One thing that the Epsilon Theory folks draw attention to is the source of this loan money. The DFC is supposed to be in the business of loaning money out for projects in lower- and middle-income countries, not doling out the cash here in the US. But on May 14, the president signed an executive order mandating that the DFC look into domestic supply chain issues relating to the coronavirus epidemic, so here we are. As with the recent stimulus money from the Treasury, the worry here is that this has the potential to be used as a piggy bank to reward friendly businesses and donors – and before any fans of the current administration jump on that, let me note that this is always the case, with any government agency under any administration, when it starts to hand out money to private ventures. The books should be open, because the political and financial temptations are present every time.

How about this time? Well, it’s for sure that the management at Kodak did extremely well off of this deal. The press has already noted the large stock and option awards granted just in the last month or so, and you can see those grants for CEO Jim Continenza, VP Randy Vandagriff, General Counsel Roger Byrd, and CFO David Bullwinkle via their recent SEC filings. This financial blogger believes that Kodak was already signaling by these moves that something big was coming, and interviews with the CEO and others seem to fit with that timeline. To say the least, awarding your corporate insiders big whacking stock and option grants in advance of a hugely favorable government contract is a bit. . .off. As that post notes, the timing of these grants was quite different from Kodak’s usual awards, almost as if they were rushing to meet some sort of deadline. The grants were option-heavy (not the company’s past practice) and had very aggressive strike prices, well above Kodak’s normal trading in the $2 range.

Now, as for that trading, many noticed that the day before this announcement, Kodak’s stock suddenly jumped up on much higher volume. This doesn’t appear to be illegal trading, though, or at least it doesn’t have to be explained that way. Local station WROC had a story Monday about an imminent deal, which was pulled, but not before many people saw it. No, I think the stock activity that’s worth watching in Kodak is (sadly) entirely legal, and it has to do with all those grants from a friendly board of directors. All we know about the political side is Kodak’s CEO saying “We got connected to the White House and we said we’re trying to bring pharmaceuticals back”.

We also know that Kodak’s largest shareholder (by far) is Southeastern Management, who must be quite relieved that their position is no longer with a company that has a fresh “going concern” warning, but instead has had a gigantic stock leap thanks to a lucrative government transaction. It is no doubt a weird coincidence that a former manager of Southeastern Management (Ted Suhl) recently had his sentence for bribery and fraud commuted by President Trump (error-filled announcement of this decision annotated here). Mike Huckabee led the push to get that done, which is only fitting since Suhl used to fly then-Governor Huckabee around on a Southeastern Management private plane back in the day. Friends do favors for each other. And it may also be a coincidence that a current principal at Southeastern is involved in investments with Jared Kushner’s family – a lot of wealthy people know each other.

Let’s keep an eye on this deal, then. Chemically, financially, and politically. I don’t think, frankly, that it’s some weird or special case – a lot of stuff like this happens, all the time. But it’s one that impinges on the pharma industry, and it’s a subject that many of us know about. So we’ll see. . .

 

Oxford Biomedica Signs Three Year Clinical Supply Agreement with Axovant to Manufacture and Supply AXO-Lenti-PD for Parkinson’s Disease

Shots:

  • Oxford Biomedica will manufacture GMP batches for Axovant to support the ongoing and future clinical development of AXO-Lenti-PD (formerly OXB-102) to treat mod. to sev. PD based on Oxford’s LentiVector platform
  • Axovant is conducting the P-II SUNRISE-PD trial with AXO-Lenti-PD; dosing of all patients in the second cohort is complete, with 6 mos. safety and efficacy data expected in Q4’20. The CSA follows the WW license agreement signed b/w the companies in Jun’2018 for AXO-Lenti-PD
  • Oxford expects to manufacture AXO-Lenti-PD in its commercial-scale GMP manufacturing facilities including Oxbox in the UK, and in other facilities, as required to ensure the security of supply

Click here to read the full press release/article | Ref: Oxford Biomedica | Image: Oxford Biomedica




Complete Guide on Buying Medicines from Online Pharmacies

Thanks to online pharmacies, treating complex health issues has become easier: advanced medicines can be obtained from almost any corner of the world, and the internet makes doorstep delivery of medicines possible.

Buying drugs online safely is still a major concern – after all, medicines are supposed to heal, not cause new problems.

We’re here with a brief guide on all you need to know about buying medicines online. Check it out!

Why should you buy drugs online?

At a time when everything is being delivered to your doorstep – groceries, home appliances, and an ever-growing list of outfits – why not medicines too?

When you buy drugs online, the goal should be genuine websites that save you money and don’t waste your time. Here are a few reasons to buy medicines online.

Convenience

Many people prefer to shop for medicines from online pharmacies for one reason or another. For people with busy schedules, the elderly, or persons with disabilities, even getting to a local store to buy medicines can be difficult – an online pharmacy removes that trip entirely.

Round-the-clock Availability

Your nearest local store may not be open 24/7, but an online medicine order can be placed anytime and from anywhere. Most online pharmacies try their best to deliver medicines as soon as possible so that treatment is not delayed.

Standardized & Generic Alternatives

When you buy medicines online, you get to buy standardized medicines. If the prescribed medicine is not available, you can explore numerous generic alternatives too.

Product details

Unlike local medicine stores, online pharmacies display complete product details. You learn what the medicine is for, its manufacturing and expiry dates, and the precautions to take while handling it.

Easy Payments

Multiple payment options are also available. You can either pay by cash on delivery or make secure payments online via credit/debit cards, as per your preference.

Wide range of medicines shipped worldwide

Some established online medicine stores facilitate the worldwide shipping of medicines.

This simply means that you can order a medicine from any corner of the world if it’s not available in your country. Amazing, isn’t it?

How to Buy Medicines safely From an Online Pharmacy?

Buying medicines online safely isn’t a difficult task; you just need to be cautious & attentive. The website you buy medicines from makes all the difference. Here is what we want you to consider…

5 Signs that you’re Buying Medicine Online from Trusted Pharmacy

Millions of people now choose to buy medicines online. With the advancement of technology, new doors have opened for patients to obtain safe, effective & affordable drugs easily.

However convincing it may seem to buy drugs online, you should make sure you’re buying from an authentic online pharmacy.

The National Association of Boards of Pharmacy publishes a list of recommended online drugstores for the US & Canadian provinces, but chances are you won’t find time to check it.

Meanwhile, here are some quick signs that can save you time & protect you from transacting with fake online drugstores.

  • The website asks for a prescription

No pill, tablet, or vaccine should be taken without a doctor’s prescription. The same rule applies when you buy from an online pharmacy: you should be asked for a prescription first.

  • The drugstore is a licensed pharmacy

While buying medicines online, always check the license of the respective pharmacy. A licensed store adheres to standardized medicine norms, so you can buy drugs without fear.

  • Website is HTTPS – Secured

When you place an online medicine order, you’ll have to make payments, so the pharmacy’s website needs to be secure. Look for the HTTPS padlock in the website’s URL; if it’s there, you have the green light to continue.

  • Physical addresses are mentioned

Having the store address mentioned on the site helps users contact or reach the store with queries. Don’t forget to check for the physical address, contact number & support email.

  • Company Profile is represented

How would you know whether a pharmaceutical store that looks so grand online exists in reality or not? Simple! Look for the company profile. Check every essential detail, from location photographs to the year of establishment & more. (The sketch just below shows how these checks might be encoded.)
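For the technically curious, the five signs above are easy to encode as a quick programmatic screen. This is just an illustrative sketch – the field names are hypothetical, and it is no substitute for checking the NABP’s verified-pharmacy lists:

```python
# Hypothetical checklist encoding the five signs above. A quick screen
# only -- not a substitute for the NABP's verified-pharmacy lists.

from urllib.parse import urlparse

def looks_trustworthy(site: dict) -> bool:
    """site is a hypothetical record describing an online pharmacy."""
    checks = [
        site.get("requires_prescription", False),         # sign 1: asks for a prescription
        site.get("license_number") is not None,           # sign 2: licensed pharmacy
        urlparse(site.get("url", "")).scheme == "https",  # sign 3: HTTPS-secured
        bool(site.get("physical_address")),               # sign 4: physical address listed
        bool(site.get("company_profile")),                # sign 5: company profile shown
    ]
    return all(checks)

example = {
    "url": "https://example-pharmacy.com",   # placeholder domain
    "requires_prescription": True,
    "license_number": "PH-0000",             # hypothetical license number
    "physical_address": "123 Example St",
    "company_profile": "Established 2010",
}
print(looks_trustworthy(example))  # True
```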

Now that you’ve learned the basics of genuine online pharmacies, let us make your online medicine order a foolproof experience.

Important Guidelines for Purchasing Medicines Online

Keep the following things in mind while purchasing medicines online:

● Consult the doctor & get a prescription first.

● Ensure that the website asks for a prescription as you add medicines to the cart.

● Check the website thoroughly before buying any particular medicine (company’s profile, license details & more).

● Purchase from authoritative medicine & drug distributors in the USA or Canada only.

● Never buy medicines based on self-diagnosis.

● Buy medicines from licensed pharmacies only.

● Make sure that the website is secure when you make payments.

Browse Complete Range of Drugs available at Actiza Pharmacy

  1. Cardiovascular Drugs
  2. Antibiotics Medicine
  3. Anticancer Injection
  4. Anticancer Drugs
  5. Anti HIV
  6. Fluid therapy LVP
  7. Steroid Products
  8. Skincare & Dermatology
  9. Skin Ointments
  10. CNS Medicines
  11. Digestive Tract Medicines
  12. Analgesic & anti-inflammatory Drugs
  13. Eye & Ear Drops
  14. Nephrology Medicines
  15. Anti-diabetic Drugs
  16. Anti-malarial Drugs
  17. Life-saving Drugs
  18. Respiratory Medicines
  19. Pharmaceutical Medicines
  20. Vaccines

How to Buy Medicines Online?

If you want to buy medicines online with Actiza Pharmacy, we will be pleased to serve you.

Check out the wide range of medicines available at our online medicine store, add the desired prescription medicines to your cart, and we’ll deliver them as fast as possible.

Your safety is our concern. Rest assured that any medicine bought from us will reach you in hygienic condition.

That’s all folks!

Stay tuned for more informative reads like this. You’ll never be disappointed!

The post Complete Guide on Buying Medicines from Online Pharmacies appeared first on Actiza Pharmaceutical.

Umedica Laboratories -Walk-In Interviews for Freshers & Experienced On 27th to 29th July’ 2020

Umedica Laboratories – Walk-In Interviews for Freshers & Experienced – Manufacturing / Packing / Injection / QA Departments

Time & Venue Details: Walk-In on 27th to 29th July 2020 at Umedica Laboratories, Plot No. 221, Phase II GIDC, Nr. Morarji Circle, Vapi.

Job Description

• Department: Tablet Mfg
Required Educational Qualification: B.Pharm / M.Pharm
Required Years of Experience: 1-3
Section: Granulation / Compression / Coating
Required Skill Set: Sound experience in handling and troubleshooting in Granulation, Compression, Coating, documentation, QMS, and audit compliance.

• Department: Tablet Packing
Required Educational Qualification: B.Pharm / M.Pharm / M.Sc (Chemistry)
Required Years of Experience: Freshers only
Section: Primary / Secondary Area
Required Skill Set: Sound theoretical background.

• Department: Injection
Required Educational Qualification: B.Pharm / M.Pharm
Required Years of Experience: 1-3 years
Section: Washing / Filling / Packing / Mfg
Required Skill Set: Sound experience in handling and troubleshooting in Washing / Filling / Packing / Mfg; good command of QMS and audit compliance.

• Department: QA
Required Educational Qualification: B.Pharm / M.Pharm
Required Years of Experience: 1-5 years
Section: IPQA Injection / QMS / Documents
Required Skill Set: Sound experience in handling and troubleshooting in IPQA Injection / QMS sections.

Eligibility: Candidates with pharma formulation experience (preferably in a USFDA-approved unit) are eligible for the interview.

Interested candidates are requested to bring an updated resume, passport-size photograph, degree certificate (original & photocopy), salary slip, and last increment letter or appointment letter (original & photocopy). Those unable to attend the interview on the scheduled date can send their updated resume to [email protected]. HR Contact No.: +91 9712649481 (Website: www.umedicalabs.com)

Caption Health Nabs $53M to Commercialize FDA-Cleared AI-Guided Ultrasound Technology

What You Should Know:

– Caption Health raises a $53 million Series B round to fuel commercialization and expansion of its FDA-cleared AI ultrasound technology. This comes on the heels of receiving FDA clearance, with designation as a breakthrough technology to fight COVID-19.

– As ultrasounds, especially cardiac ultrasounds, have become increasingly important during COVID, when resources are stretched, using AI to help more healthcare professionals take high-quality images is extremely important, and Caption’s AI-guided software helps democratize who can perform them.

– It’s the first and only technology to allow clinicians without specialized ultrasound training to capture images at the point of care. That helps reduce the risk of exposure to the virus for hospital personnel and the strain on limited resources in hospital emergency rooms and ICUs caring for patients with COVID-19.


Caption Health, a Brisbane, CA-based medical artificial intelligence (AI) company, closed its Series B funding round with $53 million to further develop and commercialize its FDA-cleared, AI-guided ultrasound technology that expands patient access to high-quality and essential care. The financing was led by existing investor DCVC. New investors Atlantic Bridge and cardiovascular medical device leader Edwards Lifesciences also participated, along with existing investor Khosla Ventures.

Emulating Expertise with AI

Caption Health was founded in 2013 on a simple but powerful concept: what if we could use technology to emulate the expertise of highly trained medical experts and put that ability into the hands of every care provider? Caption Health delivers AI systems that empower healthcare providers with new capabilities to acquire and interpret ultrasound exams.

FDA-Cleared AI-Guided Ultrasound Technology

Ultrasound is typically used to diagnose cardiac function. Though access to ultrasound has increased as systems have gotten smaller and more portable, a fundamental challenge remains: performing an ultrasound exam is extremely difficult and requires years of specialized training that only a subset of clinicians have. Caption AI, the first and only AI-guided medical imaging acquisition system, allows healthcare providers without lengthy specialized training to perform ultrasound and obtain diagnostic-quality images, which can help support clinical decision-making and deliver valuable cost and time savings for medical institutions.

Caption AI Key Features

The Caption AI platform, which includes Caption Guidance™ and Caption Interpretation™, makes it radically easier to perform ultrasound and obtain diagnostic-quality images by providing:

– Expert Guidance – Caption Guidance provides more than 90 types of real-time feedback and instructions to emulate the guidance of an expert sonographer;

– Automated Quality Assessment – Caption Guidance helps standardize diagnostic-quality exams by accurately assessing, automatically recording, and seamlessly providing real-time feedback on diagnostic image quality;

– Intelligent Interpretation – Caption Interpretation produces an automated ejection fraction calculation (the most widely used measurement to assess cardiac function) from single or multiple cardiac ultrasound views commonly acquired at the point of care (AP4, AP2, PLAX).

Caption AI software is currently available fully integrated with the Terason uSmart 3200T Plus portable ultrasound system, which offers a full range of clinical applications including lung, vascular, and abdominal scanning.

Caption AI is not intended to replace sonographers – instead, it’s pushing ultrasound into new settings and generating referrals to technicians when something shows up in the doctor’s office that needs further examination. In fact, the company is advising the cardiac sonographers’ professional society on developing educational materials for its 17,000 member physicians, sonographers, nurses, veterinarians, and scientists.

Recent FDA Clearance to Fight COVID-19

Caption Health accelerated its plan to bring Caption AI to market in late summer after demand from clinicians caring for patients with COVID-19. Because it allows clinicians without specialized ultrasound training to capture images at the point of care, Caption AI helps reduce both the risk of exposure to the virus for hospital personnel and the strain on limited resources in hospital emergency rooms and ICUs caring for patients with COVID-19. After receiving urgent requests from clinicians at leading hospitals across the country, the U.S. Food and Drug Administration granted Caption Health expedited clearance for Caption AI, which is now commercially available and in use at eleven leading medical centers in the U.S., including Northwestern Medicine, Allina Health, and the Minneapolis Heart Institute.

As medical providers restart services which were stopped at the height of the pandemic, Caption AI is being used to address the imaging needs of a wave of chronic, elderly and comorbid patients returning for necessary care. By expanding the capacity to perform an ultrasound, this safe and effective diagnostic tool is poised to become an essential part of care beyond the hospital in settings such as retail clinics or even in the home. 

“We are truly grateful to our investors and to our early adopter clinicians, who have believed in us from the beginning,” said Charles Cadieu, CEO of Caption Health. “This capital will enable us to scale our collaborations with leading research institutions, regional health systems and other providers by making ultrasound available where and when it is needed—across departments, inside and outside the hospital.  As the world’s first and only AI-guided ultrasound technology, our goal is to enable all clinicians — regardless of prior experience—to capture diagnostic-quality ultrasounds. In doing so, we aim to have a profound impact on the quality and cost of care for millions of patients around the globe—wherever they access care.” 

Commercialization Efforts

Caption Health will use this funding to scale up its commercial operations, continue to develop its AI technology platform, and form new partnerships. As more providers adopt the Caption AI platform, the company plans to add new clinical capabilities to expand the use of Caption AI in additional care settings.

UnitedHealth Group Launches New Digital Health Therapy for Type 2 Diabetes

What You Should Know:

– UnitedHealth Group launches a new digital health therapy pilot for type 2 diabetes, available at no additional cost to more than 230,000 employer-sponsored, fully insured UnitedHealthcare members in 27 states and Washington, D.C.

– The therapy, known as Level2, helps participants gain real-time insights about their condition and, for some, successfully reduces spikes in blood sugar levels or achieves type 2 diabetes remission.

– The pilot offers a combination of real-time glucose monitoring, lifestyle changes, and one-on-one coaching to help people stabilize blood sugar levels and flag potential COVID-19 infections.


UnitedHealth Group today announced the launch of an innovative new therapy that combines wearable technology and customized personal support to help improve the health of people living with type 2 diabetes. The therapy – known as Level2 – helps eligible participants gain real-time insights about their condition and, for some, successfully reduce spikes in blood sugar levels or even achieve type 2 diabetes remission.

Level2 Overview

Level2 equips eligible participants with integrated tools that include a mobile continuous glucose monitor (CGM), activity tracker, app-based alerts, and one-on-one clinical coaching to help encourage healthier lifestyle decisions, such as food choices, exercise, and sleep patterns. In the future, UnitedHealth Group may offer the Level2 model to support people with other chronic conditions beyond type 2 diabetes.

Through Level2, the combination of wearable technology, clinical coaching, lifestyle changes, and incentives — all offered at no additional cost to eligible UnitedHealthcare members who enroll — is designed to help empower people with type 2 diabetes to become healthier and potentially achieve remission. This is accomplished by helping participants use the latest scientifically proven techniques and personalized support to understand and more effectively stabilize blood sugar levels.

Importance of Maintaining Appropriate Blood Sugar Levels

Maintaining appropriate blood sugar levels, as measured by hemoglobin A1C, is a key focus for people with diabetes. This is especially critical now, as type 2 diabetes is a significant risk factor for those infected with COVID-19, and the therapy has demonstrated an ability to improve the health of those living with the disorder.

Initial Pilot Results

In a pilot study of more than 790 UnitedHealthcare members, certain Level2 participants achieved a clinically meaningful reduction in their A1C within 90 days. Participants who began this therapy with the most significantly elevated A1C (above 8.0%) saw the greatest reduction (more than 1 percentage point decrease on average). By helping participants better control blood sugar levels and, in certain cases, achieve type 2 diabetes remission, some Level2 participants no longer require medication for their condition. To date, Level2 has helped participants improve their health to the degree they eliminated the need for more than 450 prescriptions.

Initial studies show that sudden changes in blood sugar levels among people with type 2 diabetes may indicate potential COVID-19 infections. By closely monitoring this metric among participants and supporting people at greater risk for COVID-19 complications, Level2 has demonstrated the ability to flag potential COVID-19 infections and help encourage earlier access to needed medical care. Type 2 diabetes may be a risk factor associated with worse outcomes related to COVID-19; however, people with type 2 diabetes whose blood sugar is stable may experience fewer medical complications and a greater likelihood of recovery from COVID-19.

Participant Story

Jon Alger, 63, a Florida-based participant and software consultant who tested positive for the coronavirus in March, said Level2 was instrumental in helping him recover from COVID-19. “By helping me to better understand and monitor my glucose levels, Level2 encouraged me to eat at the appropriate times and helped me make better choices.”

Availability/Cost

More than 230,000 UnitedHealthcare employer-sponsored, fully insured health plan participants in 27 states and Washington D.C. are now eligible for Level2 at no additional cost. Participants enrolled in some employer-sponsored health plans may earn financial incentives, including cash or gift cards, by taking actions such as consistently wearing a CGM; completing post-meal walks; interacting with an assigned coach, and following personalized recommendations. Available incentives may vary by state and health plan type/design; eligible incentives offered through Level2 may be in addition to other well-being rewards available through the participants’ health benefits.

Future Expansion Plans

This multifaceted therapy is being expanded on a pilot basis to select plan participants enrolled in eligible employer-sponsored, fully insured health plans in the following markets: Arizona, Arkansas, Colorado, Connecticut, Florida, Illinois, Indiana, Iowa, Kansas, Kentucky, Louisiana, Michigan, Mississippi, Missouri, Nebraska, North Carolina, Ohio, Oklahoma, Oregon, Pennsylvania, Rhode Island, South Carolina, Tennessee, Texas, Virginia, Washington, D.C., West Virginia, and Wisconsin. Level2 will be made available to select employers with self-funded plans later this year.

How to Save Money on Payment Processing

Are credit card processing fees cutting into your profit margin? Here are some tips to help your business save money.

Every time a customer pays for services or products with a credit card, the merchant has to pay a card processing fee. While the fee can often be negligible on a single transaction, the cost can add up with many purchases. According to the payment industry newsletter The Nilson Report, the weighted average processing fees for American Express, Mastercard, Visa, and Discover credit cards were between 2.09 and 2.33 percent in 2017.

When looking for a new payment processor, it’s important not only to find one with a low transaction fee, but also one that lets your business avoid paying the transaction fee itself. Some payment processors now offer the option to split or pass on the transaction fee to your customer, saving more money for your business.

Breaking Down Credit Card Processing Fees

Several different costs make up credit card processing fees:

Interchange fee: This is usually the highest cost associated with processing, collected by credit card issuers. The interchange fee is a percentage of the transaction, combined with an additional fixed amount. The fee can vary based on the card network, type (including business or rewards credit cards), payment processing method (card swiping or manual entry), and business type.

Service or assessment fee: This fee is paid straight to the credit card network (for example, Visa or Mastercard). This fee is smaller, and rates are usually lower for debit cards than credit cards. This can also sometimes include other fees, like foreign transaction fees.

Payment processor’s markup: The credit card processor also profits from payment processing by charging a small fee.
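To see how the three components add up on a single sale, here is a small illustrative calculator. The rates are placeholders only; real interchange tables vary by card network, card type, and entry method:

```python
# Illustrative sketch of how the three fee components combine on one
# card transaction. All rates below are assumed placeholders.

def processing_cost(amount: float,
                    interchange_pct: float = 0.0180,   # assumed 1.80% interchange
                    interchange_fixed: float = 0.10,   # plus an assumed fixed $0.10
                    assessment_pct: float = 0.0013,    # assumed network assessment
                    processor_markup: float = 0.25) -> float:
    """Total fee the merchant pays on a single transaction."""
    return (amount * (interchange_pct + assessment_pct)
            + interchange_fixed + processor_markup)

sale = 50.00
fee = processing_cost(sale)
print(f"${sale:.2f} sale -> ${fee:.2f} in fees ({fee / sale:.2%})")
```

Under these assumed rates, a $50 sale costs about $1.32 in fees, or roughly 2.6 percent – right in the range quoted above.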

How can you save money on payment processing for your business?

Shop around for the best processor

It’s always wise to do some comparison shopping before you commit to any significant purchase. Do your research and see which processors offer the best deal for your business. Before choosing a card processor, ask a lot of questions: What is the total rate with all fees included? Do you charge fees for cancellation or applications? While some processors may offer a lower rate, they might actually charge more in hidden fees.

Set minimum credit card sales

Especially for smaller businesses with small transactions, you can save money by setting a minimum for credit card sales. With low-price purchases, you can end up spending more than you can afford on processing fees. You can enforce your policy by posting a sign by the register, so customers know about the minimum before making a purchase.

Split or pass on the transaction fee to your customers

While many businesses claim to have the lowest transaction fees, not many allow the business to pass on the transaction fee. A company like Denefits will allow you to split or pass on the transaction fee to your customer, saving your business these additional costs. There is no signup fee or equipment required to use Denefits. Transactions happen through their mobile apps and web interface and, most importantly, can be completed remotely.

The post How to Save Money on Payment Processing appeared first on Denefits.

How Can I Import Pharmaceutical Products From India?

Are you a complete beginner in the field of pharmaceutical imports? Do you want to know the procedures for importing pharmaceuticals from India?

In this article, you will get a basic idea of importing pharmaceutical products. You will learn about import procedures and customs clearance procedures. You will also get information about all the other formalities you need to fulfill in order to import pharmaceutical products from foreign countries, especially India.

Most of the procedures and formalities are the same in all countries following the globalization of trade under the General Agreement on Tariffs and Trade (GATT). However, some minor variations in rules might occur from country to country. So, before importing any commodity, you must check the requirements of the importing country.

(Note: This is just a beginner’s guide for pharmaceutical importers; for deeper knowledge of import and export, you need detailed information on the particular country’s policies.)

Government Registration as Importer

To act as an importer in any country, you need government registration as an importer. The importing country’s foreign trade office is the authority responsible for issuing this registration. For example, for Indian importers and exporters, the IEC (Import Export Code) number is issued by the office of the Director General of Foreign Trade.

Becoming an importer is a one-time process, but you might need to renew the registration per the terms and conditions of the respective country’s foreign trade office. Registration is digitized in many countries and does not take long if you have the proper documents with you.

What are the Special Requirements to Import Pharmaceutical Products?

The process to import pharmaceutical products varies from country to country. However, there are some common requirements that you must fulfill for almost all countries:

  • Certificate from the importing country’s wildlife protection board (required for some specific products)
  • NOC from the Drug Controller to import pharmaceutical products
  • Certificate of Origin (to claim exemptions from import duties and taxes under the various unilateral and bilateral agreements signed among countries)

Procedures to Import Pharmaceutical Products

Before the actual shipment of imports, the importer and supplier must mutually agree on the terms and conditions of the import sale. Quality specifications, pricing, terms of payment, delivery terms, mode of transport, and other terms and conditions need to be agreed upon and recorded in a purchase order when importing pharmaceutical products.

When importing pharmaceutical products, the necessary documentation and customs clearance procedures in the importing country must be completed either by the importer’s customs broker or directly by the importer, as per the respective country’s foreign trade policy. You also need the carrier’s document (Bill of Lading / Airway Bill), packing list, commercial invoice, certificate of origin, and other required documents along with the actual product.

Countries that are international trade partners share quality measures with each other and waive multiple inspections on the same set of products for both export and import. However, most developed countries’ policies require certification by authorized agencies before goods can be imported from the Least Developed Countries (LDCs).

Prior Notice to Importing Country

Certain countries require the filing of prior notice of a pharmaceutical import before the goods arrive at their port of entry.

The post How Can I Import Pharmaceutical Products From India? appeared first on Actiza Pharmaceutical.

Generic VS Brand Name Drugs: Complete Analysis

Generic drugs are one of the few affordable healthcare options left in this growing medical industry. They are copies of brand-name drugs and have exactly the same dosage, intended use, effects, side effects, and strength as the original drug. In other words, generic drugs have the same pharmacological effects as their brand-name counterparts.

Then why are they much cheaper than the brand-name drugs? Do both of them have the same healing ability? Is there a time when one is preferable over the other? There are many such doubts regarding ‘generic vs brand name’, and in this article we intend to answer some of the most frequently asked questions on the topic.

What Is The Difference Between Generic And Brand Name Drugs?

The active ingredients present in generic drugs and their brand-name counterparts are exactly the same. According to the US FDA, to be approved a generic must achieve blood concentrations within 10 percent above or below those achieved by the brand-name drug. In practice, they vary by 3-4 percent in one direction or the other, a difference the human body barely registers.
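As a rough illustration of that comparison (simplified from the FDA’s actual statistical bioequivalence criteria, and using hypothetical numbers):

```python
# Sketch of the tolerance check described above: the generic's blood
# concentration must land within 10% above or below the brand-name
# reference. Simplified; all numbers are hypothetical.

def within_tolerance(generic_conc: float, brand_conc: float,
                     tolerance: float = 0.10) -> bool:
    """True if the generic concentration is within +/-10% of the brand's."""
    return abs(generic_conc - brand_conc) <= tolerance * brand_conc

brand = 100.0  # hypothetical reference concentration (ng/mL)
print(within_tolerance(96.5, brand))   # True  -- ~3.5% below, typical in practice
print(within_tolerance(88.0, brand))   # False -- 12% below, outside the window
```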

However, the difference between generic drugs and brand name drugs arises when it comes to their inactive ingredients. Generics do not need to contain the same inactive ingredients as the brand name product. Inactive ingredients have nothing to do with the therapeutic action of the drug. These can be binding materials, dyes, preservatives, and flavoring agents.

So, if the pill you have been taking looks different from the one you had before, you don’t need to worry! It usually means a different manufacturer has made that pill, using different inactive ingredients, which makes it look different.

The generic name and brand name of the same drug are usually different. For example, a generic drug for diabetes is metformin, while its brand name is Glucophage. A generic drug used for hypertension is metoprolol, while the brand name for the same drug is Lopressor. (Usually, brand names are capitalized while generic names are not.)

Why Are Generic Drugs Much Cheaper Than Their Branded Counterparts?

Generic drugs are cheaper only because the manufacturers have not had the expenses of developing and marketing a new drug. While bringing a drug to market, a company needs to spend huge sums of money on research, development, marketing, and promotion of the drug.

In fact, a study estimates that the cost to develop and win marketing approval for a new drug is $2.6 billion. The patent granted to the company gives it the exclusive right to sell the drug while the patent is in effect.

At the end of an approximately seven-year period of exclusivity, the patent nears expiration, and the FDA allows one specific generic to be the first to market. That generic is granted a six-month period of exclusivity. Once this time period ends, any manufacturer who can prove that their drug can achieve the same drug concentrations in the blood that the brand name does can produce a generic. Manufacturers of generics don’t even need to do studies in people to prove safety. It is assumed that if they can achieve the same blood concentrations, they will achieve the same results.

Thus, generic manufacturers can produce generics without the startup costs for the development of the drug, and they can afford to make and sell it more cheaply. The prices of a generic go further down as multiple companies begin producing and selling the same drug.

Do Generic Drugs Have Lower Quality Than Branded Drugs?

Absolutely not! As mentioned earlier, generics have nearly the same chemical composition as the brand name drugs. They are manufactured in the same high-quality facilities and similar safety measures are followed while manufacturing them as are done during the production of brand name drugs. As a result, you get the same quality as brand names but at much cheaper costs.

Now, you may wonder: if generics are cheaper and have the same quality and effectiveness, why should people buy brand-name drugs at all?

However, you should keep in mind that there is a lot of diversity among people. When blood concentration studies are done, they are done on “average” people. Because the inactive ingredients and manufacturing processes differ, the same results can’t be assured for everyone.

For example, certain people are very sensitive to small changes in blood concentrations. Others may have a shorter colon or disease that makes food pass through their intestines faster or slower, which creates a noticeable difference in the effectiveness of the drug.

Brand names also become important for NTI (narrow therapeutic index) drugs. The blood concentrations you need for these drugs to achieve a therapeutic dose and the concentrations that will cause toxic effects are very close. Small changes in concentrations can lead to ineffective and even toxic responses. For example, medications for seizures, heart arrhythmias, thyroid hormone, warfarin (blood thinner), and lithium are all NTIs.

Thus, using generics in these cases might get tricky, and you need to talk to your physician before switching to a generic to make sure you understand the risks and rewards.

The Bottom Line: Generics earn higher points in the ‘generic drugs vs brand name’ debate, and they prove to be a more sustainable route to affordable healthcare. For certain special cases, however, you should prefer brand names over their generic versions.

Actiza Pharmaceutical is one of the top suppliers of generic drugs around the globe, providing export services to more than 30 countries. If you are interested in our pharmaceutical services, feel free to contact us.

The post Generic VS Brand Name Drugs: Complete Analysis appeared first on Actiza Pharmaceutical.

Advantages of Purchasing from a Pharmaceutical Wholesaler

With modern fast-paced life, the world is currently facing major health issues. Whether chronic illnesses or day-to-day ailments, the occurrence of diseases is growing exponentially. As a result, the demand for medicines is always high in the market.

However, a lot of duplicity is taking place in the pharma industry nowadays. Some retailers are selling fake medicines or hiking prices to increase their revenues.

Hence, as a hospital or an independent pharmacist, it becomes necessary to take your time before purchasing medicines. To grow your pharma enterprise, you need a variety of authentic products at affordable prices. You also need an on-time supply of these products to keep meeting market demands.

This is where the pharmaceutical wholesalers step in. The supply of the bulk of quality products from these wholesalers can largely simplify your business. Here are a few advantages of purchasing medicine from a pharmaceutical wholesaler:

  • Most Affordable Prices

It’s true that healthcare is quite expensive for most pharmaceutical suppliers and consumers. So, if you want to cut costs, you should surely buy drugs from wholesalers. As they sell their goods in bulk, they guarantee economies of scale.

Indian wholesale suppliers in particular set high standards when it comes to affordability. These wholesalers work hand in hand with retailers to cater to global needs. India has the highest number of US FDA-approved pharma companies, which ensures trust. The superior quality of affordable generic medicines in India is unparalleled, and you should definitely take advantage of it to grow your business.

  • Variety & Convenience

Are you someone who has to hop from one store to another to find medicines at wholesale rates? Pharmaceutical wholesalers stock different types of drugs under the same roof. This eases your task and saves a lot of your time. With many types of drugs in their stores, you can be assured of meeting every consumer’s needs.

  • Valuable Industrial Information

The pharmaceutical market is ever-changing; if you are not up to date, you can be left behind. Wholesalers provide valuable information on the major changes that are likely to affect the pharmaceutical industry.

This will enable you to make prior arrangements on your buying plans. For example, if there are chances of price hikes, then you can buy the drugs in advance.

  • Room for Negotiation

Wholesalers usually throw in a discount or two for pharmacists who buy their products regularly. And if you are able to build a good relationship with them, they may also sell medicines and other items at a much lower cost.

This benefits the wholesalers as well, because they retain their regular customers and ensure steady business. It creates a win-win situation for both parties, and you should definitely take advantage of it.

  • Consultancy and advice

Most wholesalers carry huge industry experience. So, if you maintain a good relationship with pharmaceutical wholesalers, they can offer you valuable advice on a range of subjects. For example, they can guide you on managing your stock, your marketing strategies, and your public relations, among other things. You would never get such valuable insights from retail stores.

  • Deliveries

With wholesalers, you don’t need to actually go to their shop to get your products; most of them offer delivery. They might charge some extra fees for that, but it is far cheaper than actually going to their place and buying your products.

This feature is particularly helpful for cross-country trades. In fact, Indian pharmaceutical exporters can offer you some of the best export services, paired with their high-quality pharma products.

How to Choose the Best Pharmaceutical Wholesaler

Now that you know the advantages of pharmaceutical wholesalers, you must also know how to choose the right wholesaler for your business.

If you simply browse the internet, you will come across many pharmaceutical companies who claim to offer the best services. But, are all these claims genuine?

Well, not exactly. If you fail to do your research and choose the wrong company, you may suffer major losses. Here are a few points you should keep in mind while choosing the right pharmaceutical wholesaler:

  • Know the services that they provide

You must know about all the services these companies provide. You should also know the countries they serve and the product categories they offer. For example, Actiza Pharmaceutical serves more than 30 countries, including the Gulf countries, South East Asia, Latin America, and Africa.

  • Check the reliability of the company

Experience and authenticity matter a lot in the pharma industry. Make sure that the company has worked with the right clients and has enough experience before associating with it. Also, try to contact its previous customers (if possible) and get their feedback.

  • Product Quality

Needless to say, the company should be certified by reputed organizations so that you can trust its quality. Actiza Pharma is certified by WHO, WHO-GMP, US FDA and ISO 9001.

  • Delivery

Make sure that the pharmaceutical wholesaler delivers your orders on time. Defaults have no place in the pharma industry.

  • Contacts and Connections

An ideal wholesaler should have connections with multiple pharmaceutical manufacturers. This enables the company to meet all of its clients’ demands. It is even better if the company has its own manufacturing unit; for example, Actiza Pharmaceutical has a world-class manufacturing facility located in Gujarat, India.

If you’re interested in working with Actiza Pharmaceutical, feel free to Contact us. Our representatives are standing by to answer your inquiries, so don’t hesitate to reach out to us as soon as possible.

The post Advantages of Purchasing from a Pharmaceutical Wholesaler appeared first on Actiza Pharmaceutical.

An Introduction to Actiza Pharmaceutical Pvt. Ltd.

At Actiza Pharmaceutical Pvt. Ltd., we offer a varied range of services covering almost every sphere of the pharmaceutical domain, including pharmaceutical export, contract manufacturing, regulatory support, manufacturing services, and packaging. When it comes to drug manufacturing, we consistently outperform our counterparts in the market by using duly certified products and ingredients in preparing capsules, tablets, liquids, creams, ointments, and other drugs, all aimed at the betterment of human health.

The post An Introduction to Actiza Pharmaceutical Pvt. Ltd. appeared first on Actiza Pharmaceutical.

Actiza Pharmaceutical Pvt Ltd. has a strong presence in injectable manufacturing

Actiza Pharmaceutical Pvt Ltd. is a rapidly growing Indian pharmaceutical company with a strong presence in injectable manufacturing. Established in 2011, Actiza Pharmaceutical has emerged as one of the largest focused small-volume parenteral manufacturers in India and is engaged in contract manufacturing for all major Indian pharmaceutical companies.

The post Actiza Pharmaceutical Pvt Ltd. has a strong presence in injectable manufacturing appeared first on Actiza Pharmaceutical.