Randomized Trial of Ivabradine in Patients With Hyperadrenergic [POTS], 2021, Taub et al

@EndME can you define blinding as you understand it?
It's the definition we all have in our heads (edit: I just saw that your definition was something along the lines of "effective blinding is when people only guess their allocation arm correctly as often as one would expect by chance"; that is definitely not the definition I had in mind, and I don't think such a definition could ever make sense). The point that seems to be neglected is that any meaningful characterisation of blinding carries information not about whether participants ever figure out their allocation, but about why and when they do. So the procedure of blinding can be successful even when informational blinding breaks due to efficacy effects. This is pretty much the textbook definition as well, at least as far as my biased eye and Google tell me.

You want to avoid bias. Discovering your allocation because the treatment works is not bias, even if it can result in bias.
 
What this discussion seems to illustrate to me, however, is that short trials in POTS populations, which inevitably aren't well characterised, with fast-acting drugs often cannot tell you whether your drug works, even if your trial is a perfectly triple-blinded, placebo-controlled trial.

Since no objective measurements that characterise the illness (heart rate or similar) currently exist, you have to include objective measurements of disability, and those are probably often hard to determine accurately in short trials.

I'm unsure about dose-response studies. I guess it depends on the drug and what sort of effects you can expect from the drug and its different dosages.

I think what one actually looks for in such trials is consistency. Are the reported effects happening at the same time, what is reproducible, etc.? I do actually think that the cross-over design chosen here is very useful for exactly those reasons, because at least you gain one extra level of control, but I guess multiple cross-overs might be even more useful when you expect the drug to be fast-acting.

The alternative would be to use active placebos (say, things reducing tachycardia) that are known to be ineffective. But for POTS things might become a bit tricky if your main subjective complaints in the population are somehow related to tachycardia; at that point you're probably going to have to characterise the population better.

Edit: I guess I just learned the same thing from the discussion that @Hutan had already written. I should probably be less blind!
 
So the procedure of blinding can be successful even when informational blinding breaks due to efficacy effects. This is pretty much the textbook definition as well, at least as far as my biased eye and Google tell me.

I think Utsikt is right here. The procedure of blinding is not the same as blinding being achieved. Blinding looks to have been broken. Textbook definitions are often simplistic in this sort of context.
Subjective outcomes become problematic as soon as blinding is broken by something other than that outcome itself. If heart rate was monitored and showed less tachycardia then the study provides sound evidence of effect. If the subjects report less tachycardia it is reasonable to assume that correlates well enough with actual heart rate to be valid. But all the other outcome measures are subject to bias because for them blinding has been broken by something else.
 
I think Utsikt is right here. The procedure of blinding is not the same as blinding being achieved. Blinding looks to have been broken. Textbook definitions are often simplistic in this sort of context.
Subjective outcomes become problematic as soon as blinding is broken by something other than that outcome itself. If heart rate was monitored and showed less tachycardia then the study provides sound evidence of effect. If the subjects report less tachycardia it is reasonable to assume that correlates well enough with actual heart rate to be valid. But all the other outcome measures are subject to bias because for them blinding has been broken by something else.
I'm a bit lost as to what is being argued here. You seem to be concluding that blinding for the subjective outcome measures is broken on the basis of the tachycardia observations. That very much fits the definition of blinding I proposed, and why it's reasonable to think that the trial doesn't provide meaningful evidence of the treatment being of use for POTS. Blinding here appears to have been broken by things other than the subjective outcome measures themselves.

If I understand @Utsikt correctly, they argued that a trial is never blinded if people can guess that they received the treatment more often than one would expect by chance, even if they only make that guess after having observed all possible effects. According to that definition, your rituximab trial would not have been a blinded study, because of course people observed effects and on that basis thought it was more likely that they received the drug.
 
This is the example I had in mind: a placebo-controlled study is conducted in a population suffering from low levels of an antibody. The placebo perfectly matches the treatment in terms of side effects, and treatment groups are randomly assigned without anybody knowing the labels. The treatment restores the antibody to normal levels and starts working after 30 days, but participants don't know this. At day 29 both groups report a 50% chance of being in either group. At day 31 the placebo group reports no changes in symptoms and no change in guessing the allocation arm. At day 31 the treatment group reports drastic improvements in symptoms, and these are correlated with a marker of the antibody going up, without them having access to it. 80% of the treatment group now says they believe they received the drug. Roughly speaking, I think both @Utsikt and I would say that this looks like a successful trial, because objective and subjective measures line up. But I would also conclude this was a blinded trial, whilst @Utsikt argues it wasn't (because things aren't split 50%-50%). Which is it?

(If I have misinterpreted what you said @Utsikt, please excuse that; this hypothetical example is merely meant to illustrate the problem.)
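For what it's worth, the guess rates in the example can be checked directly against chance. A minimal sketch, assuming (purely for illustration, since the example doesn't specify) 20 participants in the treatment arm, of whom 80% (16) guess "drug":

```python
from math import comb

def binom_p_at_least(k: int, n: int, p: float = 0.5) -> float:
    """Probability of k or more correct guesses out of n under pure chance."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 20          # hypothetical arm size, not from the example
correct = 16    # 80% of the treatment arm guessing "drug"
p_value = binom_p_at_least(correct, n)
print(f"P(>= {correct}/{n} correct by chance) = {p_value:.4f}")  # about 0.006
```

So by day 31 a purely chance-based definition would call the blinding "broken" even though the procedure was flawless, which is exactly the tension the example is meant to surface.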
 
If I understand @Utsikt correctly, they argued that a trial is never blinded if people can guess that they received the treatment more often than one would expect by chance, even if they only make that guess after having observed all possible effects. According to that definition, your rituximab trial would not have been a blinded study, because of course people observed effects and on that basis thought it was more likely that they received the drug.
Exactly!

And that has to be accounted for when interpreting the results.

Sometimes it doesn’t matter if the patients know which group they are in, because the relevant outcomes can’t reasonably be affected by them knowing.

Other times, the outcomes will probably be affected by the patients knowing. Like most subjective outcomes, and some objective ones depending on the context.

Everything else equal, a larger degree of blinding will always be better, but depending on the context, some degree of broken blinding might still be sufficient for the purposes of the trial.
 
According to that definition, your rituximab trial would not have been a blinded study, because of course people observed effects and on that basis thought it was more likely that they received the drug.

We may be arguing at cross purposes, but I would say that the rituximab study had a blinded design, and that blinding would effectively have been broken for those who achieved major improvement. I think the problem is a bit like the concept of a controlled trial. A trial can have a control but not be controlled in a meaningful sense. The semantic complexities are not the same, of course, but they may be of the same order. I think we have to work on the basis that these terms often aren't as unambiguous as we expect them to be.

For this particular trial I think the nuances are particularly problematic. But we may have come to the same conclusion.
 
Roughly speaking, I think both @Utsikt and I would say that this looks like a successful trial, because objective and subjective measures line up. But I would also conclude this was a blinded trial, whilst @Utsikt argues it wasn't (because things aren't split 50%-50%). Which is it?
The trial was blinded in the sense that blinding measures were implemented.

But the participants in the active group were only successfully blinded until the effect of the drug kicked in on day 30.

So does that matter for how reliable the results are? In your example, I’d say it doesn’t because the objective markers line up and the participants had no way of knowing when the effect would occur so they couldn’t possibly have manifested themselves to feel better at the exact right moment. So we can reasonably conclude that the drug caused the observed change.

All blinding eventually breaks (we do that on purpose at the end) - that doesn’t make the trial unblinded in itself. I’d only call it unblinded (or not sufficiently blinded) if the blinding didn’t achieve its purpose of eliminating that source of bias.
 
Yes, we are talking about the difference between a trial design being well blinded, and the effectiveness of a treatment resulting in a loss of blinding that potentially biases assessment of outcomes.

An example of the bias arising from people correctly guessing that they are in an arm with an effective treatment is when the treatment stops further damage but does not at least immediately fix damage caused in the past. And yet people, in the euphoria of feeling that one aspect of their illness is better, may report global improvement, including to symptoms that will not improve or will only improve much more slowly.


It seems likely that ivabradine reduces tachycardia. Beyond that we don't know whether any subjective reports of other aspects of QOL are attributable to that reduction in tachycardia (which is quite plausible) or to unblinded role-playing on the part of patients who know they are supposed to be appreciative.
From the data here, I'd say we only have evidence that ivabradine reduces heart rate (both resting heart rate and standing heart rate). There is no strong evidence that it corrected an abnormal increase in heart rate upon standing. That's mainly because there isn't much evidence of the participants having an abnormal increase in heart rate on standing, at baseline or in either treatment arm.

I think it's a bit questionable whether the average participant in the ivabradine arm, with an increase in heart rate of 13 bpm, would be better able to correctly guess that things had improved than the average participant in the placebo arm with an increase of 17 bpm. That's why I'm not sure the question of blinding being broken matters all that much to this study.

The differences in heart rate are definitely significant, impressive even. Whether they represent a meaningful benefit is another question, although from experience that reduction alone is a clear benefit in itself. There is definitely an objective improvement, and the drops in heart rate are the kind that are noticeable when you are used to higher, unstable heart rates.
In this case, because the heart rate data suggests that most of the participants didn't have an orthostatic tachycardia problem, I don't think there is evidence that ivabradine is useful. I think the good p value reported there is comparing the baseline 21 bpm change with the ivabradine 13 bpm. The difference is much less impressive and possibly not significant if the outcomes in the two arms are compared (13 versus 17 for the placebo). Importantly both mean differences are normal responses to standing.
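To make the between-arm versus within-arm distinction concrete: the 13 and 17 bpm means are the ones quoted above, but the standard deviation and arm size below are invented purely for illustration. A rough Welch-style sketch using a normal approximation to the t-test:

```python
from math import sqrt
from statistics import NormalDist

# Means from the discussion; SDs and n are hypothetical illustrations.
m_iva, m_pla = 13.0, 17.0   # mean HR increase on standing (bpm)
sd_iva = sd_pla = 10.0      # assumed spread, not reported here
n = 22                      # assumed per-arm size

se = sqrt(sd_iva**2 / n + sd_pla**2 / n)
t = (m_iva - m_pla) / se
p = 2 * NormalDist().cdf(-abs(t))   # normal approximation to the t-test
print(f"t ~= {t:.2f}, two-sided p ~= {p:.2f}")  # not significant under these assumptions
```

Under these made-up assumptions, a 4 bpm between-arm difference does not approach significance, whereas the 21 to 13 bpm within-arm change would, which is consistent with the suspicion that the good p value reflects the baseline comparison.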

Rvallee, you say from experience that the sort of reduction seen is beneficial. But I don't think the mean increases in heart rate with 10 minutes of standing shown here would be causing symptoms - they are pretty normal.

A better analysis or at least an important analysis would have been to categorise the heart rate response to standing as within normal bounds (e.g. <30 bpm) or outside it, and report the percentage of people with an abnormal response before and after treatment.
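That responder-style analysis is easy to describe concretely. With made-up per-participant orthostatic increases (the real individual data aren't reported here), it would look something like:

```python
# Hypothetical per-participant HR increases on standing (bpm); NOT trial data.
before = [35, 42, 28, 31, 50, 38, 25, 44]
after  = [18, 30, 22, 33, 29, 20, 19, 27]

THRESHOLD = 30  # conventional adult POTS cut-off for orthostatic HR increase

def pct_abnormal(increases, threshold=THRESHOLD):
    """Percentage of participants with an increase at or above the threshold."""
    return 100 * sum(x >= threshold for x in increases) / len(increases)

print(f"abnormal before: {pct_abnormal(before):.0f}%")  # 75%
print(f"abnormal after:  {pct_abnormal(after):.0f}%")   # 25%
```

Reporting the proportion crossing the threshold before and after treatment, rather than mean changes alone, would directly answer whether participants' orthostatic tachycardia was actually normalised.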

It is possible that the data collected doesn't illustrate the participants' everyday problems with orthostatic tachycardia well. Perhaps they normally have problems in the afternoons and the assessment was done in the morning. Perhaps the stress and excitement of participating in the trial normalised their heart rate response. Against that though is the lack of an improvement in general health in the ivabradine arm over that achieved in the placebo arm.
 
The trial was blinded in the sense that blinding measures were implemented.

But the participants in the active group were only successfully blinded until the effect of the drug kicked in on day 30.

So does that matter for how reliable the results are? In your example, I’d say it doesn’t because the objective markers line up and the participants had no way of knowing when the effect would occur so they couldn’t possibly have manifested themselves to feel better at the exact right moment. So we can reasonably conclude that the drug caused the observed change.

All blinding eventually breaks (we do that on purpose at the end) - that doesn’t make the trial unblinded in itself. I’d only call it unblinded (or not sufficiently blinded) if the blinding didn’t achieve its purpose of eliminating that source of bias.
Yes, it seems we're only discussing semantics, once in the context of blinded trials and once in the context of when blinding becomes compromised, and I can see there being a need for differentiation and nuance there. I think we're in agreement on the "actual stuff".
 
In this case, because the heart rate data suggests that most of the participants didn't have an orthostatic tachycardia problem, I don't think there is evidence that ivabradine is useful. I think the good p value reported there is comparing the baseline 21 bpm change with the ivabradine 13 bpm. The difference is much less impressive and possibly not significant if the outcomes in the two arms are compared (13 versus 17 for the placebo). Importantly both mean differences are normal responses to standing.
Comparing Table 1 and Table 2 suggests to me that the reported HR values in Table 2 are not the ones taken at the first visit but rather the ones taken before ivabradine was given, which for the placebo-first group happened after the first round of placebo (if we assume Table 1 also refers to the OVM measurements). This suggests that there were sizeable mean drops in upright HR in the placebo-first group (it looks like they must have gone from 101 bpm to 89 bpm). I think this additionally calls into question whether people could tell things apart that easily based on HR, if one just looks at this data.

However, a rather different problem is that there was a titration based on supine HR results. It doesn't seem like the placebo had a large impact on supine heart rate, but with the investigators forcing heart rate reductions, and this seemingly working only in the ivabradine group, I would think this would destroy the blinding along the lines of "the principal investigator can easily tell placebo from treatment by seeing whether an increased titration caused the necessary reduction when lying down"?
 
I have been on Ivabradine since 2014. It doesn't affect most of my symptoms but it does lower my heart rate. My EP Cardiologist was clear to explain that the purpose was to slow down my heart so it would fill up better - thus helping the health of my heart. He understood I have low blood volume (seen in many of us with ME) so my heart was responding exactly as it should in response to not having enough blood. He was also clear this wasn't going to improve my other symptoms. I agree that Mestinon may be more effective at helping muscle function. I think it makes sense that slowing down my heart rate is good for my overall heart health. It can't be good to have it beat like a hummingbird every time I stand up. I would like to see a study of using Mestinon with Ivabradine.

FYI - Ivabradine is now in generic form so those of us in the US are more likely to be able to afford to try it. Up until now it has been very expensive!
 
I have been on Ivabradine since 2014. It doesn't affect most of my symptoms but it does lower my heart rate. My EP Cardiologist was clear to explain that the purpose was to slow down my heart so it would fill up better - thus helping the health of my heart. He understood I have low blood volume (seen in many of us with ME) so my heart was responding exactly as it should in response to not having enough blood.
How do you measure low blood volume? I thought this was a proposed reason for POT/POTS, but unverifiable. Correct me if I’m wrong!?
 
Rvallee, you say from experience that the sort of reduction seen is beneficial. But I don't think the mean increases in heart rate with 10 minutes of standing shown here would be causing symptoms - they are pretty normal.
I only meant resting and average heart rate. I have never had any test like a TTT, NASA lean or otherwise, but I can definitely tell the difference when my resting and average heart rate come back down to more normal values, and it's about in the ranges seen in this study. Of course this is definitely lacking in data; 24/7 monitoring has been available for a long time, and although average and resting rates are useful, they're only a few data points out of many.

I just plain don't understand why so many MDs seem to insist that with tachycardia like this, if you didn't monitor it you wouldn't even know it. This is just plain wrong; there are all sorts of physiological consequences. I can't believe this is what they actually think; perhaps they just say it because they see the problem as psychosomatic and want to be "reassuring", or at least this is what they convince themselves of.
 
In this case, because the heart rate data suggests that most of the participants didn't have an orthostatic tachycardia problem, I don't think there is evidence that ivabradine is useful. I think the good p value reported there is comparing the baseline 21 bpm change with the ivabradine 13 bpm. The difference is much less impressive and possibly not significant if the outcomes in the two arms are compared (13 versus 17 for the placebo). Importantly both mean differences are normal responses to standing.

It seems like all the patients did a tilt-table test as part of the screening for the study. Only people who met the 30 bpm threshold on that tilt table were included; 3 did not meet the threshold on that test and were excluded. Given that the QoL measures were so low, I think it is fair to assume these were people who met the criteria of POTS as a syndrome. To me, and I assume to the average person diagnosing POTS, this means more than having a few palpitations and general OI issues, but I can't find what their specific symptoms were in this study.

In terms of the tests not reaching the 30 bpm threshold, the protocol for those measurements was as follows:

"To conduct OVM, patients lay supine for at least 3 min to establish a baseline. At the end of 3 min, heart rate and BP were measured. Then, patients stood for at least 3 min; at the end of 3 min, their standing heart rate and BP were measured."

The fact that they only tested a 3 min standing test is probably why the baseline average was lower than 30. Clearly a 3 min standing test is a lot easier and less painful than a full TTT. I would have liked to have seen a full TTT, but since we know these people do have POTS, the important measure is the difference in HR between the ivabradine and placebo arms, not the absolute numbers.


On the issue of blinding, the relevant outcomes are always going to be at risk from broken blinding, because what matters are improvements in QoL. It makes no difference to measure some "objective variable" if that variable is not somehow correlated with someone's QoL. It is all about how reasonable we believe it is to conclude that the changes in QoL are caused by the drug and not by people thinking they might have got the drug. In this case I think the spread of improvements in QoL (more improvement in physical functioning than anything else, and little improvement in emotional well-being) could be suggestive of the drug and the decrease in standing HR having some real impact, rather than of people knowing they got the drug and feeling better overall.
 
Besides the small sample size and lack of tilt testing for the HR data, I actually think this study was relatively well done (at least for a POTS or ME/CFS study). It is the fact that the improvements in QoL aren't that impressive that is the biggest issue for me. But I still think this helps to justify why a POTS label is not a useless category, both in the ability to test and to group people with a certain type of OI.
 
But I still think this helps to justify why a POTS label is not a useless category, both in the ability to test and to group people with a certain type of OI.
Most of them did not have POTS according to the criteria.
 
Ivabradine measurably affected heart rate when lying down in this trial and placebo did not. Heart rate when lying down was measured multiple times as part of the trial, and people had their dosage chosen or changed according to those results.

I find the section on dosage and titration slightly opaque. They seem to mention only the changes in dosage in the ivabradine group, but surely dosages must also have been changed, possibly without any effect on supine heart rate, in the placebo group, otherwise there wouldn't have been blinding to begin with. So people in the placebo group often had their dosages upped without effect because they were above the 70 bpm threshold, and so often ended the placebo round on the maximal dosage. Then at crossover, when they got the real drug, it looks like the dosages were reset, so they started on a smaller dosage. So: a lower dose with more effect, and the results were measured and looked at. Doesn't all of this affect blinding quite heavily?
 