Circulation: Arrhythmia and Electrophysiology On the Beat March 2018
Paul Wang: Welcome to the monthly podcast “On The Beat”, for Circulation: Arrhythmia and Electrophysiology. I am Dr. Paul Wang, Editor-in-Chief, with some of the key highlights from this month's issue. We'll also hear from Dr. Suraj Kapa, reporting on new research from the latest journal articles in the field.
In our first article, Adetola Ladejobi and associates studied 1,433 patients, between 2000 and 2012, who were discharged alive after sudden cardiac arrest. A reversible and correctable cause was identified in 792 patients, or 55%. A reversible cause for sudden cardiac arrest was defined as significant electrolyte or metabolic abnormality, evidence of acute myocardial infarction or ischemia, recent initiation of antiarrhythmic drug, or illicit drug use, or other reversible circumstances.
Of the 792 sudden cardiac arrest survivors due to a reversible or correctable cause, 207, or 26%, received an ICD after their index sudden cardiac arrest. During a mean follow-up of 3.8 years, 319 patients, or 40%, died. ICD implantation was associated with significantly lower all-cause mortality, p < 0.001, even after correcting for unbalanced baseline characteristics.
In subgroup analyses, only patients whose sudden cardiac arrest was not associated with myocardial infarction derived benefit from the ICD, p < 0.001.
The authors concluded that in survivors of sudden cardiac arrest due to a reversible and correctable cause, ICD therapy is associated with lower all-cause mortality, except when the sudden cardiac arrest was due to myocardial infarction.
Further prospective multi-center randomized controlled trials will be needed to confirm this observation.
In our next study, Carlo Pappone and associates, studied 81 patients with persistent atrial fibrillation, randomized to undergo high density electrophysiological mapping, to identify repetitive regular activities, before modified circumferential pulmonary vein ablation, or modified circumferential pulmonary vein ablation alone. The primary endpoint was freedom from arrhythmia recurrence at one year.
In the 81 patients with persistent atrial fibrillation, there were 479 regions exhibiting repetitive regular activities in these patients, or 5.9 repetitive regular activities per patient. There were 232 regions in the mapping group, which consisted of 41 patients, and 247 regions in the control group, consisting of 40 patients. Overall, 39% of the repetitive regular activities were identified within pulmonary veins, whereas 61% were identified in non-pulmonary vein regions.
Mapping-guided ablation resulted in higher arrhythmia termination rate, as compared to conventional strategy, 61% vs. 30%, p < 0.007. Total RF duration, mapping, and fluoroscopy times were not significantly different between the groups. No major procedure related adverse events occurred.
After one year, 73% of the mapping group of patients were free of recurrences, compared to 50% of the control group, p = 0.03.
The authors concluded that targeted ablation of regions showing repetitive regular activities provided adjunctive benefit in terms of arrhythmia freedom at one year in treatment of patients with persistent atrial fibrillation. These findings should be confirmed by additional larger randomized multi-centered studies.
In the next article, Maciej Kubala and associates examined repolarization abnormalities in 40 patients with arrhythmogenic right ventricular cardiomyopathy, comparing the extent and location of abnormal T-waves of one millimeter or greater in depth, and of downsloping elevated ST segments in two or more adjacent leads, to the area and location of endocardial bipolar and unipolar, and epicardial bipolar, voltage abnormalities. They found an abnormal unipolar right ventricular endocardial area of 33.4%, present in eight patients without negative T-waves. Patients with negative T-waves extending beyond V3, seen in 20 patients, had larger low bipolar and unipolar endocardial areas, and larger epicardial low bipolar areas, compared to those with negative T-waves limited to leads V1 to V3.
ECG localization of negative T-waves regionalized to the location of substrate. Patients with a downsloping elevated ST segment, all localized to leads V1 and V2, had more unipolar endocardial abnormalities involving the outflow tract and mid-right ventricle, compared to patients without a downsloping elevated ST segment.
The authors concluded that in arrhythmogenic right ventricular cardiomyopathy, areas of abnormal electrograms were proportional to the extent of T-wave inversion on the 12-lead electrocardiogram. Marked voltage abnormalities can exist without repolarization changes. A downsloping elevated ST segment pattern in V1 and V2 occurs with more unipolar endocardial voltage abnormalities, consistent with more advanced transmural disease.
In the next manuscript, Teresa Oloriz and associates examined the timing and value of programmed stimulation after catheter ablation for ventricular tachycardia. They performed 218 programmed ventricular stimulations six days after ablation in 210 consecutive patients, 48% with ischemic cardiomyopathy, with a median left ventricular ejection fraction of 37%. After ablation, ICDs were programmed according to NIPS results: Class A, noninducible; Class B, nondocumented inducible VT; and Class C, documented inducible VT. Concordance between the programmed ventricular stimulation at the end of the procedure and at six days was 67%. The positive and negative predictive values were higher for the programmed ventricular stimulation at day six. Ischemic patients and those with preserved ejection fraction showed the highest negative predictive value.
Among patients noninducible at the end of the procedure but inducible at day six, 59 patients had VT recurrence at one-year follow-up. Recurrences were 9% when both studies were noninducible. There were no inappropriate shocks; the incidence of syncope was 3%, none harmful. The rate of appropriate shocks per patient per month, according to NIPS, was significantly reduced comparing the month before and after the ablation.
The authors concluded that programmed ventricular stimulation at day six predicts VT recurrence.
In the next study, Tor Biering-Sørensen and associates examined whether ECG global electrical heterogeneity, GEH, and its longitudinal changes are associated with cardiac structure and function in the Atherosclerosis Risk in Communities study, ARIC, consisting of 5,114 patients, 58% of whom were female and 22% African American. They used resting 12-lead ECGs, and echocardiographic assessments of left ventricular ejection fraction, global strain, left ventricular mass index, end diastolic volume index, and end systolic volume index at visit five.
Longitudinal analysis included ARIC participants with measured GEH at visits one to four. GEH was quantified by spatial ventricular gradient, the QRST angle, and the sum of the absolute QRST integral. Cross sectional and longitudinal regressions were adjusted for manifest subclinical cardiovascular disease.
Having four abnormal GEH parameters was associated with a 6.4% left ventricular ejection fraction decline, a 24.2 gram/meter squared increase in left ventricular mass index, a 10.3 milliliter/meter squared increase in left ventricular end diastolic volume index, and a 7.8 milliliter/meter squared increase in left ventricular end systolic volume index. Altogether, clinical and ECG parameters accounted for approximately one third of the left ventricular volume variability and 20% of the systolic function variability.
The associations were significantly stronger in patients with subclinical cardiovascular disease. The QRST integral increased by 20 millivolt-milliseconds over each three-year period in participants who demonstrated left ventricular dilatation at visit five. Sudden cardiac death victims demonstrated rapid GEH worsening, while those with left ventricular dysfunction demonstrated slow GEH worsening.
The authors concluded that GEH is a marker of subclinical abnormalities in cardiac structure and function.
In the next manuscript, Takumi Yamada and associates studied 19 patients with idiopathic ventricular arrhythmias originating in the parietal band in 14 patients and in the septal band in 5 patients, among 294 consecutive patients with right ventricular arrhythmia origins. Parietal band ventricular arrhythmias exhibited a left bundle branch block pattern with a left inferior axis in 12 patients and a superior axis in 2 patients, while septal band ventricular arrhythmias exhibited a left or right inferior axis pattern in four and one patients, respectively.
In lead I, all parietal band ventricular arrhythmias exhibited R-waves, while septal band ventricular arrhythmias often exhibited S-waves. A QS pattern in lead aVR, in the presence of a notch in the mid QRS, was common in all infundibular muscle ventricular arrhythmias. During infundibular muscle ventricular arrhythmias, a far-field ventricular electrogram with early activation was always recorded in the His bundle region, regardless of the location of the ventricular arrhythmia origin. With a mean of 9.2 radiofrequency applications and a duration of 972 seconds, catheter ablation was successful in 15 of the 19 patients. Ventricular arrhythmias recurred in four patients during a follow-up period of 43 months.
In the next paper, Uma Mahesh Avula and associates examined the mechanisms underlying spontaneous atrial fibrillation in an ovine model of left atrial myocardial infarction. The left atrial myocardial infarction was created by ligating the atrial branch of the left anterior descending artery. ECG loop recorders were implanted to monitor atrial fibrillation episodes.
In seven sheep, dantrolene, a ryanodine receptor blocker, was administered in vivo during the observation period. The left atrial myocardial infarction animals experienced numerous episodes of atrial fibrillation during the eight-day monitoring period that were suppressed by dantrolene. Optical mapping showed spontaneous focal discharges originating at the ischemic/normal-zone border. These spontaneous focal discharges were calcium driven, rate dependent, and enhanced by isoproterenol, but suppressed by dantrolene.
In addition, these spontaneous focal discharges initiated atrial fibrillation-maintaining reentrant rotors anchored by marked conduction delays at the ischemic/normal-zone border. Nitric oxide synthase one protein expression decreased in ischemic zone myocytes, while NADPH oxidase and xanthine oxidase enzyme activities and reactive oxygen species increased. Calmodulin aberrantly increased ryanodine binding to cardiac ryanodine receptors in the ischemic zone. Dantrolene restored the physiological binding of calmodulin to the cardiac ryanodine receptors.
The authors concluded that atrial ischemia causes spontaneous atrial fibrillation episodes in sheep, caused by spontaneous focal discharges that initiate re-entry. Nitroso-redox imbalance in the ischemic zone is associated with intense reactive oxygen species production and altered ryanodine receptor responses to calmodulin. Dantrolene administration normalized the calmodulin response and prevented left atrial myocardial infarction-related spontaneous focal discharges and atrial fibrillation initiation.
In the next study, Wouter van Everdingen and associates examined the use of QLV for achieving optimal acute hemodynamic response to CRT with a quadripolar left ventricular lead. 48 heart failure patients with left bundle branch block were studied, with a mean ejection fraction of 28% and a mean QRS duration of 176 milliseconds. Immediately after CRT implantation, invasive left ventricular pressure-volume loops were recorded during biventricular pacing with each separate electrode at four atrioventricular delays.
Acute CRT response, measured as a change in stroke work compared to intrinsic conduction, was related to the intrinsic interval between the Q on the electrocardiogram and the left ventricular sensed activation, that is, the QLV, normalized for the QRS duration, resulting in QLV over QRS duration, and to the electrode position.
QLV over QRS duration was 84% and variation between the four electrodes was 9%. The change in stroke work was 89% and varied by 39% between the electrodes. In univariate analysis, an anterolateral or lateral electrode position and a high QLV to QRS duration ratio had a significant association with a large change in stroke work, all p less than 0.01.
In a combined model, only QLV over QRS duration remained significantly associated with a change in stroke work, p less than 0.05. However, a direct relationship between QLV over QRS duration and stroke work was only seen in 24 patients, while 24 other patients had an inverse relation.
The authors concluded that a large variation in acute hemodynamic response indicates that the choice of stimulated electrode on the quadripolar electrode is important. Although QLV to QRS duration ratio was associated with acute hemodynamic response at a group level, it cannot be used to select the optimal electrode in the individual patient.
In the next study, Antonio Pani and associates conducted a multi-centered prospective study evaluating the determinants of zero-fluoroscopy ablation of supraventricular arrhythmias. They studied 430 patients with an indication for EP study and/or ablation of SVT. A procedure was defined as zero-fluoroscopy when no fluoroscopy was used. The total fluoroscopy time was inversely related to the number of procedures previously performed by each operator since the study start. 289 procedures, or 67%, were zero-fluoroscopy. Multivariable analysis identified as predictors of zero fluoroscopy: performance after each operator's 30th procedure, as compared to procedures up to the ninth; the type of arrhythmia, with AVNRT having the highest probability of zero fluoroscopy; the operator; and the patient's age. Among operators, achievement of zero fluoroscopy varied from 0% to 100%, with 8 operators, or 23%, achieving zero fluoroscopy in 75% of their procedures. The probability of zero fluoroscopy increased by 2.8% for each one-year decrease in the patient's age. Acute procedural success was obtained in all cases.
The authors concluded that the use of 3D mapping completely avoided the use of fluoroscopy in most cases, with very low fluoro time in the remaining, and high safety and effectiveness profiles.
In the next paper, Demosthenes Katritsis and associates examined the role of slow pathway ablation from the left septum as an alternative to right-sided ablation. Retrospectively, 1,342 patients undergoing right septal slow pathway ablation for AV nodal reentry were studied. Of these, 15 patients, 11 with typical and 4 with atypical AVNRT, had a left septal approach following unsuccessful right-sided ablation, that is, the right-left group. In addition, 11 patients were subjected prospectively to a left septal-only approach for slow pathway ablation, without previous right septal ablation, that is, the left group. Fluoroscopy times in the right-left and left groups were 30.5 minutes and 20 minutes, respectively, p = 0.6. Radiofrequency current delivery times were comparable, 11.3 minutes and 10.0 minutes, respectively.
There were no additional ablation lesions at other anatomical sites in either group, and no cases of AV block were encountered. Recurrence rates of arrhythmia in the right-left and left groups were 6.7% and 0%, respectively, in the three months following ablation.
The authors concluded that the left septal anatomical ablation of the left inferior nodal extension is an alternative to ablation of both typical and atypical AV nodal reentry when ablation at the right posterior septum is ineffective.
In our next study, Mark Belkin and associates reviewed prior reports of new-onset device-detected atrial tachyarrhythmias. Despite the clear association between atrial fibrillation and the risk of thromboembolism, the clinical significance of new-onset device-detected atrial tachyarrhythmias for thromboembolism remains disputed.
The authors aimed to determine the risk of thromboembolic events in these patients. They used the Ovid MEDLINE, Cochrane, and SCOPUS databases to identify 4,893 reports of randomized controlled trials and prospective or retrospective studies of pacemaker and defibrillator patients reporting the incidence of device-detected atrial tachyarrhythmias.
The authors examined 28 studies, following a total of 24,984 patients. Patients had an average age of 69.9 years, and the mean study duration was 21.8 months. New-onset device-detected atrial tachyarrhythmias were observed in 23% of patients. Among nine studies, consisting of 8,181 patients, reporting thromboembolism, the absolute incidence was 2.1%. Thromboembolic events were significantly greater among patients with new-onset device-detected atrial tachyarrhythmias, with a relative risk of 2.88, and a risk ratio of 1.77 compared to those who had less than one minute of tachyarrhythmias.
The authors concluded that new-onset device-detected atrial tachyarrhythmias is common, affecting close to one quarter of all patients with implanted pacemakers and defibrillators.
In our last paper, Sanghamitra Mohanty and associates performed a meta-analysis systematically evaluating the outcome of pulmonary vein isolation with and without FIRM ablation in patients with atrial fibrillation. For pulmonary vein isolation alone, only randomized trials conducted in the last three years reporting single-procedure success rates, off antiarrhythmic drugs at 12 months or greater follow-up, were included. In the PVI plus FIRM group, all published studies reporting a single-procedure off-antiarrhythmic-drug success rate with at least one year of follow-up were identified.
Meta-analytic estimates were derived using the DerSimonian and Laird random-effects model, and pooled estimates of success rates were obtained. Statistical heterogeneity was assessed using the Cochran Q test and I-squared. Study quality was assessed with the Newcastle-Ottawa Scale.
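For listeners less familiar with this methodology, the DerSimonian-Laird random-effects pooling described above can be sketched in a few lines. This is a generic illustration of the method, not the authors' analysis code, and the input values below are made up for demonstration; in practice, single-arm success proportions would typically be logit-transformed before pooling.

```python
def dersimonian_laird(effects, variances):
    """Pool per-study effect sizes under the DerSimonian-Laird random-effects model.

    effects: list of per-study effect estimates
    variances: list of per-study within-study variances
    Returns (pooled_effect, tau_squared, Q, I_squared).
    """
    # Fixed-effect (inverse-variance) weights and pooled estimate
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q statistic for heterogeneity
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    # Method-of-moments between-study variance (tau^2), floored at zero
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # I^2: share of total variability attributable to heterogeneity
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0
    # Random-effects weights add tau^2 to each study's variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    return pooled, tau2, q, i2
```

With homogeneous studies, tau-squared collapses to zero and the result matches the fixed-effect pooled estimate; as heterogeneity grows, the weights even out and I-squared approaches one.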
15 trials were included: 10 with PVI plus FIRM, with 511 patients, all of nonrandomized prospective design, and 5 pulmonary vein isolation-only trials, consisting of 295 patients, all randomized.
All patients in the pulmonary vein isolation-only trials had nonparoxysmal atrial fibrillation, except for one study, and no prior ablations. About 24% of the PVI plus FIRM patients had paroxysmal atrial fibrillation.
After 15.9 months of follow-up, the off-antiarrhythmic-drug pooled success rate was 50% with FIRM plus PVI, compared to 58% with PVI alone. The difference in effect size between the groups was not statistically significant. No significant heterogeneity was observed in this meta-analysis.
The authors concluded that the overall pooled estimate did not show any therapeutic benefit of PVI plus FIRM over PVI alone.
That's it for this month, but keep listening. Suraj Kapa will be surfing all journals for the latest topics of interest in our field. Remember to download the podcast On The Beat. Take it away, Suraj.
Suraj Kapa: Thank you, Paul, and welcome back to “On The Beat”. Again, my name is Suraj Kapa and I'm here to review with you articles across the cardiac electrophysiology literature that were particularly hard hitting in the month of February.
To start, we review the area of atrial fibrillation, focusing on anticoagulation, with an article published in this past month's issue of the Journal of the American Heart Association by Steinberg et al., entitled Frequency and Outcomes of Reduced Dose Non-Vitamin K Antagonist Anticoagulants: Results From ORBIT-AF II. The ORBIT-AF II registry, the Outcomes Registry for Better Informed Treatment of Atrial Fibrillation II, is a prospective national observational registry of AF patients.
The authors sought to describe the frequency, appropriateness, and outcomes of patients prescribed reduced doses of NOACs in community practice. They reviewed the records of almost 8,000 patients receiving NOACs and noted that the vast majority, nearly 84%, received a standard dose of NOACs, consistent with U.S. FDA labeling. Only 16% received a reduced dose, and only 43% of these reductions were consistent with labeling instructions. Those who received reduced-dose NOACs inappropriately tended to be younger and, interestingly, to have lower overall bleeding risk scores.
Furthermore, compared with those receiving appropriate dosing, patients receiving inappropriately reduced-dose NOACs had higher unadjusted rates of thromboembolic events and death.
These data are important to understand, in that, discussion with patients, that inappropriate reduction of NOACs does not necessarily offer appropriate protection against long-term risk of thromboembolic events. Thus, close attention must be paid to consideration of the use cases and instructions for use.
While the registry cannot get into the details of why the dose was reduced in the spectrum of patients, it does highlight the fact that this continues to be a problem in general practice.
Further data is needed to understand what leads to inappropriate dose reduction, which could include factors such as patient preference, or physician education.
Staying within the realm of anticoagulation and understanding individual needs, we next review an article published in this past month's issue of Circulation, by Nielsen et al., entitled Female Sex Is a Risk Modifier Rather Than a Risk Factor for Stroke in Atrial Fibrillation. Should we use a CHA2DS2-VA score rather than CHA2DS2-VASc? In this review, the authors sought to evaluate whether female sex is truly an overall risk factor, as opposed to a risk modifier.
Using three nationwide registries, they identified patients with nonvalvular atrial fibrillation between 1997 and 2015 and calculated two sets of scores. The first, which they termed the CHA2DS2-VA score, was calculated for men and women with one year of follow-up in the Danish National Patient Registry, with risk estimated using the pseudo-value method. They then evaluated female sex as a prognostic factor by including it as an interaction term with the CHA2DS2-VA score to calculate overall thromboembolic risk.
Amongst over 200,000 patients with atrial fibrillation, almost half of whom were women, they noted that the mean CHA2DS2-VA score, where sex is excluded, was a tad higher in women than men, namely 2.7 vs. 2.3. However, women had an overall higher one-year thromboembolic rate of 7.3 vs. 5.7 per 100 person-years. Interestingly, with a CHA2DS2-VA score of zero, the absolute risk of thromboembolism was equal amongst men and women, around 0.5%. Once overall points increased above one, however, women exhibited a higher stroke risk. This interaction was statistically significant.
Thus, the authors indicated that female sex is a risk modifier for stroke in patients with atrial fibrillation, rather than a risk factor. The terminology is important to consider. Essentially, what they are noting is that at the lower risk level, female sex, in and of itself, is not something that necessarily puts somebody in the higher risk cohorts. Instead, at higher risk levels, because of other factors, a woman may have a higher overall risk of stroke than men. Thus, stroke risk is accentuated in women, who would have been eligible for oral anticoagulating treatment anyway, on the basis of a CHADS score above one.
These data highlight the importance of thinking about the fact that at the lower risk score level, female sex alone might not be sufficient to say that a patient has reached the CHA2DS2-VASc score of one and above. But, really, you need an overall CHA2DS2-VA score, or a risk score, inclusive of at least two other risk factors to indicate that now, being a female is going to modify the risk and further accentuate it.
Now, one thing to note is, these data are very consistent with the guidelines. The European guidelines indicate that female sex alone, which in the CHA2DS2-VASc score would confer a risk score of one, should not, by itself, construe the need to put somebody on anticoagulation.
However, it's important to highlight that these data show that at a CHA2DS2-VASc score of one in females, they should really be construed as equivalent to a CHA2DS2-VASc score of zero in men.
Using the CHA2DS2-VA score, where sex is excluded, but considering that women overall have a higher incidence of stroke at any given CHA2DS2-VA level above one, will help better counsel women about the importance of being on anticoagulants.
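The distinction between the two scores discussed above comes down to a single point assignment. As a minimal sketch using the published point assignments, the difference can be expressed as follows; the function and parameter names are our own illustration, not code from the study.

```python
def cha2ds2_vasc(chf, htn, age, diabetes, stroke, vascular, female):
    """CHA2DS2-VASc: congestive heart failure (1), hypertension (1),
    age >= 75 (2) or 65-74 (1), diabetes (1), prior stroke/TIA (2),
    vascular disease (1), and sex category (female, 1)."""
    score = int(chf) + int(htn) + int(diabetes) + int(vascular)
    score += 2 if age >= 75 else (1 if age >= 65 else 0)
    score += 2 * int(stroke)
    score += 1 if female else 0
    return score


def cha2ds2_va(chf, htn, age, diabetes, stroke, vascular):
    """CHA2DS2-VA: identical, but the female-sex point is dropped,
    as proposed by Nielsen et al."""
    return cha2ds2_vasc(chf, htn, age, diabetes, stroke, vascular, female=False)


# Example: a 70-year-old woman with hypertension scores
# CHA2DS2-VASc = 3 (hypertension + age 65-74 + female sex),
# but CHA2DS2-VA = 2, with female sex then treated as a risk
# modifier applied on top of the sex-free score.
```

The point of the authors' proposal is visible here: a woman with no other risk factors scores zero on CHA2DS2-VA, matching the observed equal absolute risk at the lowest stratum, while her higher risk at scores above one is handled as a modifier rather than a fixed point.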
The next article we review relates to long-term risk related to atrial fibrillation, published in February's issue of Heart Rhythm, by Nishtala et al., entitled Atrial Fibrillation and Cognitive Decline in the Framingham Heart Study. While there's much out there about the potential long-term role of cognitive decline in atrial fibrillation patients, longitudinal research investigating the relationship is relatively sparse. Thus, the authors sought to investigate the association between atrial fibrillation and cognitive performance, cross-sectionally and longitudinally.
They chose patients within the Framingham study who were dementia- and stroke-free at the time of baseline neuropsychological assessments. They evaluated atrial fibrillation status as a two-level variable, namely prevalent atrial fibrillation vs. no atrial fibrillation, in cross-sectional analyses. They also separated patients into prevalent atrial fibrillation at baseline, interim development of atrial fibrillation, and no development of atrial fibrillation in longitudinal analyses.
They studied 2,682 participants in the Framingham Heart Study, including the original and offspring cohorts. At baseline, about 4% had diagnosed atrial fibrillation. Prevalent AF was noted to be significantly associated with poorer attention. Interestingly, sex differences were noted, with men performing worse on tests of abstract reasoning and executive function than women.
They noted that prevalent atrial fibrillation was significantly associated with longitudinal decline in executive function in the original cohort, and interim atrial fibrillation was significantly associated with longitudinal decline in executive function in the offspring cohort. Thus, they noted that atrial fibrillation is associated with a profile of long-term change in cognitive function.
The importance of these data is to further highlight the potential contribution of atrial fibrillation to cognitive decline. While the exact mechanisms remain to be fully elucidated, the question of how to get ahead of the cognitive decline associated with atrial fibrillation is further raised by these data.
Whether the relationship between atrial fibrillation and cognitive decline is due to recurrent thromboembolic events, vs. the therapies used, vs. other factors such as hemodynamic factors resulting in poor brain perfusion, remains relatively unclear.
Certainly it is also possible that atrial fibrillation simply reflects a process associated with other factors that might lead to cognitive decline. However, again, further mechanistic studies and potential treatment interventions to mitigate the risk of cognitive decline are still needed.
Speaking of this, we next review a paper published in the European Heart Journal this past month, by Friberg and Rosenqvist, entitled Less Dementia with Oral Anticoagulation in Atrial Fibrillation.
Speaking of treatments to avoid long-term cognitive decline, the authors sought to evaluate if oral anticoagulant treatment might offer protection against long-term dementia risk in atrial fibrillation.
This retrospective registry study included patients with hospital diagnoses of atrial fibrillation and no prior diagnosis of dementia in Sweden between 2006 and 2014, for a total of 444,106 patients over 1.5 million patient-years. They noted that anticoagulant treatment at baseline was associated with a 29% lower risk of dementia compared with no anticoagulant treatment, and an overall 48% lower risk on treatment with appropriate anticoagulation. There was no difference between warfarin and the newer oral anticoagulants.
Thus, the authors concluded that the risk of dementia is higher without oral anticoagulant treatment in patients with atrial fibrillation, suggesting that early initiation of anticoagulant treatment in patients with atrial fibrillation could be of value to preserve long-term cognitive function.
This relates directly back to the previous paper, which focused more on the epidemiologic risk, while this paper focuses on elements that might construe mechanism or treatment options.
Many authors have noted the incredible importance of early recognition of the need for anticoagulant initiation in patients with atrial fibrillation. While the exact mechanism of cognitive decline and dementia in atrial fibrillation remains to be completely elucidated, recurrent thromboembolic events, which might be relatively silent as they occur but result in a long-term cumulative risk, might be mitigated by placing patients on anticoagulants.
This becomes another reason to counsel patients on the importance of long-term anticoagulant therapy. Certainly, the limitations of these studies, however, are the retrospective nature and the fact that there might be some subtle differences that may not be otherwise able to be construed from retrospective registry data regarding the relative role of anticoagulants in truly protecting against long-term cognitive decline. However, the data are certainly provocative.
Continuing within this realm and discussing outcomes associated with atrial fibrillation, we next review an article by Leung et al., entitled The Impact of Atrial Fibrillation Clinical Subtype on Mortality, published in JACC: Clinical Electrophysiology this past month.
The authors sought to investigate the prognostic implications of atrial fibrillation subtype, paroxysmal or persistent, on long-term prognosis. They evaluated differences in mortality between paroxysmal and persistent atrial fibrillation amongst 1,773 patients, adjusting for comorbid diseases associated with atrial fibrillation as well as the CHA2DS2-VASc score. In the study, a total of 1,005 patients, or about 57%, had persistent atrial fibrillation. Over the follow-up period, about 10% of those with paroxysmal atrial fibrillation and 17% of those with persistent atrial fibrillation died.
They noted that persistent atrial fibrillation, after correcting for other comorbidities, was independently associated with worse survival. Thus, they concluded that persistent atrial fibrillation is independently associated with increased mortality in the long term.
These data are relevant in that they highlight that persistent atrial fibrillation in its nature might construe an overall higher risk cohort. It remains to be fully understood what are the true mechanistic differences between persistent and paroxysmal atrial fibrillation. Overall, however, the community grossly agrees that persistent atrial fibrillation likely suggests a higher degree of atrial myopathy. If we believe this, then it is reasonable to believe that the risk associated with this specific form of atrial fibrillation might result in higher long-term harm.
Of course, these data are subject to the same limitations as all retrospective data. Namely, these persistent atrial fibrillation patients might have received different therapies, or might have been sicker to start with in ways that cannot be captured by comorbidities alone.
Furthermore, these data do not necessarily get to the point of whether treating atrial fibrillation in the persistent patient more aggressively necessarily reduces the risk equivalent to that of paroxysmal patients. Thus, further understanding is needed to understand how to use these data to reduce this mortality difference.
Continuing within the realm of epidemiology of atrial fibrillation, we next review an article published in this past month's issue of Circulation, by Mandalenakis et al., entitled Atrial Fibrillation Burden in Young Patients with Congenital Heart Disease. It is assumed that patients with congenital heart disease are vulnerable to atrial fibrillation because of multiple factors. These include residual shunts, hemodynamic issues, atrial scars from previous heart surgery, valvulopathy and other factors.
However, there are limited data on the overall risk of developing atrial fibrillation, and the complications associated with it, especially in children and young adults with congenital heart disease. Furthermore, the risk in these children and young adults with congenital heart disease has never been compared with that of control subjects.
The authors used the Swedish Patient and Cause of Death Registries to identify all patients with diagnoses of congenital heart disease born from 1970 to 1993. They then matched these patients with control subjects from the Total Population Register in Sweden. They noted, amongst almost 22,000 patients with congenital heart disease and almost 220,000 matched control subjects, that 654 patients amongst the congenital heart disease cohort developed atrial fibrillation, while only 328 amongst the larger control group developed atrial fibrillation. The mean follow-up overall was 27 years.
They noted the risk of developing atrial fibrillation was almost 22 times higher amongst patients with congenital heart disease than control subjects. They noted the highest risk with a hazard ratio of over 84 was noted in patients with conotruncal defects. Furthermore, at the age of 42 years, over 8% of patients with congenital heart disease had a recorded diagnosis of atrial fibrillation.
Interestingly, heart failure was a particularly important complication in patients with congenital heart disease and atrial fibrillation, with over 10% of patients with both atrial fibrillation and congenital heart disease developing a diagnosis of heart failure as well.
These data are important in that they help in counseling on the importance of close follow-up of patients with congenital heart disease, given their long-term risk of other complications. Even if patients might be perceivably well managed, incident atrial fibrillation might increase the risk of stroke in these patients. It is further important to note that many of these patients cannot be evaluated according to traditional risk score evaluations. Thus, it is important to consider whether or not a patient should be treated with anticoagulation once they develop atrial fibrillation.
The high overall incidence of atrial fibrillation, particularly in patients with more complex congenital defects, needs to be taken into consideration when advising on the frequency of follow-up.
It is important to further note that we must think of this overall risk as the minimum possible risk. Namely, counseling a congenital heart disease patient that up to one in ten of them may develop atrial fibrillation by the age of 42 years is likely a minimum estimate. The reason for this is that many patients, due to either lack of follow-up, lack of sufficient monitoring, or the asymptomatic nature of atrial fibrillation, might never have been diagnosed.
Implications for treatment remain to be seen, and whether or not there are methods to reduce the overall risk of atrial fibrillation is unclear. However, engaging congenital heart disease experts and advising patients, especially at younger ages, on the importance of close electrocardiographic monitoring for potential atrial fibrillation is critical.
Next, within the realm of atrial fibrillation, we switch to the topic of ablation, and review an article by Pallisgaard et al., published in this last month's issue of European Heart Journal, entitled Temporal Trends in Atrial Fibrillation Recurrence Rates After Ablation Between 2005 and 2014: A Nationwide Danish Cohort Study.
Ablation has been increasingly used as a rhythm control strategy for patients with atrial fibrillation. Over this time, we have all noted evolution in both the experience and the techniques used. Thus, the authors sought to evaluate whether recurrence rate of atrial fibrillation has changed over the last decade. They included all patients with first-time AF ablation done between 2005 and 2014 in Denmark. They then evaluated recurrent atrial fibrillation based on a one year follow-up. They included a total of 5,425 patients undergoing first-time ablation.
They noted, interestingly, that the median patient age increased over time, and the median AF duration prior to ablation decreased over time. However, the rate of recurrent atrial fibrillation decreased from 45% in 2005 to 31% in the more recent years of 2013 to 2014, with the relative risk of recurrent atrial fibrillation almost being cut in half.
They noted that female gender, hypertension, atrial fibrillation duration of more than two years, and cardioversion within one year prior to ablation were all associated with an increased risk of recurrent atrial fibrillation, regardless of year.
These data, again, are retrospective and thus must be taken in that context. However, they highlight that it is possible that either our selection of appropriate patients for atrial fibrillation ablation or our techniques have improved overall success.
The fact that atrial fibrillation ablation is still a relatively young field, with evolving approaches and evolving techniques, needs to be taken into consideration when advising patients on success rates. Using data from many years prior to inform discussion today is fraught with potential error, especially as our catheter designs, mapping systems, and understanding of the appropriate lesion set change.
Of course, some criticism is required as well. While the patients included were relatively older in more recent years, the total AF duration prior to ablation decreased over the years. This suggests that patients are being ablated earlier than they were in the early days of atrial fibrillation ablation.
There is some data out there to suggest that earlier ablation for atrial fibrillation might result in a lower long-term recurrence rate. Thus, this might account for some of the difference. However, it is unlikely that it accounts for all of it, given the degree of reduction in the overall risk of recurrence.
Staying within the trend of talking about changes in techniques for atrial fibrillation ablation, we next review an article published in this past month's issue of Heart Rhythm, by Conti et al., entitled Contact Force Sensing for Ablation of Persistent Atrial Fibrillation: A Randomized, Multicenter Trial. Contact force sensing is one of the newer techniques being used to optimize the success rates for atrial fibrillation ablation. It is generally felt that understanding when one is in contact will optimize atrial fibrillation ablation outcomes by ensuring the physician knows each time they are in contact, and also potentially reducing complications by avoiding excessive contact.
Thus, the authors designed the TOUCH AF trial to compare contact force sensing-guided ablation vs. contact force sensing-blinded ablation. They included a total of 128 patients undergoing first-time ablation for persistent atrial fibrillation, and thus randomized them to a situation where the operator was aware of the contact force vs. blinded to the contact force. While the force data was hidden in the blinded cohort, it was still recorded on the backend.
In all patients, wide antral pulmonary vein isolation plus a roof line was performed, and patients were followed at 3, 6, 9, and 12 months, with clinical visits, ECGs, and 48-hour Holter monitoring.
The primary endpoint was cumulative radiofrequency time per procedure, and an atrial arrhythmia lasting greater than 30 seconds after three months was considered a recurrence.
They noted that average force was higher in the contact force-guided arm than in the contact force-blinded arm, though not statistically significantly so, with an average of 14 grams in the guided arm and 12 grams in the blinded arm.
Interestingly, the total time of ablation did not differ between the two groups. Furthermore, there was no difference in single-procedure freedom from atrial arrhythmia, coming out to about 60% in the contact force-guided arm vs. 63% in the contact force-blinded arm. They did note, however, that lesions with associated gaps were associated with significantly less force and a lower force-time integral.
The authors concluded from this that contact force-guided ablation did not result in a significant decrease in total radiofrequency time or in 12-month outcomes in terms of freedom from atrial arrhythmias.
These data are important to help guide us in terms of thinking about how the tools we use, as they change, actually alter outcomes. Sometimes we may perceive benefits based on the logical thinking that knowing more about what is happening when we are performing a procedure should optimize that procedure. However, this is not necessarily always the case, which highlights the importance of randomized trials to directly compare different situations, such as awareness of contact force vs. lack of awareness of contact force.
The relevance of this particular article is that comparing catheters with different designs does not necessarily address the importance of the force number itself. Namely, comparing a contact force catheter vs. a non-contact force catheter implies the use of essentially two completely different catheters. To understand the incremental utility of force in making decisions, it is important to consider the same catheter, but simply with awareness or lack of awareness of the actual force number.
One of the limitations, however, is that individuals who might have been trained on using the same force sensing catheter might have some degree of tactile feedback and understanding of the amount of force being applied to the tip of the catheter, based on having been repeatedly exposed to contact force numbers during use of said catheter. Thus, there might be a difference in being blinded to contact force in early stage operators than in later stage operators who might have been trained based on repeated feedback.
Thus, it is difficult to conclude, necessarily, that contact force is not offering incremental benefit. In fact, there is a fair chance that it does. However, offering a skeptical viewpoint to help gauge the importance of continually evolving technology in actually improving outcomes is important.
Finally, within the realm of atrial fibrillation, we review an article published by Pathik et al., in this past month's issue of Heart Rhythm, entitled Absence of Rotational Activity Detected Using 2-Dimensional Phase Mapping and the Corresponding 3-Dimensional Phase Maps in Human Persistent Atrial Fibrillation.
Current clinically used phase mapping systems involve 2-dimensional maps. However, this process may affect accurate detection of rotors. The authors sought to develop a 3-dimensional phase mapping technique that uses the 3D locations of the same basket electrodes that are used to create the currently available 2-dimensional maps. Specifically, they wanted to determine whether the rotors detected in 2D phase maps were present in the corresponding time segments and anatomical locations in 3D phase maps.
They used one-minute left atrial recordings of atrial fibrillation obtained in 14 patients using the basket catheter, and analyzed them offline, using the same phase values, based on 2-dimensional vs. 3-dimensional representations.
Using 2D phase mapping, they noted rotors 3.3% of the time: 9 of 14 patients demonstrated a total of 10 transient rotors, with a mean rotor duration of about 1.1 seconds. They noted that none of the 10 rotors, however, were seen at the corresponding time segments and anatomical locations in the 3D phase maps. When looking at the 3D phase maps, 4 of the 10 corresponded with single wavefronts, 2 of 10 corresponded with simultaneous wavefronts, 1 of 10 corresponded with disorganized activity, and 3 of 10 had no coverage by the basket catheter at the corresponding 3D anatomical locations.
These data are important, in that they highlight what we must consider when reflecting 2-dimensional systems onto the 3-dimensional world of atrial fibrillation. The role of ablating rotors is still in question. However, it remains an important question, and it requires continued study. The best ways of identifying a rotor, knowing a rotor is a rotor, and understanding where the rotor is are going to be critical to further evaluating whether actual ablation of these rotors has any relevance to long-term atrial fibrillation ablation outcomes.
The truth is that we need to be sure that we are properly identifying all the rotors in order to help judge whether or not we are actually being successful in ablating atrial fibrillation. The importance of the study lies in assessing whether 2-dimensional representations of the 3-dimensional geometry are sufficient to reflect what is actually happening in that 3-dimensional geometry. These authors suggest that they are not.
One of the limitations, however, might be that when we wrap a 2-dimensional framework into 3 dimensions and perform additional post-processing, this might result in some degree of attenuation of the data. However, it does highlight the importance for continued rigorous evaluation of current approaches to phase mapping.
Several articles have been published in recent months, as well, about different signal processing techniques to evaluate whether or not a rotor is, in fact, a rotor, and to help optimize identification of them.
The jury is still out on whether or not targeted ablation of rotors will, in fact, improve overall long-term atrial fibrillation ablation outcomes. The limitation might not necessarily be that rotors are not an appropriate target, but that we just don't understand entirely where rotors are, based on limited signal processing options, or based on limitations of anatomical localization.
Next, delving into the realm of ablation at large, we review an article by Iwasawa et al., published in this past month's issue of Europace, entitled Transcranial Measurement of Cerebral Microembolic Signals During Left-Sided Catheter Ablation with the Use of Different Approaches - the Potential Microembolic Risk of a Transseptal Approach.
The authors note the importance of considering microembolization and subclinical brain damage during catheter ablation procedures. They evaluated microembolic signals detected by transcranial Doppler during ablation of supraventricular or ventricular arrhythmias with the use of either a transseptal or a retrograde approach.
The study set was small, including only 36 patients who underwent catheter ablation. In 11 patients, left-sided ablation was done with a transaortic approach, and in 9 patients a transseptal approach was used. The other 16 patients were not included in the comparison, as they only had right-sided ablation.
The total number of microembolic signals, based on transcranial Doppler, was counted throughout the procedure and then analyzed offline. There was no significant difference in the number of radiofrequency applications, total energy delivery time, total application of energy, or total procedure time between the different groups. However, they did note that the mean total number of microembolic signals was highest in those undergoing a transseptal approach to left-sided ablation. It was significantly lower in those having a retrograde aortic approach, and lowest in those having right-sided only ablation.
Interestingly, many of the microembolic signals were detected during the transseptal puncture period, with a relatively even distribution of emboli formation during the remainder of the procedure. A frequency analysis suggested that the vast majority of microembolic signals were gaseous, particularly in Groups 1 and 3, though only 91% in Group 2. No neurological impairment was observed in any of the patients after the procedure.
Recently, there has been a lot of focus on the potential long-term risk of cognitive impairment due to microembolic events in the setting of ablation. At least one recent paper in ventricular arrhythmias and several recent papers in atrial fibrillation ablation have suggested a fairly high incidence of cerebral emboli noted on MRI post ablation. While these results do not necessarily get at MRI lesions, they do suggest microembolic events. Most interestingly, they look at microembolic events that occur throughout the entire ablation period with different approaches.
Interestingly, there was a massive spike in overall microembolic signals during the transseptal puncture period, and a relatively even distribution throughout ablation, irrespective of application of radiofrequency or not. Furthermore, while nearly all microembolic signals were gaseous based on frequency analysis with the retrograde aortic approach or right-sided only ablation, significantly fewer seemed to be due to gaseous events in those having a transseptal approach.
It is known that there is possible damage to the internal dilator system when exposing it to transseptal needles or wires. Thus, one has to wonder whether some of the embolization could be from material associated with the actual transseptal puncture, either from portions of the punctured septum itself, or perhaps from the plastic material that is being pushed transseptally.
These data still need to be considered, and we have yet to see what the long-term implications of these kinds of findings are. It may be possible that while the transseptal approach seems to produce more microembolic signals at the time, if the long-term risk is no different, does it really matter?
However, these findings are provocative in the sense that they highlight potentially significant differences in the risk of silent cerebral damage, based on the approach we use for ablation.
Changing gears, we next focus on the role of devices. The first paper we review is in last month's issue of JACC: Heart Failure, by Gierula et al., entitled Rate Response Programming Tailored to the Force Frequency Relationship Improves Exercise Tolerance in Chronic Heart Failure.
The authors sought to examine whether the heart rate at which the force frequency relationship slope peaks can be used to tailor heart rate response in chronic heart failure patients with cardiac pacemakers, and to see whether this favorably influences exercise capacity.
They performed an observational study in both congestive heart failure patients and healthy subjects with pacemaker devices. They then evaluated, in a double-blind, randomized, controlled crossover study, the effects of tailored pacemaker rate response programming on the basis of a calculation of the force frequency relationship, based on critical heart rate, peak contractility, and the slope of the force frequency relationship.
They enrolled a total of 90 patients with congestive heart failure into the observational study cohort, and 15 control subjects with normal LV function. A total of 52 patients took part in the crossover study. They noted that rate response settings limiting heart rate rise to below the critical heart rate were associated with greater exercise time and higher peak oxygen consumption, suggesting that tailored rate response programming can offer significant benefit, particularly in congestive heart failure patients.
The importance of this trial is that it highlights the value of thoughtful decision-making in programming devices, and that group decision-making involving exercise physiologists alongside pacemaker programmers and congestive heart failure specialists might be the most critical in optimizing the approach to programming.
It might be that more aggressive measures are needed in congestive heart failure patients to decide on what optimal programming is, than it is in otherwise normal patients.
Staying within the realm of devices, we next focus on a publication by Sanders et al., published in this past month's issue of JACC: Clinical Electrophysiology, entitled Increased Hospitalizations and Overall Healthcare Utilization in Patients Receiving Implantable Cardioverter-Defibrillator Shocks Compared With Antitachycardia Pacing.
The authors sought to evaluate the effect of different therapies on healthcare utilization in a large patient cohort, specifically comparing antitachycardia pacing with high-voltage shocks. They used the PROVIDE registry, which is a prospective study of patients receiving ICDs for primary prevention in 97 U.S. centers. They categorized these patients by the type of therapy delivered, namely no therapy, ATP only, or at least one shock. They then adjudicated all ICD therapies, hospitalizations, and deaths.
Of the 1,670 patients included, there was a total follow-up of over 18 months. The vast majority, 1,316, received no therapy, while 152 had ATP only, and 202 received at least one shock.
They noted that patients receiving no therapy and those receiving only ATP had a lower cumulative hospitalization rate and a lower risk of death or hospitalization. The cost of hospitalization was noted to be significantly higher for those receiving at least one shock than for those receiving only ATP therapy.
They noted no difference in outcomes or cost between patients receiving only ATP and those without therapy. Thus, the authors concluded that those receiving no therapy or those receiving only ATP therapy had similar outcomes, and had significantly reduced hospitalizations, mortality, and costs compared to those who received at least one high voltage shock.
The relevant findings from this study are similar to those of prior studies suggesting that any shock over follow-up is associated with a potential increase in long-term mortality. The difficulty in assessing this, however, is that those who have VT that can be appropriately ATP terminated might be at a somewhat lower risk than those who need to be shocked out of their VT. Thus, the need for a shock to restore normal rhythm might mark a higher-risk cohort that cannot be gleaned from traditional evaluation of morbid risk factors.
This is why the importance of considering how devices are programmed, and whether or not a patient who has received shocks can be reprogrammed to offer ATP-only therapy to terminate those same VTs, needs to be taken into consideration. How best to tailor this therapy, however, still remains to be determined, though more and more clinical trials are coming out to offer guidance on optimal overall population-wide programming for devices.
Staying with the realm of devices, we next review an article by Koyak et al., in this past month's issue of Europace, entitled Cardiac Resynchronization Therapy in Adults with Congenital Heart Disease.
Heart failure is one of the leading causes of morbidity and mortality amongst patients with congenital heart disease, but there is limited experience with the role of cardiac resynchronization therapy amongst these patients. Thus, the authors sought to evaluate the efficacy of CRT in adults with congenital heart disease.
They performed a retrospective study on a limited number of 48 adults with congenital heart disease who received CRT, amongst four tertiary referral centers. They defined responders as those who showed improvement in NYHA functional class or improvement in systemic ventricular ejection fraction. The median age at CRT implant was 47 years, with 77% being male. A variety of syndromes were included.
They noted that the majority of patients, nearly 77%, responded to CRT, whether by improvement in NYHA functional class or in systemic ventricular function, with a total of 11 non-responders.
They noted that CRT was accomplished with a success rate comparable to that in patients with acquired heart disease. However, the anatomy was much more complex, and the technical challenges in achieving successful implantation were greater.
The authors concluded that further studies are needed to establish the appropriate guidelines for patient selection amongst these patients. Certainly, one can state that given this is a retrospective study, patient selection might have been based on knowledge of a reasonable coronary sinus [inaudible 01:00:39] anatomy.
However, it does highlight the importance of consideration of CRT in these patients equivalent to what we would consider in those with other acquired heart disease, such as myocardial infarction, or primary prevention-type myopathic diseases.
I fully agree with the authors, however, that we need a better understanding of which specific congenital heart disease patients resynchronization is optimal for.
Finally, within the realm of devices, we review an article published in this past month's issue in JACC: Clinical Electrophysiology, by Khurshid et al., entitled Reversal of Pacing-Induced Cardiomyopathy Following Cardiac Resynchronization Therapy.
The authors sought to determine the extent, time course, and predictors of improvement following CRT upgrade among pacing-induced cardiomyopathy patients. They retrospectively studied over 1,200 consecutive patients undergoing CRT procedures between 2003 and 2016. They specifically looked at those who underwent CRT upgrade from a dual-chamber or single-chamber ventricular pacemaker due to pacemaker-induced cardiomyopathy.
They defined pacemaker-induced cardiomyopathy as a decrease of more than 10% in left ventricular ejection fraction, resulting in EF less than 50% among patients receiving more than 20% RV pacing, without an alternative cause of cardiomyopathy.
Severe pacing-induced cardiomyopathy was defined as a pre-upgrade LVEF less than or equal to 35%. They noted a total of 69 pacing-induced cardiomyopathy patients amongst the larger cohort. After CRT upgrade, ejection fraction improved from 29% to over 45% over a median seven months of follow-up. A total of 54 patients had severe pacing-induced cardiomyopathy, again defined as a pre-upgrade LVEF less than or equal to 35%. The vast majority of these, 72%, improved to an ejection fraction above 35% over a median seven months.
Most of the improvement occurred within the first three months. Although in some, improvement continued over the remainder of the first year. A narrower native QRS was associated with a greater LVEF improvement after CRT upgrade.
The authors concluded that CRT is highly efficacious in reversing pacing-induced cardiomyopathy, with 72% of those with severe pacing-induced cardiomyopathy achieving LV ejection fractions greater than 35%. Thus, the authors suggest that these data support initially upgrading to a CRT pacemaker, prior to considering a CRT defibrillator, and holding off on consideration of upgrading to a CRT defibrillator for at least one year, on the basis that some patients show continued improvement in ejection fraction over the remainder of the first year, even after the first three months.
This article is provocative for several reasons. First off, the question is: what should a patient with an ejection fraction less than 35% and a new diagnosis of presumed pacing-induced cardiomyopathy be upgraded to? Should it be a CRT pacemaker or a CRT defibrillator? Some would argue that a CRT defibrillator should be implanted to avoid multiple procedures over time. However, defibrillators are larger devices with more complex lead systems and a potential risk of inappropriate shocks, so there is risk of harm as well.
Furthermore, there is the question of the timing at which one should consider further upgrade to a defibrillator. Should it be due to lack of recovery over three months of follow-up, or longer? This is an area of active debate, as follow-up studies looking at patients with primary prevention devices over long-term evaluation have suggested that as many as a third of patients might see improvement in ejection fraction to where they no longer meet primary prevention defibrillator indications.
Thus, these findings are provocative, though in a relatively small number of patients, in suggesting that amongst patients with presumed pacing-induced cardiomyopathy, the primary intervention should be upgrade to a CRT pacemaker, and the decision on further need for implantation of a defibrillator should be deferred for at least one year. This might inform future prospective studies to evaluate the same.
Changing gears yet again, we next focus on the realm of ventricular arrhythmias. The first article we will review was published in last month's issue of Heart Rhythm, by Hyman et al., entitled Class IC Antiarrhythmic Drugs for Suspected Premature Ventricular Contraction-Induced Cardiomyopathy.
The authors sought to evaluate the potential utility of Class IC antiarrhythmic drugs to suppress PVCs amongst patients who have [inaudible 01:04:57] presumed PVC-induced left ventricular dysfunction. It is widely recognized that Class IC drugs are associated with increased mortality in patients with PVCs and left ventricular dysfunction after myocardial infarction. However, their safety in those who have what appears to be a reversible myopathy due to PVCs is not established.
The authors reviewed a small number of patients, namely 20 patients, who had PVC-induced cardiomyopathy and were treated with Class IC drugs. These patients had an average of 1.3 plus or minus 0.2 previous unsuccessful ablations. A total of six actually had an ICD or wearable defibrillator.
They noted a mean reduction in PVC burden from 36% to 10% with use of a Class IC agent, with an associated increase in mean left ventricular ejection fraction from 37% to 49%. Among seven patients within the cohort who also had myocardial delayed enhancement on cardiac MRI, they noted similar improvement in ejection fraction with reduction of PVC burden. There were no sustained ventricular arrhythmias or sudden deaths noted over about 3.8 treatment-years of follow-up.
Thus, the authors concluded that amongst patients with presumed PVC-induced cardiomyopathy, Class IC drugs can effectively suppress PVCs and lead to LVEF recovery, even in a small subset of patients who have MRI evidence of structural disease, namely myocardial delayed enhancements.
Of course, this patient cohort is small, and thus might not have been large enough to capture the relatively rare outcomes associated with these Class IC drugs. However, it is important to consider what prior data show us when considering whether or not a drug can be used in different cohorts. When we label patients as having structural heart disease and consider whether or not a Class IC drug is appropriate, not all structural heart diseases are necessarily created equal. The original data suggesting that patients post myocardial infarction have a higher risk of events with use of a Class IC agent should not necessarily inform the decision to use Class IC agents in patients who have different causes of their cardiomyopathy.
Larger clinical trials are, however, needed to evaluate whether or not the simple presence of a low ejection fraction or myocardial enhancement should obviate the use of Class IC agents.
These data are provocative, however, in that they suggest the potential utility of using flecainide or propafenone in patients with specific PVC-induced cardiomyopathies that might not have been amenable to ablation.
Staying within the realm of ventricular arrhythmias, we focus on an article by Greet et al., published in this past month's issue of JACC: Clinical Electrophysiology, entitled Incidence, Predictors and Significance of Ventricular Arrhythmias in Patients with Continuous-Flow Left Ventricular Assist Devices: a 15-year Institutional Experience.
The authors sought to evaluate the incidence, predictors, and associated mortality of pre-implantation, early and late ventricular arrhythmias associated with implantation of continuous-flow left ventricular assist devices. Unfortunately, there's limited data currently on the prognostic impact of ventricular arrhythmias in contemporary LVADs.
Thus, the authors performed a retrospective review to identify all patients with LVADs and evaluate ventricular arrhythmia-associated risk.
A total of 517 patients were included in the analysis. They noted that early ventricular arrhythmias after LVAD implant were associated with a significant reduction in survival, with a hazard ratio around 1.8, when compared with patients with either late ventricular arrhythmias or no ventricular arrhythmias.
Pre-implantation variables that predicted early ventricular arrhythmias included prior cardiac surgery and the presence of pre-implantation ventricular tachycardia storm. They noted, however, that the incidence of early ventricular arrhythmias was highest in the early period of the experience, namely as high as 47% in the period of 2000 to 2007, but falling to less than 22% in more recent years, between 2008 and 2015, suggesting a temporal trend of decreased ventricular arrhythmia incidence post LVAD implantation.
The difficulty of retrospective data sets is to understand the "why" of what we are seeing. It is likely that those with early ventricular arrhythmias after LVAD might represent a sicker cohort to start with. Thus, whether or not suppressing these ventricular arrhythmias early after LVAD implantation will alter outcomes is unclear.
However, these data suggest that further study into whether or not more aggressive interventions to prevent early ventricular arrhythmias are needed should be considered. The fact that there's really no difference in outcomes amongst those with either no ventricular arrhythmias or late ventricular arrhythmias, however, is interesting. It suggests that a patient with new-onset ventricular tachycardia late after LVAD implantation may not necessarily need as aggressive an intervention, unless they are experiencing associated symptoms.
However, it also notes that those with late ventricular arrhythmias might not have associated worse long-term outcomes, when considering what to do with device therapy or other interventions.
These data might inform power analyses for prospective clinical trials on optimal approaches to try and suppress the incidence of early ventricular arrhythmias after LVAD implantation.
Next, moving to the realm of genetic arrhythmias, we review an article by Kapplinger et al., published in Circulation: Genomic and Precision Medicine this past month, entitled Yield of the RYR2 Genetic Test in Suspected Catecholaminergic Polymorphic Ventricular Tachycardia and Implications for Test Interpretation.
Pathogenic ryanodine receptor variants account for almost 60% of clinically definite cases of CPVT. However, there's also a significant rate of rare benign variants in the general population, which makes test interpretation difficult. Thus, the authors sought to examine the results of genetic tests among patients referred for commercial genetic testing, examining factors that might impact variant interpretability. They performed frequency and location comparisons of rare ryanodine receptor variants identified amongst 1,355 patients of varying clinical certainty of having CPVT, and over 60,000 controls.
They noted that a total of 18% of patients referred for commercial testing hosted rare ryanodine receptor variants. There was a significantly higher potential genetic false discovery rate among referrals that hosted rare variants.
They noted that current expert recommendations have resulted in increased use of ryanodine receptor genetic testing in patients with questionable clinical phenotypes. This is the largest CPVT patient versus control comparison to date.
They noted that, overall, the background rate of rare benign variants is around 3.2%. They also noted that in silico tools largely failed to show evidence toward enhancement of variant interpretation amongst patients.
The importance of these data lies in the fact that it highlights that in patients with questionable clinical phenotypes, use of ryanodine receptor genetic testing does not necessarily elucidate the mechanism or the presence of the disease, especially given the relatively high background rates of rare variants and the potential high rate of identifying rare variants amongst patients with presumed disease.
The yield of genetic testing amongst clinically definite cases was reported to be 59%, but, again, only 18% of all referred patients hosted rare variants, a significantly lower rate.
This might imply that many patients do not actually have the disease, but it also results in identification of high frequency of rare variants, which are hard to interpret because of the 3% background rate of rare benign variants.
These issues highlight the importance of not simply relying on the results of the genetic test to tell a patient whether or not they might have the disease, or how best to treat them. This highlights the importance of having genetic centers to which patients can be referred, in order to discuss the results of their genetic tests, and to further clarify the likelihood of their actually having the disease, to inform further therapy, in particular whether ICD implantation is warranted.
Staying within the realm of genetic channelopathies, we next review an article by Huang et al., published in Science Advances this past month, entitled Mechanisms of KCNQ1 Channel Dysfunction in Long QT Syndrome Involving Voltage Sensor Domain Mutations.
It is well recognized that mutations that induce loss of function or dysfunction of the human KCNQ1 channel are responsible for susceptibility to life-threatening heart rhythm disorders in congenital long QT syndrome.
While hundreds of mutations have been identified, the molecular mechanisms responsible for impaired function are not as well understood. Thus, the authors sought to investigate the impact of different variants with mutations located in the voltage sensor domain, to understand exactly what is leading to the arrhythmogenic potential.
Using experimentation combined with channel functional data, they sought to classify each mutation into six mechanistic categories. They demonstrated high heterogeneity in the mechanisms resulting in channel dysfunction or loss of function amongst mutations in the voltage sensor domain.
More than half were associated with destabilization of the structure of the voltage sensor domain, generally accompanied by mistrafficking and degradation by the proteasome. These observations reveal a critical role for the helix portion as a central scaffold to help organize and stabilize the KCNQ1 voltage sensor domain, a role likely to be of similar importance in the corresponding domains of many other ion channels.
The importance of this work lies in better understanding the functional significance of variants in specific regions of the gene. Speaking to the previous discussion by Kapplinger et al., the simple presence of a mutation alone might not be sufficient to imply the likelihood of disease causation. Prior work by several authors has suggested understanding location in the gene might be as important, if not more important, than the simple presence of the mutation.
However, better understanding the gene, and the reasons why specific areas of the gene might be associated with a higher likelihood of pathogenicity, might further help clarify whether a specific rare variant is of importance or of no importance, based on where it is located.
For these reasons, further study into where in a gene the mutation is located and likelihood of disease causation via these kinds of mechanistic assays will continue to be of importance to help clarify genetic testing, especially as it becomes more and more available.
Finally, we review two articles, both from the realm of basic electrophysiology. The first article we review is by Chu et al., published in this past month's issue of Circulation, entitled Increased Cardiac Arrhythmogenesis Associated With Gap Junction Remodeling With Upregulation of RNA-Binding Protein FXR1.
The authors sought to identify the functional properties of FXR1 expression, in terms of identifying the mechanisms regulating gap junction remodeling in cardiac disease. Gap junction remodeling is well established as a consistent feature of human heart disease, especially associated with the presence of spontaneous ventricular arrhythmias. However, the mechanisms underlying gap junction remodeling are not very well understood.
Thus, the authors sought to specifically evaluate how FXR1, an RNA-binding protein, plays a role in this function. They looked at both human and mouse samples of dilated cardiomyopathy. They noted that FXR1 is a multi-functional protein involved in translational regulation and stabilization of mRNA targets in heart muscle.
Furthermore, they demonstrated that introducing an FXR1 adeno-associated viral vector into mice led to redistribution of gap junctions and promoted ventricular tachycardia, suggesting a functional role in ventricular arrhythmogenesis for the FXR1 upregulation that is already seen in dilated cardiomyopathy.
Based on these results, the authors suggested that FXR1 expression plays an important role in disease progression of dilated cardiomyopathy by regulating gap junction remodeling. This in turn can lead to an increased risk of ventricular arrhythmogenesis.
While at the basic level this article is important in that it potentially highlights a pathway contributing to the increased arrhythmogenic potential in dilated cardiomyopathy, it is well recognized from a clinical perspective that ablation in non-ischemic cardiomyopathy tends to be less effective than in ischemic cardiomyopathy. This may partly be due to the fact that the mechanisms are substantially different.
In ischemic cardiomyopathy, dysregulation of gap junctions results in regions of potentially slow conduction, though this is due to multiple factors, in particular extensive areas of scarring. In dilated cardiomyopathy, where dysregulation of gap junction connections might be the cause of variable and often patchy cardiac conduction abnormalities, the role of ablation might be less clear.
Identifying novel molecular targets to limit arrhythmogenic potential, however, might provide novel approaches to treatments. Understanding these mechanisms might also in the future offer more targeted molecular approaches to suppression of ventricular arrhythmia risk.
Finally, within the realm of basic electrophysiology, we review an article by Kugler et al., published in this past month's issue of Heart Rhythm, entitled Presence of Cardiomyocytes Exhibiting Purkinje-type Morphology and Prominent Connexin45 Immunoreactivity in the Myocardial Sleeves of Cardiac Veins.
It is well recognized that the pulmonary vein myocardium is a potential source of atrial fibrillation. However, one question that remains is whether myocardial extensions into the caval veins and the coronary sinus have similar properties. No studies to date have documented specific pacemaker or conductive properties of the human extracardiac myocardium, specifically those seen in the vein sleeves.
Thus, the authors sought to characterize the histologic and immunohistochemical features of myocardial sleeves seen in the walls of cardiac veins.
They sectioned 32 human hearts, including specimens of pulmonary veins, the superior caval vein, the CS, the sinoatrial and atrioventricular nodes, and left ventricle. They noted that myocardial sleeves were found in the walls of pulmonary veins in 15 of 16 hearts, in 21 of 22 superior caval veins, and in all coronary sinuses.
Interestingly, bundles of glycogen-positive cardiomyocytes exhibiting pale cytoplasm and peripheral myofibrils were observed in all the venous sleeves. Based on staining and Connexin labeling, these were felt to be very consistent with Purkinje-type fibers.
This is the first data to suggest that cells that exist in the vein sleeves might have potential pacemaker or conductive properties.
The importance of these findings lies in highlighting the potential mechanisms and relevance of myocardial vein sleeves that extend into the coronary and caval vasculature. The type of cells present may also be relevant when deciding what therapy to use, particularly antiarrhythmic drug therapy. What specific roles these types of cells play in these areas is unclear.
Furthermore, the embryologic basis for why they occur where they occur is unclear. However, these findings may highlight the importance of considering these areas as targets, whether with ablation procedures or targeted drug interventions.
I appreciate everyone's attention to these key and hard-hitting articles that we have just focused on from this past month across the cardiac electrophysiology literature. Thanks for listening. Now back to Paul.
Paul Wang: Thanks, Suraj, you did a terrific job surveying all journals for the latest articles on topics of interest in our field. There's no easier way to stay in touch with the latest advances. These summaries, and a list of all major articles in our field each month, can be downloaded from the Circulation: Arrhythmia and Electrophysiology website. We hope that you'll find the journal to be the go-to place for everyone interested in the field.
See you next month.
In the 81 patients with persistent atrial fibrillation, there were 479 regions exhibiting repetitive regular activities in these patients, or 5.9 repetitive regular activities per patient. There were 232 regions in the mapping group, which consisted of 41 patients, and 247 regions in the control group, consisting of 40 patients. Overall, 39% of the repetitive regular activities were identified within pulmonary veins, whereas 61% were identified in non-pulmonary vein regions.
Mapping-guided ablation resulted in higher arrhythmia termination rate, as compared to conventional strategy, 61% vs. 30%, p < 0.007. Total RF duration, mapping, and fluoroscopy times were not significantly different between the groups. No major procedure related adverse events occurred.
After one year, 73% of the mapping group of patients were free of recurrences, compared to 50% of the control group, p = 0.03.
The authors concluded that targeted ablation of regions showing repetitive regular activities provided adjunctive benefit in terms of arrhythmia freedom at one year in treatment of patients with persistent atrial fibrillation. These findings should be confirmed by additional larger randomized multi-centered studies.
In the next article, Maciej Kubala and associates examine repolarization abnormalities in 40 patients with arrhythmogenic right ventricular cardiomyopathy, comparing extent and location of abnormal T-waves of one millimeter or greater in depth, downsloping elevated ST segment in two or more adjacent leads to the area and location of endocardial bipolar and unipolar, and epicardial bipolar voltage abnormalities. They found an abnormal unipolar right ventricular endocardial area of 33.4% with presence in eight patients without negative T-waves. Patients with negative T-waves extending beyond V3, seen in 20 patients, had larger low bipolar and unipolar endocardial areas, and larger epicardial low bipolar areas, compared to those with negative T-waves limited to leads V1 to V3.
ECG localization of negative T-waves regionalized to the location of substrate. Patients with downsloping elevated ST segment, all localized to leads V1, V2 had more unipolar endocardial abnormalities involving outflow in mid-right ventricle, compared to patients without downsloping elevated ST segment.
The authors concluded that in arrhythmogenic right ventricular cardiomyopathy, abnormal electrical areas were proportional to the extent of T-wave inversion on the 12-lead electrocardiogram. Marked voltage abnormalities can exist without repolarization changes. Downsloping elevated ST segment patterns in V1 and V2 occur with more unipolar endocardial voltage abnormalities, consistent with more advanced transmural disease.
In the next manuscript, Teresa Oloriz and associates examine the timing and value of programmed stimulation after catheter ablation for ventricular tachycardia. They performed 218 programmed ventricular stimulations six days after ablation in 210 consecutive patients, 48% with ischemic cardiomyopathy and a median left ventricular ejection fraction of 37%. After ablation, ICDs were programmed according to NIPS results: Class A, noninducible; Class B, nondocumented inducible VT; and Class C, documented inducible VT. Concordance between the programmed ventricular stimulation at the end of the procedure and at six days was 67%. The positive predictive value and negative predictive value were higher for the programmed ventricular stimulation at day six. Ischemic patients and those with preserved ejection fraction showed the highest negative predictive value.
Among patients noninducible at the end of the procedure but inducible at day six, 59% had VT recurrence at one-year follow-up. Recurrences were 9% when both studies were noninducible. There were no inappropriate shocks; the incidence of syncope was 3%, none harmful. The rate of appropriate shocks per patient per month according to NIPS was significantly reduced, comparing the month before and after the ablation.
The authors concluded that programmed ventricular stimulation at day six predicts VT recurrence.
In the next study, Tor Biering-Sørensen and associates examined whether ECG global electrical heterogeneity, GEH, and its longitudinal changes are associated with cardiac structure and function in the Atherosclerosis Risk in Communities study, ARIC, consisting of 5,114 patients, 58% of whom were female and 22% African American. They used resting 12-lead ECGs and echocardiographic assessments of left ventricular ejection fraction, global strain, left ventricular mass index, end diastolic volume index, and end systolic volume index at visit five.
Longitudinal analysis included ARIC participants with measured GEH at visits one to four. GEH was quantified by spatial ventricular gradient, the QRST angle, and the sum of the absolute QRST integral. Cross sectional and longitudinal regressions were adjusted for manifest subclinical cardiovascular disease.
Having four abnormal GEH parameters was associated with a 6.4% left ventricular ejection fraction decline, a 24.2 gram/meter squared increase in left ventricular mass index, a 10.3 milliliter/meter squared increase in left ventricular end diastolic volume index, and a 7.8 milliliter/meter squared increase in left ventricular end systolic volume index. Altogether, clinical and ECG parameters accounted for approximately one third of the left ventricular volume variability and 20% of the systolic function variability.
The associations were significantly stronger in patients with subclinical cardiovascular disease. The QRST integral increased by 20 millivolts/meter second for each three-year period in participants who demonstrated left ventricular dilatation at visit five. Sudden cardiac death victims demonstrated rapid GEH worsening, while those with left ventricular dysfunction demonstrated slow GEH worsening.
The authors concluded that GEH is a marker of subclinical abnormalities in cardiac structure and function.
In the next manuscript, Takumi Yamada and associates studied 19 patients with idiopathic ventricular arrhythmias, originating in the parietal band in 14 patients and in the septal band in 5 patients, among 294 consecutive patients with right ventricular arrhythmia origins. Parietal band ventricular arrhythmias exhibited a left bundle branch block pattern with a left inferior axis in 12 patients and a superior axis in 2 patients, while septal band ventricular arrhythmias exhibited a left or right inferior axis pattern in four and one patients, respectively.
In lead I, all parietal band ventricular arrhythmias exhibited R-waves, while septal band ventricular arrhythmias often exhibited S-waves. A QS pattern in lead aVR, and the presence of a notch in the mid QRS, were common in all infundibular muscle ventricular arrhythmias. During infundibular muscle ventricular arrhythmias, a far-field ventricular electrogram with early activation was always recorded in the His bundle region, regardless of the location of the ventricular arrhythmia origin. With a mean of 9.2 radiofrequency applications and a duration of 972 seconds, catheter ablation was successful in 15 of the 19 patients. Ventricular arrhythmias recurred in four patients during a follow-up period of 43 months.
In the next paper, Uma Mahesh Avula and associates examine the mechanisms underlying spontaneous atrial fibrillation, in an Ovine model of left atrial myocardial infarction. The left atrial myocardial infarction was created by ligating the atrial branch of the left anterior descending artery. ECG loop recorders were implanted to monitor atrial fibrillation episodes.
In seven sheep, Dantrolene, a Ryanodine receptor blocker, was administered in vivo, during the observation period. The left atrial myocardial infarction animals experienced numerous episodes of atrial fibrillation during the eight day monitoring period, that were suppressed by Dantrolene. Optical mapping showed spontaneous focal discharges originating through the ischemic/normal-zone border. These spontaneous focal discharges were calcium driven, rate dependent, and enhanced by isoproterenol, but suppressed by Dantrolene.
In addition, these spontaneous focal discharges initiated atrial fibrillation-maintaining reentrant rotors anchored by marked conduction delays at the ischemic/normal-zone border. Nitric oxide synthase 1 protein expression decreased in ischemic zone myocytes, while NADPH oxidase and xanthine oxidase enzyme activities and reactive oxygen species increased. Calmodulin aberrantly increased ryanodine binding to cardiac ryanodine receptors in the ischemic zone. Dantrolene restored the physiological binding of calmodulin to the cardiac ryanodine receptors.
The authors concluded that atrial ischemia causes spontaneous atrial fibrillation episodes in sheep, caused by spontaneous focal discharges that initiate re-entry. Nitroso-redox imbalance in the ischemic zone is associated with intense reactive oxygen species production and alters ryanodine receptor responses to calmodulin. Dantrolene administration normalized the calmodulin response and prevented spontaneous focal discharges and atrial fibrillation initiation after left atrial myocardial infarction.
In the next study, Wouter van Everdingen and associates examine the use of QLV for achieving an optimal acute hemodynamic response to CRT with a quadripolar left ventricular lead. 48 heart failure patients with left bundle branch block were studied, with a mean ejection fraction of 28% and a mean QRS duration of 176 milliseconds. Immediately after CRT implantation, invasive left ventricular pressure-volume loops were recorded during biventricular pacing with each separate electrode at four atrioventricular delays.
Acute CRT response, measured as the change in stroke work compared to intrinsic conduction, was related to the intrinsic interval between the Q on the electrocardiogram and the left ventricular sensed electrogram, that is, the QLV, normalized for the QRS duration, resulting in QLV over QRS duration, and to the electrode position.
QLV over QRS duration was 84% and variation between the four electrodes was 9%. The change in stroke work was 89% and varied by 39% between the electrodes. In univariate analysis, an anterolateral or lateral electrode position and a high QLV to QRS duration ratio had a significant association with a large change in stroke work, all P less than 0.01.
In a combined model, only QLV over QRS duration remained significantly associated with the change in stroke work, P less than 0.05. However, a direct relationship between QLV over QRS duration and stroke work was only seen in 24 patients, while the other 24 patients had an inverse relation.
The authors concluded that a large variation in acute hemodynamic response indicates that the choice of stimulated electrode on the quadripolar electrode is important. Although QLV to QRS duration ratio was associated with acute hemodynamic response at a group level, it cannot be used to select the optimal electrode in the individual patient.
In the next study, Antonio Pani and associates conducted a multi-centered prospective study evaluating the determinants of zero-fluoroscopy ablation of supraventricular arrhythmias. They studied 430 patients with an indication for EP study and/or ablation of SVT. A procedure was defined as zero-fluoroscopy when no fluoroscopy was used. The total fluoroscopy time was inversely related to the number of procedures previously performed by each operator since the study start. 289 procedures, or 67%, were zero-fluoro. Multivariable analyses identified as predictors of zero-fluoro the operator's accumulated experience, from the 30th procedure onward as compared to procedures up to the ninth, the type of arrhythmia, with AVNRT having the highest probability of zero-fluoro, the operator, and the patient's age. Among operators, achievement of zero-fluoro varied from 0% to 100%, with 8 operators, or 23%, achieving zero-fluoro in 75% of their procedures. The probability of zero-fluoro increased by 2.8% as the patient's age decreased by one year. Acute procedural success was obtained in all cases.
The authors concluded that the use of 3D mapping completely avoided the use of fluoroscopy in most cases, with very low fluoro time in the remaining, and high safety and effectiveness profiles.
In the next paper, Demosthenes Katritsis and associates examine the role of slow pathway ablation from the left septum as an alternative to right-sided ablation. Retrospectively, 1,342 patients undergoing right septal slow pathway ablation for AV nodal reentry were studied. Of these, 15 patients, 11 with typical and 4 with atypical AVNRT, had a left septal approach following unsuccessful right-sided ablation, that is, the right-and-left group. In addition, 11 patients were subjected prospectively to a left septal-only approach for slow pathway ablation, without previous right septal ablation, that is, the left group. Fluoroscopy times in the right-and-left group and the left group were 30.5 minutes and 20 minutes respectively, P equals 0.6. The radiofrequency current delivery times were comparable, 11.3 minutes and 10.0 minutes respectively.
There were no additional ablation lesions at other anatomical sites in either group, and no cases of AV block were encountered. The recurrence rate for arrhythmias was 6.7% in the right-and-left group and 0% in the left group in the three months following ablation.
The authors concluded that the left septal anatomical ablation of the left inferior nodal extension is an alternative to ablation of both typical and atypical AV nodal reentry when ablation at the right posterior septum is ineffective.
In our next study, Mark Belkin and associates examined prior reports of new-onset device-detected atrial tachyarrhythmias. Despite the clear association between atrial fibrillation and the risk of thromboembolism, the clinical significance of new-onset device-detected atrial tachyarrhythmias for thromboembolism remains disputed.
The authors aimed to determine the risk of thromboembolic events in these patients. They used the Ovid Medline, Cochrane, and SCOPUS databases to identify 4,893 reports of randomized controlled trials and prospective or retrospective studies of pacemaker and defibrillator patients reporting the incidence of device-detected atrial tachyarrhythmias.
The authors examined 28 studies, following a total of 24,984 patients, with an average age of 69.9 years and a mean study duration of 21.8 months. New-onset device-detected atrial tachyarrhythmias were observed in 23% of patients. Among nine studies, consisting of 8,181 patients, reporting thromboembolism, the absolute incidence was 2.1%. Thromboembolic events were significantly greater among patients with new-onset device-detected atrial tachyarrhythmias, with a relative risk of 2.88; among those who had less than one minute of tachyarrhythmias, the risk ratio was 1.77.
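For readers who want to see how a relative risk figure like the 2.88 above is derived from raw study counts, here is a minimal sketch; the event counts in the usage line are hypothetical, chosen only for illustration, since the summary reports the pooled estimates rather than the underlying counts.

```python
import math

def relative_risk(events_exposed, n_exposed, events_control, n_control):
    """Relative risk of an event in an exposed vs. a control group,
    with an approximate 95% confidence interval on the log scale."""
    rate_exposed = events_exposed / n_exposed
    rate_control = events_control / n_control
    rr = rate_exposed / rate_control
    # Katz log-scale standard error for the risk ratio
    se = math.sqrt(1 / events_exposed - 1 / n_exposed
                   + 1 / events_control - 1 / n_control)
    lower = math.exp(math.log(rr) - 1.96 * se)
    upper = math.exp(math.log(rr) + 1.96 * se)
    return rr, lower, upper

# Hypothetical counts that happen to give a risk ratio near 2.88;
# not the actual counts from the meta-analysis.
rr, lower, upper = relative_risk(58, 1000, 21, 1043)
```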
The authors concluded that new-onset device-detected atrial tachyarrhythmias is common, affecting close to one quarter of all patients with implanted pacemakers and defibrillators.
In our last paper, Sanghamitra Mohanty and associates performed a meta-analysis systematically evaluating the outcomes of pulmonary vein isolation with and without FIRM ablation in patients with atrial fibrillation. For pulmonary vein isolation alone, only randomized trials conducted in the last three years reporting single-procedure success rates, off antiarrhythmic drugs, at 12 months or greater follow-up were included. In the PVI plus FIRM group, all published studies reporting a single-procedure, off-antiarrhythmic-drug success rate with at least one year of follow-up were identified.
Meta-analytic estimates were derived, using the DerSimonian and Laird Random-effects Models, and pooled estimates of success rates. Statistical heterogeneity was assessed using the Cochran Q test and I-square. Study quality was assessed with the Newcastle-Ottawa Scale.
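As a brief aside for readers unfamiliar with the method, the DerSimonian and Laird random-effects pooling mentioned above can be sketched in a few lines. This is a generic illustration of the estimator applied to arbitrary per-study effect sizes, not the authors' actual analysis code.

```python
import math

def dersimonian_laird(effects, variances):
    """Pool per-study effect sizes (e.g. logit success rates) with the
    DerSimonian-Laird random-effects model. Returns the pooled effect,
    its standard error, Cochran's Q, and between-study variance tau^2."""
    w = [1.0 / v for v in variances]          # inverse-variance weights
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    # Cochran's Q statistic for heterogeneity
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    k = len(effects)
    # method-of-moments estimate of between-study variance
    tau2 = max(0.0, (q - (k - 1)) / (sw - sum(wi ** 2 for wi in w) / sw))
    w_star = [1.0 / (v + tau2) for v in variances]  # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se, q, tau2
```

When the studies agree exactly, Q falls below its degrees of freedom, tau squared is truncated to zero, and the estimate reduces to the ordinary fixed-effect inverse-variance average.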
15 trials were included: 10 of PVI plus FIRM, with 511 patients, all of non-randomized prospective design, and 5 pulmonary vein isolation-only trials, consisting of 295 patients, all randomized.
All patients in the pulmonary vein only trials had 100% non paroxysmal atrial fibrillation, except for one study, and no prior ablations. About 24% of the PVI plus FIRM patients had paroxysmal atrial fibrillation.
After 15.9 months of follow-up, the off antiarrhythmic drug pooled success was 50% with FIRM plus PVI, compared to 58% in the PVI alone. The difference in the effect size between the groups was not statistically significant. No significant heterogeneity was observed in this meta-analysis.
The authors concluded that the overall pooled estimate did not show any therapeutic benefit of PVI plus FIRM over PVI alone.
That's it for this month, but keep listening. Suraj Kapa will be surfing all journals for the latest topics of interest in our field. Remember to download the podcast On The Beat. Take it away, Suraj.
Suraj Kapa: Thank you, Paul, and welcome back to “On The Beat”. Again, my name is Suraj Kapa and I'm here to review with you articles across the cardiac electrophysiology literature that were particularly hard hitting in the month of February.
To start, we review the area of atrial fibrillation, focusing on anticoagulation. Reviewing an article published in this past month's issue of the Journal of the American Heart Association, by Steinberg et al., entitled Frequency and Outcomes of Reduced Dose Non-Vitamin K Antagonist Anticoagulants, results from ORBIT AF II. The ORBIT AF II registry, also called the Outcomes Registry for Better Informed Treatment of Atrial Fibrillation, is a prospective national observational registry of AF patients.
The authors sought to describe the frequency, appropriateness, and outcomes of patients prescribed reduced doses of NOACs in community practice. They reviewed the records of almost 8,000 patients receiving NOACs and noted that the vast majority, nearly 84%, received a standard dose of NOACs, consistent with the U.S. FDA labeling. While only 16% received a reduced dose, only 43% of these were consistent with labeling instructions. Those who received reduced dose NOACs inappropriately more often tended to be younger and have, interestingly, lower overall bleeding risk scores.
Furthermore, compared with those receiving appropriate dosing, patients receiving inappropriately reduced dose NOACs had higher unadjusted rates of thromboembolic events and death.
These data are important to understand in discussions with patients: inappropriate dose reduction of NOACs does not necessarily offer appropriate protection against the long-term risk of thromboembolic events. Thus, close attention must be paid to the use cases and instructions for use.
While the registry cannot get into the details of why the dose was reduced in the spectrum of patients, it does highlight the fact that this continues to be a problem in general practice.
Further data is needed to understand what leads to inappropriate dose reduction, which could include factors such as patient preference, or physician education.
Staying within the realm of anticoagulation and understanding individual needs, we next review an article published in this past month's issue of Circulation, by Nielsen et al., entitled Female Sex Is a Risk Modifier Rather Than a Risk Factor for Stroke in Atrial Fibrillation. Should we use a CHA2DS2-VA score rather than CHA2DS2-VASc? In this review, the authors sought to evaluate whether female sex is truly an overall risk factor, as opposed to a risk modifier.
Using three nationwide registries, they identified patients with nonvalvular atrial fibrillation between 1997 and 2015 and calculated two sets of scores. The first, which they termed the CHA2DS2-VA score, was calculated for men and women with one year of follow-up in the Danish National Patient Registry, with risk estimated using a pseudo-value method. They then evaluated female sex as a prognostic factor by including it as an interaction term with the CHA2DS2-VA score when calculating overall thromboembolic risk.
Amongst over 200,000 patients with atrial fibrillation, almost half of whom were women, they noted that the mean CHA2DS2-VA score, where sex is excluded, was slightly higher in women than men, namely 2.7 vs. 2.3. However, women had an overall higher one year thromboembolic rate of 7.3 vs. 5.7 per 100 person-years. Interestingly, with a CHA2DS2-VA score of zero, the absolute risk of thromboembolism was equal amongst men and women, around 0.5%. Once overall points increased above one, however, women exhibited a higher stroke risk. This interaction was statistically significant.
Thus, the authors indicated that female sex is a risk modifier for stroke in patients with atrial fibrillation, rather than a risk factor. The terminology is important to consider. Essentially, what they are noting is that at the lower risk levels, female sex, in and of itself, is not something that necessarily puts somebody in the higher risk cohorts. Instead, at higher risk levels, because of other factors, a woman may have a higher overall risk of stroke than a man. Thus, stroke risk is accentuated in women who would have been eligible for oral anticoagulant treatment anyway, on the basis of a CHA2DS2-VA score above one.
These data highlight the importance of thinking about the fact that at the lower risk score level, female sex alone might not be sufficient to say that a patient has reached the CHA2DS2-VASc score of one and above. But, really, you need an overall CHA2DS2-VA score, or a risk score, inclusive of at least two other risk factors to indicate that now, being a female is going to modify the risk and further accentuate it.
Now, one thing to note is that these data are very consistent with the guidelines. The European guidelines indicate that female sex alone, which in the CHA2DS2-VASc score would confer a risk score of one, should not, by itself, construe the need to put somebody on anticoagulation.
However, it's important to highlight that these data show that at a CHA2DS2-VASc score of one in females, they should really be construed as equivalent to a CHA2DS2-VASc score of zero in men.
Using the CHA2DS2-VA score, where sex is excluded, but considering that women overall have a higher incidence of stroke at any given CHA2DS2-VA level above one, will help better counsel women about the importance of being on anticoagulants.
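To make the two scores under discussion concrete, here is a minimal sketch of the standard CHA2DS2-VASc point assignments and the sex-excluded CHA2DS2-VA variant. The function name and signature are illustrative, not taken from the paper:

```python
def chads_scores(age, female, chf, hypertension, diabetes,
                 stroke_tia, vascular_disease):
    """Return (CHA2DS2-VASc, CHA2DS2-VA) for one patient.

    Standard points: CHF (1), Hypertension (1), Age >= 75 (2),
    Diabetes (1), prior Stroke/TIA (2), Vascular disease (1),
    Age 65-74 (1), Sex category female (1).
    The "VA" variant simply drops the sex-category point.
    """
    score = 0
    score += 1 if chf else 0
    score += 1 if hypertension else 0
    score += 2 if age >= 75 else (1 if 65 <= age <= 74 else 0)
    score += 1 if diabetes else 0
    score += 2 if stroke_tia else 0
    score += 1 if vascular_disease else 0
    va = score                          # CHA2DS2-VA: sex excluded
    vasc = score + (1 if female else 0) # CHA2DS2-VASc: +1 for female sex
    return vasc, va

# A 68-year-old woman with hypertension: CHA2DS2-VASc = 3, CHA2DS2-VA = 2
print(chads_scores(68, True, False, True, False, False, False))

# A 50-year-old woman with no other risk factors: VASc = 1, VA = 0,
# illustrating the point that female sex alone should not drive anticoagulation
print(chads_scores(50, True, False, False, False, False, False))
```

The second example captures the paper's central argument: a woman whose only "point" is her sex has a CHA2DS2-VA of zero, and her absolute risk matches that of a man with a score of zero.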
The next article we review relates to long-term risk related to atrial fibrillation, published in February's issue of Heart Rhythm, by Nishtala et al., entitled Atrial Fibrillation and Cognitive Decline in the Framingham Heart Study. While there's much out there about the potential long-term role of cognitive decline in atrial fibrillation patients, longitudinal research investigating the relationship is relatively sparse. Thus, the authors sought to investigate the association between atrial fibrillation and cognitive performance, cross-sectionally and longitudinally.
They chose patients within the Framingham study who were dementia- and stroke-free at the time of baseline neuropsychological assessment. They evaluated atrial fibrillation status as a two level variable, namely prevalent atrial fibrillation vs. no atrial fibrillation, in cross-sectional analyses. And they also separated patients into prevalent atrial fibrillation at baseline, interim development of atrial fibrillation, and those who didn't develop any atrial fibrillation, in longitudinal analyses.
They studied 2,682 participants in the Framingham Heart study, including original and offspring cohorts. They noted that at baseline about 4% had diagnosed atrial fibrillation. Prevalent AF was noted to be significantly associated with poorer attention. Interestingly, sex differences were noted, with men performing worse on tests of abstract reasoning and executive function than women.
They noted that prevalent atrial fibrillation was significantly associated with longitudinal decline in executive function in the original cohort, and that interim atrial fibrillation was significantly associated with longitudinal decline in executive function in the offspring cohort. Thus, they noted that atrial fibrillation is associated with a profile of long-term change in cognitive function.
The importance of these data is to further highlight the potential contribution of atrial fibrillation to cognitive decline. While the exact mechanisms remain to be fully elucidated, the question of how to get ahead of the cognitive decline associated with atrial fibrillation is further put forward by these data.
Whether the relationship between atrial fibrillation and cognitive decline is due to recurrent thromboembolic events vs. the therapies used vs. other factors, such as hemodynamic factors resulting in poor brain perfusion, remains relatively unclear.
Certainly it is also possible that atrial fibrillation simply reflects a process associated with other factors that might lead to cognitive decline. However, again, further mechanistic studies and potential treatment interventions to mitigate the risk of cognitive decline are still needed.
Speaking of this, we next review a paper published in the European Heart Journal this past month, by Friberg and Rosenqvist, entitled Less Dementia with Oral Anticoagulation in Atrial Fibrillation.
Speaking of treatments to avoid long-term cognitive decline, the authors sought to evaluate if oral anticoagulant treatment might offer protection against long-term dementia risk in atrial fibrillation.
This retrospective registry study included patients in Sweden with hospital diagnoses of atrial fibrillation and no prior diagnosis of dementia, between 2006 and 2014. The study included a total of 444,106 patients over 1.5 million person-years. They noted that anticoagulant treatment at baseline was associated with a 29% lower risk of dementia compared with no anticoagulant treatment, and in on-treatment analysis there was an overall 48% lower risk with appropriate anticoagulation. There was no difference based on whether warfarin or the newer oral anticoagulants were used.
Thus, the authors concluded that the risk of dementia is higher without oral anticoagulant treatment in patients with atrial fibrillation, suggesting that early initiation of anticoagulant treatment in patients with atrial fibrillation could be of value to preserve long-term cognitive function.
This relates directly back to the previous paper, which focused more on the epidemiologic risk, while this paper focuses on elements that might construe mechanism or treatment options.
The authors underscore the incredible importance of early recognition of the need for anticoagulant initiation in patients with atrial fibrillation. While the exact mechanism of cognitive decline and dementia in atrial fibrillation remains to be completely elucidated, recurrent thromboembolic events, which may be relatively silent as they occur but carry a long-term cumulative risk, might be mitigated by placing patients on anticoagulants.
This becomes another reason to counsel patients on the importance of long-term anticoagulant therapy. Certainly, the limitations of these studies, however, are the retrospective nature and the fact that there might be some subtle differences that may not be otherwise able to be construed from retrospective registry data regarding the relative role of anticoagulants in truly protecting against long-term cognitive decline. However, the data are certainly provocative.
Continuing within this realm and discussing outcomes associated with atrial fibrillation, we next review an article by Leung et al., entitled The Impact of Atrial Fibrillation Clinical Subtype on Mortality, published in JACC: Clinical Electrophysiology this past month.
The authors sought to investigate the prognostic implications of the subtype of atrial fibrillation, paroxysmal vs. persistent, on long-term prognosis. They sought to evaluate differences in mortality between paroxysmal and persistent atrial fibrillation amongst 1,773 patients. They adjusted for comorbid diseases associated with atrial fibrillation, as well as CHA2DS2-VASc score. In the study, a total of 1,005 patients, or about 57%, had persistent atrial fibrillation. Over the follow-up period, about 10% of those with paroxysmal atrial fibrillation and 17% of those with persistent atrial fibrillation died.
They noted that persistent atrial fibrillation, after correcting for other comorbidities, was independently associated with worse survival. Thus, they concluded that persistent atrial fibrillation is independently associated with increased mortality in the long term.
These data are relevant in that they highlight that persistent atrial fibrillation in its nature might construe an overall higher risk cohort. It remains to be fully understood what are the true mechanistic differences between persistent and paroxysmal atrial fibrillation. Overall, however, the community grossly agrees that persistent atrial fibrillation likely suggests a higher degree of atrial myopathy. If we believe this, then it is reasonable to believe that the risk associated with this specific form of atrial fibrillation might result in higher long-term harm.
Of course, these data are subject to the same limitations of all retrospective data. Namely, these persistent atrial fibrillation patients might have received different therapies, or been sicker to start with in ways that cannot be captured by comorbidities alone.
Furthermore, these data do not necessarily get to the point of whether treating atrial fibrillation more aggressively in the persistent patient reduces the risk to a level equivalent to that of paroxysmal patients. Thus, further understanding is needed of how to use these data to reduce this mortality difference.
Continuing within the realm of epidemiology of atrial fibrillation, we next review an article published in this past month's issue of Circulation, by Mandalenakis et al., entitled Atrial Fibrillation Burden in Young Patients with Congenital Heart Disease. It is assumed that patients with congenital heart disease are vulnerable to atrial fibrillation because of multiple factors. These include residual shunts, hemodynamic issues, atrial scars from previous heart surgery, valvulopathy and other factors.
However, there are limited data on the overall risk of developing atrial fibrillation and its associated complications, especially in children and young adults with congenital heart disease. Furthermore, the risk in these children and young adults with congenital heart disease had not previously been compared against matched control subjects.
The authors used the Swedish Patient and Cause of Death Registries to identify all patients with diagnoses of congenital heart disease born from 1970 to 1993. They then matched these patients with control subjects from the Total Population Register in Sweden. They noted, amongst almost 22,000 patients with congenital heart disease and almost 220,000 matched control subjects, that 654 patients in the congenital heart disease cohort developed atrial fibrillation, while only 328 in the larger control group developed atrial fibrillation. The mean follow-up overall was 27 years.
They noted the risk of developing atrial fibrillation was almost 22 times higher amongst patients with congenital heart disease than control subjects. They noted the highest risk with a hazard ratio of over 84 was noted in patients with conotruncal defects. Furthermore, at the age of 42 years, over 8% of patients with congenital heart disease had a recorded diagnosis of atrial fibrillation.
Interestingly, heart failure was a particularly important complication: over 10% of patients with both atrial fibrillation and congenital heart disease also developed a diagnosis of heart failure.
These data are important in that they help in counseling on the importance of close follow-up of patients with congenital heart disease, given their long-term risk of other complications. Even if patients might be perceivably well managed, incident atrial fibrillation might increase the risk of stroke in these patients. It is further important to note that many of these patients cannot be evaluated according to traditional risk scores. Thus, it is important to consider whether or not a patient should be treated with anticoagulation once they develop atrial fibrillation.
The high risk of overall atrial fibrillation incidents, particularly in patients with more complex congenital defects, needs to be taken into consideration when advising on the frequency of follow-up.
It is important to further note that this overall risk should be thought of as the minimum possible risk: counseling a congenital heart disease patient that up to one in ten of them may develop atrial fibrillation by the age of 42 years likely understates the true rate. The reason is that many patients, due to lack of follow-up, insufficient monitoring, or the asymptomatic nature of atrial fibrillation, might never have been diagnosed.
Implications or treatments remain to be seen, and whether or not there are methods to reduce the overall risk of atrial fibrillation is unclear. However, engaging congenital heart disease experts and advising patients, especially at younger ages, on the importance of close electrocardiographic monitoring for a potential atrial fibrillation risk is critical.
Next within the realm of atrial fibrillation, we switch to the topic of ablation. And review an article by Pallisgaard et al., published in this last month's issue of European Heart Journal, entitled Temporal Trends in Atrial Fibrillation Recurrence Rates After Ablation, between 2005 and 2014: a nationwide Danish cohort study.
Ablation has been increasingly used as a rhythm control strategy for patients with atrial fibrillation. Over this time, we have all noted evolution in both the experience and the techniques used. Thus, the authors sought to evaluate whether recurrence rate of atrial fibrillation has changed over the last decade. They included all patients with first-time AF ablation done between 2005 and 2014 in Denmark. They then evaluated recurrent atrial fibrillation based on a one year follow-up. They included a total of 5,425 patients undergoing first-time ablation.
They noted, interestingly, that the patient median age increased over time, and the median AF duration prior to ablation decreased over time. However, the rate of recurrent atrial fibrillation decreased from 45% in 2005 to 31% in the more recent years of 2013 and 2014, with the relative risk of recurrent atrial fibrillation almost being cut in half.
They noted that female gender, hypertension, atrial fibrillation duration of more than two years, and cardioversion within one year prior to ablation were all associated with an increased risk of recurrent atrial fibrillation, regardless of year.
These data, again, are retrospective and thus must be taken in the context of that consideration. However, they highlight that it is possible either our selection of appropriate patients for atrial fibrillation ablation or our techniques have improved overall success.
The fact that atrial fibrillation ablation is still a relatively young field, with evolving approaches and evolving techniques, needs to be taken into consideration when advising patients on success rates. Using data from many years prior to informed discussion today is fraught with potential error, especially as our catheter design and mapping system use and understanding of appropriate lesion set changes.
Of course, some criticism is required as well. While the patients included were relatively older in more recent years, the total AF duration prior to ablation decreased over the years. This suggests that patients are being ablated earlier than they were in the early days of atrial fibrillation ablation.
There are some data out there to suggest that earlier ablation for atrial fibrillation might result in a lower long-term recurrence rate. Thus, this might account for some of the difference. However, it is unlikely that it accounts for all of it, given the degree of reduction in the overall risk of recurrence.
Staying within the trend of talking about changes in techniques for atrial fibrillation ablation, we next review an article published in this past month's issue of Heart Rhythm, by Conti et al., entitled Contact Force Sensing for Ablation of Persistent Atrial Fibrillation: A Randomized, Multicenter Trial. Contact force sensing is one of the newer techniques being used to optimize the success rates for atrial fibrillation ablation. It is generally felt that understanding when one is in contact will optimize atrial fibrillation ablation outcomes by ensuring the physician knows each time they are in contact, and also potentially reducing complications by avoiding excessive contact.
Thus, the authors designed the TOUCH AF trial to compare contact force sensing-guided ablation vs. contact force sensing-blinded ablation. They included a total of 128 patients undergoing first-time ablation for persistent atrial fibrillation, and randomized them such that the operator was either aware of the contact force or blinded to the contact force. While the force data were hidden in the blinded cohort, they were still recorded on the back end.
In all patients, wide antral pulmonary vein isolation plus a roof line was performed, and patients were followed at 3, 6, 9, and 12 months, with clinical visits, ECGs, and 48-hour Holter monitoring.
The primary endpoint was cumulative radiofrequency time per procedure, and an atrial arrhythmia greater than 30 seconds after three months was considered a recurrence.
They noted that average force was higher in the contact force-guided arm than in the contact force-blinded arm, though not statistically significantly so, at 14 grams vs. 12 grams.
Interestingly, the total time of ablation did not differ between the two groups. Furthermore, there was no difference in single procedure freedom from atrial arrhythmia, amounting to about 60% in the contact force-guided arm vs. 63% in the contact force-blinded arm. They did note, however, that lesions with associated gaps were associated with significantly less force and a lower force-time integral.
The authors concluded from this that contact force-guided ablation did not result in a significant decrease in total radiofrequency time or in 12-month freedom from atrial arrhythmias.
These data are important to help guide us in thinking about how the tools we use, as they change, actually alter outcomes. Sometimes we may perceive benefits based on the logical assumption that knowing more about what is happening when we perform a procedure should optimize that procedure. However, this is not necessarily always the case, and thus highlights the importance of randomized trials to directly compare different situations, such as awareness of contact force vs. lack of awareness of contact force.
The relevance of this particular article is that comparing catheters with different designs does not necessarily isolate the importance of the force number itself. Namely, comparing a contact force catheter vs. a non-contact force catheter means comparing two essentially different catheters. To understand the incremental utility of force in making decisions, it is important to consider the same catheter, but simply with awareness or lack of awareness of the actual force number.
One of the limitations, however, is that individuals who might have been trained on using the same force sensing catheter might have some degree of tactile feedback and understanding of the amount of force being applied to the tip of the catheter, based on having been repeatedly exposed to contact force numbers during use of said catheter. Thus, there might be a difference in being blinded to contact force in early stage operators than in later stage operators who might have been trained based on repeated feedback.
Thus, it's difficult to conclude, necessarily, that contact force is not offering incremental benefit. In fact, there's a fair chance that it does. However, offering a skeptical viewpoint to help gauge whether continually evolving technology actually improves outcomes is important.
Finally, within the realm of atrial fibrillation, we review an article published by Pathik et al., in this past month's issue of Heart Rhythm, entitled Absence of Rotational Activity Detected Using 2-Dimensional Phase Mapping and the Corresponding 3-Dimensional Phase Maps in Human Persistent Atrial Fibrillation.
Current clinically used phase mapping systems involve 2-dimensional maps. However, this process may affect accurate detection of rotors. The authors sought to develop a 3-dimensional phase mapping technique that uses the 3D locations of the same basket electrodes that are used to create the currently available 2-dimensional maps. Specifically, they wanted to determine whether the rotors detected in 2D phase maps were present in the corresponding time segments and anatomical locations in 3D phase maps.
They used one-minute left atrial recordings of atrial fibrillation obtained in 14 patients using the basket catheter, and analyzed them offline, using the same phase values, in 2-dimensional vs. 3-dimensional representations.
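For context on what "phase values" means here: phase mapping conventionally derives an instantaneous phase for each electrogram from its analytic (Hilbert-transformed) signal, and a rotor appears as a site around which phase progresses through a full cycle. A minimal sketch using a synthetic sinusoidal signal (the study's actual processing pipeline is more involved, and the 6 Hz surrogate signal is an assumption for illustration):

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                           # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)         # one second of data
egm = np.sin(2 * np.pi * 6 * t)       # 6 Hz surrogate "electrogram"

analytic = hilbert(egm)               # analytic signal via Hilbert transform
phase = np.angle(analytic)            # instantaneous phase in [-pi, pi]

# The phase should advance about 6 full cycles over the one-second window
unwrapped = np.unwrap(phase)
cycles = (unwrapped[-1] - unwrapped[0]) / (2 * np.pi)
print(f"phase cycles in 1 s: {cycles:.1f}")
```

In a mapping system, this per-electrode phase is then interpolated across the chamber surface; the paper's question is whether doing that interpolation on a flattened 2D grid vs. the true 3D electrode positions changes which sites look like rotors.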
They noted rotors in 3.3% of the analyzed segments using 2D phase mapping: 9 of the 14 patients demonstrated a total of about 10 transient rotors, with a mean rotor duration of about 1.1 seconds. None of the 10 rotors, however, were seen at the corresponding time segments and anatomical locations in the 3D phase maps. In the 3D phase maps, 4 of the 10 corresponded with single wavefronts, 2 of 10 with simultaneous wavefronts, 1 of 10 with disorganized activity, and 3 of 10 had no basket catheter coverage at the corresponding 3D anatomical locations.
These data are important in that they highlight the problem of representing the 3-dimensional world of atrial fibrillation with 2-dimensional systems. The role of ablating rotors is still in question. However, it is an important question, and it requires continued study. The best ways of identifying a rotor, confirming that a rotor is truly a rotor, and understanding where the rotor is, are going to be critical to evaluating whether ablation of these rotors has any relevance to long-term atrial fibrillation ablation outcomes.
The truth is that we need to be sure we are properly identifying all the rotors in order to judge whether we are actually being successful in ablating atrial fibrillation. The importance of the study lies in asking whether 2-dimensional representations of the 3-dimensional geometry are sufficient to reflect what is actually happening in that geometry. These authors suggest that they are not.
One of the limitations, however, might be that when we wrap a 2-dimensional framework into 3 dimensions and perform additional post-processing, this might result in some degree of attenuation of the data. However, it does highlight the importance for continued rigorous evaluation of current approaches to phase mapping.
Several articles have been published in recent months as well, about different signal processing techniques to evaluate whether or not a rotor is, in fact, a rotor and to help optimize identification of them.
The jury is still out on whether or not targeted ablation of rotors will, in fact, improve overall long-term atrial fibrillation ablation outcomes. The limitation might not necessarily be that rotors are an inappropriate target, but that we just don't entirely understand where rotors are, based on limited signal processing options, or based on limitations of anatomical localization.
Next, delving into the realm of ablation at large, we review an article by Iwasawa et al., published in this past month's issue of Europace, entitled Transcranial Measurement of Cerebral Microembolic Signals During Left-Sided Catheter Ablation with the Use of Different Approaches - the Potential Microembolic Risk of a Transseptal Approach.
The authors note the importance of considering microembolization and subclinical brain damage during catheter ablation procedures. They evaluated microembolic signals detected by transcranial Doppler during ablation of supraventricular or ventricular arrhythmias with the use of either a transseptal or a retrograde approach.
The study set was small, including only 36 patients who underwent catheter ablation. In 11 patients, left-sided ablation was done with a transaortic approach, and in 9 patients a transseptal approach was used. The other 16 patients were not included in the comparison, as they only had right-sided ablation.
The total number of microembolic signals detected by transcranial Doppler was counted throughout the procedure and then analyzed offline. There was no significant difference in the number of radiofrequency applications, total energy delivery time, total application of energy, or total procedure time between the groups. However, they did note that the mean total number of microembolic signals was highest in those undergoing the transseptal approach to left-sided ablation. It was significantly lower in those having a retrograde aortic approach, and lowest in those having right-sided only ablation.
Interestingly, many of the microembolic signals were detected during the transseptal puncture period, with a relatively even distribution of emboli formation during the remainder of the procedure. A frequency analysis suggested that the vast majority of microembolic signals were gaseous, particularly in Groups 1 and 3, though only 91% in Group 2. No neurological impairment was observed in any of the patients after the procedure.
Recently, there's been a lot of focus on the potential long-term risk of cognitive impairment due to microembolic events in the setting of ablation. At least one recent paper in ventricular arrhythmias and several recent papers in atrial fibrillation ablation have suggested a fairly high risk of incident cerebral emboli noted on MRI post ablation. While these results do not necessarily speak to MRI lesions, they do suggest microembolic events. And what is most interesting, they look at microembolic events that occur throughout the entire ablation period with different approaches.
Interestingly, there was a massive spike in overall microembolic signals during the transseptal puncture period, and a relatively even distribution throughout ablation, irrespective of application of radiofrequency. Furthermore, while nearly all microembolic signals were gaseous based on frequency analysis with the retroaortic approach or with right-sided only ablation, significantly fewer appeared to be gaseous in those having a transseptal approach.
It is known that there is possible damage to the internal dilator system when exposing it to transseptal needles or wires. Thus, one has to wonder whether some of the embolization could be from material associated with the actual transseptal puncture, either from portions of the punctured septum itself, or perhaps from the plastic material being pushed transseptally.
These data still need to be considered, and we have yet to see the long-term implications of these kinds of findings. It may be that while the transseptal approach seems to produce more microembolic signals, if the long-term risk is no different, does it really matter?
However, these findings are provocative in the sense that they highlight potential significant differences and the risk of silent cerebral damage, based on the approach we use to ablation.
Changing gears, we next focus on the role of devices. The first paper we review is in last month's issue of JACC: Heart Failure, by Gierula et al., entitled Rate Response Programming Tailored to the Force Frequency Relationship Improves Exercise Tolerance in Chronic Heart Failure.
The authors sought to examine whether the heart rate at which the force frequency relationship slope peaks can be used to tailor heart rate response in chronic heart failure patients with cardiac pacemakers, and to see whether this favorably influences exercise capacity.
They performed an observational study in both congestive heart failure patients and healthy subjects with pacemaker devices. They then evaluated, in a double-blind, randomized, controlled crossover study, the effects of tailored pacemaker rate response programming based on the force frequency relationship, characterized by the critical heart rate, peak contractility, and the FFR slope.
They enrolled a total of 90 patients with congestive heart failure into the observational study cohort, and 15 control subjects with normal LV function. A total of 52 patients took part in the crossover study. They noted that rate response settings limiting heart rate rise to below the critical heart rate were associated with greater exercise time and higher peak oxygen consumption, suggesting that tailored rate response programming can offer significant benefit, particularly in congestive heart failure patients.
The importance of this trial is that it highlights the value of thoughtful decision-making in programming devices, and that group decision-making involving exercise physiologists, alongside those doing the pacemaker programming, and our congestive heart failure specialists might be the most critical in optimizing the approach to programming.
It might be that more aggressive measures are needed to decide on optimal programming in congestive heart failure patients than in otherwise normal patients.
Staying within the realm of devices, we next focus on a publication by Sanders et al., published in this past month's issue of JACC: Clinical Electrophysiology, entitled Increased Hospitalizations and Overall Healthcare Utilization in Patients Receiving Implantable Cardioverter-Defibrillator Shocks Compared With Antitachycardia Pacing.
The authors sought to evaluate the effect of different therapies on healthcare utilization in a large patient cohort, specifically comparing antitachycardia pacing with high-voltage shocks. They used the PROVIDE registry, which is a prospective study of patients receiving ICDs for primary prevention in 97 U.S. centers. They categorized these patients by type of therapy delivered, namely no therapy, ATP only, or at least one shock, and then adjudicated all ICD therapies, hospitalizations, and deaths.
Of the 1,670 patients included, total follow-up was over 18 months. The vast majority, 1,316, received no therapy; 152 had ATP only, and 202 received at least one shock.
They noted that patients receiving no therapy and those receiving only ATP had a lower cumulative hospitalization rate and a lower risk of death or hospitalization. The cost of hospitalization was noted to be significantly higher for those receiving at least one shock than for those receiving only ATP therapy.
They noted no difference in outcomes or cost between patients receiving only ATP and those without therapy. Thus, the authors concluded that those receiving no therapy or those receiving only ATP therapy had similar outcomes, and had significantly reduced hospitalizations, mortality, and costs compared to those who received at least one high voltage shock.
The relevant finding from this study is similar to prior studies suggesting that any shock over follow-up is associated with a potential increase in long-term mortality. The difficulty in assessing this, however, is that those who have VT that can be appropriately terminated by ATP might be at somewhat lower risk than those who need to be shocked out of their VT. Thus, the need for a shock to restore normal rhythm might itself mark a higher-risk cohort, one that cannot be gleaned from traditional evaluation of morbid risk factors.
This is why it is important to consider how devices are programmed, and whether a patient who has received shocks can be reprogrammed to offer ATP-only therapy to terminate those same VTs. How best to tailor this therapy, however, remains to be determined, though more and more clinical trials are emerging to guide optimal population-wide programming for devices.
Staying with the realm of devices, we next review an article by Koyak et al., in this past month's issue of Europace, entitled Cardiac Resynchronization Therapy in Adults with Congenital Heart Disease.
Heart failure is one of the leading causes of morbidity and mortality amongst patients with congenital heart disease. But there's limited experience in the role of cardiac resynchronization therapy amongst these patients. Thus, the authors sought to evaluate the efficacy of CRT in adults with congenital heart disease.
They performed a retrospective study of a limited number of 48 adults with congenital heart disease who received CRT, across four tertiary referral centers. They defined responders as those who showed improvement in NYHA functional class or in systemic ventricular ejection fraction. The median age at CRT implant was 47 years, with 77% being male. A variety of syndromes were included.
They noted that the majority of patients, nearly 77%, responded to CRT, either by definition of improvement of NYHA functional class, or systemic ventricular function, with a total of 11 non-responders.
They noted that CRT was accomplished with a success rate comparable to that in patients with acquired heart disease. However, the anatomy was much more complex, and the technical challenges in achieving successful implantation were greater.
The authors concluded that further studies are needed to establish the appropriate guidelines for patient selection amongst these patients. Certainly, one can state that given this is a retrospective study, patient selection might have been based on knowledge of a reasonable coronary sinus [inaudible 01:00:39] anatomy.
However, it does highlight the importance of consideration of CRT in these patients equivalent to what we would consider in those with other acquired heart disease, such as myocardial infarction, or primary prevention-type myopathic diseases.
I fully agree with the authors that we need better understanding of which specific congenital heart disease patients resynchronization is optimal for, however.
Finally, within the realm of devices, we review an article published in this past month's issue in JACC: Clinical Electrophysiology, by Khurshid et al., entitled Reversal of Pacing-Induced Cardiomyopathy Following Cardiac Resynchronization Therapy.
The authors sought to determine the extent, time course, and predictors of improvement following CRT upgrade among pacing-induced cardiomyopathy patients. They retrospectively studied over 1,200 consecutive patients undergoing CRT procedures between 2003 and 2016. They specifically looked at those who underwent CRT upgrade from a dual-chamber or single-chamber ventricular pacemaker due to pacemaker-induced cardiomyopathy.
They defined pacemaker-induced cardiomyopathy as a decrease of more than 10% in left ventricular ejection fraction, resulting in EF less than 50% among patients receiving more than 20% RV pacing, without an alternative cause of cardiomyopathy.
Severe pacing-induced cardiomyopathy was defined as a pre-upgrade LVEF of less than or equal to 35%. They noted a total of 69 pacing-induced cardiomyopathy patients amongst the larger cohort. After CRT upgrade, ejection fraction improved from 29% to over 45% over a median seven months of follow-up. 54 patients had severe pacing-induced cardiomyopathy, again defined as a pre-upgrade LVEF of less than or equal to 35%. The vast majority of these, 72%, improved to an ejection fraction above 35% over a median of seven months.
Most of the improvement occurred within the first three months. Although in some, improvement continued over the remainder of the first year. A narrower native QRS was associated with a greater LVEF improvement after CRT upgrade.
The authors concluded that CRT is highly efficacious in reversing pacing-induced cardiomyopathy, with 72% of those with severe pacing-induced cardiomyopathy achieving LV ejection fractions greater than 35%. Thus, the authors suggest that these data support initially upgrading to a CRT pacemaker, rather than a CRT defibrillator, and holding off on consideration of upgrading to a CRT defibrillator for at least one year, on the basis that some patients show continued improvement in ejection fraction over the remainder of the first year, even after the first three months.
This article is provocative for several reasons. First off, the question is, what should a patient with an ejection fraction less than 35% and a new diagnosis of presumed pacing-induced cardiomyopathy be upgraded to? Should it be a CRT pacemaker or a CRT defibrillator? Some people would argue a CRT defibrillator should be implanted to avoid multiple procedures over time. However, defibrillators are larger devices with more complex lead systems and a potential risk of inappropriate shocks. So there is risk of harm as well.
Furthermore, there is the question of the timing at which one should consider further upgrade to a defibrillator. Should it be due to lack of recovery over three months of follow-up, or longer? This is an area of active debate, as long-term follow-up studies of patients with primary prevention devices have suggested that as many as a third of patients might see improvement in ejection fraction to where they no longer meet primary prevention defibrillator indications.
Thus, these findings are provocative, though based on a relatively small number of patients: amongst patients with presumed pacing-induced cardiomyopathy, the primary intervention should be upgrade to a CRT pacemaker, and the decision on further need for implantation of a defibrillator should really be deferred for at least one year. This might inform future prospective studies to evaluate the same.
Changing gears yet again, we will next focus within the realm of ventricular arrhythmias. The first article we will review was published in the last month's issue of Heart Rhythm, by Hyman et al., entitled Class IC Antiarrhythmic Drugs for Suspected Premature Ventricular Contraction-Induced Cardiomyopathy.
The authors sought to evaluate the potential utility of Class IC antiarrhythmic drugs to suppress PVCs amongst patients who have [inaudible 01:04:57] presumed PVC-induced left ventricular dysfunction. It is widely recognized that IC drugs are associated with increased mortality in patients with PVCs and left ventricular dysfunction after myocardial infarction. However, their risk in those who have what appears to be a reversible cardiomyopathy due to PVCs is not established.
The authors reviewed a small number of patients, namely 20, who had PVC-induced cardiomyopathy and were treated with Class IC drugs. These patients had an average of 1.3 plus or minus 0.2 previous unsuccessful ablations. A total of six had an ICD or wearable defibrillator.
They noted a mean reduction in PVC burden from 36% to 10% with use of a Class IC agent, with an associated increase in mean left ventricular ejection fraction from 37% to 49%. Among seven patients within the cohort who also had myocardial delayed enhancement on cardiac MRI, they noted similar improvement in ejection fraction with reduction of PVC burden. There were no sustained ventricular arrhythmias or sudden deaths noted over about 3.8 treatment-years of follow-up.
Thus, the authors concluded that amongst patients with presumed PVC-induced cardiomyopathy, Class IC drugs can effectively suppress PVCs and lead to LVEF recovery, even in a small subset of patients who have MRI evidence of structural disease, namely myocardial delayed enhancements.
Of course, this patient cohort is small, and thus might not have been large enough to see the relatively rare outcomes associated with these Class IC drugs. However, it is important to consider what prior data show us when deciding whether a drug can be used in different cohorts. When we presume patients as having structural heart disease and consider whether a Class IC drug is appropriate, not all structural heart diseases are necessarily created equal. The original data suggesting that patients post-MI have a higher risk of events with a Class IC agent should not necessarily inform the decision to use Class IC agents in patients who have different causes of their cardiomyopathy.
Larger clinical trials are, however, needed to evaluate whether or not the simple presence of a low EF or myocardial enhancement should obviate the use of Class IC agents.
These data are provocative, however, in that they suggest the potential utility of flecainide or propafenone in patients with specific PVC-induced cardiomyopathies that might not have been amenable to ablation.
Staying within the realm of ventricular arrhythmias, we focus on an article by Greet et al., published in this past month's issue of JACC: Clinical Electrophysiology, entitled Incidence, Predictors and Significance of Ventricular Arrhythmias in Patients with Continuous-Flow Left Ventricular Assist Devices: a 15-year Institutional Experience.
The authors sought to evaluate the incidence, predictors, and associated mortality of pre-implantation, early and late ventricular arrhythmias associated with implantation of continuous-flow left ventricular assist devices. Unfortunately, there's limited data currently on the prognostic impact of ventricular arrhythmias in contemporary LVADs.
Thus, the authors performed retrospective review to identify all patients with LVAD and evaluate ventricular arrhythmias associated risk.
A total of 517 patients were included in the analysis. They noted that early ventricular arrhythmias after LVAD implant were associated with a significant reduction in survival, with a hazard ratio of around 1.8, when compared with patients with either late ventricular arrhythmias or no ventricular arrhythmias.
Pre-implantation variables that predicted early ventricular arrhythmias included prior cardiac surgery and the presence of pre-implantation ventricular tachycardia storm. They noted, however, that the incidence of early ventricular arrhythmias was highest in the early period of implantation, namely as high as 47% in the period from 2000 to 2007, but decreasing to less than 22% in more recent years, between 2008 and 2015, thus suggesting a temporal trend of decreased ventricular arrhythmia incidence post LVAD implantation.
The difficulty of retrospective data sets is to understand the "why" of what we are seeing. It is likely that those with early ventricular arrhythmias after LVAD might represent a sicker cohort to start with. Thus, whether or not suppressing these ventricular arrhythmias early after LVAD implantation will alter outcomes is unclear.
However, these data suggest that further study into whether more aggressive interventions to prevent early ventricular arrhythmias are needed should be considered. The fact that there is really no difference in outcomes amongst those with either no ventricular arrhythmias or late ventricular arrhythmias, however, is interesting. It suggests that a patient with new-onset ventricular tachycardia late after LVAD implantation may not necessarily need as aggressive intervention, unless they are experiencing associated symptoms.
However, it also notes that those with late ventricular arrhythmias might not have associated worse long-term outcomes, when considering what to do with device therapy or other interventions.
These data might inform power analyses for prospective clinical trials on optimal approaches to suppressing the incidence of early ventricular arrhythmias after LVAD implantation.
Next, focusing with the realm of genetic arrhythmias, we review an article by Kapplinger et al., published in Circulation: Genomic and Precision Medicine this past month, entitled Yield of the RYR2 Genetic Test in Suspected Catecholaminergic Polymorphic Ventricular Tachycardia and Implications for Test Interpretation.
Pathogenic ryanodine receptor variants account for almost 60% of clinically definite cases of CPVT. However, there is also a significant rate of rare benign variants in the general population, which makes test interpretation difficult. Thus, the authors sought to examine the results of genetic tests among patients referred for commercial genetic testing, examining factors that might impact variant interpretability. They performed frequency and location comparisons of [inaudible 01:10:43] ryanodine receptor variants identified amongst 1,355 patients of varying clinical certainty of having CPVT, and over 60,000 controls.
They noted that a total of 18% of patients referred for commercial testing hosted rare ryanodine receptor variants. There was a significantly higher potential genetic false discovery rate among referrals that hosted rare variants.
They noted that current expert recommendations have resulted in increased use of ryanodine receptor genetic testing in patients with questionable clinical phenotypes. This is the largest to-date catecholaminergic polymorphic ventricular tachycardia patient vs. control comparison.
They noted that, overall, the background rates of rare benign variants is around 3.2%. They also noted that in silico tools largely failed to show evidence toward enhancement of variant interpretation amongst patients.
The importance of these data lies in the fact that it highlights that in patients with questionable clinical phenotypes, use of ryanodine receptor genetic testing does not necessarily elucidate the mechanism or the presence of the disease, especially given the relatively high background rates of rare variants and the potential high rate of identifying rare variants amongst patients with presumed disease.
The yield of genetic testing amongst clinically definite cases was reported to be 59%, whereas only 18% of all referred patients hosted rare variants, which is significantly less.
This might imply that many patients do not actually have the disease, but it also results in identification of high frequency of rare variants, which are hard to interpret because of the 3% background rate of rare benign variants.
These issues highlight the importance of not simply relying on the results of the genetic test to say to a patient whether or not they might have the disease, or in terms of how best to treat them. This highlights the importance of having genetic centers to which patients can be referred, in order to discuss the results of their genetic tests, and further clarify what the likelihood of them actually having the disease to inform further therapy, in particular IC implantation should be.
Staying within the realm of genetic channelopathies, we next review an article by Huang et al., published in Science Advances this past month, entitled Mechanisms of KCNQ1 Channel Dysfunction in Long QT Syndrome Involving Voltage Sensor Domain Mutations.
It is well recognized that mutations that induce loss of function or dysfunction of the human KCNQ1 channel are responsible for susceptibility to life-threatening heart rhythm disorders in congenital long QT syndrome.
While hundreds of mutations have been identified, the molecular mechanisms responsible for impaired function are not as well understood. Thus, the authors sought to investigate the impact of different variants with mutations located in the voltage sensor domain, to understand exactly what is leading to the arrhythmogenic potential.
Using experimentation combined with channel functional data, they sought to classify each mutation into six mechanistic categories. They demonstrated high heterogeneity in the mechanisms resulting in channel dysfunction or loss of function amongst mutations in the voltage sensor domain.
More than half were associated with destabilization of the structure of the voltage sensor domain, generally accompanied by mistrafficking and degradation by the proteasome. These observations reveal a critical role for the helix portion as a central scaffold that helps organize and stabilize the KCNQ1 voltage sensor domain, a role likely to be of similar importance in these domains in many other ion channels.
The importance of this work lies in better understanding the functional significance of variants in specific regions of the gene. Speaking to the previous discussion by Kapplinger et al., the simple presence of a mutation alone might not be sufficient to imply the likelihood of disease causation. Prior work by several authors has suggested understanding location in the gene might be as important, if not more important, than the simple presence of the mutation.
However, better understanding the gene and the reasons by which specific areas of the gene might be associated with the higher likelihood of the pathogenicity might further help clarify that a specific rare variant is of importance or of no importance, based on where it is located.
For these reasons, further study into where in a gene the mutation is located and likelihood of disease causation via these kinds of mechanistic assays will continue to be of importance to help clarify genetic testing, especially as it becomes more and more available.
Finally, we review two articles, both from the realm of basic electrophysiology. The first article we review is by Chu et al., published in this past month's issue of Circulation, entitled Increased Cardiac Arrhythmogenesis Associated With Gap Junction Remodeling With Upregulation of RNA-Binding Protein FXR1.
The authors sought to identify the functional properties of FXR1 expression, with the goal of identifying the mechanisms regulating gap junction remodeling in cardiac disease. Gap junction remodeling is well established as a consistent feature of human heart disease, especially associated with the presence of spontaneous ventricular arrhythmias. However, the mechanisms underlying gap junction remodeling are not well understood.
Thus, the authors sought to specifically evaluate how FXR1, an RNA-binding protein, plays a role in this function. They looked at both human and mouse samples of dilated cardiomyopathy. They noted that FXR1 is a multi-functional protein involved in translational regulation and stabilization of mRNA targets in heart muscle.
Furthermore, they demonstrated that introducing an FXR1 adeno-associated viral vector into mice led to redistribution of gap junctions and promoted ventricular tachycardia, suggesting a functional role in ventricular arrhythmogenesis for the FXR1 upregulation already seen in dilated cardiomyopathy.
Based on these results, the authors suggested that FXR1 expression plays an important role in disease progression of dilated cardiomyopathy by regulating gap junction remodeling. This in turn can lead to an increased risk of ventricular arrhythmogenesis.
While at the basic level this article is important in that it potentially highlights a pathway underlying the increased arrhythmogenic potential in dilated cardiomyopathy, it is well recognized from a clinical perspective that ablation in non-ischemic cardiomyopathy tends to be less effective than in ischemic cardiomyopathy. This may partly be because the mechanisms are substantially different.
In ischemic cardiomyopathy, there is dysregulation of gap junctions resulting in potentially slow regions of conduction, due to multiple factors, in particular extensive areas of scarring. In dilated cardiomyopathy, where dysregulation of gap junction connections might be the cause of variable and often patchy conduction abnormalities, the role of ablation might be less clear.
Identifying novel molecular targets to limit arrhythmogenic potential, however, might provide novel approaches to treatments. Understanding these mechanisms might also in the future offer more targeted molecular approaches to suppression of ventricular arrhythmia risk.
Finally, within the realm of basic electrophysiology, we review an article by Kugler et al., published in this past month's issue of Heart Rhythm, entitled Presence of Cardiomyocytes Exhibiting Purkinje-type Morphology and Prominent Connexin45 Immunoreactivity in the Myocardial Sleeves of Cardiac Veins.
It is well recognized that the pulmonary vein myocardium is a potential source of atrial fibrillation. However, one question that remains is whether the myocardial extensions into the caval veins and the coronary sinus have similar properties. No studies to date have documented specific pacemaker or conductive properties of the human extracardiac myocardium, specifically in these vein sleeves.
Thus, the authors sought to characterize histology and immunohistochemical features of myocardial sleeves seen in the walls of cardiac veins.
They sectioned 32 human hearts, including specimens of the pulmonary veins, the superior caval vein, the coronary sinus, the sinoatrial and atrioventricular nodes, and the left ventricle. They noted that a myocardial sleeve was found in the walls of the pulmonary veins in 15 of 16 hearts, in 21 of 22 superior caval veins, and in all coronary sinuses.
Interestingly, bundles of glycogen-positive cardiomyocytes exhibiting pale cytoplasm and peripheral myofibrils were observed in all the venous sleeves. Based on staining and Connexin labeling, these were felt to be very consistent with Purkinje-type fibers.
This is the first data to suggest that cells that exist in the vein sleeves might have potential pacemaker or conductive properties.
The importance of these findings lies in highlighting the potential mechanisms and relevance of the myocardial vein sleeves that extend into the coronary and [inaudible 01:19:22] vasculature. The nature of these cells is also potentially relevant when deciding what therapy to use, particularly antiarrhythmic drug therapy. What specific roles these types of cells play in these areas is unclear.
Furthermore, the embryologic basis for why they occur where they do is unclear. However, these findings may highlight the importance of considering these areas as targets, whether with ablation procedures or with targeted drug interventions.
I appreciate everyone's attention to these key articles that we have just focused on from this past month of cardiac electrophysiology across the literature. Thanks for listening. Now back to Paul.
Paul Wang: Thanks Raj, you did a terrific job surveying all the journals for the latest articles on topics of interest in our field. There's no easier way to stay in touch with the latest advances. These summaries and a list of all major articles in our field each month can be downloaded from the Circulation: Arrhythmia and Electrophysiology website. We hope that you'll find the journal to be the go-to place for everyone interested in the field.
See you next month.