Mentally, I file this as "Over Cooked"
There are real issues in our medical literature today - too often we publish the trees without providing the context of the forest.
In a recent SBRT article on metastatic disease (Ref 1), the following sentence from the abstract literally jumped off the page at me:
It was in the results, about 3 sentences after this one:
No wonder we are confused. 36.25 Gy works like magic in prostate cancer - 95% control across all risk groups! (Ref 2). But here, in metastatic disease for targets of about that size, we need at least 41.2 Gy and probably closer to 55.2 Gy in 5 fractions. (EQD2 math done for you: at least 20-25% higher dose, up to perhaps 2x our dose.) Yep, 2x the dose is recommended for metastatic disease vs. the definitive setting.
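If you want to check that arithmetic, here is a minimal sketch of the EQD2 math - assuming the standard linear-quadratic formula and an alpha/beta of 10 for tumor control, which are my assumptions for illustration, not the paper's:

```python
# EQD2 sketch under the standard linear-quadratic model.
# Assumptions are mine, not the paper's: alpha/beta = 10 for tumor control,
# 5-fraction schedules throughout.

def eqd2(total_dose, n_fractions, alpha_beta):
    """EQD2 = D * (d + a/b) / (2 + a/b), where d is dose per fraction."""
    d = total_dose / n_fractions
    return total_dose * (d + alpha_beta) / (2 + alpha_beta)

for dose in (36.25, 41.2, 55.2):
    print(f"{dose} Gy in 5 fx -> EQD2 ~ {eqd2(dose, 5, 10):.1f} Gy (a/b = 10)")

# Roughly 52, 63, and 97 Gy: about 20% higher up to ~1.9x the prostate SBRT
# equivalent dose, which is where the "2x the dose" comment comes from.
```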
Hmmm, where to mentally file this one?
Let’s see - do I think that if I treat oligometastatic disease with an ablative or even semi-ablative dose and the patient rapidly fails that treatment, they do worse? Yes. On some level, does dose help achieve local control? Yes. But despite this paper gaining some prominent press, do I trust the 41.2 and 55.2 answers on any level - i.e., will I add them to my list of goals in the planning system today and set a Dmin (minimum dose) to the GTV/ITV? No. When a headline piece of the abstract seems that out of context, do I discount the conclusions in the remainder of the piece? Yes.
Often, I try to present different views - while I think this article has some valid information and does in fact add something to the literature, I would have pressed hard on the math / statistics had I reviewed the paper and pushed for a less precise statement of findings.
One opinion - please comment below if I’m off target.
This isn’t just a radiation oncology problem - to me this is a publication problem. Somewhere along the way, academia turned into the profession of simply cranking out papers. Sure, they have data, and I’m sure the data looks REALLY good at 41.2 compared to 41.1, and “really fancy” stats were used to place that decimal point, but come on.
A few weeks back I wrote about the forest and the trees - we MUST see both.
In science and research, both the details and the broader context matter - as scientists and readers of the medical literature, it is incumbent upon us to be skilled at merging these two widely divergent perspectives. We must view both the forest and the trees at the same time, and when we present, we should be able to convey that blended perspective - that is what the best do.
Wait, what is so bad?
Here is the context that, to me, makes this make no sense:
They lump non-small cell lung cancer and prostate cancer together despite widely divergent alpha/betas - yep, likely >10 and <2.
Well, ok. Let’s do a quick EQD2 calc: at these fraction sizes, an alpha/beta range of 10 down to 2 is an 85.9% difference in equivalent dose - and then we have the audacity to put a decimal down. Seriously? And it is, in fact, worse than that, as prostate cancer (you know, the one that works “magically” when 36.25 Gy is given in the definitive setting), with its low alpha/beta, should “see” significantly more dose - that is, if you fully believe this overly simplistic model that serves as one of the basic formulas in our specialty.
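Here is where that 85.9% comes from - my reconstruction, using the same LQ formula and the 55.2 Gy in 5 fractions schedule quoted above:

```python
# How far apart the "same" 55.2 Gy / 5 fx schedule lands in EQD2 terms when
# alpha/beta spans 2 to 10. My reconstruction, not the paper's analysis.

def eqd2(total_dose, n_fractions, alpha_beta):
    d = total_dose / n_fractions
    return total_dose * (d + alpha_beta) / (2 + alpha_beta)

low_ab  = eqd2(55.2, 5, 2)    # ~180 Gy for a prostate-like alpha/beta
high_ab = eqd2(55.2, 5, 10)   # ~97 Gy for a lung-like alpha/beta
print(f"{low_ab:.1f} vs {high_ab:.1f} Gy -> {100 * (low_ab / high_ab - 1):.1f}% apart")
# -> 180.0 vs 96.8 Gy -> 85.9% apart
```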
I don’t want to belabor the issue, but... It is a large review of data, yet retrospective in nature, and it condenses dose recommendations to a decimal point across a variety of histologies. And it really isn’t just a single simplification either: they add a second one, a dichotomous split of a continuous variable (size / volume), apparently based on the maximum separation of the cohort. This process, which is quite common in our literature today, has known statistical issues - in simple terms, it overstates the significance and applicability of the conclusions (the quick simulation below shows why). And in the end, I come away a bit disappointed.
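To make the cutpoint issue concrete, here is a generic simulation - not the paper's data or method, just an illustration of the well-known problem - showing how scanning for the cutpoint with the best separation manufactures apparent significance even when the variable has no real effect:

```python
# Generic illustration (not the paper's analysis): pick the cutpoint of a
# continuous variable that best separates outcomes, and "significance" appears
# far more often than the nominal 5%, even under a pure null.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, trials, hits = 200, 1000, 0

for _ in range(trials):
    size = rng.uniform(1, 10, n)        # continuous covariate, e.g. lesion size
    outcome = rng.binomial(1, 0.3, n)   # outcome truly unrelated to size
    best_p = 1.0
    for cut in np.percentile(size, range(10, 91, 5)):  # scan candidate cutpoints
        lo, hi = outcome[size <= cut], outcome[size > cut]
        table = [[lo.sum(), len(lo) - lo.sum()],
                 [hi.sum(), len(hi) - hi.sum()]]
        best_p = min(best_p, stats.fisher_exact(table)[1])
    hits += best_p < 0.05

print(f"'Significant' best cutpoints under the null: {hits / trials:.0%}")
# Typically well above 5% - the optimal-cutpoint search inflates significance.
```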
Back in the day, I think this would have been presented closer to something with broader context - in my opinion, more appropriate context. Something like:
EQD2 doses in excess of 100 Gy appear to correlate with improved control. Specifically, for large non-small cell cancers, there was a trend towards that cohort requiring higher doses of 110-120 Gy for strong local-regional control.
(I made this up - especially the non-small cell part - just to demonstrate that more generic writing can still drive home important clinical points without the need for such statistical wizardry.)
Maybe you like the decimal and the super-precise approach, and maybe I’m just old, but I’d rather continue to see broad context and details merged so that our literature makes sense globally. What I don’t see in that highlighted sentence is an acknowledgement, within the abstract, of the strength of the dataset reviewed - it is retrospective, mixed histology, multi-institution. Can it provide good information and trend-type answers? Certainly. Can it provide a dose with a decimal point? No. To me, the abstract should reflect that level of potential accuracy.
It is the old engineering question of rounding to the correct number of significant figures. Carrying non-significant decimal places doesn’t make the result of a calculation more precise. In fact, in a real way, it makes it more incorrect.
Again, I like the paper, and I WILL file away that I need to get control of the metastatic lesion and give a pretty good dose - but I honestly would have filed it with higher importance had the abstract’s results stopped short of the highlighted statement in the opening image.
To me, this is less a problem with the authors. I think many believe they have to report this level of detail to meet some threshold of “importance” for publication, and perhaps that is true. But to me it is representative of a genuine problem: an exponential increase in publications that, for some reason, has come with a desire to use statistics to find something magical that separates one article from a sea of publications - almost a publication of individual trees rather than any attempt at broader perspective. It is widespread in our medical literature, from my perspective. Just last week, we covered several authors proposing two different PSA response / kinetic results - in different journals, literally 2 weeks apart. Literally.
For comparison, here’s an abstract from 2000 (Ref 3) - yes, it presents a dichotomous approach, but read it first, and then we’ll discuss.
Dose-volume histogram analysis of the 78-Gy patients showed a significant correlation between the percentage of rectum irradiated to 70 Gy or greater and the likelihood of developing late rectal complications. Patients with more than 25% of the rectum receiving 70 Gy or greater had a 5-year risk of Grade 2 or higher complications of 37% compared to 13% for patients with 25% or less (p = 0.05).
This was NOT done via an idealized cutpoint. We looked at the data, saw correlations, and presented a generalized statement regarding what we saw. Then, clinically, we had a meeting and discussed the various options for how this should be presented within the framework of the doses and treatment schedules in use at the time. We then opted for the above.
It was a blend of statistics and clinical context - straightforward, memorable, and consistent within the framework of our literature at the time. (Credit: Pollack.) And remember, this is about as strong a dataset as you can get - randomized prospective trial data. If you want to create a decimal point, it would come from this type of data.
Today, I’m pretty sure this gets shipped to a statistics department and might come back as:
Using sensitivity analysis, cutoff thresholds were chosen to maximize the difference and strength of comparisons between the two cohorts. Patients with more than 24.3% of the rectum receiving 64.3 Gy saw complications of 38.3% compared to 8.7% (p = 0.0001). (Really, today it would be a hazard ratio, but you get my point.)
Maybe once again, I’m from Arkansas and like to keep things simple. But I believe it is a real problem.
Finally, a likely rebuttal is that every paper has a disclaimer / limitations section where these types of issues are addressed or touched upon. Sure. But I still think that if 95% of readers read only the abstract, it is important to bring the correct level of context about the dataset there. If you really think the analysis was interesting, commit a full paragraph in the results to it, but keep the context of the data’s accuracy / precision within the abstract. At least that is how I would recommend blending details and context.
And so,
Mentally, this one gets filed as: “Over Cooked.”
Thanks for reading along on a Sunday as we keep searching for better. Next week I hope to get to the AI article, but I needed to pause - I need to spend more time on it and, at least for me, make sure it is up to internal standards. Lots to digest.
Thank you sincerely for the support. The article last week on industry troubles seemed well received, and I appreciate the kind words and everyone subscribing and sharing the site.
REFERENCES:
1. The impact of local control on widespread progression and survival in oligometastasis-directed SBRT: results from a large international database. Radiotherapy and Oncology. https://www.thegreenjournal.com/article/S0167-8140(23)00307-9/fulltext
2. Stereotactic Body Radiation Therapy for Localized Prostate Cancer: A Systematic Review and Meta-Analysis of Over 6,000 Patients Treated On Prospective Studies. https://pubmed.ncbi.nlm.nih.gov/30959121/
3. Complications from radiotherapy dose escalation in prostate cancer: preliminary results of a randomized trial. https://pubmed.ncbi.nlm.nih.gov/11020558/