2023 AOSSM Annual Meeting Recordings with CME
Developing Your Research Career in Early Practice
Video Transcription
I then cut my slides in half, and the irony is that Beth Shubin Stein with JUPITER, and Kevin Shea and Jim Carey and the people with ROCK, would call, and we would talk for an hour or two about mistakes I'd made, wins, successes, whatever. So this is just a really quick overview, and if you're thinking about doing something like this, like many of the people have said today, I really enjoy mentoring and helping people with it. As for conflicts, really the only conflict that applies to today is the fact that MARS has been NIH-funded, which has been fantastic and really has allowed us to do this.

So we had collected 487 primary ACL reconstructions; 10 of us were working in the original MOON with Kurt. We analyzed that, and we had a previous five-year study with Kurt, pre-MOON, and both times the strongest predictor of a bad outcome was whether it was a revision. And we said, what's going on with revisions? This is crazy. We knew that revisions were only 10% of the cohorts, so to get the numbers, we were never going to get there with 10 people. As I like to say, a single MOON wouldn't do it; we needed a planet. So we came up with MARS.

AOSSM, the society, really bought into it. We had more than 100 surgeons participate in four training and design sessions and ended up with 83 who were able to get IRB approval at 52 sites. It's actually more private practice than academic, about 55-45, which really makes it generalizable, and you'll see a couple of the things we did to make it approachable by anyone in the society who wanted to participate. Here are our sites; they're skewed a little bit to the eastern half of the country, much as our population is, with some West Coast sites.

So it's a prospectively enrolled cohort. Surgeons were able to use any technique they wanted. If they used an allograft, we standardized the allograft to MTF. The PROMs were all validated at the time, which really helped: the IKDC Subjective, the KOOS, the WOMAC, and the Marx Activity Scale, which is a validated activity score. And then we used what was fairly novel at the time and is obviously a little more commonplace now, multivariate analysis, to determine independent predictors of outcome.

Why a prospective longitudinal cohort versus a randomized trial? Well, with a randomized trial, you're really only going to be able to look at one or two variables. Randomizing is hard. You've got to blind it. And participation: look in JBJS or AJSM, and any randomized trial has less than 50% of eligible patients willing to do it. The statistics may be a little simpler, but for a million dollars of funding, you get to ask about one question. For the prospective longitudinal cohort, just think of the Framingham study. That was a prospective longitudinal cohort looking at predictors of cardiovascular disease, and you can't randomize those exposures. You're not going to randomize smoking. So this is like Framingham. And it's everyone. We knew we had 98% participation. So out of 100 patients we asked, would you participate in this study, 98 would say, sure, I'll do this. What, fill out a little questionnaire? Not that big a deal. And so for a million dollars, you may be able to look at 50 variables. So this was our first study that we published, just a descriptive epidemiology, to get an early win and an early publication. We were able to enroll 1,234 patients.
And our current follow-up, which has been really fantastic, we're now out past 10 years and you'll see a little of that, gives us the ability to ask these questions. These were the specific aims for our first NIH grant, and we were able to address them with those numbers.

So, some key principles of success. It's got to be an important clinical question, or no one's going to want to jump on your bandwagon and no one's going to want to fund it. That seems obvious, but it's got to be something that resonates with people, and they have to know they can't answer it themselves and that a small group can't answer it. You've got to have driven coordinators. Amanda Braun and Laurie Woodthrow are in the back of the room, and they keep this thing going. It's herding cats and elephants all at the same time, just trying to keep everyone aligned. And you've got to have a champion or two. I don't want it to sound like bragging, but you've got to have somebody crazy enough to want to write grants, shepherd this along, and keep people excited.

For a study of this type, there's got to be a low barrier to participation. We could not have all of these private practice people needing a research coordinator in their clinic. There were two or three of us who did about 20 to 25 of these a year, and that was the three highest producers, so it's not 20 of these a week. The surgeon, or a nurse if they had somebody, could hand the questionnaire to the patient, which was paper back then obviously, get them to sign the consent and complete their questionnaire, bundle it up, and FedEx it to St. Louis or to Nashville, and we collected it. Almost anyone could do this; you just had to remember to sign the patients up. And they were outcomes that weren't that hard to obtain; we really started out wanting to look at patient-reported outcomes.

Central follow-up. Beth, who I didn't talk about, our follow-up person, follows every patient in the cohort except for a couple of spots, and I'll talk about that in a minute. And that was key. Don't expect 52 sites to do follow-up, because they're not as motivated as you are. That's where the champions have to win.

And then we got some early victories and early things that kept everyone excited. We were able to get AJSM and JBJS to make every author PubMed searchable. Amanda would put out a newsletter, and orthopedic surgeons, and sports people in particular, like to compete; they keep score, and they want to win. So every month in the newsletter there was a different angle on who enrolled: who's enrolled the most so far, who enrolled this month, who enrolled in the last week, who's done the most in the last six months, which centers have enrolled the most. Everybody wanted to open the newsletter, and everyone got mentioned. She would pick a month, and everyone who enrolled anyone that month got their name in the newsletter. People wanted to win; they wanted to see their name in the newsletter. Then we got the O'Donoghue Award with the graft choice study showing that autograft was better, and the Kappa Delta Award. Some of those wins really keep people excited.

Challenges? Obviously, IRB approval at 52 sites. The single-site IRB is great in theory.
We started before the single-site IRB existed, and if the NIH could force sites to accept a single IRB and really leverage that, and say, we won't fund you unless everyone accepts your single IRB, that would help bring all these institutions along. Everyone says, I want to play with a single-site IRB, but then they put roadblocks up anyway. So it's a great concept; I just wish it worked a little better when you're trying to do something like this. Mayo and West Point wouldn't let us follow their patients, and that really hurt, because we couldn't do central follow-up there. They changed coordinators two or three times. West Point is a fantastic center, but then everyone gets deployed, and think about what's happened in the last 15 years. So that made it hard.

And then we had 83 surgeons, and I had no rules for what would get you kicked out. So we had some people who really didn't participate. We knew they were doing revisions and not enrolling, or they never enrolled a patient, and I hadn't defined how to kick them out, or what to do if they didn't read or edit the papers, which makes you not an author. You've got to do the things that make you an author. I should have had surgeon exclusion rules. Now, ironically, a couple of the people in that group were AOSSM presidents, and I said, okay, politically, I'll let them stay in, because I may need them down the road for other reasons.

Authorship: if you're going to run one of these, your biggest source of problems will be authorship. I quickly went to corporate authorship, and I think I could hold the moral ground, because I was writing the most papers, so it cost me the most mastheads, and people couldn't really argue. But people would argue: if you went with six named authors and then the MARS Group, the seventh author would cause you trouble. And then you'd hand data to, you know, one of Ben's fellows, and they would think it should be their fellow and then the MARS Group, completely disregarding how much work the coordinators and all the other people had done for the past 10 years to keep it going. So I went strict. Everything is the MARS Group, everyone else is listed later in the paper, everyone's PubMed searchable, and it saved me a lot of complaining and grief.

And funding: it's hard to do this without a fair amount of money. Thank goodness we've been able to be successful with the NIH, and we're going to have to go back in this next year, so hopefully we can continue our success.

Wins. I say that if you can do one study in your life, write one paper, that changes clinical practice, you've had a successful clinical research career. People go, Rick, you've written all kinds of stuff and done a lot of things, and you go, yeah, but how many times do you truly change clinical practice? These kinds of groups, with these kinds of questions, can change clinical practice, and so it's really gratifying. Like Drew said, it's a hobby, you've got to love it. But I'm going to do oral exams this next week, and I'll be able to ask, as part of it: you did a revision; how did you decide which graft to use for your revision? This group defined that you should use autograft, so it's really good. Follow-up was 949 patients at a minimum of five years. Prior to this study, fewer than 200 revision patients in total had ever been reported in the entire medical literature, across five or six studies. So that's how massively impactful something like MARS can be.
We just finished on-site follow-up. If you see a pandemic coming, don't write a grant that depends on on-site follow-up starting in 2020. So we got a couple of years behind, but Laurie and Amanda and Beth have worked like fools the past year, and we got 205 patients back on-site at more than 10-year follow-up, and we're going to have some great findings from that. Stay tuned; we'll get those out this next year. So thanks.
Video Summary
The video discusses the creation and progress of the MARS (Multicenter ACL Revision Study) cohort. The study collects and analyzes data on patients undergoing revision anterior cruciate ligament (ACL) reconstruction surgery. It involves 83 surgeons at 52 sites, both academic and private practice, and uses a prospective longitudinal cohort design rather than a randomized trial, allowing the examination of many variables at once. Challenges included obtaining IRB approval at multiple sites and maintaining surgeon participation. The study has made significant contributions to clinical practice and has provided valuable insights into ACL revision reconstruction outcomes.
Asset Caption
Rick Wright
Keywords
MARS cohort
ACL reconstruction surgery
prospective longitudinal cohort design
surgeon participation
ACL reconstruction outcomes