Spring 2020 Fellows Webinars
The Value of COHORTS and REGISTRY
Video Transcription
I want to welcome everybody to the Multi-Institutional Sports Medicine Fellows Conference. Please keep your computer muted. This is being recorded and will be transferred to the OSSM playbook on the website; it will be available on the learning management system next week. If you have any questions, please submit them on the chat function and we'll ask them at the end. I'll unmute the faculty as well so they can ask questions too. It is very much my honor to introduce our next speaker in the series, Kurt Spindler, who has been very kind to take the time to give us a lecture. He is the Vice Chairman of Orthopedic Research at the Orthopaedic and Rheumatologic Institute of the Cleveland Clinic, Co-Director of the Musculoskeletal Research Center, and Director of Orthopedic Clinical Outcomes. Anybody who knows him knows he has really advanced our understanding and knowledge through his sheer willpower and drive. He got the MOON group off the ground, he helped get the shoulder MOON going, and MARS exists in large part because of Kurt as well. So we owe him a huge debt of gratitude, and he has shown us to a great degree the value of these prospective cohorts and registries. I know he has a distinct preference for calling this work a prospective cohort rather than a registry, so hopefully he'll explain that to us as well. Kurt is also Vice President of the OSSM, so he is soon to be its President. So without further ado, it's truly my honor to have Kurt Spindler talk about the value of cohorts and registries in orthopedics, and hopefully why you should be involved when you get into practice. So Kurt, thank you very much. Thank you. It's an honor to be here, and thank you to all who are listening. Hopefully this will supplement your education in a unique way that none of us have had, so maybe this will be fun. So, my disclosures: mostly NIH funding. My research group started out small, as shown in the left-hand corner: a medical student who is now an orthopedic surgeon, my assistant, and a research coordinator. What we know is that the greatest change in medicine is occurring right now: healthcare is shifting to value. The model of delivery and the payments that you're going to get in the future are going to be driven by cost, because the percent of GDP can't go up anymore. It doesn't really matter who's in the White House, whether it's blue or red or green or purple; the fact is, when you're 20% of GDP, they don't want it to go up more. Fortunately, orthopedics has very valid patient-reported outcome measures that document effectiveness in pain and function. Now, one of the things we do as surgeons is operate, and we take an oath that says we do no harm. But in essence, when we operate on a patient, certainly when we do something like a total knee, an ACL, or a rotator cuff, we effectively violate that assumption, because we definitely hurt the patient in the beginning to get the patient better. So I think we have an ethical obligation to understand our outcomes, just to understand what happens to our patients. The diagram to the right happens to be the first year at Cleveland Clinic where we judged what percent of patients got better across all the different operations.
And so until we collect the outcome, we're really going to be stuck in this cost-minimization scheme that we're in right now, which is a disaster, because cost minimization is a race to the bottom for healthcare systems and doctors. You can ask any of your business friends. So MOON is interesting because, and we'll explain more, it became evident back in 2010 and before that some of the things we learned in MOON could be translatable to orthopedics more broadly, and I'll show you that. But think about this: how should government, insurance, healthcare systems, and society allocate a limited healthcare dollar in a way that's scientifically valid? I don't think you want Congress doing it, and I don't think you want the politicians or the lawyers doing it. And I have a bias. My bias is that appropriately indicated, skillfully performed procedures, as listed, have some of the largest gains in patient-reported outcomes, and that's really a fact. The outcomes are real. So how do we take the things we learned in an NIH-funded prospective cohort; is that a model for improving outcomes, and can we use it to improve orthopedics in the future? I think we can. So there are five bullets here, but I'm going to give you four pieces of practical information you can take away. Number one is how to interpret published studies in your practice: which ones do you throw out, and which ones should you read? Two and three go together: there is a fallacy in doing univariate analysis and looking at a single risk factor to guide clinical decisions, and I'll show you examples of that. We know that experienced clinicians don't like evidence-based medicine, for good reason: they know that patients are unique and have different combinations of risk factors. A really experienced clinician has a great gestalt, and you can sort of duplicate that gestalt with a multivariable approach in a risk calculator. Fourth, something simple we did that came from one of our fellows: we ran a simple cohort of how many pills patients took after arthroscopy, and I'll show you the value of looking at that data so you can judge for yourself. And finally, I hope that in your future we document our outcomes, because that's how we're going to get paid, and paid well, and I'll show you an example. So remember, I do clinical research for one reason: we're trying to guide your care of your patient, to get to the best practice. First, science only gives you truth. Second, is the truth you establish through science a clinically meaningful difference? Meaning, if I have something very expensive and I improve you two points on a patient-reported outcome measure, but that change is within the margin of error of the measure itself, that's not a clinically meaningful difference. And finally, can you afford to do it? Back in 2000, I published a couple of papers on growth factors for healing MCLs. They actually healed them less than one week earlier, but they cost about $5,000: not practical for anyone except an elite athlete. So in the hierarchy of studies, prospective cohort studies come in at level two, and we really don't know where registries fit. I'll try to explain the difference between a registry and a cohort.
There are examples where registries are the most powerful tool, more powerful than anything else in determining a decision. There are examples where cohorts are more powerful. And then there are obviously examples for randomized trials. The clinical question really determines the study design. If I want to know the prognosis, who's going to do poorly after surgery, or who's going to get post-traumatic OA, that's not a randomized trial; that's a longitudinal cohort. If I want to know the best therapy, say graft choice, and I can get the risk factors right, I can do that with a randomized trial; if not, I can use a level two cohort. So finally, the strength of the evidence really depends upon the question, not the paper. We published a retrospective study on prospectively collected football players. Kirk McCullough, who's now an orthopedic surgeon, was a resident and said, hey, we don't know what percentage of people return to football. What do you tell your patients? I said, I tell them 80 to 90%, of course, because that's what mine do. He said, where's the reference on that? I said, I don't know where the reference is, but you can go find it; we don't need to do the study. There wasn't any reference, so I ate my words. He looked it up, and then he retrospectively asked the players three questions. Question one: did you return to play football in high school or college? That question can be subject to recall bias, but any of us who know high school or college athletes know that, on their deathbed, they know whether they returned to play or not. That question is probably very accurate; it doesn't matter when you ask it. But another question they asked was, how well did you play? That's a fish story. Every year I get older, I played much better: I was a superstar in high school, and I'm almost as good as Michael Jordan now in basketball, even though I never played. So recall of how well someone performed is unreliable. And then they asked why you did not return to play, which is also weaker. So even though that study is level three, the only thing I'm going to take home from it that I trust is whether they returned to play or not, not why or why not. You have to choose the appropriate study design based upon the question. If I'm determining the efficacy of a new technology in practice, I need a randomized trial. If I think that bridge-enhanced ACL repair by Martha Murray is a valid technique instead of ACL reconstruction, I have to test it in a randomized trial. If I want to tell everyone to go from single bundle to double bundle, I really need a randomized trial; and when they did that, it showed that double bundle did not have a lot of advantages. But there are situations where observational cohorts are preferred, and you can put registries in here too. There are natural experiments: when someone injures their ACL, there's some element of randomization, some element where nature determines the meniscus tear and the articular cartilage injury that probably influence outcome. Cohorts are also preferred if I want to know the predictors of outcome, do post-market surveillance, engage in shared decision making about who's going to get better or not, or look at comparative effectiveness. And case-control studies are really pretty rare.
One of the registries we've learned the most from is the Kaiser registry looking at ACL reconstruction failure. Even though I'll show you that MOON can identify that allografts have higher failure rates than autografts, especially in younger people, it's the Kaiser registry, with four times the number of ACLs, that can tell you which types of allografts are really bad versus others. The fundamental cohort design is exposed versus unexposed. Ideally you do a population cohort where you're prospectively evaluating multiple exposures of interest. One of the biggest factors distinguishing a cohort from a registry is that a cohort has a defined time point and goes after, in a scientific way, a certain amount of outcome: patient-reported outcomes are tagged, you pick one year or two years, and you go after 70 or 80% of them. A registry doesn't have a mechanism to actively go after outcome. The reason a registry is so powerful on failure is that if I'm in the Kaiser system, 90% of my patients have to come back, so I have a captured population for failure. In the European model, in the European registries, almost 95% of patients have to have surgery in the national healthcare system, so you capture them, but you're capturing failure. If I want to look at their pain and function, they'll have 50% follow-up or less. MOON and MARS capture many exposures, looking at the whole gestalt of primary and revision ACLs. So, longitudinal cohorts: Framingham, MOON, MARS. Remember, smoking and lung cancer have never been tested in a randomized trial. The weakness of a cohort is that you have to control for confounding, and you can't control it as well as a randomized trial, which distributes confounders equally across groups. Things you can't measure, like patient expectations and how patients react, get equally distributed in a randomized trial; in a cohort they don't. So how do I interpret published studies as a guide? Jed Kuhn got me onto this one. If I compare observational studies against randomized trials, across two papers in the New England Journal of Medicine and the Cochrane Review in 2014, there's not a lot of difference. A well-done observational study, particularly one that's prospective with sufficient multivariable analysis, will give you a similar conclusion with almost the same effect size as a randomized trial. The trialists hate this finding, but that's what the literature says. So if I look at my practice, there are two types of studies I'm going to take home. One is an experimental randomized trial that is done well, appropriately powered, and uses some sort of intention-to-treat analysis. The other is a prospective cohort that is properly interpreted and has good multivariable analysis. Those are the two types of studies that will change my practice. Everything else is of academic interest, hypothesis generating, because it lacks the controls; it may be the best we have, but those studies don't override one that has been well done and scientifically vetted. Most research is observational, some retrospective, some prospective. So think about what you have to control for. Let's just take pain: your outcome is pain. What affects pain?
Your baseline levels, some of your joint disease, and your demographic profile: age, gender, smoking, BMI, race, and socioeconomic status all affect it. So do your amount of arthritis, your meniscus or your labrum, and then what treatments you do: what surgery, medicines, and rehab. So if I'm publishing any study on this and I have an abnormal distribution of these between one group and the other, that's not going to work, because one group's outcomes are going to be better not because of what you hypothesized but because of what you failed to control for. And we can go through more modeling of how that's done. So what about the fallacy of a univariate, single risk factor? Just taking one risk factor and saying, you know what, your BMI is over 40, and therefore you shouldn't have a joint replacement because you won't do well. Well, guess what? If we do an analysis of 5,000 total knees, using BMI as a cutoff does not predict who improves, because there's a big difference between someone with a high BMI who is active and young and someone with a high BMI who is sedentary and sick. You really can't look at that one factor alone. In caring for the patient, we must look at all the risk factors together, and that's the clinician's gestalt, the multiple things they weigh to say, I think this patient will or won't do well. So in MOON, Chris Kaeding published this study; the red line is allograft, and the green line and black line are autograft. If I take an 18-year-old, there's a 3 to 1 ratio of failure of allograft to autograft: it's about 20 to 22% for allograft and about 6 to 7% for autograft. Now, if I go to 40 years old, the odds ratio is still about 3 to 1, but the absolute rates are 2.4% for allograft and 0.8% for autograft, so the absolute difference is 1.6%. That's clinically irrelevant. We never, in MOON, said allografts are bad; we never made that blanket statement. Allografts are bad if you're young, because with a 14% difference in failure, I've never met anyone young who wanted an allograft in a primary. But if the difference in failure is 1 or 2%, you could argue there are certain advantages to allograft. What happened when we published the data is that it got misquoted, because society wants a single blanket statement, and you can't make one: you have to take age into account. Some people like me shifted their allograft use from the 30s up to age 40, and some people who did allografts in the 40s dropped down to 35, because there's still only a 2% difference. So an odds ratio of 3 to 1 does not determine how you make a decision clinically; it's the absolute differences that matter. The other thing that bothered us was that the green and black curves were diverging, and our data could not show us in that range whether those curves were different. So we took the 14- to 22-year-olds, just that group of autografts, and asked which is the best graft for preventing failure in this age group; they're all athletes. We tried to go out two years, but there weren't enough failures to differentiate between them, so we went out six years. So we had 839 primary ACLs.
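Going back to the allograft example for a moment: the odds-ratio versus absolute-difference point is easy to verify numerically. Below is a minimal Python sketch using the approximate failure rates quoted in the talk (illustrative figures, not the published MOON estimates):

```python
# Odds ratio vs. absolute risk difference for graft failure,
# using approximate rates quoted in the talk (illustrative only).

def odds(p: float) -> float:
    """Convert a probability to odds."""
    return p / (1 - p)

scenarios = {
    "18-year-old": {"allograft": 0.21, "autograft": 0.07},
    "40-year-old": {"allograft": 0.024, "autograft": 0.008},
}

for age, r in scenarios.items():
    odds_ratio = odds(r["allograft"]) / odds(r["autograft"])
    abs_diff = r["allograft"] - r["autograft"]
    print(f"{age}: odds ratio ~{odds_ratio:.1f}, "
          f"absolute difference {abs_diff:.1%}")

# The odds ratio is roughly 3:1 in both age groups, but the absolute
# difference shrinks from ~14 points to ~1.6, which is why the clinical
# decision changes with age even though the odds ratio does not.
```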
Those 839 come out of the whole cohort of 3,500. We looked at them at six years with 92% follow-up, looked at failure on both sides, and ran a regression model. The dominant factors for failure on the same side are high-grade knee laxity, something Bob Magnussen coined: high-grade knee laxity means there's a pivot shift, or a Lachman greater than 10 millimeters; autograft type, with hamstring failing more; and younger age. If I look at the other side, the thing that dominates is sport, and it really is related to sport, not the fact that you had a BTB rather than a hamstring on the other side. You can look at graft type here, centered down here, and it doesn't make any difference. So get your cameras out; I'll give you a minute or two. That's a QR code: all the younger people know what it is, and some of the old people like me had to learn. If you put your camera on that QR code, what will show up on the top of your iPhone is a risk calculator you can tap. If you tap on that calculator, it will display the failure rate based upon the person's sport. We had four sports in MOON; we didn't have a lot of skiing, so we had football, basketball, soccer, and other. It wants to know their weight, and you can easily figure out how active they are; it asks you four questions from the Marx activity scale, and it will tell you the failure rate for autograft hamstring versus autograft BTB. It will also tell you the failure rate if you know someone has high-grade knee laxity, and if you complete it and look at the bottom, it will tell you the rate for the other side as well. So the bottom line is that if you ask me, what is the best graft for someone between 14 and 22, I can't answer that without using the calculator. I can tell you that you can use the BTB and you won't be wrong, but there are many circumstances where a hamstring tendon is going to be just as good as a BTB. And we have to caveat that: the hamstring tendons we compared were three- and four-stranded hamstring tendons done between 2002 and 2010. And we know that our tunnels are in the right place; we've published papers on our tunnel positions, all of us have done cadavers, 72 cadavers, and we've done it on humans, so it's not a tunnel-position problem. I don't know whether a five-stranded or six-stranded graft behaves the same way. So you can look at that and play with it. Now, think about this: you don't ever want to do a failure or infection study, because the baseline rates are low; a 5% baseline would already be far too high for infection. If your baseline rate is 5% for failure and you had a very effective treatment that cut it in half to 2.5%, you'd need about two thousand patients, roughly a thousand in each group. It's practically impossible to do. So if you want to study failure and improve something, you need a setting where the baseline failure rate is high. But even in a study where the baseline failure rate is 20% and you prevented half of the failures, which is a huge difference, a 50% reduction, you would still need a study of almost 500 patients. That's why it's pretty darn hard to do in a randomized trial.
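To see why failure endpoints demand such large trials, here is a minimal sketch of the classic two-proportion sample-size calculation (normal approximation, two-sided alpha of 0.05, 80% power). The rates are the ones quoted above; the exact numbers in the talk may rest on slightly different assumptions:

```python
# Patients per group needed to detect a drop in failure rate,
# via the standard two-proportion normal-approximation formula.
from math import ceil, sqrt
from statistics import NormalDist

def n_per_group(p1: float, p2: float,
                alpha: float = 0.05, power: float = 0.80) -> int:
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided alpha
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# 5% baseline failure cut in half: ~900 per group, ~1,800 total.
print(n_per_group(0.05, 0.025))  # -> 906
# 20% baseline failure cut in half: ~200 per group, ~400 total.
print(n_per_group(0.20, 0.10))   # -> 199
```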
So now I want to show you a simple thing. Mike Scarcella is our fellow, and he came to me at the beginning of the year, since my job is to help the fellows with research projects. Mike's a unique guy: he trained at the clinic, spent four or five years in the Army, and came back to be our fellow. So I didn't teach him anything; he's going to teach me more than I'm ever going to teach him. He said, you know, there's a real problem with narcotics, and I've seen it happen. I've seen people get pills after surgery, or pills for something else, and then someone else steals the other guy's narcotics in the barracks and overdoses on them. I want to study what happens with narcotics. I said, well, why don't we do something simple? Why don't we just look at arthroscopy? So this was 113 patients; we now have 176. What the graph shows you, with the number of patients running from zero to 35 on the axis, is how many patients took anywhere between zero and 10 pills. Now, the standard in the literature, and what I did, not being so smart, up to about two years ago, was to give everyone 30 pills after arthroscopy. Look at the graph: 96 out of the 113. If we gave them six pills, we would cover almost 85% of everyone, and they would not need another prescription. And I would argue that the people who took zero, 32 of them, or one or two, which is even more than that combined, didn't need anything at all. So how many pills should we be giving out? The problem with giving out too many pills is that they sit in your medicine cabinet, and I can tell you, anyone who has teenagers knows they know everything in your medicine cabinet, guaranteed; they can tell you everything in there. You don't want narcotics lying around, and you don't want to dump them either, because that's a problem too. So you tell me how many pills we should give. Latul and I are going to arm wrestle over how many; he'll win. But I think we're probably going to give somewhere between four and six, and that's it. Think about that. If we gave out four pills and reduced the standard, which I think is 30, by about 25 pills per case, and there are a few hundred thousand arthroscopies done in the United States, how many pills are we taking out of circulation? It's huge. And this is a simple cohort, nothing experimental. You don't need statistics to look at it; you can interpret it for yourself and say, we can make a big impact here. Now, if I wanted to prove that I didn't need narcotics at all, then I would have to do a randomized trial with a placebo versus narcotics and some sort of rescue. We may do that, we may not; it depends on how we can power the study and how difficult it is. But the cohort takes you a long way. So when you go out into practice, and if you want the final slide, I'll send it to you, please think about what you're doing. Don't give 30 pills. I foolishly, without looking at data or collecting any, was giving out 30 pills up to a few years ago, and we all were. That was wrong, and we did a disservice to our patients and society. If you want to tell me what you want to use, that's fine. So finally, what's the future? This is for you, not for me, because I'm in a cost-minimization scheme; I'll just retire, sit at my house on the Cape, eat lobster, and fish for striped bass.
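The pill-count analysis he describes really is just counting: tally how many patients used at most N pills and see what prescription size covers most of them. A sketch with hypothetical counts shaped like the figures in the talk (113 patients, 96 of whom took six or fewer; not the real distribution):

```python
# What fraction of patients would a prescription of N pills cover?
# One entry per patient: the number of pills actually taken.
# Counts are hypothetical, shaped like the talk's figures.
from collections import Counter

pills_taken = ([0] * 32 + [1] * 20 + [2] * 15 + [3] * 10 +
               [4] * 8 + [5] * 6 + [6] * 5 + [8] * 7 + [10] * 10)

n = len(pills_taken)          # 113 in this mock dataset
counts = Counter(pills_taken)

covered = 0
for dose in sorted(counts):
    covered += counts[dose]
    print(f"<= {dose:2d} pills: {covered:3d} patients "
          f"({covered / n:.0%} covered)")

# A 6-pill prescription covers ~85% of patients with no refill;
# a 30-pill prescription leaves ~24 unused pills per patient
# sitting in a medicine cabinet.
```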
But you're in a dogfight. You have limited resources, and you're going to have to prove what works, because if you can't prove what works, you're in a political arena and you're going to lose. The money is going to go to dialysis, to cardiac disease, to cancer, to pediatrics; it's not going to go to orthopedics. So I think you need to really think about this. So we set up a registry. One of the reasons I left Vanderbilt in 2014 is that I thought we could build a MOON-like structure: collect patient-reported outcomes before surgery, capture procedural details, and then do one-year follow-up. Because at one year, if you don't get better, you're not getting better. There's not one piece of data on primary or revision procedures that says you're going to improve from year one to year two. That doesn't happen. So we set up a cohort, and basically I tricked the whole clinic. We have 99 surgeon providers at 16 sites, and we capture all of knee, all of hip, and all of shoulder; now we're capturing hand and foot and ankle as well. For knee, hip, and shoulder, we use simplified questionnaires, not the whole instruments, only certain subscales, so it takes anywhere between six and ten minutes for the patients to complete, including the VR-12. Then we collect things in the OR: primary versus revision, the diagnosis, the approach, the surgeon, the implant numbers, the type of things we did. If you look at the right-hand corner, you'll see a device called an iPAQ. In 2004 we got this bright idea, and you can blame Ned Amendola for it, that we would start collecting data captured at surgery on an iPAQ. This is 2004; the iPhone didn't arrive until 2007. It was a total failure. The people who knew nothing about electronics, that's me, handed it to the youngest person they could find, and my research coordinator did a good job syncing it; for the people who thought they knew electronics, everything failed. We lost so much data because none of the electronic systems were ready. But we knew we had to capture it. Now, with REDCap and an iPhone, we're able to capture this at the clinic entirely electronically, and when you capture electronically, you can use branching logic. In MOON, there were 44 written pages you'd have to turn through to capture everything. We can now collect 90% of that electronically in about a minute and a half; you can't even turn those pages in a minute and a half. So at the clinic we now have over 50,000 cases. The refusal rate is 250 people, 0.5% out of almost 50,000, and we're able to capture a baseline on 96%, meaning the surgeons have completed everything and the patients have completed everything. It's hardwired into the clinic flow; no individuals are being paid to do it except for one person who monitors everything. Then we have an episode of care where we capture it, put it in REDCap, and collect one-year follow-up. And I'm showing you that at one year, in the first almost 26,000 people, our overall follow-up rate is 74%, and I'm happy to publish on that. You're not going to get over 80%. The only reason MOON and MARS get over 80% is that their surgeons call about a third of their patients, and that's not going to happen in the real world.
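Branching logic is what makes the electronic form so much faster than 44 paper pages: each answer prunes the questions that follow, so most patients never see most fields. A toy sketch of the idea; the fields are hypothetical and this is not the actual OME or REDCap instrument:

```python
# Toy branching-logic form: each question may carry a condition on
# earlier answers, and pruned questions are never asked at all.
# Field names are hypothetical, not the real OME/REDCap instrument.

questions = [
    ("procedure",   "Primary or revision ACL?",       None),
    ("prior_graft", "Which graft failed previously?",
     lambda a: a.get("procedure") == "revision"),
    ("graft",       "Graft used today?",              None),
    ("strands",     "Number of hamstring strands?",
     lambda a: a.get("graft") == "hamstring"),
]

def run_form(scripted_answers: dict) -> dict:
    """Walk the form, skipping questions whose condition is False."""
    answers = {}
    for field, prompt, condition in questions:
        if condition is not None and not condition(answers):
            continue  # branch pruned: this question is never shown
        answers[field] = scripted_answers[field]
    return answers

# A primary BTB case answers only 2 of the 4 questions.
print(run_form({"procedure": "primary", "graft": "BTB"}))
```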
And so you can see that in arthroplasty we do really well; the older patients are at almost 80%. The younger buggers in sports, the knee scopes, ACLs, and shoulder scopes, do a little worse; we have trouble getting them back. Somehow we're not using the right emojis in the text messages or something, but they're a little harder to get. We've generated two NIH grants off of this for my younger partners, and we've published multiple papers on it. It is a registry; it's classified as a qualified clinical data registry by CMS, and it's considered the standard of care and a quality metric by Cleveland Clinic. We've done a lot of interesting studies with it. Our follow-up methodology is an active mechanism through emails, MyChart, and text messages, and we also pay people to call to get our follow-up. So what about department performance? I showed you this before; now we're going to redo it on about 10,000 or 20,000 patients, across the different procedures you can see. Now, here's the scary part: we can do it by surgeon. Let me explain how to read this. We take the one-year patient-reported outcome, pain here, and subtract out the baseline, so if nothing changed, you'd be at zero. Anything within the red lines, plus or minus 10, is within the margin of error, not a clinically relevant difference in pain; anything above that improved. You can see that these surgeons, and I don't know who they are, at least in total knees, and I don't want to know who they are, have had pretty outstanding improvements in scores. If I look at total hips, it's even better; everyone is improving. You can't do anything in rehab to get the 50-point improvement in scores that total hips get. There's one outlier at 27, and we'll have to see what that person's numbers do when we add an extra year. Then look at ACLs: the margin of improvement isn't as great. I do know where I am in there. The other thing that strikes me on ACLs, if I look at the bottom row, is follow-up. Even adjusted by age, some surgeons get 64 percent, 74, 56, 81, 79, 60, 59, 64; that's just too much variation, and we have to work on improving that follow-up. Remember, none of the staff are following their own patients; it's the same central site for all of them, so there's room for improvement by working with the staff. What I want to end with, in the time before questions, is that the concept of outcomes is not new. Einstein's physician Katzenstein said in 1908 that long-term follow-up of a complete meniscectomy will show you arthritis if you follow it long enough. Codman, the father of the end result idea, said basically that every surgical procedure should be followed to its logical conclusion; that's just what we've done at the clinic. And Sandy Kirkley was a sports medicine member, really the best researcher in sports medicine. She died at 41 in a tragic plane crash, but she completed eight randomized trials, and in fact the first randomized trial supporting arthroscopic stabilization for instability versus rehab was hers, published in the late 90s.
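The surgeon-level plot described a moment ago reduces to one subtraction per patient: the one-year score minus the baseline, with plus or minus 10 points treated as the measure's margin of error. A minimal sketch with invented scores (surgeon labels and data are hypothetical; a higher score means less pain here):

```python
# Per-surgeon mean change in a pain score: one-year minus baseline.
# Changes inside +/-10 points count as "within the margin of error".
# Surgeons and scores are invented for illustration.
import statistics

MARGIN = 10

records = [  # (surgeon, baseline_score, one_year_score)
    ("A", 45, 88), ("A", 50, 92), ("A", 40, 85),
    ("B", 55, 80), ("B", 60, 78),
    ("C", 48, 55), ("C", 52, 58),   # changes inside the margin
]

by_surgeon: dict[str, list[int]] = {}
for surgeon, baseline, one_year in records:
    by_surgeon.setdefault(surgeon, []).append(one_year - baseline)

for surgeon, deltas in sorted(by_surgeon.items()):
    mean_change = statistics.mean(deltas)
    flag = ("within margin" if abs(mean_change) <= MARGIN
            else "clinically improved")
    print(f"Surgeon {surgeon}: mean change {mean_change:+.1f} ({flag})")
```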
So I have to thank the NIH for all the support I've had through MOON and the studies we've done, and these companies helped us generate it in the beginning. And finally, Kathy Derwin is my partner in the Musculoskeletal Research Center, where we're trying to match clinical outcomes with sophisticated imaging to see exactly what's going on. And it's always a sunny day in Cleveland, as Latul is showing you there and I'm showing you here. So thank you for the opportunity. I hope you've learned something; I'm happy to take questions, and I'm happy for you to teach me more about what you need in your practice and how we can help you. That's great, Kurt, thank you so much. I'm going to unmute Latul, and we'll see what other faculty are here. This is very helpful. One of the things people talk about going forward is value-based care and being able to prove what you're doing with your outcomes, and I know it's important for everybody to record and keep their outcome measures and how their patients are doing. So the question is, where do you see some of these things like SOS, which is a company-based one, versus getting on board with something like Socrates, or just keeping your own data, versus being part of these larger potential national efforts; the academy is making a big push for all these national registries, the one for the joints and so forth. What's your view on that for the person who may not be going into academics but probably needs to record and keep their outcome measures? Well, I think my answer will probably surprise you. I think at this point in time, until the patients are required to respond, or you're provided funding to get one-year outcomes, you probably shouldn't collect anything. Because if you don't have a valid response rate at one year, then you have nothing, and you don't know anything. The other factor is that you must absolutely capture what the surgeon does. For example, this is not in sports, but let's talk about hip replacement. If you're a tertiary or quaternary center and I look at your length of stay, on the metrics they measure now, length of stay, infection rate, and readmission, every tertiary and quaternary center is going to do terribly. Because the CPT code for an implant is the same whether the implant was for a metastatic tumor, a fracture, garden-variety OA, or post-traumatic OA after a pelvic fracture. And you know that tumors aren't done in the community hospital, and if you're really sick, you're going to the tertiary care hospital, so when you look at their outcomes, they look terrible, and you can't control for that. You'd have to do a combination of ICD-10 codes with your CPT codes, which is almost impossible for anyone to do. But if you had a system like our OME, where the surgeons are asked what they did it for, we can take all the tumors out in a nanosecond, compare all the garden-variety hip OA, make a fair comparison, and control for the other things involved. So I think that unless you're capturing what the surgeon does, and there's no other system that does that, you can't do this fairly.
And unless you have one-year outcomes, don't do it. What I would like you to do in the sports realm is participate in the multi-center studies that are coming up, whether it's MOON, or MARS, or shoulder instability, or a hip arthroscopy multi-center group. That's what you should participate in, because those have the mechanisms for capturing all these things to get an answer that's relevant. Now, in the future, hopefully there will be some incentive for a patient to fill it out at one year, which would be fine. As for the academy's position, they're doing a registry looking at implant failure. Who cares? The implants fail less than 2% of the time at five years. So by doing a registry for implant failure, you're working for the companies with your own money. That's not the problem with total joints; the problem is how well patients do in pain and function. We know that 20% of total knees aren't happy because they can't stair-climb. So I think you have to be careful right now, and maybe in the future there will be some incentives from the insurance companies or the patients to actually complete something. So, Kurt, question for you. We've discussed this a little bit, but it would probably be a good conversation piece for the fellows online. You're a fellow, you're brand new in practice, and you have an interest in doing research, but you're not necessarily at a big center with a bunch of resources. How do you break into research? Where should they start? Certainly you may not be asked to contribute patients to these big multi-center studies if you don't have a name or research behind you. How do they get started, whether it's clinical or basic science research, and start to build a reputation for themselves? I mean, one thing is to review papers for the journals, because then you get to see things from the other side, and that's an extremely valuable asset we need for the peer-review system. Volunteer to be on committees within the AOSSM. What I've told younger people is: find out the centers. If you're interested in hips, find out who's running a registry or a cohort and say, look, I'm willing to do anything; I just want to watch what you do. If you're interested in ACLs, say, hey, talk to me, talk to Rick. What we're really proud of in MARS, which is the revision ACL study, is that there are 83 doctors, half in private practice, people involved in day-to-day care, and half in academics, and I think that makes a very generalizable study. You can also start collecting something that you want to study and do a good job on it. Not everything; but say I wanted to do, I don't know, scapular fractures or some unique population; I want to capture them and study them. That's one way. The other way is to think about writing a systematic review or a review article on something that is unique. So there are basically two diametrically opposed philosophies if you want to be active. One is Jed Kuhn's philosophy: pick something that no one knows anything about, write a paper and a review on it, the scapula, and become famous in that. Great. And then there's the blockhead approach, which is damn the torpedoes: I'm going to do what the hell I want.
That's my approach. I'm not suggesting you take it. I like ACLs, so I'm going to study ACLs. It's going to take me 30 years to get the right talk on ACLs, but I'm going to study it and do it because I like it. You have to look at your personality. Jed's approach is better and easier, but if you're going to do research, you do it because you want to know the answer to a question that's important to you and important to everyone else. Let me ask a question here from Jonathan Hughes, one of the fellows: what demographic data is collected on the patients in your registry, and how is the HIPAA data, name, date of birth, et cetera, incorporated into REDCap? Well, we capture all the things I showed you: their age, their race, their gender, their sex, their years of education, and we also capture a mental health score. It's captured in a direct field in REDCap with their name and everything else; REDCap is a secure database, so we can capture it. Some of that information is actually pulled by a program from Epic, which sets up our encounter, but it's captured in the field. The only ones with access to the patient information in REDCap are our two database managers, a senior and an assistant. They're the only ones with direct access to patient information; I don't have it, and no one else does. They control it. It's funny, because Epic talks about being secure, and Epic is ridiculous: once you get into Epic, you can see everything, right? You can go anywhere you want. You can go into 1,000 patients, you can go into their finances, you can go everywhere. In REDCap, you have to specify: if I had 100 columns in REDCap, I could let you into one column or into all 100; I could let you into one patient or into all the patients. REDCap specifies very clearly, through the manager, what you can see and what you can't. That's why REDCap is used for international studies and multi-center studies at some of the biggest institutions in the United States. It's a very secure platform. Okay, here's one from Meg Flynn: thanks, Dr. Spindler. What are your recommendations for obtaining higher levels of follow-up? It seems as though people rarely answer their phones, especially from an unknown number. By the way, Meg, when I'm calling patients, I just plug Kurt Spindler's number into the caller ID. Actually, there are a couple of things we've done. We've sent patients a letter with the surgeon's picture on it that says, look, we're trying to see you at one year; can you give us some information? And we are using text messaging. We text the patient and say what it is: this is research, this is Cleveland Clinic, and would you go to this link and complete this, or would you like us to mail it to you in some form? So the bottom line is texting. You've got to text the patient, because no one answers their phone; I don't answer my phone because of robocalls. You have to identify yourself as being from the physician's group, and that's the only way that seems to work. Kurt, as you know, I've been looking at trying to do a national hip registry, I'm sorry, a national hip prospective cohort.
And when I've looked at what they're doing in Denmark and what they've done in England, they all seem to have no better than 30 to 40% follow-up after a year. I know you've talked about ways to incentivize, even pay, the patients to do it. What are some of the best tricks other than hounding them, texting them, and sending letters? I worry that if I send letters with my picture on them, they'd probably end up as dartboards in some pub. I'm already on Latul's dartboard, I know that. I'm sorry, Latul, I didn't mean to let that out. Oh yeah, you threw salt at me there. Well, you know, I just showed you that at 25,000 patients we have 74% follow-up, and we just published out of that. But half of that follow-up occurs automatically, through MyChart and directed emails, and the other half comes from a human making a phone call. And I've read your paper on that, but that's at one year, right, 75%? Right. And I know you said the majority of failures occur, or people don't improve, from year one to year two, but if you want to see how the ACL with a hamstring is doing at five years compared to a patellar tendon, how do you do that? That, again, is specific to the question. You have to go out two years, and that's much more difficult; two-year follow-up is hard for us, and we have struggled with it whenever we've asked a question that required two years. We're asking a question about ACL failures right now, because one of the things we did at the clinic is look at tunnel size for hamstrings and the number of strands in the construct. So we have 1,200 primary ACLs that we're trying to follow over five years. We have a group of three people texting them and asking them, basically, did you have surgery on your knee or not, and was that surgery an ACL reconstruction? We only ask two questions, and we're struggling to get responses. So it becomes a lot more difficult at two years than at one year, and really difficult at five years. Unless there's better engagement by the patient, or some real incentives for the patient, it's going to be really hard to do, even having the money to pay people to try to find them. So it's interesting. I know in England the NHS has done things where patients don't get their surgery reimbursed unless they fill out their form. Do you see that as a possibility? Do the insurance companies have to buy into it, or does the government, and do they have any desire to, so we can get to that next level? You know, I don't know. The problem is that dealing with an insurance company is like trying to make a deal with the devil. It may come to that, and it may come to dealing with it through CMS. One thing you have to remember about all of these, I'll call them outcome calculators: they are not one-to-one. They're not explaining 100% of the probability; they're explaining at most 50% of the variability. So they come down to a best estimate; that's how you should truly interpret them. But I think there's a role. If we look at the number of people who had total knees, and even some people with ACLs, who don't improve, we can figure out why.
I think if you look at it from CMS's point of view: if I had a calculator that said 10% of my patients weren't going to improve, I'm not saying don't do the surgery. I'm saying look at what risk factors you could optimize so they would improve. Just having that shared decision-making discussion with them could probably avoid a lot of surgery that doesn't help the patient, which is ineffective care. I know people don't want to hear that, but your duty is to optimize; the patient should improve. I don't think any one of us can ethically say we would operate on a patient knowing that patient wouldn't improve, because we hurt the patient, we assume risk, and we assume expense, with the patient not getting better. So we can use these things to optimize when someone's ready for a total knee or a total hip, or when someone's ready for an ACL, and then to improve the outcomes. These are tools for shared decision-making. And I think if a company or CMS says, you know what, by optimizing my patients I do 10% fewer surgeries a year and save a bazillion dollars, I might as well use that money to improve the whole healthcare system. All right, I started to go down this path with another question before, but what is your thought on this: there are some of these systems out there that you pay for, if you don't have your own infrastructure, and some that are run by companies, that you can put all your outcome data into. What's your thought about having a company run the repository for outcomes data? If the company can benefit from the data, there's a lot of bias there. If that company is collecting it and then using it for marketing or demographic profiling, heaven forbid they should use 20% follow-up to say their products are good, I have so many problems with that that I can just list them off. However, if a company says it wants to fund a registry or a cohort, and, now this is the dicey part, the control of the analysis and the control of the publication of that data stays within a research group, then I'm perfectly fine with that. Or if a company has a platform to collect outcomes, and its job is collecting outcomes, not benefiting from the data, then I have fewer problems with it. I've been embroiled in all these issues back and forth, but I like having an independent group of investigators running the cohort: everyone has bias, but they try to arrive at the least biased reading and look at the result. So I will bring up something. In MOON, we have a lot of failure data, and there's a company and a product; I'll just tell you it's BEAR, the bridge-enhanced ACL repair. BEAR published a randomized trial, two to one, 100 patients. And the big question was, was their failure rate in line with what's expected or not? The only match they have for that is really the MOON data, because the MOON data is specific by sport, activity, age, gender, all the things you can match it with.
It became a real discussion of whether MOON should give the data to the investigator, to the company, and to the FDA, because the FDA wanted it to see whether the failure rates matched. Should we give it to them, should we sell it to them, or should we do nothing? I can tell you it was a great debate, and most people felt very uncomfortable giving the data, even though it was for the FDA to make a decision about whether this was good or bad, because the people in MOON valued their complete independence from any company: this is how we interpret the data, not necessarily the right way, but how we think it should be interpreted, without trying to bias people in either direction. So you have to be really careful, and you have to be really transparent about what's going on with your data. That's interesting. So you've got a unique repository of data from high-level surgeons that can serve as a control for the other study, but it would be more of a case-comparison series, right? So, that's interesting; so you guys opted not to do it? No, we gave the data, but we didn't sell it. Technically it could have been worth a lot of money, but we didn't sell it because we felt that wasn't right; we didn't like it. We gave it for use by the FDA, because scientifically someone needed to evaluate it, and not us: we gave the data, we didn't do the evaluation. Someone needed to decide whether this was safe or not safe for patients, and the only way to do that was to have a data set to compare to. From an ethical and scientific point of view, we thought that was the right thing to do. Yeah, to me that's the right thing, I would think. I mean, we did it, but there were a lot of people not comfortable with it, because they wanted to be independent. These things get complicated: what does Apple do with your data, what does Facebook, what does everyone else? So being transparent about what happens with your data, I think, is the right thing to do. Cool. Let's see if there are any other questions. I don't see any on the chat, and we're right up on time as a matter of fact, because it's 4:57. So, Latul, do you have anything else from your standpoint? This is your chance to get Kurt right here on live TV. That's right. Oh my. That was a great talk. Yeah, Kurt, that was a great talk. I've heard you talk several times, and I always learn more and more from you, and I've learned from reading a lot of your group's work. You're doing some phenomenal work and have really transformed and brought real science to how we're starting to evaluate things, from the knee MOON group, the shoulder MOON, MARS, and all. So you deserve all the kudos you get for it, and then some. I appreciate it, and I know you've helped so many of us in being able to counsel our patients. So thank you for giving the talk and taking the time to talk with us, and I hope to see you soon in a non-COVID time.
Also, I just want you to know that we know what the weather's like in Cleveland, because when the weather was nice, Latul would take these conferences from outside on his iPhone, and when the weather was bad, I'd see that background. It's just hot now, hot and humid. Well, thank you, very kind of you to have me, and I look forward to working with everyone as we all try to get the best care for our patients. So thank you. Yeah, thank you, Kurt. Appreciate it. You all have a good one. Take care. Thanks. Have a good one.
Video Summary
In this video, Dr. Kurt Spindler discusses the value of cohorts and registries in orthopedics. He begins by emphasizing the importance of collecting patient-reported outcome measures to demonstrate effectiveness in improving pain and function. Dr. Spindler then provides an overview of different study designs, including observational cohorts, registries, and randomized trials, and explains how each can be used to answer different clinical questions. He highlights the strengths and limitations of cohort studies and emphasizes the need to control for confounding factors when interpreting results. Dr. Spindler also presents findings from the MOON cohort study, which focuses on anterior cruciate ligament (ACL) injuries, and discusses the importance of age in determining the success of different graft types. He concludes by discussing the value of collecting outcomes data and the challenges of obtaining high follow-up rates. Dr. Spindler suggests that until there are incentives or requirements for patients to complete one-year follow-up, it may not be practical to collect outcomes data on a large scale. He also discusses the role of companies in collecting and analyzing outcomes data, expressing concerns about bias and the need for transparency. Overall, Dr. Spindler emphasizes the importance of using outcome data to optimize patient care and improve healthcare systems.
Asset Subtitle
May 27, 2020
Keywords
cohorts
registries
patient-reported outcome measures
observational cohorts
randomized trials
confounding factors
MOON cohort study
anterior cruciate ligament injuries
follow-up rates