2021 AOSSM-AANA Combined Annual Meeting Recordings
Writing and Evaluating Systematic Reviews in Sports Medicine Research
Video Transcription
True story before I start. About five years ago, I thought I had the credentials to apply to be a hospital president in the Mass General Brigham system. And so I went through several rounds of interviews. And the last round was going to be a PowerPoint presentation to the board of trustees. And it was on finance and strategy for the hospital going forward. So I waited patiently two days after the last interview and the PowerPoint presentation. And I got a call from the chairman of the board of trustees that I'd gotten to know over time. And he said to me, Tim, the board thought your PowerPoint presentation was the best presentation they have ever seen. So I start signaling to my wife, this is going to happen. And I said, that means I got the job. He goes, no, but it was the best PowerPoint presentation the board has ever seen. So with that, I hope this is a good PowerPoint presentation for you. And if you would like to have these slides, I can certainly make them available to you. I have no disclosures.

There's only one goal. And that goal is to provide reviewers with the knowledge and tools to help them analyze and critique systematic reviews and meta-analyses.

There have only been three editors of AJSM. And when we first started review papers, we asked Dr. Hughston his opinion about this. Now, the journal had transitioned over to Dr. Leach as the editor. But Dr. Hughston was a man of immense integrity and science. And he wasn't quite sure if review papers belonged in a scientific journal. We finally wore Dr. Hughston down. We began to publish review articles somewhere around 1992, 1993. And we started with narrative reviews. And the problem with narrative reviews is that it's expert opinion. So we can publish an article on elbow arthroscopy, but it's that author's opinion. And when that author searches the literature, they may be anchored to certain pieces of literature and exclude other pieces. A systematic review is different. A systematic review is just that: a systematic review of the literature that can be reproduced by another researcher. And a meta-analysis is simply a subset of systematic reviews where the data are aggregated. So just as we've heard, systematic reviews are more qualitative; meta-analyses are more quantitative. And AJSM Current Concepts is over 25 years old.

So systematic reviews and meta-analyses actually sit on top of the levels of evidence. They sit above randomized controlled trials. If you're doing systematic reviews of randomized controlled trials, they sit at the very top. But why do a systematic review? Well, how long does it take to do a randomized controlled trial? If I said to you today, we're going to start a randomized controlled trial of PRP in knee osteoarthritis, how long is it going to take for you to, number one, set the inclusion-exclusion criteria, to enroll patients in the study, to follow those patients, to analyze the data, with follow-up for two years, let's say? We're talking about submission of that article probably in 2025, maybe 2026, to get that randomized controlled trial done. But if done properly, a systematic review can answer an important question and provide direction. And once again, it sits on top of the evidence-based medicine pyramid.

So let's take a look: PRP used for the treatment of osteoarthritis. Systematic reviews and meta-analyses take these individual studies and synthesize them into one study. So here's the example. Is it a complex problem? Yeah, PRP in osteoarthritis is a complex problem.
Are there discrepancies in the literature? Yes. Some are in favor of PRP. Some are not in favor of PRP. Can we get more precise by doing a meta-analysis? So I'll show you what that means. Usually in orthopedic surgery, the studies that we do, even studies that have hundreds of people, are much smaller than studies, for example, in internal medicine, where they'll enroll thousands of patients in an anti-cholesterol drug study. At best, we have hundreds of patients in studies. So can we get more precise by aggregating the data, by aggregating patients in a meta-analysis, and narrow those confidence intervals? That's a more precise study. And I just want to acknowledge PRISMA. PRISMA has been around for the past 12 years. It has over 5,400 citations. It is a great guide for people who are doing systematic reviews and for people who are reviewing systematic reviews as well.

So how do we do a systematic review, and how do we review a systematic review? It starts and it ends with a question. And I can't overemphasize the importance of this. Is it an important topic? Let's take PRP again. PRP and osteoarthritis, yeah, that's an important topic. Is it a timely topic? Yes, that's a timely topic. Does it answer a question that resolves a discrepancy in the literature or clinical practice? And I have a couple of other rules about questions. It has to be one sentence. Can you give an elevator pitch? Not going to the 13th floor; can you give an elevator pitch between floor one and floor two about what the question is? And can you explain it to your fifth grader? This is my fifth grader. Can you explain it to your fifth grader? Maybe they don't understand the science, but they understand the question. And like the show Jeopardy, it should be phrased as a question as well.

So here's an example. Platelet-rich plasma versus HA for knee osteoarthritis: a systematic review and meta-analysis of randomized controlled trials. Jim? Good? You like this? I think it's good. They've described the patient population. They've described the intervention, the comparison, the outcome in one sentence. They did that in one sentence. PICO: population, intervention, comparison, outcome. It's tight, but it's not too focused. We're not looking at left-handed baseball pitchers in Nashville, Tennessee using PRP for osteoarthritis. That's too tight. This is perfect. Did the authors answer the question? And as you go through the systematic review and review the systematic review and meta-analysis, you should be asking yourself this question over and over again. Are they truly answering the question? Are they answering it correctly? And if they're not, what has prevented them from answering the question correctly? Is there bias involved? We'll talk a little bit about this as well.

And as we've already heard, inclusion and exclusion criteria, especially, remember, we're choosing studies. Inclusion and exclusion criteria are critical. The same diagnosis, the same treatment, minimum follow-up, this should all be described in the methods section. The outcome measures, is it randomized controlled trials only? Exclusion criteria as well. They may exclude unpublished manuscripts. They may exclude systematic reviews as well. So here's an example: outcome comparison of latissimus dorsi transfer and pectoralis major transfer for irreparable subscapularis tears. Here's their eligibility criteria. Very, very clear. The patients had to have a pectoralis major transfer or a latissimus dorsi transfer. That seems intuitive. The subscapularis had to be irreparable. Follow-up of 12 months.
Post-operative functional outcomes, they actually describe the functional outcomes as well. And the exclusion criteria, they clearly state as well. There were no cadaveric or animal models, no case reports, pilot studies, or systematic reviews, and no unpublished manuscripts as well.

So after you have inclusion and exclusion criteria, you then do the search of the literature. In the search of the literature, you look at everything you can possibly look at. You have committed yourself to answering this question. So if you've committed yourself to answering this question, you have to answer it. So you need to go to all the sources that you can possibly get to answer this question. PubMed, Medline, Google Scholar; the Cochrane Library has a list of systematic reviews; Embase is the European version of Medline. And the other thing that I like to see in a systematic review: did they go back and look at the citations from the articles that they selected? Did they go through the citation lists of those articles to make sure that they didn't miss any articles as well? Did they look at smaller, lesser known journals that perhaps are not on Medline, not on PubMed?

And what about publication bias? This is what my desk at home looks like. And over on the right is a cardboard box with all the manuscripts that I've never submitted. Part of those are the manuscripts that have been rejected. But publication bias is the tendency to submit, and for journals to accept, studies for publication based upon the direction and strength of the findings. Positive studies get published. Studies with a p-value greater than 0.05 are in the file cabinet. They're back at home. They're not published. So what happens with that? What happens is that you overestimate the effect. Let's go back to PRP and osteoarthritis. If we never published those studies that showed no effect, how do you measure that in the literature? So you overestimate the effect of PRP and you underestimate the harm. So publication bias is real.

And the way you would evaluate this is that authors typically will present a funnel plot. This is what's called a funnel plot. Is the funnel plot symmetric? If the funnel plot is symmetric, it typically means there is no publication bias. If it looks like this, you can see it's asymmetric here, right? Those are all the studies sitting in my filing cabinet at home on PRP, because they've never been published, or they were not accepted, or never submitted. So here's an article you may have read, if anybody took their board exam this year for the ABOS: the majority of anterior cruciate ligament injuries can be prevented. They clearly state in their manuscript that they checked for publication bias. And look at this. They say asymmetry was not visible on the funnel plot. They clearly state this in their manuscript. You can also do mathematical tests. The most common one is the Egger test. You can do the Egger test as well, but as a reviewer, you don't need to know that. You need to look at the funnel plot. Is the funnel plot symmetric?

So it is very clear in the literature that has studied this: positive-effect studies get published. Those studies at home don't count, and the citations from those studies at home don't count either. So what can you do as a reviewer? You can recommend an improved search strategy. You can ask the authors for more explanation. Don't hesitate to do this. Don't hesitate to write in your review, I looked at the funnel plot, it looks like there's publication bias, please explain that. You can also say, I don't think the search was wide enough. I don't think this is acceptable.
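To make the funnel-plot idea concrete, here is a minimal sketch of an Egger-style asymmetry check in Python. The effect sizes and standard errors are invented for illustration, not taken from any paper mentioned in the talk, and the intercept_stderr attribute assumes a reasonably recent SciPy.

```python
# Minimal, illustrative Egger-style asymmetry check (hypothetical numbers).
import numpy as np
from scipy import stats

# Hypothetical log risk ratios and standard errors from ten studies.
log_rr = np.array([-0.42, -0.35, -0.28, -0.51, -0.10, -0.60, -0.22, -0.45, -0.05, -0.70])
se     = np.array([ 0.15,  0.20,  0.25,  0.30,  0.35,  0.38,  0.18,  0.28,  0.40,  0.45])

# Egger's regression: standardized effect (effect / SE) against precision (1 / SE).
# An intercept far from zero suggests funnel-plot asymmetry (possible publication bias).
standardized_effect = log_rr / se
precision = 1.0 / se
fit = stats.linregress(precision, standardized_effect)

t_stat = fit.intercept / fit.intercept_stderr      # intercept_stderr needs SciPy >= 1.6
p_value = 2 * stats.t.sf(abs(t_stat), df=len(log_rr) - 2)

print(f"Egger intercept = {fit.intercept:.2f}, p = {p_value:.3f}")
# A small p-value here is the statistical flag for the same asymmetry the
# reviewer's eyeball test on the funnel plot is looking for.
```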
The next step of the systematic review is data collection. So you begin to read the abstracts. The problem with reading the abstracts is that if one person does it, they may have a bias. They may have a bias against an author. They may have a bias against a technique. So you would always have two people who are reading the manuscripts together and then making a decision together: should we read the full manuscript for that abstract? And if they can't come to a consensus, a third reviewer, usually the senior author, would make the decision: we're going to include this or we're going to exclude this. So the abstracts first, followed by the manuscripts. The methodology should be clearly stated in the manuscript. Almost every systematic review should have this. How did they find the literature? How did they decide what was included and what was excluded? You then aggregate the data, put it through the meat grinder, and you come up with a spreadsheet that looks something like this.

So let's take a look. Failure rate after arthroscopic repair of bucket handle meniscal tears, a systematic review and meta-analysis. Take a look at their data. This is their aggregate data. They've looked at every study. They've looked at how many patients had a bucket handle tear, male versus female. They've looked at medial versus lateral. Was it done in conjunction with an ACL? Was it done without an ACL? Was it an inside-out, outside-in, or all-inside technique? Was it red-red, red-white, or white-white? And they go through and aggregate that data. That should be clear in the manuscript. That table is always present in the manuscript. And then you start to do the data analytics as one of the last steps.

So this is a meta-analysis of Bankart repair versus the Latarjet procedure. First, let's look at the question. Let's take a second and look at the question. Bankart repair versus Latarjet procedure for recurrent anterior shoulder instability, a systematic review and meta-analysis of over 3,000 shoulders. Is that a good question? Yeah, I think that's a really good question. It's very well stated. It's one sentence. Let's just look at one component of this paper. Let's look at the re-dislocation rate. I'm gonna flash this for five seconds, and then I'm gonna ask you about it a little bit later on. Here we go. Yeah, I want you to get everything you can from what I show you in five seconds. One, two, three, four, five. And it goes away. But we'll come back to it here.

You need to get comfortable with forest plots as a reviewer, and once you get comfortable with them, they're very easy to look at, very easy to review. So a forest plot typically has the study and the year. It shows the risk ratio and the line of no difference. So if you look at the bottom, it says favors Latarjet. If you look to the left, it says favors Bankart, and right in the middle is the line of no difference. Usually it tells you what the study size was as well, the risk ratio, the confidence intervals, and then the weight. Papers are weighted based upon precision. So once again, precision studies have larger numbers of patients with smaller confidence intervals. Those studies get weighted more heavily, and they should be. This is the way it should be done. So when you look at this, I'm sorry, I don't have a pointer.
When you look at this, the study right in the middle with the big box, that's weighted heavily at 46%. The reason why that's weighted heavily is because there are more patients in that study. The confidence intervals are really skinny. That's a precision study. So the weight is done by a mathematical formula. It's typically done by the inverse of the variance. The variance formula is on the bottom. We're not gonna hold you to that, but just remember that variance is how far away from the mean each observation was. And if observations are really far away from the mean, that's gonna give a very wide confidence interval. They then come up with this diamond at the bottom. The diamond is the pooled effect, and the confidence interval for that effect is built into the diamond: the left and right corners of the diamond mark the confidence limits. So you can see that by doing this meta-analysis, we made an even more precise study, because the confidence intervals in that diamond are so small compared to the wide confidence intervals of the individual studies. So it tells you at the bottom, favors arthroscopic Bankart, favors Latarjet. You can see this clearly favors Latarjet just for re-dislocation.
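Here is a minimal sketch of the inverse-variance arithmetic behind those weights and the diamond, again with hypothetical numbers rather than data from the Bankart-versus-Latarjet paper.

```python
# Minimal sketch of inverse-variance (fixed-effect) pooling: the arithmetic
# behind per-study weights and the diamond at the bottom of a forest plot.
# All numbers are hypothetical.
import numpy as np

studies = ["Study A 2015", "Study B 2017", "Study C 2018", "Study D 2020"]
log_rr  = np.array([-0.60, -0.35, -0.80, -0.45])   # hypothetical log risk ratios
se      = np.array([ 0.40,  0.15,  0.50,  0.30])   # hypothetical standard errors

weights   = 1.0 / se**2                            # weight = inverse of the variance
pooled    = np.sum(weights * log_rr) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

for name, rr, w in zip(studies, log_rr, weights):
    print(f"{name}: RR = {np.exp(rr):.2f}, weight = {100 * w / weights.sum():.1f}%")

print(f"Pooled RR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(ci_low):.2f} to {np.exp(ci_high):.2f})")
# The most precise study (smallest SE) dominates the pooled estimate, and the
# pooled confidence interval is narrower than any single study's: the 'diamond'.
```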
What about heterogeneity? Let's talk about it; I think this scares a lot of reviewers. Heterogeneity is just a word; instead of using the word heterogeneity, use the word variation or variability. The basic purpose of a systematic review is to consider whether it's appropriate to combine studies in a meta-analysis. We wanna combine apples and apples. We wanna combine studies that compared the same thing. We don't wanna combine apples and oranges. Now, some of the apples may look a little bit different, but we're still comparing apples to apples. We're not comparing apples to oranges. That's what heterogeneity is. It's variation between studies, and we wanna minimize it. I asked some of my colleagues at home, is this good or bad? Is heterogeneity good or is it bad? And half of the people said, well, heterogeneity is good because you don't wanna have a study that's only the same population. And other people said, well, for systematic reviews, it's not good. For systematic reviews, it's not good having heterogeneity. So it's differences in studies that are not due to chance, not due to random chance.

Well, there are a couple of different types of heterogeneity. There's always gonna be clinical heterogeneity. My patients are probably different from your patients. My technique for a double-row transosseous-equivalent rotator cuff repair is probably different from your repair technique. I work at an academic medical center. You may work at an ASC. So there are clinical differences and we accept that. We have to accept that. But we don't have to accept the statistical differences, where individual studies have different results that are not consistent with each other. Some suggest failure. Some suggest great success. I want you to be careful with orthobiologics, by the way. This is a really important point. When you're looking at systematic reviews, take PRP and osteoarthritis: if those PRP preparations are not the same, that's not gonna be a good study. You have to be really careful with things like that. So careful with the orthobiologics.

So as a reviewer, how do I evaluate heterogeneity? There are three ways to do it. One is to look at the summary table. If you look right in the middle under intervention, every single study compares arthroscopic Bankart versus Latarjet. So check. That's a good check right there. Number two, look at the forest plot. Is the forest plot symmetric? This is relatively symmetric. This is what I popped up before on the screen for five seconds. This is relatively symmetric. It favors Latarjet. You would expect the heterogeneity here to be relatively small in comparison. We'll talk about that in just a moment. And three, look for the statistical analysis. Look for the chi-square test, which is the most common test, but it's not the best test because it's underpowered. The null hypothesis for the chi-square, and you'll typically get this as a p-value, is that all studies being evaluated in the systematic review are evaluating the same effect. There is no heterogeneity. So if the p-value is less than 0.05, you have to reject the null hypothesis.

So here is another way to look at it, with the eyeball test. Take a look at this. This is the paper on bucket-handle meniscal tears. What do you think? Just look at the forest plot for a second. It's all over the map. You would expect this to have high heterogeneity; we could be comparing apples and oranges here. And sure enough, we go down to the bottom, and it's telling you that the p-value is less than 0.01. So yeah, there's heterogeneity here.

The I-squared statistic is a mathematical function: you take Cochran's Q, the chi-square statistic itself, not the p-value, and the degrees of freedom. The degrees of freedom here are easy to calculate: the number of studies minus one. I-squared is then Q minus the degrees of freedom, divided by Q, times 100. As a reviewer, you should not be doing this at home. And by this point, you're probably saying, this is absolutely crazy. What is he talking about? I'm an orthopedic surgeon. I'm not a statistician. I get it. I-squared was introduced to guide reviewers in evaluating the consistency of the results of the studies in a meta-analysis. So let's make it really easy here. Here's the I-squared for this, for Bankart versus Latarjet. It's 30%. They're giving it to you. The I-squared is 30%. So how do I interpret that? There is a 30% variation across studies that is due to heterogeneity, not due to random chance alone. Very easy. I-squared 30%: 30% of the variation is due to heterogeneity, not random chance alone.

So how about a scale? Well, low heterogeneity would be, in my book, 0 to 25%. I'm not sure if there's anybody else's book out there, but 0 to 25% would be low. 25 to 50 would be moderate, and then above 50 would be high heterogeneity. So we found heterogeneity; how do we account for it? What do we do with that? It's a fixed-effect model versus a random-effects model in their statistical analysis. And they give that to you here at the bottom. You can see the red circle. They're telling you they used a fixed-effect model for this. And simply, a fixed-effect model assumes that any difference between studies is simply due to random chance; there's no heterogeneity. Whereas a random-effects model includes that, plus it assumes heterogeneity is responsible for the differences, and it's a different statistical model. And that's what you need to know as a reviewer. So I-squared 0 to 25%: low heterogeneity. I would expect people to use a fixed-effect model. I-squared 25 to 50%: moderate heterogeneity. They could use either a fixed-effect or a random-effects model. Anything above 50%, I would expect them to be using a random-effects statistical model. Simple, right?
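For reviewers who want to see where Q, I-squared, and the fixed-versus-random choice come from, here is a minimal sketch with hypothetical inputs; the DerSimonian-Laird estimator is shown as one common way to fit a random-effects model, not necessarily the one any particular paper used.

```python
# Minimal sketch: Cochran's Q, I-squared, and fixed- vs random-effects pooling,
# computed from hypothetical study-level log risk ratios and standard errors.
import numpy as np
from scipy import stats

log_rr = np.array([-0.60, -0.35, -0.80, -0.45, 0.10])
se     = np.array([ 0.40,  0.15,  0.50,  0.30, 0.25])

w = 1.0 / se**2
fixed = np.sum(w * log_rr) / np.sum(w)             # fixed-effect pooled estimate

# Cochran's Q: weighted squared deviations of each study from the pooled mean.
Q  = np.sum(w * (log_rr - fixed)**2)
df = len(log_rr) - 1                               # degrees of freedom = studies - 1
p_het = stats.chi2.sf(Q, df)                       # p-value for the heterogeneity test
I2 = max(0.0, (Q - df) / Q) * 100                  # percent of variation beyond chance

# DerSimonian-Laird between-study variance (tau^2) feeds random-effects weights.
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_re = 1.0 / (se**2 + tau2)
random_effects = np.sum(w_re * log_rr) / np.sum(w_re)

print(f"Q = {Q:.2f}, p = {p_het:.3f}, I^2 = {I2:.0f}%")
print(f"Fixed-effect RR = {np.exp(fixed):.2f}, random-effects RR = {np.exp(random_effects):.2f}")
# Low I^2: a fixed-effect model is reasonable. High I^2: expect a random-effects model.
```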
So in summary, and there's more, so hang on. Did the authors ask an important question? Did they perform an exhaustive search? Did they evaluate and weigh the quality of studies? Were the studies scored? You can score studies. Are the search methods reproducible? If you went out today, could you repeat the search that they did? Are the results valid? Is there publication bias? Is there heterogeneity? Were the studies weighted? Do the results apply to my patients, or to left-handed baseball pitchers in Nashville who have knee osteoarthritis? Did the authors report all the results? And finally, did they truly answer the question? There are many papers I've read through, and I get to the end, and I'm saying, geez, I'm not sure if they answered the question that they started with. And you're welcome to send that back to us and say, Tim, I'm not sure if they've answered the question here. Or, Tim, I'm worried about the heterogeneity here. And now you know all about that.

This is a great website. It's the Centre for Evidence-Based Medicine at Oxford. If you look at the right and you click under resources, and once again, you can have my slides, there's no worry here, it gives you how to evaluate a systematic review. It's sort of a checklist, and it explains everything that I just explained. And so if I send a systematic review to you, you're welcome to use this and send it back to me.

So the end is near, I promise. Four frequently asked questions, and then we're done, okay? So number one, should we consider systematic reviews of level four, level three, and level two evidence? And what I've heard is garbage in, garbage out, right? You have a level four study, you put it into the machine, you crank it up, and what you have is a systematic review with more garbage. And so first we have to know that a systematic review is assigned the same level of evidence as the lowest-level study in the review. So if there's one level four paper in the review, it's a level four systematic review. It doesn't somehow magically become a randomized controlled trial. But to me, the question is, does it answer an important question that would take years for a randomized controlled trial to answer? Does it truly answer a question that we need to know? We'd like to have some direction now. Are there no randomized controlled trials? Because if there are randomized controlled trials, then we should be looking at those. Does the systematic review add precision? Remember, small confidence intervals. Is there bias? We talked about bias. And what about registry data? Our colleagues in arthroplasty get a lot of data from their arthroplasty registries. We're going to start having registries in sports medicine. We do have registries in sports medicine. All of that is going to be retrospective data. Can we do a systematic review of that data? I pose that as a rhetorical question. And AJSM for years has published level four studies; the specialty was sort of built back in the 70s and 80s on level four evidence. There aren't many randomized controlled trials in orthopedic surgery. In fact, if you look at the average level of evidence for orthopedic surgery, it's level three evidence. So my answer would be, if it's a well done systematic review, if it answers a question, yes, we could use level four evidence.

I'm going to shift gears. We're starting to see network meta-analyses. So a network meta-analysis is an indirect comparison. I want to compare A to B, but there are no direct comparisons. But there is a comparison, A to C and C to B. So let's put some numbers on this. Let's go back to osteoarthritis and PRP. So A is PRP versus corticosteroids: there are 19 randomized controlled trials that show that PRP is better than corticosteroids. There are also 12 randomized controlled trials comparing HA to corticosteroids, and in this case, corticosteroids win. So therefore, follow me here, therefore, PRP must be better than HA. So it's an indirect statistical comparison that you're doing. We're seeing more of these being submitted every year. And it's completely legitimate to do this. And once again, these are typically built from randomized controlled trials; they sit on top of the pyramid as well if they're done correctly. But there's this whole concept of transitivity. Transitivity means a common comparator. Just like I showed you, there has to be a common comparator. Identical diagnosis, osteoarthritis; similar groups, similar effects, similar outcome measures; they have to be the same. So it's an indirect comparison once again, but the groups that you're comparing have to be very similar. You have to be very careful here. So, network meta-analysis.
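Here is a minimal sketch of that kind of indirect (Bucher-style) comparison, with made-up summary numbers for PRP, HA, and a corticosteroid comparator; it is only meant to show the arithmetic, not to reproduce any published network meta-analysis.

```python
# Minimal sketch of an indirect (Bucher-style) comparison through a common
# comparator C. A = PRP, B = HA, C = corticosteroid; all numbers hypothetical.
import numpy as np
from scipy import stats

log_rr_AC, se_AC = -0.40, 0.12   # hypothetical pooled PRP vs corticosteroid
log_rr_BC, se_BC = -0.10, 0.10   # hypothetical pooled HA vs corticosteroid

# Indirect comparison of A vs B: log RR(A vs B) = log RR(A vs C) - log RR(B vs C),
# and the variances add.
log_rr_AB = log_rr_AC - log_rr_BC
se_AB = np.sqrt(se_AC**2 + se_BC**2)
z = log_rr_AB / se_AB
p = 2 * stats.norm.sf(abs(z))

print(f"Indirect RR (PRP vs HA) = {np.exp(log_rr_AB):.2f}, p = {p:.3f}")
# Note that the indirect estimate is less precise than either direct comparison
# (variances add), and it leans entirely on the transitivity assumption.
```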
Only two more to go, almost done. Statistical fragility: we're getting more and more papers on this. The p-value is less than 0.05. If I went to your hotel room tonight and I shook you and woke you up at one o'clock in the morning and said, what's the p-value, you would say to me, it is 0.05, Tim. It's been 0.05 for my entire life. But the question is, is that really the light and the truth of the p-value? Should it be 0.05? So let me show you something. Here's a Tennessee quarter. You see it? You see the banjo? It's a Tennessee quarter. Here we go. We're going to flip that quarter 100 times. And oh my gosh, we get 60 heads and 40 tails on that quarter. Well, you know what? There's only a 4.5% chance that would happen, 4.5%. So what's the p-value? 0.045. It's statistically significant. And you say, well, wait a second, wait a second, Tim. There's something wrong with the quarter. So we weigh the quarter. We figure it's made by the US Mint. It is a completely good quarter. We're going to flip it again. So here we go. We flip it again. And what do we get? 59 heads and 41 tails. And the statistical chance of this happening is 7%. So it's 0.07. So it's not significant. That goes in my file drawer at home because that will never get published, right? But really, what's the difference? This is what statistical fragility is all about. Is there a real difference between those two scenarios? That's another rhetorical question. You have to determine that.

So here's a manuscript that looked at statistical fragility. It's the minimum number of participants in a randomized controlled trial whose status would have to change to alter statistical significance. My good friend and colleague from Boston, Paul Tornetta, who's never written a sports medicine paper, and Tiger Lee wrote this, and it's a really good paper. They found 19 randomized controlled trials looking at PRP and rotator cuff repairs, and only four patients would have to flip to change the status of that study and make it not significant. That's what statistical fragility is about, and we're getting more and more papers about this. So now you know. Now you know the rest of the story. So is the p-value the source of truth, or is it a continuous variable? You make the call.
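Here is a minimal sketch of a fragility-index calculation on a hypothetical two-by-two trial, with the coin example worked out using the same normal approximation the quoted percentages imply; the fragility_index helper and the trial numbers are invented for illustration.

```python
# Minimal sketch of a fragility-index calculation on a hypothetical 2x2 trial:
# how many patients would have to flip outcome before significance disappears?
from scipy.stats import fisher_exact, norm

def fragility_index(events_a, n_a, events_b, n_b, alpha=0.05):
    """Count event/non-event flips in the arm with fewer events until the
    two-sided Fisher exact p-value rises to alpha or above."""
    for flips in range(n_a + n_b):
        _, p = fisher_exact([[events_a, n_a - events_a],
                             [events_b, n_b - events_b]])
        if p >= alpha:                 # significance is gone (or never existed)
            return flips
        if events_a <= events_b:       # flip one patient to an event
            events_a += 1
        else:
            events_b += 1
    return None

# Hypothetical trial: 5/50 failures with treatment vs 15/50 without.
print("Fragility index:", fragility_index(5, 50, 15, 50))

# The coin example, using the normal approximation (mean 50, SD = 5 for 100 flips):
for heads in (60, 59):
    z = (heads - 50) / 5
    print(heads, "heads: two-sided p ~", round(2 * norm.sf(abs(z)), 3))
# 60 heads gives p ~ 0.046; 59 heads gives p ~ 0.072, a one-flip difference.
```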
The last thing I wanted to talk about was cluster effects. So we want to study anterior knee pain in female college basketball players, and there are a couple of ways that we could actually do this. We could take a systematic approach, and we could say we're going to take all the college basketball players. We need 15 participants in the study; we've decided that's what our power requires. We're going to take every 10th college basketball player until we get 15 players. That's a systematic way to do it, and I'm sure you're all familiar with doing studies like that. Then there's a cluster approach, where you could say we're going to evaluate only ACC players. That's a cluster approach. Only ACC players, basketball players. So you look at this and you see that half of the study participants are coming from Boston College. I'm picking Boston College because I'm one of the team docs. Half of the study participants come from Boston College, and Virginia Tech had nobody. Look at the participants: Virginia Tech had nobody in this study. Well, we're studying anterior knee pain, and does it make a difference that Boston College is a Jesuit school and they say a team prayer before every practice and they say a team prayer at halftime and before every game and after the game? Does it make a difference? Yeah. So that's what a cluster effect is, and so now you know just a little bit about cluster effects. There are more and more articles about cluster effects as well.

And so my final word to reviewers, and the only reason why I put this up is because I want you to realize how uncomfortable I am. I would much rather talk about rotator cuff repairs. I was so uncomfortable with this, and I had a new job, not the presidency, and I had to make a lot of financial decisions for the hospital. And so I decided I was going to go back, and I got a master's degree from the Dartmouth Institute. It's pretty heavy on statistics. They look at a lot of variation, in combination with the Tuck School of Business, but I was still often confused. I was still uncomfortable. So I went back to a place in Cambridge, MIT, where they actually use numbers compared to the other school in Cambridge, and worked with large data sets and earned a master's. And you know what? You know what the effect is? I'm still uncomfortable. Every day, I am still uncomfortable. But you know what I've learned? I've learned to be comfortable being uncomfortable. So I've heard from so many reviewers, I don't know the stats, this is way above me, I'm an orthopedic surgeon. You guys are just simply awesome. You have made AJSM and Arthroscopy the best sports medicine journals in the world. You don't have to know all the stats. That's something we can help you with, and that's something hopefully today has helped you with. I can't thank you enough for that. So I'll take questions. The last thing I'll say is, go easy, it's my 25th birthday. But actually today, my twin boys turned 15 years old back in Boston. So I wish I could be with them, but it's a pleasure to be here. So thank you so much.

Oh, no. I said go easy on me. How do you handle two papers a year apart that likely have a lot of the same patients in both studies? They would clearly state in their inclusion-exclusion criteria that there's not a crossover of participants between studies. That should be very clearly stated, Dan. So you're absolutely right, somebody could game the system by doing this. That should be in the exclusion criteria.
And in some of the papers we saw, it was in the exclusion criteria. And should we also expect a statement about duplication of data? Yes. And they should describe the duplication? Yes, exactly. They should describe the duplication. Absolutely. Yes. Well, okay. Yes, one more question. When I am reviewing in this field, I usually require them to give a methodology score and to apply that score in the methodology. Do you require that? Because often in our literature there is a tendency to apply it only at a score level. I prefer, then, to describe how to score the methodology, and then to present that, because it shades the way that the person is reading the result. We have not done that, so I can talk to you offline about that. I think that's a wonderful idea. We do ask them to score the weight of the study that they're doing. Yes. If we want to reduce the number of erroneous studies that are published, shouldn't there be more positive studies than negative studies? Because when we're designing a study, we say we have 80% power to detect some sort of effect, so we're accepting a 20% chance that it's a negative study. You're absolutely right, but it depends on what problem you're actually looking at. I'm just bringing to your attention that publication bias is a real issue. Let's take the PRP papers. There are PRP papers that show no effect at all with PRP in osteoarthritis. Some of those have been published, and there are a lot of those that have not been published. So it depends on what you're actually looking at. If you're studying whether the sky is blue or not, there should be a lot of papers to the right of the no-difference line, right? All right. Thank you very much.

Thank you, Tim. My head swelled a little bit during that presentation, which is a good thing, I think, because we should always stretch our capabilities. So I'm really grateful to the Arthroscopy family of journals that we could combine this session today. I'm grateful to all of you, our reviewers, for all you do for us. And if you're not a reviewer, as Dr. Lubowitz said, we're happy to always have new anxious and enthusiastic reviewers. Also, if you're a member of the editorial board of AJSM, OJSM, VJSM, or Sports Health, we look forward to seeing you at our editorial board dinner tonight. And if you're a member of the AJSM editorial board, we have a meeting in this very room to follow in about 15 minutes. So please take a break. And AJSM editorial board members, please come back.
Video Summary
In this video, the speaker, Dr. Timothy Foster, discusses systematic reviews and meta-analyses in sports medicine research. He begins by sharing a personal anecdote about a job interview in which he presented a PowerPoint on finance and strategy for a hospital but was ultimately not offered the job. He then dives into the importance of systematic reviews and meta-analyses in analyzing and critiquing research. He explains that a systematic review is a comprehensive review of the literature that can be reproduced by another researcher, while a meta-analysis is a subset of systematic reviews in which the data are aggregated. Dr. Foster emphasizes the significance of asking important and timely questions, as well as evaluating the quality of studies and the possibility of bias. He also covers topics such as publication bias, heterogeneity, statistical fragility, cluster effects, and statistical models. Dr. Foster concludes by encouraging reviewers to ask important questions and to provide constructive feedback to improve the quality of systematic reviews and meta-analyses. He also mentions the availability of additional resources for reviewers to use when evaluating research. Overall, Dr. Foster provides a comprehensive overview of systematic reviews and meta-analyses, offering insights and practical advice for reviewers in the field of sports medicine research.
Asset Caption
Timothy Foster, MD, MBA, MS
Keywords
systematic reviews
meta-analysis
sports medicine research
reproducible research
publication bias
statistical models
reviewer resources