AOSSM 2023 Annual Meeting Recordings (no CME)
JLC AJSM PODS
Video Transcription
So I think people are familiar with these studies, but there's one example I was involved with in 2006. My interest was in the power rating as an outcome, and I wanted to look at it. I thought return to sport was meaningful, but I thought return to performance would be even better; just like getting back to school is one thing, but how are your grades? I wanted to look at this as an outcome measure and examine its reliability, its responsiveness, and its validity. I wrote a note to the NFL and asked them for the ACL injuries, because I thought it would help this study, and maybe my fantasy football team, but they declined, and so I was forced to get the data in another way. That highlights the two sides of this: one is the publicly available outcomes information, like the performance metrics, and the other is how you get the injury data. In this study, the ACL injury data was collected from NFL game summaries, play-by-play documents, weekly injury reports, and player profiles, and you can see that is maybe not the same quality of data as you'd have from an injury surveillance system.

Fortunately, this systematic review on the use of publicly obtained data in sports medicine research was published online just a couple weeks ago by Dr. Matava and his group. Again, this systematic review focused on the injury side of it, because that's where most of the problems are, especially with the publicly available injury reports online, and it focused on athletes at the professional, semi-professional, and collegiate levels. This is the overall cumulative frequency of the publicly obtained data studies published between 2000 and 2022. I was actually a bit relieved. I didn't really think about it until I was preparing this, but I was relieved that that study in 2006 wasn't the first, so that I was not responsible for the birth of this industry in any way. There was actually one three years before that, on concussions in the National Hockey League, which looked at injury reports published in The Hockey News for 16 seasons and then reported on that.

That systematic review has a nice appendix, and that's where this Table A-1 is from. It allowed me to pick out what has happened in AJSM between 2000 and 2023. Again, there's that one study in 2006 that I introduced you to, then a bunch of studies between 2011 and 2019, about 25 of them, and there haven't been any over the last few years. One of the questions I have is, well, why is that? I think one of the reasons was this article on the validity of research based on public data in sports medicine. This again was Dr. Matava and a group of his co-authors, not all from his institution. What it showed was that the publications vary in the percentage of actual injuries identified through these PODs, the public data, with a mean of 66 percent and a range from 31 percent to 90 percent. These POD studies over-represent injuries in skill-position athletes when compared with offensive and defensive linemen, and POD studies have the potential to overestimate the severity of injury by reporting only those injuries resulting in lost time or surgery. There are some other reasons, maybe, for the decline of POD publications in AJSM over the last few years. Each of the four major professional sports leagues in the U.S., as well as the NCAA, uses a standardized electronic injury surveillance system.
These involve ongoing, standardized collection of injury data using standardized definitions, and the recent publications using them, especially with respect to injury data, are higher quality. So I think these POD publications require critical review, especially with respect to the injury data, because that's usually what's problematic. The questions to ask are: Is a specific injury accurately captured by the media report? In that ACL injury example, maybe some of the ACL data was, because those injuries tend to be a bit binary anyway, like 0s or 1s, although you certainly wouldn't capture the concomitant injuries. But something like a cartilage injury would be really hard to capture from a media report; the location of the cartilage injury could vary, as could the size and the bony involvement. Is the sampling sufficient? You don't have to measure the height of everyone in the world to show that people in the Netherlands are taller than people in Indonesia; you would just need to sample some people from each country, or sample some of the people with ACL injuries in my example. I know I didn't have all of the ACL injuries, but I didn't need all of them to look at that outcome metric. And then always ask: Is the population appropriate for this sort of work?

The outcomes data is usually all right, but some of the questions to ask are: Is reliability presented? Is responsiveness demonstrated? If performance is high, then with injury it should go down, and with treatment it should come back up a bit. Is it responsive? Is it valid? Does it correlate with Pro Bowl selection or some other measure of doing well? This kind of confirms that even the proprietary database studies from the leagues, shown in orange here, will use the publicly available sport-specific performance metrics as outcome measures. It's really the injury data that's in question.

With respect to the specific question about automatic rejection or desk rejection of the POD manuscripts, I feel like sometimes these present novel ideas, and the POD researchers have the freedom to study those ideas. It's something that we can discuss, of course. I have thought about the Supreme Court justice who said, in a case involving potentially dangerous teaching, that the remedy to be applied is more speech, not enforced silence. I think encouraging more articles from the leagues, with their high-quality data, is important, as long as their study designs are not susceptible to bias, as I'll go over on the next slide, and I don't think we should silence this extramural research.

I do think we have an editorial responsibility regarding studies from professional sports leagues. These are really studies of occupational health hazards, and unfortunately, in public health there is a rich tradition of manipulating the evidence and analysis of occupational health hazards in order to maintain favorable conditions for industry. These are $5 to $20 billion industries. Therefore, despite the high-quality data, manuscripts from the leagues also require critical review, just like the POD manuscripts. There are some concerns for bias I touched on in an editorial on bias, but the healthy worker effect is really amplified in these professional sports studies. We always have to stay vigilant and make sure the study design, the data analysis, and the data presentation don't reflect a bias toward minimizing the apparent impact of injury. There are some articles that specifically address some of that risk.
So I was wondering if you guys have any questions about this, or any ideas about how to look at these studies in the future.

...is we want the best information out. So I agree that performance metrics are performance metrics. I mean, that data is clear, it's unbiased, it's numerical; there's nothing subjective about it. The leagues' complaint is, as you know, about the objective nature of it. And how these things work is that we as officers go to an individual league and say, we want to submit a study. Then Major League Baseball, or Gary Green, or Allen Sills from the NFL, they present it to their counterparts in the participating union. The union has right of first refusal on everything, and the union's perspective is that any study that would impact the financial ability of their players, they will outright reject. So we have to sort of work within that. Now, I do agree that just an outright ban is wrong, because I think if we had an outright ban on silicosis studies in miners, we would say, well, probably, we don't know; the miners are biased because they want a bigger judgment, and the mines don't want to give up much. So I think it's our duty to stand in the middle and be honest arbiters, and to point out the critical flaws that we can encounter as editors or reviewers to sort of help make those decisions.

Yeah, I agree. Thank you. I think the most important paragraph in a paper is the limitations paragraph. Sometimes it needs to be a subsection, several paragraphs. If the authors are mindful and acknowledge the limitations in drawing inferences from their observations, then I'm OK. I think a lot of times they miss that; many authors have an agenda. And sometimes you may have layers of public data, such as online injury reports, and then you have a video analysis that purports to talk about the mechanisms of injury. But those mechanisms of injury are obtained from videotapes, which are more likely to exist for highly paid, famous individuals. So it's important for those authors to go back and validate that the video analyses are representative, in all recognizable ways, of the underlying sample that they're going to draw inferences on. So for me, it's the... Yeah.

What I've seen in studies across my desk... Obviously, the writers of that article don't make a mistake in preparing for the election. I'm going to ask you a couple of questions, and then I'm going to turn it over to you. Now, back to the pro side. Paul's data, it's like 70% of injuries; I think they captured 70%, if I remember correctly. He tended to miss the lower-profile players, not the new ones, so I think we're better than this. I agree, it's a more nuanced issue; you don't just have to reject them all, but frankly, when I read those two recently from Paul, I said to myself, why aren't we publishing these? In fact, Russ and I talked; he walked in and said, why aren't we publishing these? So, I don't know, I would come down on the side of saying there are good points, especially if you report on the epidemiology of injuries in a league with the publicly available data; you're going to be reporting on it suboptimally. It's sort of like the PearlDiver data: you have a distance from it, and you just can't have the precision that you need. But there are some studies that can be done in novel ways that might be worth entertaining. We'll just have to kind of see if they come.
There hasn't been one in the last few years.

Well, thanks to everyone. I know we didn't quite get through the whole agenda, but we're running late here. I just wanted to bring up that we had an excellent presentation by Dr. Landy on AI and machine learning evaluation and studies, and there were a lot of questions about what the journal policy was on this and that, which we don't really have a policy on at this point. It's sort of a work in progress, and it made me think that maybe we should develop a working group within the editorial board to hammer out a policy on some of these AI-related issues. I'll probably be tapping Dr. Landy to be involved in it, but not by himself. So if there are others of you in the audience, or other members of the editorial board who couldn't be here this afternoon, who would be interested in being part of a working group on AI, please write me and we'll get it going, because it's a complicated issue with many sub-issues, and we're not going to decide it on the spur of the moment.

Having said that, I've neglected to point out that Scott Rodeo, our associate editor for biology and translational research, is there in the back. And you're all invited to our dinner tonight, which is usually very celebratory; we have great food and we give out awards. It's a time to really relax and have a lot of fun. If you haven't RSVP'd, you're still welcome. Judy Connors is sitting right here; just tell her and she'll add you to the group for tonight. We're quite flexible. It's in the Congress room here in the hotel at 7:30. We'll have cocktails and great food and awards and lots of friendship. So please come.

And again, thanks for everything. If there are no objections, I'm going to bring the meeting to a close so you can go to your other commitments. Thank you.
Video Summary
The video transcript discusses the challenges of using publicly obtained data in sports medicine research. The speaker shares his personal experience with a study on ACL injuries and the difficulty he encountered in obtaining injury data from the NFL. He highlights the limitations of publicly available injury reports, which may not provide reliable or comprehensive data. The speaker references a recent systematic review on the use of publicly obtained data in sports medicine research, which emphasizes the need for critical review and highlights potential biases in the data. He also notes the standardized electronic injury surveillance systems used by professional sports leagues, which collect higher-quality injury data. The speaker encourages the publication of studies using high-quality league data while cautioning against bias and manipulation, and suggests that limitations and potential biases should be acknowledged in research articles. The discussion closes with plans to develop a policy on AI-related issues at the journal and an invitation to a dinner event.
Asset Caption
James Carey, MD, MPH
Keywords
publicly obtained data
sports medicine research
ACL injuries
injury data
limitations