Essentials of the Successful Manuscript
Video Transcription
Thank you, Dr. Ryder and Dr. Lubowitz, for your introduction and your leadership in both of these prestigious journals. And I also want to thank everybody in this room for coming. As you know, both of these journals are peer-reviewed journals, and that's the critical ingredient that makes these journals what they are. Today I'm going to talk about the essentials of the successful manuscript. We're going to look at the organization of the manuscript, its critical parts, common pitfalls, tools that will help all of you improve manuscripts, and how to become a top reviewer. Again, I'd like to acknowledge Dr. Lubowitz, our Editor-in-Chief, and Dr. Brand, my cohort as Assistant Editor-in-Chief. Debra Vannoy is our Managing Editor; she is in the back of the room, and you'll meet her on your way out. I'd also like to acknowledge Dr. Scaglione, our Chairman of the Board of Trustees, and Dr. Poehling, who is here somewhere and who is a giant in the realm of arthroscopy. I'd also like to acknowledge the many of our Associate Editors, our top level of editors, who are in the room. They are an international group, and they continue to be an international group that represents the world. In addition, we have multiple editors for our social impact, including infographics, podcasts, social media, and visual abstracts. Arthroscopy is really a family of journals, a triad. Our mothership is the Arthroscopy journal; included with that is Arthroscopy Techniques, our video journal, which is peer-reviewed; and our most recent, Arthroscopy, Sports Medicine, and Rehabilitation, which we refer to as ASMR, is our open-access journal. Let's talk about the organization of the manuscript. In general, it's almost exactly the same no matter what kind of journal you're submitting to. It starts with the title and the abstract, then the introduction, which is short and sweet, and the methods and the results, which parallel each other in terms of data representation.
This is followed by the discussion, which ends with a precise conclusion, and then, of course, the references, tables, figures, and legends. The introduction should be brief. It should identify a very specific controversy that the authors intend to answer. The paragraph that ends the introduction should contain a purpose sentence that matches that of the abstract, followed by a hypothesis, and really nothing else. An example would be: does double-row rotator cuff repair improve outcome? The purpose may be to compare outcome after double-row versus single-row repair, and the hypothesis is whatever the authors think the answer will be before the study commences. The methods should include the time frame of the study and very specific inclusion and exclusion criteria, which we call the selection criteria. The methods should be step-by-step, reproducible, and presented like a cookbook. For basic science studies, the rationale for the experimental design should be given. This is all followed by the statistical methods, and remember that all the methods should be reflected in the results, and vice versa. Methods are critical; they're the most important section. Why is that? Because the fatal flaws are in the methods and cannot be fixed. Researchers really need to get advice before the study starts in a prospective study, and before data extraction in a retrospective study. Levels of evidence are important. Our most important type of clinical study is therapeutic, which is also the most common, but we also have diagnostic, prognostic, and economic studies. Our level I studies are randomized controlled trials, which are our best but our fewest, because of the amount of effort that needs to be put into them. Our comparative studies are level II and level III studies, depending on whether they are prospective or retrospective, respectively.
Our case series have no controls and are our level IV evidence, and expert opinion, of course, is our level V. Some studies do not have a level of evidence: our biomechanical, cadaveric, and basic science studies instead carry what's called clinical relevance. It's important to assess bias in clinical trials. Selection and allocation bias is caused by treating groups that have different prognoses; in other words, you're comparing apples and oranges. For example, you study meniscus repair with an ACL reconstruction in younger patients versus meniscectomy in older patients who didn't have an ACL reconstruction. You simply cannot compare them. To prevent selection bias, we have techniques like randomization, blinding, strict inclusion and exclusion criteria, and matched grouping. Detection and recording bias occurs when the method of outcome recording differs between the two groups. This can be prevented by not influencing the patients, for example by having forms completed in private, and by not influencing the physical exam: the operative surgeon should not perform the assessment. The patients, of course, may also want to please the surgeon, so you have to take steps to mitigate that. Reporting bias is addressed by selecting an appropriate and validated outcome measure, whether it be universal, which allows comparison to similar studies, such as PROMIS; condition-specific; a general health measure like quality of life; or surgeon-reported versus patient-reported. This just needs to be specified very specifically. Transfer and exclusion bias deals with patients lost to follow-up; in other words, you're selecting out the patients who don't do well. Our goal, of course, is 80% follow-up at two years, which is our gold standard. Performance bias occurs with heterogeneity in the comparison groups: who performs the procedure? There's really no right answer to this. It just needs to be considered and divulged in the methods and, of course, in the limitations.
Statistics. I'm going to give the big picture and common mistakes; I'm not going to go over all the specifics, because there's just too much to cover. Which statistical test should be used? There are too many possibilities, so writers should consult a statistician, and we have tools to help reviewers and authors select which statistical test may be best; I'll go over those a little later with our templates and checklists. Statistical significance is only a guide. Statistical significance cannot address clinical relevance, and with few patients, statistical significance can be fragile, meaning that if the outcomes of just a few patients change, the results can change quite readily. Clinical significance is vastly more important, so we are asking our authors to utilize measures such as the MCID, the minimal clinically important difference, which is what we call the floor, or the smallest change that can be detected. The PASS, the patient acceptable symptomatic state, is the feeling of being satisfied, and is kind of a middle ground. The SCB, or substantial clinical benefit, is really our ceiling, or the patient feeling substantially better. In addition, there's a newer term called maximal outcome improvement, which is a ratio-based calculation based on the patient's preoperative status. The one thing to know, because it's a common error, is that these thresholds, the MCID, PASS, and SCB, are patient-level, not group-level, metrics; you report the percentage of patients reaching those thresholds, not the average of the group. Confidence intervals are more important than p-values. They give a degree of certainty, the 95%, and you do not want them to overlap. Overlapping confidence intervals may indicate a lack of a significant difference, and that can be a problem.
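The patient-level point above can be made concrete with a minimal sketch. The score changes and the 10-point MCID threshold below are hypothetical illustrations, not data from the talk: the group mean is reported alongside the percentage of individual patients whose change reaches the threshold, and it is the latter that the journal asks authors to report.

```python
# Hypothetical example: score changes (post - pre) for 10 patients on some
# patient-reported outcome measure, with an assumed MCID threshold of 10 points.
score_changes = [2, 14, 11, -3, 25, 9, 12, 30, 4, 16]
MCID = 10

# Group-level metric: the mean change (not sufficient on its own).
mean_change = sum(score_changes) / len(score_changes)

# Patient-level metric: percentage of patients whose individual change
# reaches or exceeds the MCID threshold.
pct_reaching_mcid = 100 * sum(c >= MCID for c in score_changes) / len(score_changes)

print(f"Mean change: {mean_change:.1f}")             # 12.0
print(f"% reaching MCID: {pct_reaching_mcid:.0f}%")  # 60%
```

Note how the two metrics can tell different stories: the mean change of 12 points exceeds the MCID, yet only 60% of individual patients actually reached it.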
Multiple statistical testing is when authors utilize lots of outcome measures, and eventually the researcher may get lucky, and the results may not be reproducible. The solution, of course, is an a priori power analysis with a single primary outcome measure chosen in advance. A red flag is when the conclusion is that there is no difference; you have to check the power analysis. This can result from beta, or type II, error: with too few patients, too small a sample size yields not enough statistical power, and the results can be vastly wrong. Moving on to the results, they need to be organized in parallel with the methods. Everything in the methods must be reported in the results, and vice versa, and tables and figures should be utilized as much as possible so the data are easily seen. Tables should concisely report and summarize the results, group the data logically, label the columns clearly, and provide a standalone message, including the number of patients, the means, the confidence intervals, and appropriate abbreviations. This is really important now because of the use of social media; on Twitter or other venues, these tables need to stand alone and present a message. The discussion starts with a single sentence that states the main findings, and then an effort needs to be made to compare and contrast similar publications, especially trying to explain contrasting conclusions. The limitations wrap up the discussion and point out the weaknesses before the conclusion is given; consider all the biases we spoke about before. The conclusion should be specific and answer whether the hypothesis is supported or not. It should be based only on the specific data of the study and not extrapolated to historical controls, and the conclusion should be word-for-word identical in the text and the abstract.
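The a priori power analysis mentioned above can be sketched with the standard normal-approximation formula for a two-sample comparison of means. This is an approximation under assumed conventions (two-sided alpha of 0.05, power of 0.80, equal group sizes and a common standard deviation), with hypothetical example numbers; it is not a method prescribed in the talk, and a statistician should still be consulted.

```python
import math

Z_ALPHA = 1.96  # z for two-sided alpha = 0.05
Z_BETA = 0.84   # z for power = 0.80 (beta = 0.20, i.e. type II error rate)

def n_per_group(sigma: float, delta: float) -> int:
    """Approximate patients needed per group to detect a true mean
    difference delta, given a common standard deviation sigma."""
    n = 2 * ((Z_ALPHA + Z_BETA) ** 2) * (sigma / delta) ** 2
    return math.ceil(n)

# Hypothetical example: to detect a 10-point difference when the
# outcome's standard deviation is 15 points:
print(n_per_group(sigma=15, delta=10))  # 36 per group
```

A "no difference" conclusion from a study with far fewer patients than such a calculation suggests is exactly the beta-error red flag described above.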
With the figures, a picture of course paints a thousand words, and labels and arrows are essential. Figures should include the side, the patient and viewing position, the viewing portal, the imaging view, and appropriate labels and arrows, with a detailed description. A figure really needs to be standalone, as we discussed for social media. And finally, and most importantly, the title. The goals of a journal article title are to inform the reader and attract attention. The attributes of a successful journal article title include that, first and foremost, it has to be accurate; it should be short without sacrificing keyword searchability; and it should describe the results and conclusions of the study without overgeneralizing the study's findings. We try to avoid misleading statements and representations, of course, but things like hyphens, proper nouns, and locations should also be avoided, and in general, avoid "new" and "novel"; let the readers decide if it's new or novel. The abstract parallels the entire paper, with the purpose, the methods, the results, and the conclusions. For clinical studies, we add the level of evidence, as I mentioned before, and basic science studies add clinical relevance. What is hot now, and really has been for the last five or so years, are systematic reviews and meta-analyses, and we'll hear much more about this in a few minutes. Systematic reviews require strict inclusion and exclusion criteria to minimize bias. Systematic reviews are a qualitative synthesis of the data, because the included studies have a low level of evidence, a high risk of bias, and are heterogeneous. It is critical to avoid improper pooling and summary estimates in the forest plots. Meta-analyses are a quantitative synthesis of the data, and in general, we believe they should be limited to randomized controlled trials with low heterogeneity as determined by I² analysis, low risk of bias, and a high level of evidence.
Case reports are out, with rare exceptions for extremely new complications, and have really been supplanted by Arthroscopy Techniques, which is our video-oriented, peer-reviewed, open-access, PubMed-indexed online journal. Moving on to tools to help improve scientific research, we have developed templates and checklists for both original articles and systematic reviews and meta-analyses, and we have a laundry list of research pearls dealing with levels of evidence, biomechanics, titles, clinically significant differences, Delphi methods, and the perils of pooling, many of which the associate editors in this room have authored. This is an example of what our original article template looks like, and the checklists are organized in the form of an actual manuscript, so reviewers and authors can both access them to help improve the quality of their manuscripts. If you go to our website and open the horizontal bars on the upper right, you can go to Reviewers, and they'll be there for you to select and download at your leisure. In addition, you will go home with them today. Also, under the Collections tab, there are research pearls on the far right side, with all the articles at your access. How to become a top reviewer: we appreciate confidential comments to the editors, which help us select the articles that are accepted and identify what needs to be changed to make a better manuscript, and then a line-by-line review for the authors. We understand that reviewers are not copy editors, but copy editors are not surgeons or scientists; copy editors correct spelling and grammar, and the reviewers, as professionals, review the confusing science. Ours is a merit-based promotion: the best reviewers start young, the best reviewers join our editorial board, and the best editorial board members become associate editors. Three AMA Category 1 credits can be earned for each review.
Please join us on our social media: follow us on Twitter, like us on Facebook, and join us on LinkedIn and YouTube, where YouTube has every one of our Arthroscopy Techniques videos. Sign up to be a reviewer. You will greatly enjoy it and learn from it; I signed up when I was a fellow, I'm still doing this, and it's absolutely rewarding. If you go to editorialmanager.com/ARTH, that will allow you to sign up, or Debra Vannoy, who's in the back of the room, has her podium right outside the registration office, and you can sign up with her. She will also give you a drive, which will include our reviewer templates, our checklists, and our research pearls, so that you can review those at your leisure. Please join us at arthroscopyjournal.org, and thank you so much for being a part of our journal. Thank you. If there are any questions, I'm happy to entertain them, or I'll be available afterwards. Yes? This is one of our emeritus associate editors, Merrick Wetzler, and he is just pointing out, if you didn't hear him, that as you become a good reviewer, you become a better researcher. Absolutely. Thank you. [Audience question:] We get a lot of different qualities, some are spectacular, and some are awful, but is there any guideline for how to review an awful paper; in other words, do you feel bad doing... Thank you, Dr. Weber. So Dr. Weber's question was: there is a wide range of different articles that are submitted, and what do you do if one paper is on the lower end of that scale? Well, your confidential comments are paramount, but we also think that you should do a line-by-line review as best as possible. The reason that may be important is that we are now inviting some of these papers, when they can be improved, to perhaps be shuttled to our sister journal, ASMR. And if we can improve the quality, we can educate our readers and our authors, and there's really no downside to that.
And so it does require a bit more effort. I understand, because I was a reviewer, and I still do the same thing as an Assistant Editor-in-Chief: I still go kind of line-by-line and figure out, well, how can they make this better? So we understand that it can be very challenging when the paper is a little bit subpar, and sometimes it may not be salvageable, but I always think there's something to learn, so a little bit of information can go a long way. Any other questions? Thank you very much. Thank you.
Video Summary
In this video, the speaker thanks Dr. Ryder and Dr. Lubowitz for their leadership in prestigious journals. They highlight the importance of peer-reviewed journals and discuss the essentials of a successful manuscript. The organization of a manuscript is described, including the title, abstract, introduction, methods, results, discussion, and references. The speaker emphasizes the importance of a concise and specific introduction, as well as the thoroughness and reproducibility of the methods. They also discuss levels of evidence and the assessment of bias in clinical trials. Statistical methods and common mistakes are briefly covered, with an emphasis on the importance of clinical significance over statistical significance. The speaker provides guidance on organizing and presenting results using tables and figures. They also discuss the structure and attributes of a successful journal article title. The importance of abstracts, systematic reviews, and meta-analyses is highlighted, along with tools available to improve scientific research, such as templates and checklists. The speaker encourages viewers to become top reviewers and offers resources and information on how to join the editorial board. The video concludes with gratitude for the viewers' participation in the journal.
Asset Caption
Michael Rossi, MD
Keywords
peer-reviewed journals
manuscript essentials
organization of a manuscript
levels of evidence
statistical methods
clinical significance