How to Review a Delphi Consensus Statement
Video Transcription
to just acknowledge two of my colleagues in London, Ontario: Dr. Dianne Bryant, who's a co-PI in many of my clinical studies and a clinical epidemiologist, and a PhD student, now actually a PhD, Goris Nazari, who did a lot of the work to prepare for this presentation, which will become evident as we go through. My disclosures are not relevant for this particular talk, but my real disclosure is that I have been involved in a number of consensus statements, and based on the work that I've done to prepare this, I now realize that the quality of those publications could probably have been a little bit better. So hopefully, with what we've learned from this, it'll make life easier not only for those of us that are doing Delphi-type papers, but also for those of us that review them.

In terms of ancient history, Delphi was an ancient town in central Greece, and of course there was an oracle: a person through whom a deity was believed to speak. People of the surrounding communities relied on the Delphic oracle's skills of interpretation and foresight. The more recent history associated with the Delphi process really started in the 1950s, when the RAND Corporation began using the technique to forecast the impact of technology on warfare. In medicine, from the 1960s onwards, it has been used as a method to enhance effective decision-making for treatment protocols.

So what is the purpose of Delphi methodology as we know it? It seeks experts' opinions in an iterative, planned fashion to develop consensus on best practice. It's often applied to topics that go beyond existing knowledge: we have knowledge gaps, we don't have the literature or clear scientific evidence to guide our practice, and therefore we need our experts to help us see the wood for the trees. It also provides a means of dealing with conflicting scientific evidence: there may be some evidence out there, but what is important to us, and how do we interpret it and apply it in our patient population?

This is probably the one consensus statement I've been involved in over the last number of years where there was significant controversy in the area, and we tried to bring a group of people together to gain consensus, understand what had been done, and also identify knowledge gaps that we could work from towards future research. Now, on the state of the evidence: we're trying to achieve consensus or dissensus on a topic, and I can tell you, having chaired the sessions that I did on that particular statement, there were a number of bears, not just two, fighting in the auditorium, so it's an interesting process to go through. We want to use appropriate methodology to address specific research objectives, and I think the important take-home is that a Delphi is not a replacement for rigorously conducted comparative studies; we still need to do the appropriate research properly. One thing we're seeing in orthopedics, particularly in sport medicine, is that it's difficult to push outcomes from, say, 85% up to 90% without doing really large-scale multicenter trials, and that is becoming more and more difficult.
Or maybe there are just questions where we can't do comparative studies for one reason or another, whether it's equipoise or other reasons, and this is why the Delphi process is coming more and more into our armamentarium. And, of course, we don't always need to attain consensus or dissensus; we're not necessarily looking for the correct answer all the time, and one thing that we've certainly learned is that shared ignorance does not equal wisdom, so we need to use these processes carefully.

Why are we interested in this today? Well, certainly there is an increasing number of Delphi consensus papers appearing in the orthopedic literature, and there really is very little guidance on what constitutes a good paper; that is something we found as we started delving into this a little further. And, you know, to Michael's previous point, what we decided was to try and put a little bit of science to this. Goris then did a systematic review: we searched the top 20 orthopedic journals over the previous 10 years just to find out how many papers had been done, and we found 88 Delphi consensus papers in that 10-year period.

The question then becomes: how do you assess the methodological quality? We love reporting tools; reporting tools are very helpful to us as reviewers. There is one, the CREDES tool, that was developed from a palliative care study. It's not very orthopedic-specific, and there are many areas where it could be improved upon. And there was a systematic review in the Journal of Clinical Epidemiology which stated that there was a need to improve the reporting of Delphi studies along the lines of CONSORT, PRISMA, and all these other checklists. So that's what we thought: we'll use the year of the pandemic, when we haven't been able to give this presentation, to try and develop a reporting guideline. When we did the systematic review, there were lots of issues in the orthopedic literature, and I don't think that's much of a surprise to us. There were many issues with many of these papers, including some of my own, so there was a clear need for a reporting guideline.

Just to highlight some existing guidelines: CONSORT really helps us as researchers to make sure that, as we go through the process of doing a study, we report the information in an appropriate manner. And this is what we're looking for in something that would help us with a Delphi study. So the purpose of this particular study was to propose a Delphi study reporting guideline. We used a number of reference materials: the Group Techniques for Program Planning, the RAND Corporation Delphi technique, and the COSMIN initiative, as well as other reporting guidelines such as PRISMA and CONSORT, along with the papers from the 20 journals that we had searched. And Goris came up with this discipline statement, which I'm just going to run through. We're actually in the process now: we submitted this to the Journal of Clinical Epidemiology, it's going through a second round of review, and we've actually had to do a Delphi consensus on the reporting guideline of the consensus. We're at the third round of doing that, so hopefully we'll get this approved in the coming months.

So the first thing you want to look for is the title. Does the title actually state that this is a Delphi study?
And I think that goes without saying. The abstract: pretty simple. It should be a structured abstract including objectives, methods, results, and conclusions; these are all pretty standard. It's important that the paper states its context, essentially highlighting the opposing, conflicting, or controversial findings and indicating the knowledge gaps: the real purpose of why we need to do this particular consensus to move things forward, to move the needle. The reasons for using Delphi methodology may be that there is a lack of published guidance and no evidence from gold-standard comparative studies, RCTs, yet we still have conflicting issues that we need to resolve in order to best treat our patient population. The objectives need to be stated clearly, with explicit statement of the questions being addressed, and then how the answers to those questions will influence decision-making once the study is completed.

I don't think we really do this enough, particularly in the Delphi world, and maybe this will come about, although it's obviously a little more work: having a protocol published prior to doing the Delphi consensus is important, because you're stating very clearly a priori what your consensus threshold is. It's very easy to do a study and, if you can't reach consensus, to change your goalposts. We really want to avoid that, so we state this a priori.

Expert panel selection is becoming more and more of an issue going forward. We want to make sure that these are diverse panels, that they include all members of the population with adequate expertise in this area, and as such, it should be stated how the panel members were selected. So it needs to be a diverse professional background, and again, permitting anonymous participation.

The initial questionnaire development is really where it comes into the nitty-gritty of what a Delphi process is. The very first round of your Delphi is developing your questionnaire, and we need to make sure that papers clearly state how the questionnaire that would be administered in the first round was developed. After that, we refine the questionnaire: the first process is submitting the questionnaire and then refining those questions until we get consensus on what questions we're going to ask our experts. The validation of that questionnaire, and how we actually reach it, is incredibly important.

Then there's the second round: administering that questionnaire to our expert panel members so they can rate the particular questions. You can do that via numerical scales or Likert scales; it really depends on the methodology that you develop up front. So the difference between your first and second rounds is that the first round is developing the questionnaire, and the second round is ranking the items that you've included within your questionnaire. And then we come to the third round, where we're looking to see if we can get consensus on these particular topics.
So the third round involves re-voting and ranking, making sure that you state the level of agreement as you go, and those rounds can be repeated until we get to a state of consensus or, of course, dissensus. We want to make sure that the paper states what constituted consensus in the Delphi study. Again, this refers to the panelists, and again, we should define this a priori, ideally with quantitative thresholds for what will count as a consensus statement. Most of the time we're looking at anything between 67% and 80%; I think much over 80% is probably unreasonable.

Steering committees are a great idea. They help refine the study questionnaire, aggregate and analyze the expert panel responses, and summarize votes and feedback for the following rounds. Again, the paper should provide information describing the role of the steering committee. It's also helpful, for the purposes of a paper, to have an expert panel flow diagram. That's really just highlighting the number of experts that were involved, the responses that they gave, and whether or not any dropped out or withdrew. In an ideal world, you want the same experts participating in the same rounds throughout the process. And then the panel characteristics should be presented in detail, showing that these people have the appropriate expertise to guide our treatment.

In terms of the results, we need to present them in a clear and informative manner; this really helps us determine what level of consensus was achieved, and then ultimately summarize the guidance on best practice that was developed. As an example, back to that anterolateral complex consensus: at the end of it, we were able to provide the 13 statements that we achieved consensus on, the information that was required, the level of consensus that we achieved for each statement, and those statements that did not make it through. The summary of the findings in the discussion will then often be contrasted with earlier versions, if any exist.

It's important, as in any study, to discuss limitations, and there are limitations to this technique. It may be that we have not engaged an appropriate number of experts, or experts with the appropriate background. We may have low response rates or high attrition rates. And I think the lack of a study protocol published beforehand does leave open the option of changing your goalposts, as I said. All of these things should be addressed in the limitations. Then, of course, funding and conflicts of interest need to be highlighted very clearly, which I'm sure is a big part of what the journals look at, and of course the acknowledgement of all those who took part.

So this is the discipline statement. I hope that it will get published in the Journal of Clinical Epidemiology, and we hope that it will at least give a blueprint for reviewers faced with the many studies coming forward, to help us determine what constitutes a good process. Just in summary: the Delphi process seeks expert opinions in an iterative, planned fashion to develop consensus on best practice. There really is a lack of reporting guidelines for these studies. The discipline checklist is a 22-item, evidence-based reporting guideline, and I hope that with time we'll be able to utilize it for future reviews. We've got some references.
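[Editor's note: to make the consensus arithmetic described in the talk concrete, here is a minimal sketch in Python of how a steering committee might tally one voting round against an a priori threshold. The statements, the 9-point rating scale with 7 to 9 counted as agreement, and the 80% threshold are all hypothetical choices for illustration, not the scheme used in any of the studies discussed.]

```python
# Minimal sketch: checking per-statement consensus in one Delphi voting round.
# All statements, ratings, and thresholds below are hypothetical examples,
# not data or criteria from the studies discussed in the talk.

AGREE_THRESHOLD = 0.80  # consensus threshold, stated a priori in the protocol

# Hypothetical panel ratings on a 9-point Likert scale; in this example
# scheme, a rating of 7-9 counts as agreement with the statement.
round_votes = {
    "Statement 1": [9, 8, 7, 9, 8, 6, 9, 7, 8, 9],
    "Statement 2": [3, 8, 5, 9, 4, 6, 7, 2, 8, 5],
}

def consensus_status(ratings, threshold=AGREE_THRESHOLD):
    """Return the proportion agreeing and whether it meets the threshold."""
    agree = sum(1 for rating in ratings if rating >= 7)
    proportion = agree / len(ratings)
    return proportion, proportion >= threshold

for statement, ratings in round_votes.items():
    proportion, reached = consensus_status(ratings)
    verdict = "consensus" if reached else "no consensus (re-vote or dissensus)"
    print(f"{statement}: {proportion:.0%} agreement -> {verdict}")
```

[For the hypothetical data above, Statement 1 reaches 90% agreement (consensus) and Statement 2 only 40% (carried to a further round or recorded as dissensus). Repeating the same tally each round, together with the panelist counts per round, is what feeds the expert panel flow diagram the talk recommends.]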
I can have this available to you afterwards, because there's a lot of information in there. And thank you very much for your attention. These are my colleagues, Dianne Bryant and Goris Nazari. Please do reach out to us if you have any questions or comments, and I'd be happy to take any questions. Thanks very much for your attention.

Thank you. Alan, I have a question. Could you go into a little more detail: how do you decide whether the panel is really expert enough to take the Delphi statement seriously?

Yeah, you almost need a consensus panel to determine the consensus panel, right? I mean, it's tough, but I think, you know, if you look at the panels that have been involved, certainly in the consensus statements that I've been involved in, it's usually people who have been prominent in the literature and have done some research in the area, or, if it's a clinical issue, people who have a practice that sees that type of pathology and treats that type of patient on a semi-regular basis. But I think that's the important part when you're reviewing these papers: making sure that the authors have stated why those experts were involved. And I would say that in the papers I've been involved in, maybe we haven't done that well enough. Really, we should be much more explicit about why people are on the panel, why they were chosen, and why we think they are experts. Thanks.

Liza. I think, and you said this, I just want to apologize for not understanding it, but so many of the authors I've found in these groups all have a practice, and I think that limits... Well, I guess the question really is: how much of it is opinion? Should the panel just vote, without any other input from other people, as you can do with an electronic survey purely to classify evidence, or should it be like what you did with the anterolateral complex consensus, a discussion that you chaired, with your opinion and some of the group's opinions as well? What do you think about blending those two practices, or should there be some kind of face-to-face discussion among the group?

I think there are options for both. This year has taught us an awful lot about being able to do much more on an online basis. But equally, I have been involved with both in-person consensus discussions and reaching consensus by going through that same iterative process remotely. The process doesn't change; I would just argue that if you meet in person, you can do it a lot more quickly and efficiently than you maybe can over email, for example, or through various platforms. You know, some groups have done the first and second rounds through an app-based program, such as Qualtrics, and then held the last round in person, to really iron out that final step and make sure that you get to that consensus point, or dissensus. And I think it really depends on your research question and the questions you're trying to answer: some are a little more straightforward, others require more discussion. It's an interesting process to go through, and I think those of us who have been involved in these learn so much just going through the process of trying to gain consensus on these issues.

Jim? Yeah, to that point, my understanding was that the proper method is anonymous, because otherwise the more persuasive people can bias the final consensus. Yes, that's right.
And I also had a question, and this is a question that comes up not only for Delphi, and there are some answers, but I just wondered about your approach: how do you distinguish who should be an author of the paper versus who should be acknowledged? I understand there are criteria from the International Committee of Medical Journal Editors, but for Delphi-type papers in particular, it seems like you can have a large panel, and that leads to the question. What are your thoughts?

I kind of want to throw the question back to the editor as to what you want to accept. I mean, in many ways, in some of these sessions there's an awful lot of input that comes from the whole group, and if they're getting involved and providing their input, in an ideal world I would like them to be co-authors; certainly, if it's something that I'm running, I'd like them to be co-authors. Now, whether that's on the masthead or as part of a study group, which I think is what most of us probably do, and I'm seeing from the journals that more and more you're accepting that approach, I think that's a great way to acknowledge the work that a lot of these people put into these types of endeavors. But if you follow the ICMJE guidelines strictly, a lot of the time they maybe don't fulfill all of those specific criteria. So I think that's maybe an issue for the journal editorial boards to work out what's appropriate. Answering a question with a question. Yeah, I think the study group's a great option.

Keith. I've seen some that are one round, "this is what we decided," and I've seen some that are two rounds rather than the three you described, which I felt was inappropriate. What do you do with a paper where you don't think they followed the method?

I think you've got to go back to the authors and say that they didn't go through that process. I mean, the process is pretty clear: development of a questionnaire, discussion of that questionnaire, trying to develop consensus, and then refining it, and that's a minimum of three rounds. If you get it within three rounds, that's great; that's a nice, easy study. But it often takes multiple rounds, or you realize you're never going to get consensus on a particular question, and that's also fine.

Right, right. And as a result, we'll see an awful lot of papers entitled blah, blah, blah, modified Delphi methods. Right, that's it, yeah. That will sometimes solve it. But I think as a reviewer, what's really important is cutting through the details to make sure that they're not just doing that as an easy way of backing up their statement. Is it truly that you can't do that research, and why? It should not be a substitute for rigorously conducted studies.

Go for it. I've got a question. There are also papers that talk about gradations of strength of consensus. You mentioned setting a threshold of 80% or 67% or what have you, but isn't there some value in the strength of consensus, unanimous, 90, 80, 70, and having the results presented in those bands?

Yeah, and you can certainly do that from the point of view of how you present your data. But if you state a priori that your threshold is 80%, then you're looking to find out how many people actually agreed, or how easy it was to reach that 80% level.
If you get 100% consensus, then you should definitely state that, because it's a much stronger recommendation. It's a bit like some of the best practice guidelines: if you have 100% consensus, that really helps us determine the best approach going forward for those individual questions.
Video Summary
The video discusses the Delphi methodology, which seeks expert opinions in an iterative, planned fashion to develop consensus on best practices. It explains the history and purpose of the Delphi process, which began in the 1950s and has since been used in medicine to enhance decision-making for treatment protocols. The video highlights the increasing number of Delphi consensus papers in the orthopedic literature and the lack of guidance on what constitutes a good paper. It introduces a proposed Delphi study reporting guideline, which aims to provide a blueprint for reviewers and improve the reporting of Delphi studies. The video covers various aspects of the guideline, including title, abstract, context, objectives, questionnaire development, validation, administration, consensus criteria, expert panel selection, results, limitations, and funding disclosures. It concludes by emphasizing the importance of the discipline statement and the hope that it will be published and used as a reference for future Delphi studies. The video credits Dr. Dianne Bryant and Goris Nazari for their contributions to the research and preparation of the presentation.
Asset Caption
Alan Getgood, MD, FRCS (Tr&Orth)
Keywords
Delphi methodology
consensus
Delphi process
medicine
treatment protocols
reporting guideline