2021 AOSSM-AANA Combined Annual Meeting Recordings
How to Review a Delphi Consensus Statement
Video Transcription
I'd like to just acknowledge two of my colleagues in London, Ontario: Dr. Dianne Bryant, a clinical epidemiologist who's a co-PI on many of my clinical studies, and a PhD student who's now actually a PhD, Goris Nazari, who did a lot of the work to prepare for this presentation, which will become evident as we go through. My disclosures are not relevant for this particular talk, but my real disclosure is that I have been involved in a number of consensus statements, and based on the work that I've done to prepare this, I now realize that the quality of those publications probably could have been a little bit better. So hopefully, with what we've learned from this, it'll make life easier not only for those of us that are doing Delphi-type papers, but also for those of us that review them.

From the point of view of ancient history, Delphi was an ancient town in central Greece, and of course there was an oracle, a person through whom a deity was believed to speak. People of the surrounding communities relied on the Delphic oracle's skills of interpretation and foresight. The more recent history associated with the Delphi process really started in the 1950s, when the RAND Corporation began utilizing the technique to forecast the impact of technology on warfare. In medicine, from the 1960s onwards, it was used as a method to try and enhance effective decision-making for treatment protocols.

So what is the purpose of Delphi methodology as we know it? It seeks experts' opinions in an iterative, planned fashion to develop consensus on best practice. It's often associated with topics that go beyond existing knowledge: we may have knowledge gaps, but we don't really have the literature, the clear scientific evidence, to guide our practice, and therefore we need to utilize some of our experts to help us see the wood for the trees. It also provides a means of dealing with conflicting scientific evidence. There may be some evidence out there, but what is important to us, and how do we interpret it and utilize it in our patient population?

This is probably one of the consensus statements I've been involved in in the last number of years where there was significant controversy in the area, and we tried to bring a group of people together to gain consensus, understand what has been done, and also identify knowledge gaps that we can then work towards in future research.

Now, the state of the evidence: we're trying to achieve consensus or dissensus on the topic, and I can tell you, having chaired the sessions that I did on that particular statement, there were a number of bears, not just two, fighting a lot in the auditorium, so it's an interesting process to go through. We want to use appropriate methodology to address certain research objectives, and I think the important take-home is that this is not a replacement for rigorously conducted comparative studies. We need to do the appropriate research properly. One thing that we're seeing in orthopedics, and particularly in sports medicine, is that it's difficult in many ways to really push outcomes from, say, 85 up to 90% without doing really large-scale multi-center trials, and that is becoming more and more difficult.
Or maybe there are just questions where we can't do comparative studies for one reason or another, whether it's a lack of equipoise or other reasons, and this is why the Delphi process is coming more and more into our armamentarium. Of course, we don't necessarily always need to attain consensus or dissensus; we're not necessarily looking for the correct answer all the time, and one thing that we've certainly learned is that shared ignorance does not equal wisdom. So we can use these processes to real benefit.

Why are we interested in this today? There is an increasing number of Delphi consensus papers appearing in the orthopedic literature, and there really is very little guidance on what constitutes a good paper. That is something we found as we started delving into this a little further, and to Michael's previous point, what we decided was to try and put a little bit of science to this. Goris then did a systematic review: we searched the 20 top orthopedic journals over the previous 10 years just to find out how many papers had been done, and we found that 88 Delphi consensus papers had been published in that 10-year period.

The question then becomes: how do you assess the methodological quality? We love reporting tools; reporting tools are very helpful to us as reviewers. There is one, the CREDES tool, that was developed from a palliative care study. It's not very orthopedic-specific, and there are many areas where it could be improved upon. And there was a systematic review in the Journal of Clinical Epidemiology which stated that there is a need to improve the reporting of Delphi studies along the lines of CONSORT-like guidance: CONSORT, PRISMA, all these different checklists. So that's what we thought: we'll use the year of the pandemic, when we weren't able to give this presentation, to try and see if we can develop a reporting guideline.

We did the systematic review, and there were lots of issues in the orthopedic literature; I don't think that's much of a surprise to us. There were many issues with many of these papers, including some of my own. So there was a clear need for a reporting guideline. Just to highlight some existing guidelines: CONSORT really helps us as researchers to make sure, as we go through the process of doing a study, that we report the information in an appropriate manner, and this is what we're looking for in something that would help us with a Delphi study.

So the purpose of this particular study was to propose a Delphi study reporting guideline. We used a number of reference materials: the group technique for program planning, the RAND Corporation Delphi technique, and the COSMIN initiative, as well as other reporting guidelines such as PRISMA and CONSORT, and the papers from the 20 journals that we had searched. And Goris came up with this discipline statement, which I'm just going to run through. We're actually in the process now of getting it published; we submitted it to the Journal of Clinical Epidemiology, it's going through a second round of review, and we've actually had to do a Delphi consensus on the reporting guideline for the consensus. We're at the third round of doing that, so hopefully we'll get this approved in the coming months.

So the first thing you want to look for is the title. Does the title actually state that this is a Delphi study?
And I think that goes without saying. The abstract, pretty simply, should be structured, including objectives, methods, results, and conclusions; these are all pretty standard.

It's important that we state and indicate the context, essentially highlighting the opposing, conflicting, or controversial findings and indicating the knowledge gaps: really, the purpose of why we need to do this particular consensus, to try and move things forward, to move the needle. The reasons for using Delphi methodology may be that there is just a lack of published guidance and no evidence from gold-standard comparative studies such as RCTs, but we still have conflicting issues that we need to resolve to best treat our patient population. The objectives need to be stated clearly, with an explicit statement of the questions being addressed and, obviously, of how the answers to those questions will influence decision-making once the study is completed.

I don't think we really do this enough, particularly in the Delphi world, and it's obviously a little bit more work, but having a protocol published prior to doing the Delphi consensus is important, because you're stating very clearly a priori what your consensus threshold is. It's very easy to do a study and, if you can't reach consensus, change your goalposts until you get it. We really want to try and avoid that, so we state this a priori.

Diverse panel selection is becoming more and more of an issue going forward. We want to make sure that these are diverse panels, that they include all members of the population with adequate expertise in the area, and as such, it should be stated how the panel members were selected. So we need a diverse professional background and, again, a process permitting anonymous participation.

Then there's the initial questionnaire development, and this is really where we get into the nitty-gritty of what a Delphi process is. The first round of your Delphi is developing your questionnaire, and essentially we need to make sure that these papers clearly state how the questionnaire that was administered in the first round was developed. After that, we refine the questionnaire: the first step is submitting the questionnaire, and then refining those questions to get consensus on what we're going to ask our experts. So the validation of that questionnaire, and how we actually achieve it, is incredibly important.

The next thing is the second round: administering the questionnaire to our expert panel members and having them rate the items, which you can obviously do via numerical scales or Likert scales, depending on the methodology you develop up front. So really, the difference between your first and second rounds is that the first round develops the questionnaire and the second round ranks the items you've included within it. And then we come to the third round, where we're now looking to see if we can get consensus on these particular topics.
So the third round involves re-voting, ranking, and making sure that you state the level of agreement as you go, and those rounds can be repeated until we reach a state of consensus or, of course, dissensus. We want to make sure that the paper states what constituted consensus in the Delphi study. Again, this refers to the panelists, and again, it should be defined a priori, ideally with quantitative thresholds for what counts as a consensus statement. Most of the time we're looking at anything between 67 and 80 percent, and I think much over 80 percent is probably unreasonable; the sketch below shows how such a threshold gets applied to a round of votes.

Steering committees are a great idea. They help refine the study questionnaire, aggregate and analyze the expert panel responses, and summarize votes and feedback for the following rounds, so we want the paper to describe the role of the steering committee.

It's also helpful, for the purposes of a paper, to have an expert panel flow diagram, really just highlighting the number of experts that were involved, the responses that were given, and whether any dropped out or withdrew. In an ideal world, you want the same experts completing every round throughout the process. And then the panel characteristics should be presented in detail, really showing that these people have the appropriate expertise to guide our treatment.

In terms of the results, we need to present them in a clear and informative manner. This really helps us determine what level of consensus was achieved and ultimately summarize the guidance on best practices that was developed. As an example, back to that anterolateral complex consensus: at the end of that, we were able to provide the 13 statements on which we achieved consensus, the information that was required, the level of consensus we achieved for each statement, and those statements that did not make it through. The summary of the findings is then often contrasted, in the discussion, with earlier versions, if they exist.

It's important, as in any study, to discuss limitations, and there are limitations to this technique. It may be that we have not engaged an appropriate number, or the appropriate mix of backgrounds, of experts. We may have low response rates or high attrition rates. And the lack of a study protocol beforehand does leave the option of changing your goalposts, as I said. All of these things should be addressed in the limitations. Then, of course, funding and conflicts of interest need to be highlighted very clearly, which I'm sure is a big part within the journals, along with the acknowledgement of all those who took part.

So, this is the discipline statement. I hope it will get published in the Journal of Clinical Epidemiology, and we hope it will at least give reviewers a blueprint when faced with the many studies coming forward, to help us determine what constitutes a good process. Just in summary: the Delphi process seeks expert opinions in an iterative, planned fashion to develop consensus on best practice; there really is a lack of reporting guidelines for these studies; and the discipline checklist is a 22-item evidence-based reporting guideline that I hope, with time, we'll be able to utilize for future reviews.
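To make that a priori threshold concrete, here is a minimal sketch in Python of how a steering committee might tally one round of panel votes. Everything in it is an illustrative assumption rather than anything from the talk or the checklist itself: a hypothetical 10-member panel, a 5-point Likert scale on which ratings of 4 or 5 count as agreement, and an 80% threshold taken from the top of the 67 to 80 percent range mentioned above.

```python
# Hypothetical Delphi round tally. Panel size, Likert convention, and the
# 80% threshold are all assumptions for illustration only.
from typing import Dict, List

def agreement_level(ratings: List[int], agree_min: int = 4) -> float:
    """Proportion of panelists rating the item at or above `agree_min`."""
    return sum(r >= agree_min for r in ratings) / len(ratings)

def classify(ratings: List[int], threshold: float = 0.80) -> str:
    """Compare observed agreement against the pre-registered threshold."""
    level = agreement_level(ratings)
    verdict = "consensus reached" if level >= threshold else "re-vote next round"
    return f"{level:.0%} agreement: {verdict}"

# One round of 5-point Likert votes from a hypothetical 10-member panel.
round_votes: Dict[str, List[int]] = {
    "Statement 1": [5, 4, 4, 5, 4, 5, 4, 5, 4, 4],  # 100% agreement
    "Statement 2": [4, 4, 5, 4, 4, 3, 4, 5, 4, 2],  # 80%, exactly at threshold
    "Statement 3": [2, 3, 4, 2, 1, 3, 2, 4, 2, 3],  # 20%, no consensus
}

for statement, votes in round_votes.items():
    print(statement, "->", classify(votes))
```

Under these assumptions, items that clear the threshold are locked in as consensus statements, while the rest are carried forward, together with summarized feedback, into the next round of re-voting.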
We've got some references; I can make these available to you afterwards, because there's a lot of information in there. Thank you very much for your attention. These are my colleagues, Dianne Bryant and Goris Nazari. Please do reach out to us if you have any questions or comments, and we'd be happy to take any questions now.

Alan, I have a question. Could you go into a little more detail on how you decide whether the panel is really expert enough to take the Delphi statement seriously?

Yeah, you nearly need to have a consensus panel to determine the consensus panel, right? It's tough, but if you look at the panels that have been involved, and certainly the consensus statements that I've been involved in, it's usually people who have been prominent in the literature and have done some research in the area, and if it's a clinical issue, people who have a practice that sees that type of pathology and treats that type of patient on a semi-regular basis. I think that's the important part when you're reviewing these papers: making sure that the authors have stated why those experts are involved. And I would say that in the papers I've been involved in, maybe we haven't done that well enough. Really, we should be much more explicit about why people are in there, why they were chosen, and why we think they are experts.

Liza? I think there are options for both. This year has taught us an awful lot about being able to do an awful lot more online. Equally, I have been involved with both in-person consensus discussions and reaching consensus remotely through that same iterative process. The process doesn't change; I would just argue that if you meet in person, you can do it a lot more quickly and efficiently than you maybe can over email or through various platforms such as Qualtrics. Some groups were doing the first and second rounds through a platform such as Qualtrics and then holding the last round in person, to really iron out that final bit and get over the edge to that consensus point, or dissensus. It really depends on your research question and the questions you're trying to answer: some are a little more straightforward, others require more discussion. It's an interesting process to go through, and I think those of us who have been involved in these learn so much even just by going through the process of trying to gain consensus on these issues.

Jim? Yeah. To that point, my understanding was that the proper method is anonymous, because otherwise the more persuasive people can bias the final consensus. And I also had a question, and it's one that comes up from time to time, and there are some answers, but I just wondered, in your approach, how do you distinguish who should be an author of the paper versus who should be acknowledged? I understand there are criteria from the International Committee of Medical Journal Editors, but for Delphi-type papers in particular, you can have a large panel, and then it can lead to a question. What are your thoughts?

I kind of want to turn that question back to the editor, as to what you want to accept. I mean, in many ways, in some of these sessions there's an awful lot of input that comes from the whole group.
And if they're getting involved with that and providing their input, in an ideal world I would like them to be a co-author, certainly if it's something that I'm running. Now, whether that's on the masthead or as part of a study group, which I think is what most of us probably do, and I'm seeing from the journals that more and more you're accepting that approach, I think that's a great way to acknowledge the work that a lot of these people are putting into these types of endeavors. But if you follow the ICMJE guidelines strictly, a lot of the time they maybe don't fulfill all of those specific criteria, so I think that's perhaps an issue for the journal editorial boards to work out, whether it's appropriate. Answering a question with a question. Yeah, I think the study group's a great option.

Keith? I really appreciate your talk. I've had the opportunity to review some of these Delphi papers and manuscripts, and I guess what I have for you is: is there a minimum number of rounds you have to do? Because I've reviewed some that were one round, 'this is what we decided,' one of those papers that you described, which I felt was inappropriate. What do you do with a paper where you don't think they followed a proper Delphi?

I think you've got to go back to the authors and say that you didn't go through the process, because the process is pretty clear: development of a questionnaire, discussion of that questionnaire, trying to develop consensus, and then refining it. And that's a minimum of three rounds. If you get there within three rounds, great; that's a nice, easy study. But it often takes multiple rounds, or you realize you're never going to get consensus on a particular question, and that's also fine. As a result, we'll see an awful lot of papers entitled 'blah, blah, blah: a modified Delphi study,' and that will sometimes be the case. But I think as a reviewer, what's really important is cutting through the details to make sure that they're not just doing this as an easy way of backing up their statement. Is it truly that you can't do that research, and why? It shouldn't be a substitute for rigorously controlled studies.

Go for it. Yeah, you can do that for sure from the point of view of how you present your data. But if you state a priori that your threshold is 80%, and you're looking to show how many people actually agreed, or how easily that 80% level was reached, then if you get 100% consensus you should definitely state that, because it's a much stronger recommendation. It's a bit like some of the best practice guidelines: if you have 100% consensus, that really helps us determine our best approach going forward on those individual questions.
Video Summary
In this video, the speaker discusses the Delphi methodology, which seeks expert opinions in an iterative, planned fashion to develop consensus on best practice. The speaker highlights the lack of reporting guidelines for Delphi studies and proposes a 22-item evidence-based reporting guideline called the "discipline statement". The guideline covers various aspects of the Delphi process, such as the title, abstract, context, objectives, questionnaire development and validation, panel selection, consensus determination, steering committees, results, limitations, funding, and conflicts of interest. The speaker emphasizes the importance of clearly stating the purpose and methodology of the Delphi study, as well as the need for diverse and appropriate selection of expert panel members, and suggests that a study protocol be published prior to conducting the Delphi consensus. The aim of the proposed reporting guideline is to provide reviewers with a blueprint for assessing the methodological quality of Delphi studies.
Asset Caption
Alan Getgood, MD, FRCS (Tr&Orth)
Keywords
Delphi methodology
reporting guidelines
discipline statement
consensus development
expert panel members
methodological quality