Crowdsourced ratings face three key barriers to providing transparency for interoperability and health IT products, according to a study published in the Journal of the American Medical Informatics Association (JAMIA).
From Yelp to Glassdoor, crowdsourced consumer ratings are a common and effective way to bring transparency to product quality. In the healthcare industry, however, crowdsourced ratings have yet to gain similar traction.
That is about to change as developers become subject to new health IT policies and regulations.
Following the 21st Century Cures Act, health IT developers will now collect consumer performance data. The law specifically calls for limits on gag clauses and on rules against sharing screenshots and videos of product performance.
Then there is the EHR reporting program, which asks EHR developers to submit data on how their health IT products function, informing new EHR certification requirements. Gathering feedback and information from stakeholders is the next phase of the program's development, and it allows ONC to prioritize patient safety.
“This situation aptly describes the current state of electronic health record (EHR) interoperability,” wrote Julia Adler-Milstein and Crishyashi Thao, lead study authors from the University of California, San Francisco. “Health systems, hospitals, physician practices, and other provider organizations experience high costs and inconsistent performance when they seek to connect their EHRs to share data.”
To help health organizations provide transparency and share their experiences with interoperability services from their EHR vendor or a third-party vendor, the Office of the National Coordinator for Health IT (ONC) and researchers from the University of California, San Francisco, teamed up to develop a crowdsourced rating site called InteropSelect.
When the site launched, it supported four core activities: buyers entering ratings, buyers viewing ratings, sellers viewing ratings, and buyers connecting with other buyers to learn more about products. Researchers decided to gather ratings of HL7 version 2 interfaces, the most commoditized interoperability service.
Although both ONC and the University of California, San Francisco offered incentives for reviews and worked to raise awareness through separate avenues, the website received only 12 reviews from nine reviewers over the first 15 months.
“While there is still the potential that InteropSelect will get used, particularly with a large-scale marketing and awareness effort (that was not included in the scope of the Cooperative Agreement), our experience suggests that future crowdsourced rating efforts in this domain will likely fail without solutions to 3 critical barriers,” wrote the study authors.
First, researchers said it is difficult to rate and review interoperability services due to customized implementations. Although the research team narrowed it down to an HL7 interface, many customers choose to customize the interface during and after implementation.
“Unless the industry moves toward more standardized implementation, the second-best option is to try to capture implementation decisions (“context”) and present them alongside the crowdsourced rating,” explained Adler-Milstein and Thao. “Without such contextual information, the individual rating and aggregation of ratings could be misleading.”
One customer told the researchers that they highly value post-implementation support. Another customer, however, said they rely on their own health IT team for post-implementation support.
Overall, this customization hurdle undermined the usefulness of consumer feedback.
Second, finding reviewers with appropriate credentials and knowledge was a difficult task, said the researchers.
Researchers sought reviewers with enough technical experience to speak to the implementation year and to pre- and post-implementation details, while also representing their health organization's broader perspective.
Even with qualified reviewers, researchers identified the infrequency of interoperability purchases as the next barrier. When a health organization is making a significant investment in an interoperability solution, it is likely gathering data on its own before making the purchase.
The group also learned that hospitals want incentives to take the time to write a detailed review.
“Hospitals are not good self-service customers, they need significant prompts,” an anonymous health organization said to the researchers. “We built [company’s name] in a way that was self-service but then had a person who would call in and check in with hospitals on a monthly basis. We would provide teasers to keep hospitals engaged and moving. It’s difficult to keep their focus.”
Customers may also fear that completing a review would violate vendor contract terms, such as a non-disclosure agreement.
“To minimize this constraint, ratings were anonymous, but given the close, ongoing nature of the provider-vendor relationship, many buyers did not feel comfortable providing a review,” explained the study authors.
Third, vendors typically won’t engage with crowdsourced sites until it impacts their sales.
Suppliers said they would engage with the website if their clients routinely used it, but until then, it would not be at the top of their priorities.
“Vendors are selling so they are going to put the time in to anything that helps them sell more or if they don’t put the time in, they are losing sales, then they will pay attention to it,” described an anonymous vendor.
“So, if whatever is on the site has no impact on their sales either way, I don’t know if they are going to put any time into it. As a seller, it’s going to matter if it’s driving sales one way or the other. That’s going to get plenty of attention, if the site will be a determinant in the market then sellers will be very engaged.”
Overall, Adler-Milstein and Thao said they will continue to learn more about how to boost the transparency of health IT purchasing through crowdsourced ratings.
“With thought to these issues, our experiences with InteropSelect offer new insights into the challenges of crowdsourcing ratings of interoperability services and reveals that crowdsourcing does not fit the current reality of health IT performance assessment,” Adler-Milstein and Thao concluded.