Meeting Tier 1 of NICE standards for eHealth with clinical User Experience (UX) research

Craig Newman
9 min read · Jan 31, 2019

Utilising UX research and service development methodologies to validate eHealth technology as credible, relevant, acceptable and accessible.

Image thanks to: rawpixel (@rawpixel)

NICE have published their standards for the evidence required to guide the commissioning of eHealth technologies into NHS services. It's a welcome contribution, drawing back the curtain for innovators and commissioning bodies. It is also very succinctly written, providing a simple decision-tree approach to knowing what type of evidence is needed for what type of intervention.

Of particular relevance to the patients and clinicians who will use the technology is the requirement for all technology (no matter the function or risk level) to pass through Tier 1 of the evidence standards (the table is provided below, taken from the NICE paper). The tiers act as a pyramid of sorts, where the impact on patient care and the risk level increase in step with an increasing evidence requirement (moving from Tier 1 to Tier 3b).

Tier 1, the foundation for all eHealth solutions, is primarily focused on the value, user experience, usability and reliability of the proposed technology:

[Table: Tier 1 evidence standards. Source: NICE, 2018]

The table shows two columns next to each category: 'minimum evidence standard' and 'best practice standard'. The decision process between these is vague; the risk (related to the vulnerability of the users and the seriousness of the consequences if the technology fails) pushes the requirement towards 'best practice'. In reality, risk-averse commissioners may well lean towards the 'best practice' column, so if you work towards the worst-case scenario, you will be ready.

It strikes me that the NICE standards lack some essential clarity in defining what methods provide the 'evidence' around credibility, relevance, acceptability and engagement with accessibility issues. It seems left to the commissioners to make the judgement call. In reality, these commissioners are often pressured to focus on 'in-year financial savings' or 'efficacy data', rather than the less dramatic data on engagement, appetite and implementation. Ironically, it is the user experience and usability that often get the blame when eHealth fails to be taken up by clinics or patients.

Here, I propose an approach based on my interpretation of the standards, for development teams and those bringing their innovations to the NHS / UK and seeking to address Tier 1:

1. Credibility with UK health and social care professionals

This standard is intended to show that the DHT has a plausible mode of action and reflects current standard/best practice in the UK health and social care system, or provides an alternative to standard/best practice that is beneficial to users and the health and social care system.

Evidence may include a report signed by a named expert or experts, documenting their role in the design, development, testing or sign-off of the DHT.

I discuss the theme of identifying and articulating the 'plausible mode of action' in another article, where I refer to it as 'problem identification'. Representing national and regional priorities, and service development opportunities and barriers, in your business case is essential and usually quite accessible to a resourceful team. There are services available to support teams that feel lost, including local Academic Health Science Networks and eHealth UX services such as ours.

Expert sign-off is an interesting challenge and, again, very vague. Deciding who counts as a credible expert is going to be a challenge for commissioners. Many eHealth solutions are either initiated or backed by clinicians, where regional expertise and personal investment in a project can go hand in hand.

I argue that 'experts' should be considered as a consortium of expert patients (patients with the condition or experienced in research review), clinical staff, a specialist clinician/consultant and a health UX research team. The latter can provide a structured method of expert involvement, ensuring that the correct UX research is undertaken in such a way that the data can be reported or published to peer-review standards if later required (as in our example here).

2. Relevance to current care pathways in the UK health and social care system.

Meeting this standard shows that the DHT is relevant to the UK health and social care system. For the minimum evidence standard, evidence could include published or unpublished reports describing a successful trial of the DHT in a relevant UK setting showing benefit to users. The report should include a description of the DHT’s effect on the care pathway as well as any recorded user and resource benefits. For the best practice standard, evidence could include published or unpublished reports describing the successful implementation of the DHT showing benefits to users in the UK health and social care system.

It is important to remember that 'benefit to patients' need not be confined to health improvement data, a requirement that often panics small enterprises in the early commissioning stages.

Tier 1 applies to all eHealth technology, from referral software up to heart monitors. Understanding what benefit you need to articulate is directly connected to how well you identified the problem you are seeking to solve. Patient benefit can include engagement, increased accessibility, improved education, improved satisfaction and more. Your USP is defined by the problem you are solving, and the 'benefit' metric derives from there too; in reality, they are one and the same. Describing the benefit, where it is needed, how you solve it and how you can show it: that is credibility.

Evidenced implementation into the UK health service / social care system can range from a small-scale NHS service development pilot up to a randomised controlled trial. In some cases, a simulation study may suffice. For many solutions, very simple clinic pilots can provide strong evidence of predicted impact. As many teams know, without commissioning it is impossible to get data at scale, and small-scale projects don't justify RCTs; hence the rationale for these NICE standards. Commissioners need convincing evidence that articulates the predictable success of implementation into existing care pathways, and there is scope for being creative. Clinicians are skilled and experienced in drawing on new interventions; understanding how this is done, and applying it to your narrative, will make for a powerful message.

3. Acceptability with users

Some evidence to show that potential users of the DHT have tested it and found it to be usable and useful will help to show that implementing the DHT may be successful. Evidence could include reports from user or user group testing, or showing that users have been consulted in the design and development process.

The NHS thrives on patient and public involvement (PPI), from its review process through to research design and delivery. PPI ranges from informal methods through to formal board membership roles. Remember this as you build a case for acceptability: showing that PPI led your project from concept through to delivery will clear a lot of resistance and make for a truly patient-centred model. Sometimes the user is not a patient but a clinician (as with our award-winning project, ACEmobile). Here, the clinicians need involvement from concept too. This may seem obvious to a design team, but very frequently design teams and clinical teams develop feature-rich ideas that lose sight of their USP in an effort to build appeal.

Showing evidence of "consultation" in the design and development process seems a low standard, relative to the place users have in the R&D of any commercial project. Teams can aim for this low standard, but risk a poor ROI or a lack of engagement if they take this path. In-depth UX research, adapted for health service delivery, will provide the evidence for NICE but, more importantly, will steer your project towards engagement and through the cultural barriers of healthcare innovation. Your report should be health service orientated, and the methodology is key: strong quantitative and qualitative methods that map onto the project's needs. If you don't have the expertise, buy the training or hire a specialist. This is money well spent if you want a product that engages at a national scale and evidence that persuades commissioners it is not a regional fluke with little generalisability.

The required standard for usability is, again, low. This is surprising given the entire section dedicated to usability testing in the NHS digital assessment questionnaire (DAQ), a required step towards NHS Apps Library recognition. The DAQ lists ISO standards for usability and sets the bar high. I shouldn't need to make the case for high standards of usability research here, so I won't, but I will add that very few health innovations publish their usability research, which is a lost opportunity to get valuable peer-reviewed outputs from small-scale research. In our recently published clinic simulation usability study, the data demonstrated the potential impact at a national scale and supported the team in winning a prestigious national HSJ award.
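If you want the quantitative side of that usability evidence to travel well into a commissioning report, a standardised instrument helps. Below is a minimal sketch of scoring the System Usability Scale (SUS), a widely used 10-item questionnaire; the participant responses shown are hypothetical.

```python
# Minimal sketch: scoring the System Usability Scale (SUS).
# Each respondent answers 10 items on a 1-5 scale; odd-numbered items are
# positively worded (contribute score - 1), even-numbered items negatively
# worded (contribute 5 - score). The summed contributions are multiplied
# by 2.5 to give a 0-100 score.

def sus_score(responses: list[int]) -> float:
    """Compute the SUS score for one respondent's 10 answers (1-5 each)."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("Expected 10 responses, each between 1 and 5")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # items 1,3,5,7,9 vs 2,4,6,8,10
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5

# Hypothetical responses from three clinic pilot participants.
participants = [
    [4, 2, 5, 1, 4, 2, 5, 2, 4, 1],
    [5, 1, 4, 2, 5, 1, 4, 1, 5, 2],
    [3, 3, 4, 2, 3, 2, 4, 3, 3, 2],
]
scores = [sus_score(p) for p in participants]
print(f"Mean SUS: {sum(scores) / len(scores):.1f}")  # ~68 is often cited as average
```

A mean SUS score alongside your qualitative findings gives commissioners a recognisable benchmark rather than a bespoke metric, which makes the evidence easier to compare and to publish.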

4. Equalities considerations

Consider whether the DHT helps to reduce any existing inequalities within the health and social care system. This could include factors such as digital exclusion, or use by hard-to-reach populations.

Indicate any equalities considerations needed when commissioning, adopting or implementing the DHT, particularly in reference to the Equality Act 2010.

At the very least, it is a good idea to scope the literature, and ideally to contact local services, to identify areas of reported accessibility issues. These can relate to poverty, rurality, isolation, physical disability, race, learning disability, education and so on.

Most commercial software is designed in ignorance of these accessibility themes. Accessibility is not only an aspirational win for eHealth projects; it is a moral obligation: where technology can solve a problem, we should seek that problem out and provide the solution (within the scope of feasibility). I repeat myself, but problem identification work alongside a strong UX strategy will surface accessibility development opportunities and the solutions needed. Building for accessibility is at the core of all future health service delivery. Using these robust approaches will provide a convincing case to a reviewer, where you articulate your efforts, your solutions and the areas of continued need. Evidence that your product has been used in hard-to-reach places, as a consequence of your efforts, is the best evidence.

5. Accurate and reliable measurements (if relevant) and 6. Accurate and reliable transmission of data (if relevant).

The scale of methodological complexity and data reporting in these areas is most definitely project specific. An app that measures whether users read the education relating to a diabetic diet and a device that monitors blood sugar levels in diabetic patients are vastly different offerings. Both need accuracy, reliability and integrity of information, measurement and data transmission, but the scale of impact is not comparable. Given this vast variance, I will not attempt to address this section in any depth.

Development teams should have their own in-house bug-testing schedules, alongside mock data creation protocols. Beyond this, independent evaluation of the technology may be required and often strengthens your case. In fact, for many devices there will likely be a CE marking requirement, as the technology is considered a medical device. It is worth starting with the end in mind: think ahead about data integrity and the types of trials you need to run in-house to be confident in your tool. Commissioners will likely have information governance teams who will want to see your evidence, independently, before supporting that your product meets these criteria. One simple in-house check is sketched below.
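As one example of the kind of mock-data trial a team can run long before independent evaluation, here is a minimal sketch of verifying that a payload survives transmission intact by comparing SHA-256 checksums; the reading fields, values and function names are hypothetical.

```python
# Minimal sketch: verifying that data survived transmission intact by
# comparing a SHA-256 checksum computed before sending with one computed
# on receipt. The reading fields and values below are hypothetical.
import hashlib
import json

def checksum(payload: dict) -> str:
    """Hash a canonical JSON encoding so key order cannot change the digest."""
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Sender side: a mock reading, standing in for real device output.
reading = {"patient_id": "TEST-001", "glucose_mmol_l": 5.4,
           "ts": "2019-01-31T09:00:00Z"}
sent_digest = checksum(reading)

# Receiver side: recompute the digest and compare before accepting the record.
received = json.loads(json.dumps(reading))  # simulate transmission
if checksum(received) != sent_digest:
    raise ValueError("Integrity check failed: payload corrupted in transit")
print("Integrity check passed")
```

Runs like this, scripted over batches of mock records, are cheap to automate and give you concrete artefacts to show an information governance team.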

Charities, patient expert groups, specialist training clinicians and academics can all be approached to support the evaluation of your information, to be sure that it is up to date and accurate. Describe your update schedule, to show commissioners that your information will remain current under a robust update method. For example, the EpSMon app for people with epilepsy has an expert panel that meets every 12 months to review any new literature, feeding into tool updates. Be sure that you have demonstrated the acceptability and accessibility of the 'information' in your tool, again via PPI / UX research.

Summary

The NICE standards provide a skeleton outline for evidence. Here, I have offered a potential approach that blends technology development and service development methodologies, pointing to the rigour needed to report your findings in a commissioning proposal. At the heart of this are UX and usability research methods, informed by service delivery research approaches.

For many teams, this translates into a roadmap for their experienced team. For others, this identifies a capacity gap. For the latter, services like ours can provide support.

Good luck :)


Craig Newman

Multi-award winning Health Innovator, Clinical Psychologist & Leadership Coach. Founder of www.aim-you.com & www.getoutgetlove.com