Intro to UX design for the ChRIS Project – Part 1

(This blog post is part of a series; view the full series page here.)

What is ChRIS?

Something I’ve been working on for a while now at Red Hat is a project we’re collaborating on with Boston Children’s Hospital, the Massachusetts Open Cloud (MOC), and Boston University. It’s called the ChRIS Research Integration Service, or just “ChRIS”.

Rudolph Pienaar (Boston Children’s), Ata Turk (MOC), and Dan McPherson (Red Hat) gave a pretty detailed talk about ChRIS at the Red Hat Summit this past summer. A video of the full presentation is available, and it’s a great overview of why ChRIS is an important project, what it does, and how it works. To summarize the plot: ChRIS is an open source project that provides a cloud-based computing platform for the processing and sharing of medical imaging within and across hospitals and other sites.

There are a number of problems ChRIS seeks to solve that I’m pretty passionate about:

  • Using technology in new ways for good. Where would we all be if we could divert towards good just a little bit of the resources we in the tech community collectively put into analyzing the habits of humans and delivering advertising content to them? ChRIS applies cloud computing, containers, and big data analysis towards exactly that – helping researchers better understand medical conditions!
  • Making open source and free software technology usable and accessible to a larger population of users. A goal of ChRIS is to make accessible new image processing tools that currently require a high level of technical expertise to even get up and running. ChRIS has a container-based plugin system, providing a standardized way of running a diverse array of image processing applications. Creating a ChRIS plugin involves containerizing these tools and making them available via the ChRIS platform. (Resources on how to create a ChRIS plugin are available here; a toy sketch of the basic plugin contract appears just after this list.) We are also working on a “ChRIS Store” web application to allow plugin developers to share their ready-to-go ChRIS plugins with ChRIS users, so they can find and use these tools easily.
  • Giving users control of their data. One of the driving reasons for ChRIS’ creation was to allow hospitals to own and control their own data without needing to give it up to industry. How do you apply the latest cloud-based rapid data processing technology without handing your data over to one of the big cloud companies? ChRIS has been built to interface with cloud providers, such as the Massachusetts Open Cloud, whose consortium-based data governance allows users to control their own data.
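
To make that plugin contract from the second bullet a bit more concrete, here’s a toy sketch in Python of the shape a plugin takes: read everything from an input directory, write results to an output directory, and let the platform handle the rest. This is not the actual ChRIS plugin template – the file handling and the process() routine here are hypothetical placeholders – see the plugin resources linked above for the real thing.

    import argparse
    from pathlib import Path

    def process(image_path: Path, out_dir: Path) -> None:
        # Hypothetical stand-in for a real image processing routine.
        out_file = out_dir / ('processed_' + image_path.name)
        out_file.write_bytes(image_path.read_bytes())

    def main() -> None:
        # The basic contract: the platform hands the containerized plugin
        # a directory full of input data and a directory to write results
        # into, then collects whatever lands there.
        parser = argparse.ArgumentParser(description='Toy ChRIS-style plugin')
        parser.add_argument('inputdir', help='directory of input images')
        parser.add_argument('outputdir', help='directory to write results to')
        args = parser.parse_args()

        out_dir = Path(args.outputdir)
        out_dir.mkdir(parents=True, exist_ok=True)
        for image_path in Path(args.inputdir).iterdir():
            if image_path.is_file():
                process(image_path, out_dir)

    if __name__ == '__main__':
        main()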

I want to emphasize the cloud-based computing piece here because it’s important – ChRIS allows you to run image processing tools at scale in the cloud, so elaborate image processing that typically takes days, weeks, or months to complete could be finished in minutes. For a patient, this could enable a huge positive shift in their care – rather than waiting days to get back the results of an imaging procedure (like an MRI), they could be consulted by their doctor and make decisions about their care that same day. The ChRIS project works with developers who build image processing tools and helps them modify and package those tools so they can be parallelized to run across multiple computing nodes in order to gain those incredible speed increases. ChRIS as deployed today uses the Massachusetts Open Cloud for its compute resources; it’s a great resource, at a scale many image processing developers previously never had access to.
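To illustrate why that parallelization pays off: each image (or subject) can usually be processed independently of the others, so the work fans out cleanly. Here’s a deliberately tiny sketch of that idea using local processes – on ChRIS the same fan-out happens with containers scheduled onto cloud compute nodes rather than a local pool, and the process_image function and scans/ directory below are made up for illustration:

    from multiprocessing import Pool
    from pathlib import Path

    def process_image(path: Path) -> str:
        # Stand-in for an expensive per-image computation.
        return f'processed {path.name}'

    if __name__ == '__main__':
        images = sorted(Path('scans').glob('*.dcm'))  # made-up input layout
        # Because each image is independent of the others, the work
        # spreads across workers; more workers means proportionally
        # less wall-clock time.
        with Pool() as pool:
            for result in pool.imap_unordered(process_image, images):
                print(result)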

ChRIS UX

[Diagram: a data source full of images at left feeds into ChRIS, which passes the images into compute environments at right. Each compute node takes an input image from ChRIS, pushes it through a plugin from the ChRIS store, and produces an output, which is pushed back to ChRIS. On top of ChRIS sit several sibling blocks – the ChRIS UI (red), the Radiology Viewer (yellow), and a ‘…’ block (blue) representing other front ends that could run on top.]

I have some – but little – experience with OpenShift as a user, and no experience with OpenStack or image processing development. UX design, though – that I can do. I approached Dan McPherson to see if there was any way I could help with the ChRIS project on the UX front, and as it turned out, yes!

In fact, there are a lot of interesting UX problems around ChRIS – some, I am sure, analogous to other platforms and systems, but some maybe a bit more unique! Let’s break down the human interface components of ChRIS, represented by the red, yellow, and blue components on top of the diagram above.

The diagram above is a bit of a remix of the diagram Rudolph walks through at this point in the presentation; basically, what I have added here are the UI / front end components on top. A must-see, though, is the demo Rudolph gave showing both of these user interfaces (the radiology viewer and the ChRIS UI) in action.

During the demo you’ll see some back-and-forth between two different UIs. We’ll start by talking about the radiology viewer.

Radiology Viewer (and, what do we mean by images?)

Today, let’s talk about the radiology viewer (I’ll call it “Rav”) first. It’s the yellow component in the diagram above. Rav is a front end that runs on top of ChRIS and allows you to explore medical images, in particular MRIs. You can check out a live version of the viewer (without the library component) here: http://fnndsc.childrens.harvard.edu/rev/viewer/library-anon/

In walking through the UX considerations of this kind of tool, we’ll also talk about some properties of the type of images ChRIS is meant to work with. This will help, I hope, to demonstrate the broader problem space of providing a user experience around medical imaging data.

Rav might be used by a researcher to explore MRI images. There are two main tasks they’ll do using this interface: locating the images they want to work with, then viewing and manipulating those images.

User task: Locate images to work with

A PACS (Picture Archiving and Communication System) server is what a lot of medical institutions use to store medical imaging data; it’s basically the ‘data source’ in the diagram at the top of this post. End users may need to retrieve the images they’d like to work with in Rav from a PACS server – this involves using some metadata about the image(s), such as record number, date, etc., to find the images, then adding them to a selection of images to work with. The PACS server itself needs to be configured as well (but hopefully that’ll be set up for users by an admin).

One thing to note about a PACS server is that you can assume it has a substantial number of images on it, so this image-finding / filtering-by-metadata first step is important so users don’t have to sift through a mountain of irrelevant data. The other thing to note: PACS is a type of storage, which, depending on the implementation, may suffer from some of the UX issues inherent in storage.
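For a flavor of what “finding images by metadata” means at the protocol level, here’s a sketch of a DICOM C-FIND query using the pynetdicom library. I’m not claiming this is how Rav or ChRIS talk to PACS internally – and the host, port, and identifiers below are all made up – but the query fields (record number, date, and so on) are exactly the kind of metadata the Library tab would collect from the user:

    from pydicom.dataset import Dataset
    from pynetdicom import AE
    from pynetdicom.sop_class import StudyRootQueryRetrieveInformationModelFind

    ae = AE(ae_title='RAV')
    ae.add_requested_context(StudyRootQueryRetrieveInformationModelFind)

    # The query identifier: the metadata the user filtered by in the UI.
    ds = Dataset()
    ds.QueryRetrieveLevel = 'STUDY'
    ds.PatientID = '1234567'              # made-up record number
    ds.StudyDate = '20180101-20181231'    # a date range
    ds.StudyDescription = ''              # empty means "return this field"

    assoc = ae.associate('pacs.example.org', 104)  # made-up host and port
    if assoc.is_established:
        responses = assoc.send_c_find(ds, StudyRootQueryRetrieveInformationModelFind)
        for status, identifier in responses:
            # 0xFF00 / 0xFF01 are the DICOM "pending" statuses that
            # carry a matching record.
            if status and status.Status in (0xFF00, 0xFF01):
                print(identifier.PatientID, identifier.StudyDate)
        assoc.release()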

Below is a rough mockup showing how this interface might look. Note that the interface has been split into two main tabs in this mockup – “Library” and “Explore.” The “Library” tab here is devoted to locating images and building a selection to work with.

User task: View and configure selected images

Once you have a set of images to work with, you need to actually examine them. To work with them, though, you have to understand what you’re looking at. First of all, one thing that can be hard to remember when looking at 2D representations of images like MRIs: these are images of the same object taken along three different axes. From one scan, there may be hundreds of individual images that together represent a single object. It’s a bit more complex than your typical 3D view, where you can represent an object from, say, a top, side, and front shot – you’ve got images that actually move through the inside of the object, so there’s kind of a 4th dimension going on.

With that in mind, there are a few types of image sets to be aware of:

Reference vs. Patient
  • Normative / Atlas – These are not images of the patient(s) at hand. These are images that serve as a reference for what the part of the body under study is expected to look like.
  • Patient – These are images that are being examined. They may need to be compared to the normative / atlas images to see if there are differences.
Registered vs. Unregistered
  • Unregistered images are standalone – they are basically the images positioned / aligned as they came from the imaging device.
  • Registered images have been manipulated to align with another image or images via a common coordinate system – scaled, rotated, re-positioned, etc. to line up with each other so they may be compared. A common operation would be to align a patient scan with a reference scan to be able to identify different structures in the patient scan as they were mapped out in the reference.
Processed vs. Unprocessed
  • You may have a set of images that are of the same exact patient, but some versions of them are the output of an image processing tool.
  • For example, the output may have been run through a tractography tool and look something like this.
  • Another example, the output may have been segmented using a tool (e.g., using image processing techniques to add metadata to the images to – for example – denote which areas are bone and which are tissue) and look something like this.
  • Yet another example – the output could be a mesh of a brain in 3D space. (More on meshes.)
  • The type of output the viewer is working with can dictate what needs to be shown in the UI to be able to understand the data.
Other Properties
  • You may have multiple image sets of the same patient taken at different times. Maybe you are tracking whether or not an area is healing or if a structure is growing over time.
  • You may have reference images or patient images taken at particular ages – structures in the body change over time based on age, so when choosing a reference / studying a set of images you need some awareness of the age of the references to be sure they are relevant to the patient / study at hand.
  • Each image set has three main anatomical planes along which it may be viewed in 2D – sagittal (side-to-side), coronal / frontal (front-to-back), and transverse / axial (top-to-bottom). (There’s a short code sketch of this right after this list.)
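
If it helps to think about those planes in code: a scan is essentially a 3D array, and each anatomical plane is just a slice along one of the array’s axes. Here’s a minimal Python sketch using nibabel and NumPy, with a made-up file name and assuming a conveniently oriented volume (real code would check the image’s affine rather than trusting the axis order):

    import nibabel as nib  # common Python library for neuroimaging volumes

    # Made-up file name; in practice the scan would come via ChRIS / PACS.
    volume = nib.load('patient_scan.nii.gz').get_fdata()
    x, y, z = volume.shape

    # Each anatomical plane is a slice along one array axis:
    sagittal = volume[x // 2, :, :]   # side-to-side
    coronal = volume[:, y // 2, :]    # front-to-back
    axial = volume[:, :, z // 2]      # top-to-bottom

    # "Scrolling" through a stack is just stepping the index on one axis:
    for i in range(z):
        current_slice = volume[:, :, i]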

Once a user understands these properties of the image sets sufficiently, they arrange them in a grid-based layout on what I’ll call the viewing table in the center. Once you have an image ‘on the table,’ you can use a mouse scroll wheel or the play button to step through the image planes along the axis the images were taken. This sounds more complex than it is – imagine a deck of playing cards. If you’re looking at a set of images of a head from a sagittal view, the top card in the deck might show the person’s right ear, the 2nd card might show their right eye in cross-section, the 3rd card their nose in cross-section, the 4th card their left eye in cross-section, the 5th card their left ear… so on and so forth. Rinse and repeat for front-to-back and top-to-bottom.

You can link two images together (for example, a patient image that is registered to a normative image) so that as you step along the axis in a given image set, the linked image (perhaps a reference image) steps along with it – you can go slice-by-slice through two or more image sets at the same time and compare them at that level.
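Here’s a minimal sketch of that linked-scrolling idea – not Rav’s actual implementation (which is JavaScript), just the core of it in Python: one shared slice index that several registered stacks follow together.

    class LinkedStacks:
        """Toy model of linked scrolling: one shared slice index that
        several registered image stacks follow together."""

        def __init__(self, *stacks):
            # Each stack is a 3D array; they're assumed to be registered,
            # i.e. slice i in one stack corresponds to slice i in the rest.
            self.stacks = stacks
            self.index = 0

        def step(self, delta=1):
            # Clamp to the shortest stack so every linked view stays valid.
            limit = min(s.shape[2] for s in self.stacks) - 1
            self.index = max(0, min(limit, self.index + delta))
            return [s[:, :, self.index] for s in self.stacks]

    # e.g.: linked = LinkedStacks(patient_volume, atlas_volume)
    #       patient_slice, atlas_slice = linked.step(+1)  # one wheel tick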

Below is a mockup I made last fall with some suggestions for the pre-existing UI, with some of these ideas in mind (some of which I learned about in the back-and-forth discussion afterwards 🙂).

A little more information about Rav’s development

The Rav codebase isn’t in active development right now. It was written using a framework called Polymer, but due to various technical considerations, the team decided the road ahead will involve rewriting the viewer application in React.

An important component used in the viewer that continues to be developed is called amijs. This is the specific component that allows viewing of the image files in the Rav interface.

In terms of UX design, a future version of Rav will likely be implemented using the UX designs we worked on for Rav as it is today. There is a UX issue queue for Rav in the general ChRIS design repo; Rav-specific issues are tagged. You can look through those issues to see some interesting discussions around the UX for this tool.

What’s next?

I’m hoping to become a regular blogger again. 🙂 I am planning another post in this series, focused on the main UI of ChRIS itself (the red block in the diagram at the top of this post). Specifically, I’ll go through some ideas I have for the concept model of the ChRIS UI, which is honestly not yet complete.

After that, I plan to do another post in the series about the ChRIS Store UI, which my colleague Joe Caiani is working on now, with designs created by my UX intern this past summer, Shania Ambros.

Questions, ideas, and feedback most welcome in the comments section!

 

7 Comments

  1. […] her blog, Máirín Duffy writes about her experiences helping design the “user experience” (UX) for the ChRIS project, […]

  2. This is very cool.

    Btw, the radiology viewer screenshot doesn’t work, which is a pity, because my computer doesn’t seem powerful enough to run the real thing.

    1. mairin says:

      Oh weird. Maybe I hotlinked it from github and they disabled it, I don’t remember. I’ll fix it so you can see!

      (btw what kind of computer do you have that can’t run the viewer, if you don’t mind sharing? We should fix performance issues, I think.)

    2. mairin says:

      OK it should be fixed now! I also added links to the two main discussion github issues for Rav, I think we had some cool ideas in those.

      1. Thank you.

        The sluggishness on my end was sort of expected — the machine is an aging X200 with 4G of RAM and it probably wasn’t even idling when I opened the viewer. Running Firefox.

        When I open it now, the file browser is rather snappy, but opening the actual viewer takes roughly 10 seconds with blank screen and no indication of progress. I initially thought it doesn’t work at all. But once it loads, things respond reasonably again.

  3. Amazing! Simply Brilliant!

  4. quiet fun, I worked more then 10 years ago on something similar
