How I learned to stop worrying and love the spreadsheet.
Excel is a wonderful tool for quickly modeling business and scientific data. Those of us familiar with Excel, and especially its pivot table functionality, know how easy it can be to investigate data and identify meaningful patterns in a complex hospital extract. Add in some knowledge of functions such as VLOOKUP and MATCH, and impressively powerful analytics can be performed with this well-known tool.
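The same lookup-and-pivot pattern is easy to express in code as well. A minimal sketch, using hypothetical hospital figures (the department names and case counts below are illustrative, not real data): a dictionary plays the role of a VLOOKUP table, and a grouped sum plays the role of a pivot table.

```python
from collections import defaultdict

# Hypothetical hospital extract: (department, month, case_count).
cases = [
    ("Surgery", "2024-01", 120),
    ("Surgery", "2024-02", 135),
    ("Cardiology", "2024-01", 80),
    ("Cardiology", "2024-02", 95),
]

# VLOOKUP-style mapping: department -> service line.
service_line = {"Surgery": "Perioperative", "Cardiology": "Medicine"}

# Pivot-table-style aggregation: total cases per service line.
totals = defaultdict(int)
for dept, month, count in cases:
    totals[service_line[dept]] += count

print(dict(totals))  # {'Perioperative': 255, 'Medicine': 175}
```

Unlike a chain of worksheet formulas, this logic is explicit, versionable, and rerunnable on next month's extract unchanged.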
Many hospital strategy departments rely heavily on Excel and hire staff already familiar with it. Typically, raw data is imported into a spreadsheet, passed through several intermediate worksheets of increasingly complex mappings, and presented as an aggregated output. With additional effort, basic interactive drilldown and configuration can be added.
So far, so good. However, we start to see Excel's shortcomings when repeating a given analysis with a new month or quarter of data. We've met analysts who spend a quarter or more of their time simply refreshing custom views of data. For example, a new data pull arrives on the 15th of the month, and then it's a tricky 37-step manual process to convert this raw data into the intended output.
These shortcomings are exacerbated once the process is transferred to another person. Excel has no easy way to automate the complex steps that transform raw data into an output, and each handoff introduces more error and uncertainty. Meanwhile, the underlying data may also change; unless the analyst is paying close attention, these discrepancies can easily go unnoticed and the quality of the results quietly degrades.
Even at its best, Excel is typically a tool just for analysts. In practice, we rarely see the spreadsheets generated by an analyst used by anyone other than that analyst. The spreadsheet may be distributed by email or placed on a server, but if there's a question that can be answered by data, the user interface is a phone call to the analyst, not a peek at a spreadsheet.
The uncompromising solution is to use purpose-built software powered by relational databases and modern functional programming languages. This is the most reliable way to ensure the same source data consistently produces the same output. Data can be reliably processed month after month, with alerts configured to fire if anything in the source data changes.
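One simple way to detect that source data has changed between loads is to fingerprint each extract and compare fingerprints. A minimal sketch (the record strings and the restated figure below are hypothetical):

```python
import hashlib

def fingerprint(rows):
    """Hash a source extract so later pulls can be compared against it.
    `rows` is an iterable of already-serialized record strings."""
    h = hashlib.sha256()
    for row in rows:
        h.update(row.encode("utf-8"))
        h.update(b"\n")  # delimit records so concatenations don't collide
    return h.hexdigest()

january_pull = ["Surgery,2024-01,120", "Cardiology,2024-01,80"]
# A later pull in which one historical figure was quietly restated:
restated_pull = ["Surgery,2024-01,118", "Cardiology,2024-01,80"]

if fingerprint(restated_pull) != fingerprint(january_pull):
    print("ALERT: source data changed since last load")
```

A real pipeline would fingerprint at finer granularity (per table or per row) to report *what* changed, but the principle is the same: the comparison runs automatically, rather than depending on an analyst noticing.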
Functional programming languages can also model block utilization metrics that are simply too complex and have too many configuration options to be managed by a database query or report writing engine. Purpose-built cloud software can do so much more than just reports and dashboards once hospital data is loaded.
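As one example of why configuration options matter: block utilization is commonly computed as time used within an OR block divided by time allocated, but whether turnover time counts as "used" is a policy choice that varies by hospital. A minimal sketch with that single hypothetical option (a real model would carry many more):

```python
def block_utilization(used_minutes, turnover_minutes, allocated_minutes,
                      count_turnover=False):
    """One common definition: in-block time used / block time allocated.
    Whether turnover counts as utilized is a configurable policy choice,
    the kind of option that is awkward to express in a fixed report."""
    used = used_minutes + (turnover_minutes if count_turnover else 0)
    return used / allocated_minutes

# A 480-minute block with 360 minutes of surgery and 60 of turnover:
print(block_utilization(360, 60, 480))                       # 0.75
print(block_utilization(360, 60, 480, count_turnover=True))  # 0.875
```

In code, each policy choice is an explicit, testable parameter rather than a formula buried in a worksheet cell.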
Excel is a victim of its own success: it is so good at so many things that it is hard to know where to draw the line and recognize when the innovation it enables becomes a burden to maintain. We think Excel is a wonderful prototyping tool, but not something appropriate for regular enterprise-grade reporting or complex modeling of the impact of various changes on hospital operations.
Some of Hospital IQ's most successful collaborations occur when a hospital analyst or clinician with deep institutional knowledge works with us to operationalize an Excel spreadsheet in our cloud-based software. Excel serves as the prototyping platform for defining the data requirements and desired output. In the early phases of rollout, it is also used to validate results. But once the results are proven useful, we make sure the process can be repeated month after month.
Excel is where analysts can experiment with new ways of cutting data. Relational databases and modern functional programming languages are where analysts can get reliable performance month after month, allowing them to discover new dimensions of hospital data that in turn become new operationalized features on our platform.