Enhancing WHSE and safety reporting with NLG narratives

Relius Reports

By Simon Geragthy, Safety Reporting Advisor, Relius Reports
Tuesday, 06 October, 2020


How can Natural Language Generation (NLG) make it easier to generate custom WHSE reports in which the data has been accurately analysed and interpreted?

If you have been working in the WHSE space for a while, you will understand how critical report writing is. The business and senior executives will always want to know how many injuries there are, what the trends are and whether the WHSE team is having an impact.

After all, there is no point in doing the work to improve outcomes if you can’t validate and share it in a meaningful way.

What’s more, reporting keeps the business, and its managers, informed on progress against targets and highlights positive achievements as well as ‘opportunities for continuous improvement’.

Informed managers are then better able to make the appropriate decisions to manage WHSE outcomes.

Reporting is usually time-critical and competes with other priorities. Moreover, not everyone is an expert at extracting, collating and interpreting data.

This article looks at two of the critical processes in report writing — data interpretation and information checking.

“Report production through using NLG enables managers to automate insight creation and decision making which is a prerequisite to success in a digitalised economy.”

— Dr Arash Kordestani, Data Scientist and Assistant Professor @ Södertörn University, Stockholm

“If the thread is accurate and consistent then there is a greater chance of the messaging to be clear and any actions of remediation or improvement to be 100% relevant, rather than biased speculation.”

— Rob B Lowe, Chief Operating Officer and Principal WHS Practitioner @ Success Superpowers

Data interpretation (the first and most important step in report writing)

The problem — data interpretation is the process of extracting meaning from large amounts of information.

What does all that data say about WHSE outcomes? Is there a trend, and how significant is the trend? Was it the same last year?

Data interpretation can be a complicated and daunting task. Getting this critical step wrong will lead to incorrect conclusions which do not stand up to closer scrutiny.

WHSE report writing often involves the analysis and interpretation of thousands of records from a reporting system.

The aim is to identify trends — that is, the tendency for the data to change over time.

Identifying trends makes it possible to understand the impact of WHS risk reduction programs or the significance of various risk factors (eg, manual handling across directorates or sites).

Getting the analysis and interpretation of the data correct is always the most critical part of the reporting process.

For example, if data is highly variable, such as small numbers of incidents at a site, it may be impossible to identify trends due to the randomness of the data.

However, variability in data can be managed by looking at longer periods or larger datasets, or by using ‘moving averages’.

An example of this would be lost time injuries (LTIs). There are usually low numbers of these month by month, with high variability.

Looking at monthly trends, these numbers bounce around and can therefore be difficult to interpret. By averaging over 6 or 12 months, the results are smoothed out and trends become more evident.

A current example of this is looking at COVID-19 results. Day-by-day numbers are variable, so a more accurate picture is obtained using a 3- or 5-day moving average.
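
As an illustration of how this smoothing works (a minimal sketch; the monthly LTI figures below are hypothetical rather than drawn from any real dataset), a 12-month moving average can be calculated in a few lines of Python using pandas:

    import pandas as pd

    # Hypothetical monthly lost time injury (LTI) counts; illustrative only
    lti = pd.Series(
        [3, 1, 0, 4, 2, 0, 1, 5, 2, 1, 0, 3, 2, 1, 4, 0, 1, 2],
        index=pd.period_range("2019-01", periods=18, freq="M"),
        name="lti_count",
    )

    # Raw monthly counts bounce around; a 12-month moving average smooths the noise
    smoothed = lti.rolling(window=12).mean()

    print(pd.DataFrame({"monthly": lti, "12-month moving average": smoothed}).tail(6))

The same approach, with a window of 3 or 5 days instead of 12 months, produces the smoothed daily picture described above.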

Trends may also be cyclical — for example, there are fewer incidents over summer due to employees being on leave or less traffic on the roads. If we were to look only at the current summer results, it might give a false impression that the results are improving.

Instead, using a larger dataset (13 months or more) can reveal the long-term cyclical pattern and improve our interpretation of the data. These types of issues can make it difficult to interpret data correctly.
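
To illustrate the cyclical point (again a sketch using hypothetical figures), comparing each month with the same month a year earlier, rather than with the month immediately before it, removes most of the seasonal effect:

    import pandas as pd

    # Hypothetical monthly incident counts covering just over two years; illustrative only
    incidents = pd.Series(
        [30, 28, 26, 24, 22, 20, 18, 20, 24, 26, 28, 30,
         27, 25, 24, 22, 20, 18, 16, 18, 22, 24, 26, 28, 25],
        index=pd.period_range("2019-01", periods=25, freq="M"),
    )

    # A month-on-month comparison can mistake a seasonal dip for a genuine
    # improvement; a 12-month lag compares like with like.
    month_on_month = incidents.diff(1)
    year_on_year = incidents.diff(12)

    print(pd.concat({"month on month": month_on_month,
                     "year on year": year_on_year}, axis=1).tail(6))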

In addition, non-experts may perceive trends as significant where they are not, or incorrectly relate a trend to an unrelated cause.

In some cases, personal cognitive (or thinking) biases lead us to draw causal links to a valued new program or training initiative where the data cannot substantiate this.

Automated data analysis and narrative generation using NLG can both reduce the time required to interpret data and reduce the potential for errors.
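
To make the idea concrete, here is a minimal, template-based sketch of automated narrative generation. It is purely illustrative, does not represent any particular vendor's NLG engine, and the data and thresholds are assumptions:

    import pandas as pd

    def trend_narrative(series: pd.Series, label: str) -> str:
        """Turn a monthly series into a short, consistent sentence of commentary."""
        # Compare the average of the last 6 months with the 6 months before that
        recent = series.tail(6).mean()
        prior = series.iloc[-12:-6].mean()
        change = (recent - prior) / prior * 100 if prior else 0.0

        if abs(change) < 5:
            direction = "remained broadly stable"
        elif change < 0:
            direction = f"decreased by {abs(change):.0f}%"
        else:
            direction = f"increased by {change:.0f}%"

        return (f"{label} have {direction} over the last six months "
                f"compared with the previous six months.")

    # Hypothetical monthly LTI counts; illustrative only
    lti = pd.Series([4, 3, 5, 4, 3, 4, 2, 3, 2, 1, 2, 2],
                    index=pd.period_range("2020-01", periods=12, freq="M"))

    print(trend_narrative(lti, "Lost time injuries"))

Because the sentence is computed directly from the data, the wording changes automatically, and consistently, whenever the underlying figures change.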

“We have an obligation to make the data meaningful to the audience AND ensure quality data inputs, as in the systems or persons adding the observations that collectively represent the data the NLG narratives utilises need to be consistent.”

— Dr Rebecca Michalak, Managing Director, Keynote Speaker and Author @ PsychSafe

Information checking

The problem — reports are often very complex and must also be highly accurate. Therefore, checking the data and commentary is an essential part of the reporting process.

However, it is also a manual process that is, itself, prone to error and very time-consuming.

How many times have you received or written a report that has not been adequately checked?

The problem may be incorrect values or charts added from the source data or inaccurate interpretation of the data.

A simple cut and paste from a BI interface or MS Excel into an MS Word report can lead to errors when the data changes but the charts are not updated, or when the report references the incorrect chart.

Rigorous checking is necessary to ensure reports are accurate and consistent.

The process is complicated by the fact that the brain tends to ‘skim over’ and ‘summarise’ visual information.

This may create either ‘perceptual’ errors, where the mistake is not seen, or ‘cognitive’ errors, where the error is seen but misinterpreted.*

The more complex the report, the more likely this is to occur, particularly where data and commentary are repeated — say in an ‘executive summary’ and multiple places in the body of the report.

Proper data checking is, therefore, critical to sending clear and consistent messaging on WHS outcomes.

Ensuring reports are accurate and well verified is a significant source of anxiety, both for those writing the reports and for those presenting them — whether this is to executives, the board, stakeholders or the public.

One way to reduce human error and the burden of error checking is to automate the process. A report generated by a combination of BI tools and NLG-generated commentary will always have the correct commentary associated with a chart.

However, there is still flexibility to modify the outcome by exporting the report into MS Word and changing or adding commentary.
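
As a hedged sketch of that workflow (illustrative only, using pandas, matplotlib and python-docx rather than any specific BI or NLG product; the file names and figures are hypothetical), the chart and its commentary can be derived from the same data in a single step and then exported to Word for any manual additions:

    import pandas as pd
    import matplotlib.pyplot as plt
    from docx import Document  # python-docx

    def build_report_section(series, title, chart_path):
        """Create the chart and its commentary from the same data in one call,
        so the narrative always matches the figure it sits beside."""
        fig, ax = plt.subplots()
        series.plot(ax=ax, marker="o")
        ax.set_title(title)
        fig.savefig(chart_path)
        plt.close(fig)

        commentary = (f"{title}: the latest value is {series.iloc[-1]}, "
                      f"against a 12-month average of {series.mean():.1f}.")
        return chart_path, commentary

    # Hypothetical monthly incident counts; illustrative only
    incidents = pd.Series([12, 9, 11, 8, 10, 7, 9, 6, 8, 7, 5, 6],
                          index=pd.period_range("2020-01", periods=12, freq="M"))

    chart, text = build_report_section(incidents, "Monthly incidents", "incidents.png")

    # Export to Word, where a user can still edit or add commentary by hand
    doc = Document()
    doc.add_heading("Monthly incidents", level=2)
    doc.add_picture(chart)
    doc.add_paragraph(text)
    doc.save("whse_report.docx")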

See some example safety reports that have been generated using automated narratives.

Impactful visualisations reduce complexities and make messages easy to understand.

Dynamic interpretations of data make boring data significantly easier to digest.

Conclusion

The use of automated report writing and NLG enables consistent commentary to be added throughout a report. This reduces or eliminates the need for error checking, giving the user confidence that the messaging to their stakeholders will be as accurate as possible.

*Bruno MA, Walker EA, Abujudeh HH. “Understanding and Confronting Our Mistakes: The Epidemiology of Error in Radiology and Strategies for Error Reduction.” RadioGraphics, 14 October 2015.

Image credit: ©stock.adobe.com/au/totojang1977
