How to compare data between employee surveys

Many of our clients run multiple employee surveys, which presents a real opportunity to build a bigger picture of what your employee experience survey data is telling you. This post is a case study of how to compare data between employee surveys.

A bit of background

This case study is based on one of our NHS Trust clients who we have been working with for several years.

As with all NHS Trusts, retention, engagement and culture are huge people challenges.

We run their Exit Survey, New Starter (onboarding) Survey, and their NHS National Quarterly Pulse Survey (NQPS), which presents the opportunity to take a more holistic view of what is driving dis/engagement in employees across the whole employee lifecycle.

Start with a clear survey strategy

It is fair to say that this Trust, like many others, initially adopted a fairly ad-hoc approach to their surveys. Different functions had different surveys running, in different formats, often at conflicting times! This presented some of the challenges that we see all too commonly.

Having too many employee surveys running can create ‘survey fatigue’, where employees get fed up with constantly being asked to take part and gradually stop responding.

Running too many surveys can also lead to inertia. It can take so long to understand what all the surveys are telling you that it becomes too difficult to define specific actions for change. For example, in the NHS it can take several months for Trusts to receive their survey results, digest the data and present it to senior management. Consequently, the outcome can often be ‘it’s nearly time to run the next survey so let’s wait for that before taking action’. 

Each of your surveys should have a clear business case and clear goals. It is often better to stop running some surveys, or to combine them into fewer surveys, and this is exactly what we did with this particular Trust.

For example, when it came to designing the New Starter Survey we encountered a little resistance – “Oh no, not another survey!”. However, when we dug a little deeper we discovered that the Trust was already running three, yes three, new starter surveys – one to evaluate the recruitment process, one to evaluate the induction process and one for international starters. 

Therefore, as a first step it makes sense to stand back, define what you are trying to achieve, and then review what you already have in place. There may be gaps, but there may also be crossover and duplication. 

Align the design of each survey 

For the survey design process we followed the same process for how to design any effective employee experience survey, but with one big difference – we mapped the content of each survey to ensure it was aligned where appropriate. It is this that allows the data from each survey to be compared. 

In this case, the Trust already had an exit survey. The NQPS quarterly pulse is also pre-defined by the NHS so that content was fairly fixed. For the New Starter Survey, we already had a best practice survey that we could use. 

We worked with the Trust to define the core themes and questions that would allow us to explore levels of satisfaction and engagement across the surveys. We included the NQPS questions where relevant and we referred to the Trust’s latest NHS National Staff Survey (NSS) results to identify the other themes that we should explore. 

At this point we had the content for three surveys mapped out with consistent demographic questions and core engagement questions structured into themes (factors in our speak). 
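To make this concrete, here is a minimal sketch (in Python, with invented theme and question names rather than the Trust's actual items) of how a content map like this could be represented, so that aligned questions can be identified and compared across surveys:

```python
# Illustrative sketch: map core themes to the surveys that carry them.
# Theme names and question wording are hypothetical examples.
question_map = {
    "Engagement / advocacy": {
        "question": "I would recommend the organisation as a place to work",
        "surveys": ["New Starter", "NQPS", "Exit"],
    },
    "Manager support": {
        "question": "My manager gives me the support I need",
        "surveys": ["New Starter", "NQPS", "Exit"],
    },
    "Recruitment experience": {
        "question": "The recruitment process was well organised",
        "surveys": ["New Starter"],  # only relevant at the joining stage
    },
}

# Themes present in all three surveys are directly comparable.
comparable = [
    theme for theme, spec in question_map.items()
    if len(spec["surveys"]) == 3
]
print(comparable)  # ['Engagement / advocacy', 'Manager support']
```

The point of a structure like this is simply to make the overlaps and gaps explicit before any data is collected.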

What have been the key insights? 

Having a consistent survey structure is a good starting point for making data analysis easier, but where we really add value is through our survey platform. 

Our survey platform not only allows us to administer employee experience surveys, but also has an inbuilt web-based reporting dashboard that makes it extremely easy for organisations to access their survey data.

Trust users were provided with access to their survey data to enable them to view it in real time. 

Our dashboard makes playing around with survey data highly intuitive and makes slicing and dicing data significantly quicker and easier than using Excel or other business intelligence software. 

The dashboard has inbuilt reporting functionality so users can create as many ad-hoc reports as they wish.

There are loads of insights from the data across the three surveys, so below are just a few.


Insight 1 – new starters are more positive and engaged

Sounds blindingly obvious, but the data shows that new employees start their journey feeling more satisfied. Employees who have been there for a while (NQPS data) are less satisfied, and leavers are the least satisfied.

Similarly, the eNPS score shows that people become gradually more disengaged over time. New starters have the highest eNPS, leavers have the lowest.
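The eNPS calculation itself is standard: the percentage of promoters (scoring 9–10 on the 0–10 recommend question) minus the percentage of detractors (scoring 0–6). A minimal sketch, using made-up responses for each lifecycle stage rather than the Trust's data:

```python
def enps(scores):
    """eNPS: % promoters (9-10) minus % detractors (0-6) on a 0-10 scale."""
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Illustrative (invented) responses for each lifecycle stage
new_starters = [9, 10, 8, 9, 7, 10]
pulse        = [7, 8, 6, 9, 5, 8]
leavers      = [3, 6, 7, 2, 5, 8]

for label, scores in [("New starters", new_starters),
                      ("Pulse", pulse), ("Leavers", leavers)]:
    print(f"{label}: {enps(scores)}")
```

With the sample numbers above, the scores fall from new starters through to leavers, mirroring the pattern the Trust's data showed.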

Insight 2 – engagement in new starters drops off within the first 6 months

Following on from the above, if we explore levels of engagement in new starters based on length of service, it shows that engagement does indeed drop off.

Insight 3 – managers have a big impact on engagement

Again, no surprise, but we now have solid evidence of the impact of manager behaviour on employee engagement.

This chart shows how new starters feel after their first month, compared with how people completing the NQPS feel and how leavers feel. The data shows that, whilst managers are perceived to be supportive during the first few weeks, they need to continue to check in with people, provide support and appreciate the effort people are putting in.

The heatmap below clearly shows that the leavers who are most dissatisfied overall are the ones who leave because of the relationship they have with their manager. 

Insight 4 – where people feel disempowered they are more likely to leave

Insight 5 – the engagement drivers are consistent across all employee surveys

One of the inbuilt analysis tools in our reporting dashboard is the ability to define engagement drivers. If we correlate how people respond to the question “I would recommend the organisation as a place to work” with how they respond to all the other questions in the survey, we can see which survey questions have the strongest statistical link with overall advocacy.

What is striking in this case is how consistent the engagement drivers are across all three surveys. This enabled the organisation to focus its efforts on the survey questions that appear to be driving overall engagement.

What action has been taken?

Well, it’s still early days, but this Trust has implemented a number of initiatives such as:

  • improving communication around how the Trust supports employee wellbeing. They already have lots of things in place, but awareness is low.
  • building core behaviours into leadership development programmes, especially around things such as holding one-to-ones, discussing individual development, providing support and involving people in generating ideas for improvement.
  • improving development and career planning. In fact, one thing we have done is design and run an ad-hoc survey for the Trust to gather further feedback on how employees perceive the L&D function’s services.

In conclusion

We tend to say this a lot, but surveys are a means to an end: it’s what you do with the data that counts. With a bit of thought and structure, aligning different surveys can enable you to compare data between employee surveys and have a real impact on employee engagement and retention.