How do you evaluate the impact of your L&D programmes?

This is a complex question for many HR departments. One of the objectives of deploying a learning programme is, after all, to achieve a satisfactory return on investment (ROI). This implies measuring its effectiveness, and above all, conducting that evaluation as rigorously as possible.

A resolutely scientific approach

The scientific community recommends the use of Randomised Controlled Trials (RCTs), simply because they are the most reliable way to do things seriously.

A randomised controlled trial is a type of experimental design that aims to reveal a causal link between two variables: for example, the effect of a professional development programme (the variable of interest) on the adoption of new managerial behaviours (the measured variable).

It should be borne in mind that we always start from a hypothesis. Thus, in this example, we expect to observe a positive effect of the variable of interest on the variable being measured.

The randomised controlled trial follows a very rigorous research protocol, whose objective is to demonstrate an impact (or a lack of impact!) while minimising potential experimental bias. What makes this type of protocol rigorous? And how does it work?

A controlled... trial

A randomised controlled trial brings together several groups that represent "variants" of the variable of interest. For example, if our variable of interest is the training programme, we will have at least one group that follows it, called the "experimental group", and one group that does not, called the "control group". These variants represent the two modalities of the variable of interest.

The idea of this type of experiment is to compare the results of the measured variable between the different groups: in our example, the introduction of new managerial rituals.
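As a rough illustration, this comparison can be sketched in a few lines of Python. The scores below are entirely hypothetical (e.g. the number of new managerial rituals each manager adopted after the experiment):

```python
from statistics import mean

# Hypothetical outcome scores: number of new managerial rituals adopted
experimental_scores = [4, 5, 3, 5, 4, 6]   # followed the programme
control_scores      = [2, 3, 2, 4, 3, 2]   # did not follow it

# The quantity of interest: the difference in group means
diff = mean(experimental_scores) - mean(control_scores)
print(f"Mean difference (experimental - control): {diff:.2f}")
```

A positive difference suggests the programme had an effect; whether it is large enough to rule out chance is a separate question, addressed by the statistical analysis.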

A randomised... trial

It is essential that participants are assigned to one of the two groups at random.

And, to avoid certain experimental biases, individuals ideally do not know which group they have been assigned to. This is known as a blind study.

It is also necessary that a sufficient number of participants are allocated to each group; how many depends on statistical power. The larger the groups, the more likely it is that the characteristics of the individuals in the two groups are representative of the same reference population. This is known as statistical equivalence, and it is essential for the subsequent analyses.

If, as expected, the experimental group performs better than the control group, and statistical equivalence is respected, then it can be concluded that the difference is due to following the skills development programme.
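"Performs better" must also mean "better than chance alone would produce". One simple, assumption-light way to check this is a permutation test: repeatedly relabel the participants at random and see how often a difference as large as the observed one appears by chance. A minimal sketch, with hypothetical scores:

```python
import random
from statistics import mean

def permutation_test(a, b, n_iter=10_000, seed=0):
    """Approximate p-value: the probability that a random relabelling of the
    two groups yields a mean difference at least as large as the observed one."""
    rng = random.Random(seed)
    observed = mean(a) - mean(b)
    pooled = a + b
    count = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = mean(pooled[:len(a)]) - mean(pooled[len(a):])
        if diff >= observed:
            count += 1
    return count / n_iter

# Hypothetical outcome scores
experimental = [4, 5, 3, 5, 4, 6]
control = [2, 3, 2, 4, 3, 2]
p = permutation_test(experimental, control)
print(f"Approximate p-value: {p:.3f}")
```

A small p-value (conventionally below 0.05) indicates that the observed advantage of the experimental group is unlikely to be due to chance alone.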

What we need to remember here is that the more we control the other causes that could affect our results, the more confidently we can speak of a causal link between our variable of interest and our measured variable. In other words, the more rigorously the research protocol is designed, the better we can interpret the results in terms of impact and, in this context, conclude that a programme is effective.


Fifty offers an eDoing tool that helps employees take action during training, transformation or post-training. Sensitive to the notions of experimentation and scientific proof, Fifty tests the effectiveness of its eDoing solution by adopting a scientific approach. To measure the real impact of a programme, talking about research is fine, but conducting experiments is better! Stop talking, start doing.

