1. Conceptual framework
1.1. The state vocational training system in Spain
Lifelong learning is one of the keys to the development of an economy and to the improvement of collective welfare (OIT, 2008; UE, 2008). In this context, training for employment is an equally important tool. In Spain, training for employment and vocational training have traditionally been managed separately.
However, since the 1990s, legislation has promoted the integration of all training for employment into a single system made up of three sub-systems (Organic Laws of 1990, 2002 and 2006; Royal Decree 1046/2003): Regulated Vocational Training, for people who wish to acquire professional skills; Occupational Vocational Training, for unemployed people; and Continuing Vocational Training, for employed people. In 2007 the objectives, initiatives and organisation of this training were defined, and the sub-system came to include three kinds of training initiatives: training on demand, training on offer, and training alternating with employment (Article 1, Royal Decree 395/2007; Orden TAS/2307/2007).
Continuing Vocational Training (CVT) is a sub-system within the framework of training for employment; it is conceived as a strategy to allow professional demands to converge with employed people's needs (Pineda & Sarramona, 2006: 706). Prior to 1992, CVT existed only in a few large corporations; however, the signing of the agreement between the state administration and employer and trade union organisations made state funding of on-the-job training available to all companies (Resolutions of 1993, 1997, 2001 and 2006), and led to the creation in 2004 of the Tripartite Foundation for Employment Training (TFET). Its main function is to assist the Public State Employment Service in tasks concerning the organisation, management and evaluation of training initiatives. The creation of the TFET also implied the promotion of a new integrated model of CVT composed of four types of initiatives: CVT actions in corporations, individual training permits, training contract-programs, and complementary and accompanying measures for training. This paper evaluates the first two of these initiatives.
Within the context of training on demand, CVT actions are organised and managed by companies to improve their employees' work and are financed by a training budget proportional to the size of their staff. Individual Training Permits (ITP), in turn, cover training that leads to the official diplomas or certificates of professional standards listed in the National Catalogue of Professional Qualifications (Organic Law, 2002). Both CVT actions and ITP initiatives are financed through training credits assigned by the TFET on an annual basis.
The TFET must evaluate the training actions carried out in companies, both in terms of detecting needs, defining goals and organising actions, and in terms of the training's effectiveness, efficiency and impact, with the ultimate aim of improving the functioning of the vocational training sub-system.
1.2. Assessing CVT: a definition and conceptual model
The CVT Assessment is the analysis of the total value of a system or training action in both social and financial terms, with the aim of obtaining information on the achievement of its objectives and on the global training cost/benefit ratio in order to assist decision-making (Pineda, 2002:250).
In the 1960s, Kirkpatrick laid the groundwork for what is known today as the assessment of CVT with his four-level model (1999: 19-24): 1) Reaction; 2) Learning; 3) Behaviour; and 4) Results.
The model has since been expanded and updated, and variations have been based upon it (Barzucchetti & Claude, 1995; Brinkerhoff, 2005; Hamblin, 1974; Kaufman & Keller, 1994; Meignant, 1997; Phillips, 1991). Other authors have pointed out the need to study alternatives to this model (Brinkerhoff, 1988, 1989, 2003; Bushnell, 1990; Holton, 1996; Kraiger, 2002; Leung, 2006; Pineda, 2002; Preskill & Torres, 1999; Russ-Eft & Preskill, 2005).
We discuss here the two levels of evaluation that are most relevant for determining training effectiveness: training transfer and impact.
1.3. Training Transfer Assessment
Training transfer is defined as “the application of knowledge, skills and attitudes acquired during a learning event back to the workplace, and their maintenance over a certain period of time” (Baldwin & Ford, 1988: 63).
Training transfer is the centre point of all effective training actions (Yamnill & McLean, 2001: 196). Training transfer assessment aims to detect whether the skills acquired through training are applied on the job and whether they are maintained over time (Pineda, 2002: 266). Thus, two types of indicators can be evaluated: the use of the learning achieved in the training programme, and the changes in workplace performance that occur as a consequence of training (Warr, Allan & Birdi, 1999: 354).
1.4. Training Impact Assessment
Training impact is understood as “the repercussions that stem from training programs in an organisation in terms of training needs, problem solving and contributions to achieving said organisation’s strategic goals” (Pineda, 2000: 124).
Impact is conceptualised via two kinds of indicators: qualitative effects (workplace satisfaction, atmosphere, motivation) and quantitative effects (increases in production, decreases in delays) (Kirkpatrick, 1999; Phillips, 1997; Pineda, 2002; Wade, 1994). Impact assessment is carried out by identifying objective and subjective indicators.
Objective indicators are observable effects that can be translated into positive repercussions; working conditions, salary, productivity and the organisation's volume of business are particularly interesting here. This information can be obtained either from the organisation's own internal databases or from participant reports. Subjective indicators are the perceptions of the people involved regarding the positive consequences of training: trainees are asked whether their training has improved their performance, the competitiveness of their company, or their employment prospects. The relationship between these two types of indicators provides a measure of the global impact of training.
In this paper we present part of a study carried out in 2010 to assess the training on demand initiative in Spain, that is, the training carried out by organisations with state funding. The study was financed by the Tripartite Foundation for Employment Training and has been published on its website. The interest of this particular article lies in the presentation of the qualitative methodology used and in the analysis of its usefulness for assessing the effectiveness of a public employment training system. The qualitative focus has allowed us to detect many invisible factors in the system and has given a voice to various actors, who point out both the positive and negative factors affecting the effectiveness of training. Throughout this paper we present this methodology and the results we obtained in detail, as it has allowed us to analyse both the visible and invisible factors in the Spanish employment training system.
The study used different methodologies to analyse objective and subjective indicators and to achieve each of its aims:
a) The implementation, coverage of and access to training on demand: through descriptive, comparative and relational statistical analysis of figures from the corporate databases of the body that financed the study.
b) The effectiveness, efficiency and impact of training on demand: via fieldwork in which both quantitative and qualitative information-gathering instruments and techniques were employed. Quantitative information was obtained via telephone interviews, giving a wide view of the evaluation results that can be generalised. To obtain qualitative information, we used in-depth interviews, discussion groups and a Delphi panel. The Delphi panel is a structured communication technique, originally developed as a systematic, interactive forecasting method that relies on a panel of experts. The experts answer questions in two or more rounds; after each round, a facilitator provides a summary of the experts' forecasts from the previous round, as well as the reasons they gave for their judgments.
In order to carry out the assessment we designed an “Assessment Plan” (Annex 1), which contains the items we intended to assess as well as the information sources and methodology used to evaluate them. The dimensions identified in the Assessment Plan are the following:
1) Coverage and reach of the CVT actions and ITP.
2) Spread, visibility and access to training.
3) Characteristics of subsidised training (training planning, resources, participation, etc.).
4) Training funding.
5) Results and impact of the CVT actions and ITP.
Out of the two kinds of assessments carried out, this paper focuses on the qualitative fieldwork, which is based on a multi-instrumental perspective. The use of different instruments to gather qualitative information allows for greater breadth, depth and contrast in the information obtained.
The qualitative fieldwork sample covers business sectors as well as specific companies, organisations and professional profiles. The sampling resulted from an intentional, programmed procedure agreed between the evaluation team and the entity that financed the study. Our goal was to obtain complex and relevant information on the development of the training on demand initiative.
We followed two criteria to select the five business sectors: 1) we wanted the three classic sectors of the economy (primary, secondary and tertiary) to be represented; and 2) we took the volume of participation into account, selecting sectors with either a high or a low number of companies participating in training on demand. Based on the information contained in the databases provided by the body that financed this study, we chose the following sectors:
• Financial intermediation
When selecting the instruments we considered two factors: the specific characteristics of the information to be collected and the specific sources we expected to obtain it from. Having studied these factors in the abovementioned Assessment Plan, we opted for three types of instruments: in-depth interviews, discussion groups and a Delphi panel.
It is worth pointing out that we obtained qualitative information from three different points of view: a) training on demand users; b) training on demand providers; and c) an outside perspective (Figure 1).
Figure 1. Instruments used in the qualitative fieldwork. Source: Authors.
i. Discussion groups
In this study we used the discussion group technique as defined by Ibáñez (1986), in which the session facilitator introduces subjects, either following the flow of the conversation, once a specific subject is exhausted, or when a subject has exceeded the maximum time allotted to it, so that all subjects can be covered.
Based on the indicators in the Assessment Plan, we designed three different types of discussion groups.
a) Professionals belonging to the body that manages Employment Training (specifically, one group of managers and another group of specialists). The goal of these discussion groups was to get to know the main characteristics and uncertainties of employment training.
b) Members of the sector joint committees (representatives of different business sectors who advise companies in those sectors on designing the most appropriate training for their workers). The goal of these discussion groups was to compare and contrast the perspectives of all the sectors evaluated in this study.
c) Officials or representatives from the entities involved in organising training on demand. The goal of this discussion group was to explore the main characteristics and uncertainties of training on demand.
Three members of the assessment team were present in all discussion groups, where they carried out the following tasks:
• 1 session facilitator, who introduced the subjects to be discussed, facilitated discussion between the participants and kept track of time;
• 1 assistant, who supervised the treatment of all subjects according to the schedule and, additionally, provided depth or nuances in different subjects; and
• 1 second assistant who took notes on the most significant comments made by the group participants.
Methodologically, all the groups opened with a brief description of the assessment project and a presentation of the members of the assessment team who were present. Next, the members of the discussion group introduced themselves to each other, and the group discussion then developed, varying according to the spoken contributions of the members of each group.
Each discussion group had between 8 and 12 participants, and the sessions lasted from 90 to 120 minutes.
It is worth pointing out that, in all cases, the participants in the discussion groups had previously been sent a diagram with the subjects that would be discussed, as per the Assessment Plan.
ii. In-depth interviews
An in-depth interview is a technique that allows customised visions of the subject of study to be obtained; it is characterised by a low level of formality and by the discussion of subjects through non-standardised questions. The development of an interview varies according to the flow of discourse with the interviewee. In this study, the in-depth interviews lasted between 60 and 120 minutes.
Based upon the indicators in our Assessment Plan, we designed six different types of in-depth interviews, emphasising different indicators depending on the interviewee:
a) Participants in training actions;
b) Participants in individual training permits;
c) Training managers from participating companies;
d) Training managers from non-participating companies;
e) Employees’ Legal Representatives (ELR);
In order to configure our intentional qualitative sample and select the specific companies in which to carry out the in-depth interviews, we also took company size into account as a criterion: the figures we analysed revealed that it is a relevant factor in participation in training on demand programmes. Based on this criterion, the composition of our sample of companies was the following:
• 1 large company in Madrid and 1 large company in Barcelona.
• 1 small company in Madrid and 2 small companies in Barcelona.
• 2 small companies that did not participate in training (during the study’s timeframe) in Madrid and 3 in Barcelona.
We chose Madrid and Barcelona as they are the two largest cities in Spain and those with the most entrepreneurial activity.
The specific companies and profiles to be interviewed were selected at random from the databases, and we contacted the participants by telephone. We chose the non-participant companies on the criterion that they had not taken part in training on demand in the years prior to this study.
Based on all these criteria the sample was composed of 26 interviewees.
The process we followed to design and validate each interview was the following:
I) designing each interview based on the indicators established in the Assessment Plan;
II) reviewing, modifying and confirming each of the questions related to each indicator;
III) changing specific items based on the information obtained in the discussion groups; and
IV) making changes to the interview with the Trustees based on specific input from the technical team of the entity that financed this study.
iii. Delphi Panel
The Delphi method is used to generate forecasts; this technique allows a group communication process to be structured in which various experts discuss a complex subject efficiently and effectively.
In this study the Delphi method was used in the shape of a virtual panel. In this panel, a group of experts in continuing training gave their opinion on a number of items, situations and problems related to training on demand (Assessment Plan). Based on their stances, they made proposals that could be implemented in the future to improve training initiatives.
The panel was developed in three rounds of questions to the group of experts. In the first round, we administered a questionnaire aimed at discovering the participants' opinions on different aspects of training on demand. In the second round, the participants presented their arguments to the whole group. In the third round, the experts assessed and prioritised the proposals that had emerged throughout the first two rounds and produced further proposals.
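The three-round cycle just described can be sketched as a simple loop. This is an illustrative sketch only: the expert functions, questionnaire and aggregation rule below are hypothetical stand-ins, not the panel's actual instruments.

```python
# Minimal sketch of an iterative Delphi cycle: each round, every expert
# answers in light of the facilitator's summary of the previous round,
# and proposals accumulate until the final prioritisation.
def run_delphi(experts, questionnaire, rounds=3):
    summary = None      # facilitator's summary fed back between rounds
    proposals = []
    for round_no in range(1, rounds + 1):
        answers = []
        for expert in experts:
            # Each expert responds to the questionnaire, seeing the
            # previous round's summary (None in the first round).
            answers.append(expert(questionnaire, summary, round_no))
        summary = aggregate(answers)            # facilitator summarises the round
        proposals.extend(summary["proposals"])  # proposals accumulate across rounds
    # After the last round, return the de-duplicated, ordered proposal list.
    return sorted(set(proposals))

def aggregate(answers):
    # Toy aggregation rule: pool every proposal mentioned by the experts.
    pooled = [proposal for answer in answers for proposal in answer]
    return {"proposals": pooled}

# Usage with two stand-in experts who each contribute one proposal per round.
expert_a = lambda q, s, r: [f"simplify paperwork (round {r})"]
expert_b = lambda q, s, r: [f"more funding for SMEs (round {r})"]
result = run_delphi([expert_a, expert_b], questionnaire="training on demand")
```

The point of the sketch is the feedback structure, not the aggregation rule: what distinguishes Delphi from a simple survey is that the summary of each round is returned to the experts before the next one.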
In order to organise the virtual Delphi panel sessions, we invited an initial group of 30 people, drawn from organisations involved in providing training and from experts in continuing training; our goal was to obtain between 10 and 15 participants. We made our requests to 20 organising bodies and 10 experts, the latter proposed by the TFET.
2.4. Data analysis
The variety of sources and instruments used to collect relevant qualitative information on the training on demand initiative during the periods we assessed required a systematic treatment that would allow us to process the information efficiently.
To do so, we established three specific perspectives related to the characteristics of the information we obtained. Each one of them responds to a specific subject of analysis and should be treated differently (Figure 2).
Figure 2. The qualitative data analysis process. Source: Authors.
In order to process the information we obtained systematically, we used the list of indicators contained in the Assessment Plan itself as a guideline for each perspective. This procedure allowed us two readings: one by informant typology (a vertical reading of the information) and the other by indicator (a horizontal reading of the information).
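The two readings amount to indexing the same coded excerpts along two axes. The following sketch illustrates the idea; the informant types, indicators and quotes are invented for illustration and do not come from the study's data.

```python
from collections import defaultdict

# Hypothetical coded excerpts: (informant_type, indicator, quote).
excerpts = [
    ("training_manager", "training_funding", "The credit system favours large firms."),
    ("worker_representative", "training_funding", "Small firms struggle to use their credit."),
    ("training_manager", "results_and_impact", "Satisfaction is the only result we rate."),
    ("itp_participant", "results_and_impact", "The permit improved my qualifications."),
]

by_informant = defaultdict(list)   # vertical reading: one informant type at a time
by_indicator = defaultdict(list)   # horizontal reading: one indicator across informants

for informant, indicator, quote in excerpts:
    by_informant[informant].append((indicator, quote))
    by_indicator[indicator].append((informant, quote))

# Vertical reading: everything one informant type said, indicator by indicator.
for indicator, quote in by_informant["training_manager"]:
    print(f"[{indicator}] {quote}")

# Horizontal reading: how every informant type views a single indicator.
for informant, quote in by_indicator["training_funding"]:
    print(f"[{informant}] {quote}")
```

The same excerpt thus appears in both structures, which is what allows contrasts both within one informant typology and across typologies for a single Assessment Plan indicator.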
In summary, this study has allowed us to identify the positive and negative factors at play in the Spanish system of funded vocational training. To do so, we have given a voice to different stakeholders in an intentional sample of companies of different sectors and sizes, from whom we obtained rich data using several qualitative instruments (discussion groups, in-depth interviews and a Delphi panel).
We now turn to the results.
The qualitative methodology has allowed us to access information that was difficult to obtain through purely quantitative means, and to compare these results to the results from the quantitative phase through triangulation, thus enriching our analysis of the subject of enquiry. The qualitative methodology has also allowed us to formulate proposals for improvement that are much closer to reality.
The qualitative information we collected was related to the following dimensions: scope and coverage of training; spread, visibility and access to information, and training impact and results. We will now present the most noteworthy results. For more information, see the full report (in Spanish, Pineda et al, 2010).
Regarding the factors that influence workers' participation in these training events, training managers (those responsible for human resources within the company) stressed the importance of the company recommending attendance. The workers' representatives, in contrast, related participation to career advancement opportunities and the improvement of skills. Participants in training, for their part, related it especially to people's receptiveness to training and their opinion of its usefulness, as well as to logistical questions such as schedules.
On the other hand, an important result was related to the role of different actors in the training design process. In this case, the training managers and supervisors played a central role in influencing the detection of training needs. In contrast, the workers did not identify the process of detecting needs as such, which shows that they do not see it as an important part of training towards which the company is making efforts.
Nevertheless, there are differences between ITP participants, who played a more important role in detecting needs, and CVT actions participants. The workers’ representatives played a passive role in this process.
Concerning the assessment of training, we obtained, on the one hand, an account of what is actually done in practice, which is basically the rating of satisfaction (this information being provided by the training managers), and, on the other hand, the experts' point of view, together with their recommendations on what is necessary to assess skill transfer and impact.
The information provided by different agents also allowed us to learn about some dysfunctions in the system, such as reimbursements of non-subsidised activities.
Furthermore, concerning the financing system, the qualitative results show how companies’ subsidy systems influenced participation, particularly due to the large gap between big companies (which receive more funding) and medium/small businesses (which receive less funding). When asked, the different agents all suggested that financing might not be balanced, and that the way it is managed can be a barrier towards participation.
Participants in this phase were also asked about the results in terms of learning, satisfaction, training transfer and impact; this allowed us to compare the results with those obtained in the quantitative phase. In particular, in what concerns those factors that may influence training transfer, the results we obtained in the qualitative phase helped explain the results obtained in the quantitative phase.
For instance, workers participating in CVT actions pointed out that training transfer improved when the training was carried out on the workers' own initiative, and that a lack of opportunities to apply the acquired skills in the workplace could act as a barrier to transfer. ITP participants, on the other hand, highlighted that the training was often not related to the company's needs, thus reducing transfer, and explained that their own business managers could act as an obstacle to it. The training managers related transfer to the training's suitability to the workplace, and the workers' representatives related it to a lack of time.
These are some of the most relevant highlights from the results in the qualitative phase that give an overview of the diverse opinions and roles carried out by the different agents in the training system we have evaluated, and that point out the need for a comprehensive approach to the qualitative reality.
We can state that the training on demand initiative favours lifelong training for employed workers, although its impact is not very high: most participating workers only carry out one training action per year, and this training is not accessible to all, with a sizeable part of the workforce outside the system. On the other hand, workers with a high level of training are also those who participate most often in training, thereby diluting the training’s capacity to balance inequalities.
Training on demand provides workers with professional skills required by companies, as it has mostly been tailored to suit said companies’ needs. On average, training generates learning and training transfer to the workplace, which proves that the workers acquire and apply the required competences. Nevertheless, the impact of learning on demand on workers is limited, as it does not improve their working conditions substantially in terms of contracts, promotions or pay.
The system of training on demand has improved gradually to allow access to all companies, especially medium and small businesses. Even though there has been some improvement, participation by medium and small businesses and, especially, micro-businesses, is still limited. One possible strategy to remedy this situation could be to make the system more flexible, adapting it to different types of businesses.
Results show that companies that plan their training, that is, that detect needs, lay out a training plan and assess results, tend to obtain better results from training. Currently, training plans depend on the companies themselves and the public administration does not carry out any actions to stimulate them or learn about them. It would be wise to promote training planning in companies in order to improve the effectiveness of training on demand.
The qualitative methodology used in this study has allowed us to make proposals for improving the system that are closer to reality since, in many cases, the actors themselves suggested ideas based on their real problems. In a study that intends to influence continuing training policies, such as this one, it is important that proposals be grounded in reality; if they are not, they can easily be forgotten. In that case the entire assessment would be meaningless, as any good assessment should support decision-making in order to carry out basic improvements and optimise the system, which in our case is continuing training in companies financed by state funds.
Annex 1. Assessment Plan. Source: Authors.
The ticks in the boxes show the source of information used for each indicator. For instance, the first indicator, 'coverage and reach in companies', was analysed with information coming from the interviews with the EFS and from the Delphi panel.
Note: EFS: Entity that Financed this Study.
Baldwin, T.T., & Ford, J.K. (1988). Transfer of training: A review and directions for future research. Personnel Psychology, 41(1): 63-105.
Barzucchetti, S., & Claude, J.F. (1995). Évaluation de la formation et performance de l’entreprise. Rueil-Malmaison: Editions Liaisons.
Brinkerhoff, R.O. (1988). An integrated evaluation model for HRD. Training and Development Journal, 42(2): 66-68.
Brinkerhoff, R.O. (1989). Achieving Results from Training. San Francisco: Jossey-Bass.
Brinkerhoff, R.O. (2003). The success case method: Find out quickly what’s working and what’s not. San Francisco: Berrett-Koehler.
Brinkerhoff, R.O. (2005). The success case method: a strategic evaluation approach to increasing the value and effect of training. Advances in Developing Human Resources, 7(1): 86-101.
Bushnell, D.S. (1990). Input, process, output: A model for evaluating training. Training and Development Journal, 42(3): 41-43.
Hamblin, A.C. (1974). Evaluation and control of training. London, England: McGraw-Hill.
Holton, E.F. III. (1996). The flawed four-level evaluation model. Human Resource Development Quarterly, 7: 5-21.
Ibáñez, J. (1986). Más allá de la sociología. El grupo de discusión: técnica y crítica. Madrid: Siglo XXI.
Kaufman, R. & Keller, J.M. (1994). Levels of evaluation: Beyond Kirkpatrick. Human Resource Development Quarterly, 5: 371-380.
Kirkpatrick, D.L. (1999). Evaluación de acciones formativas. Los cuatro niveles [Evaluating Training programs. The four levels.]. Barcelona: EPISE.
Kraiger, K. (2002). Decision-based evaluation. In A.K. Kraiger (Eds.), Improving training effectiveness in work organizations (pp.291-322). Mahwah, NJ: Lawrence Erlbaum.
Ley Orgánica 1/1990, de 3 de octubre, de Ordenación General del Sistema Educativo (BOE, 24-10-1990).
Ley Orgánica 5/2002, de 19 de junio, de las Cualificaciones y de la Formación Profesional (BOE, 20-6-2002).
Ley Orgánica 2/2006, de 3 de mayo, de Educación (BOE, 4-5-2006).
Leung, A. (2006). A conceptual model of information technology training leading to better outcomes. [Electronic version]. International Journal of Business and Information, 1(1): 74-95.
Meignant, A. (1997). Manager la formation (4th ed.). Rueil-Malmaison: Editions Liaisons.
OIT. (2008). Resoluciones adoptadas por la Conferencia Internacional del Trabajo en su 97.ª reunión. Retrieved from http://www.ilo.org/wcmsp5/groups/public/—ed_norm/—relconf/documents/meetingdocument/wcms_098020.pdf Accessed: 20/5/13.
Orden TAS/2307/2007, de 27 de julio, por la que se desarrolla parcialmente el Real Decreto 395/2007, de 23 de marzo, por el que se regula el subsistema de formación profesional para el empleo en materia de formación de demanda y su financiación, y se crea el correspondiente sistema telemático, así como los ficheros de datos personales de titularidad del Servicio Público de Empleo Estatal (BOE, 31-7-2007).
Phillips, J.J. (1991). Handbook of training evaluation and measurement methods (2nd ed.). Houston: Gulf Publishing.
Phillips, J.J. (1997). Handbook of training evaluation and measurement methods (3rd ed.). Boston: Butterworth-Heinemann.
Pineda, P. (2000). La evaluación de la formación en las organizaciones: Situación y perspectiva. Revista Española de Pedagogía, 216: 291-312.
Pineda, P. (2002). Gestión de la formación en las organizaciones. Barcelona: Ariel.
Pineda, P. & Sarramona, J. (2006). El nuevo modelo de formación continua en España: Balance de un año de cambios. Revista de Educación, 341: 705-736.
Pineda, P., et al. (2010). Evaluación de la iniciativa de formación de demanda. Fundación Tripartita para la Formación en el Empleo. Retrieved from www.fundaciontripartita.org
Preskill, H.S. & Torres, R. (1999). Evaluative Inquiry for Learning Organizations. Thousand Oaks, CA: Sage.
Real Decreto 1046/2003, de 1 de agosto, por el que se regula el subsistema de formación profesional continua (BOE, 12-9-2003).
Real Decreto 395/2007, de 23 de marzo, por el que se regula el subsistema de formación profesional para el empleo (BOE, 11-4-2007).
Resolución de 16 de diciembre de 1992, por la que se establece el I Acuerdo Nacional para la Formación Continua (BOE, 10-3-1993).
Resolución de 14 de enero de 1997, de la Dirección General de Trabajo y Migraciones, por la que se dispone la inscripción en el Registro y posterior publicación del texto del II Acuerdo Nacional de Formación Continua (BOE, 1-2-1997).
Resolución de 1 de febrero de 2001, de la Subsecretaría, por la que se da publicidad al III Acuerdo Tripartito sobre Formación Continua (BOE, 15-2-2001).
Resolución de 3 de marzo de 2006, de la Dirección General de Trabajo, por la que se dispone la inscripción en el registro y publicación del IV Acuerdo Nacional de Formación (BOE, 27-3-2006).
Russ-Eft, D. & Preskill, H. (2005). In search of the holy grail: Return on investment evaluation in human resource development. Advances in Developing Human Resources, 7(1): 71-85.
UE. (2008). Recomendación del Parlamento Europeo y del Consejo, de 23 de abril de 2008, relativa a la creación del Marco Europeo de Cualificaciones para el aprendizaje permanente. Diario Oficial de la Unión Europea. Retrieved from http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:C:2008:111:0001:0007:ES:PDF Accessed: 20/10/13.
Waagen, A. (1998). Fundamentos de la evaluación. Barcelona: EPISE.
Wade, P.A. (1994). Measuring the impact of training. London: Kogan Page.
Warr, P.B., Allan, C. & Birdi, K. (1999). Predicting three levels of training outcome. Journal of Occupational and Organizational Psychology, 72: 351-375.
Yamnill, S. & McLean, G.N. (2001). Theories supporting transfer of training. Human Resource Development Quarterly, 12(2): 195-208.