IRELAND 16 July 2018
Special Report 71: Driver Testing in the Road Safety Authority Summary of Findings
Long waiting lists and extended waiting times have been a feature of the Driver Testing Service for a considerable period of time. The driver testing system was operated by the Department of Transport (the Department) between 2002 and 2006. In September 2006, following the establishment of the Road Safety Authority (RSA), responsibility for driver testing transferred to that agency, subject to an oversight role by the Department.
An external contractor was employed between 2006 and 2009 to augment the service delivered by directly employed testers.
The examination focused on
- management by the RSA of the contractual arrangements for the provision of outsourced tests and
- the performance of the Driver Testing Service and the systems put in place by the RSA to evaluate testing performance and facilitate oversight by the Department.
The examination extended only to the systems, practices and procedures employed by the RSA to manage the delivery of testing during the period under review and, consequently, did not extend to the internal procedures of the contractor.
The RSA target is to deliver tests to all applicants within ten weeks of application. The timeliness of the provision of tests has improved radically. Although applications for driving tests doubled between 2005 and 2008, the RSA succeeded in dealing with the surge in applications as well as the historical backlogs. It increased the number of tests delivered by conducting a greater proportion of tests on overtime and by using an external contractor to deliver a large number of tests. By the end of 2008, the numbers on the waiting list and the consequential waiting times had decreased significantly: average waiting time had fallen to 8.6 weeks, and no candidate had been waiting longer than eleven weeks.
While outsourced test provision was a key element in delivering increased output, a supervision exercise conducted by the RSA at the beginning of 2007 to assess the quality of the outsourced tests identified potential problems with the outcome and marking of tests. In the case of over 2,000 contract tests supervised in the period between October 2007 and the end of 2008, the RSA supervisor and the contract tester disagreed on the test result in 7% of cases. Overall, the RSA invested significant additional resources by way of supervision and training of contract testers. Over the same period, but in a smaller number of supervised tests, where RSA supervisors supervised RSA testers there was no disagreement on test results.
A positive feature of the contract arrangement was the opportunity it offered to compare and, where appropriate, embrace practices and systems operated by the contractor. This has fed into the design of ICT systems which, when fully implemented at the end of 2009, should result in more efficient administrative processes and significantly improved management information.
The development and reporting of a core set of indicators covering all aspects of performance would be an important step in facilitating the evaluation of efficiency and effectiveness of the Driver Testing Service.
The RSA did not have a well-developed management information system. Measures and indicators such as the unit cost of tests delivered, the output of testers, customer satisfaction levels and results of supervised tests were not generated for the years 2007 and 2008. The introduction by the RSA of a new ICT system should facilitate the production of meaningful management information.
Driver Testing Output
The number of potential driving tests is a function of the number of testers employed and the number of tests delivered by each tester on an annual basis. A target of 1,550 tests per tester per year had been adopted when the Driver Testing Service was operated by the Department (2002 to 2006). However, the RSA has not adopted a target for tester output and does not monitor this aspect of performance. Ongoing monitoring of output would help the RSA to better manage capacity and identify opportunities to increase efficiency. Figures compiled for the purpose of this examination showed that output per tester increased in 2006, coming close to the target of 1,550 tests per tester per year, but had fallen off in 2007 and 2008.
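The output measure described above is simple division of tests delivered by testers employed, compared against the former departmental target. A minimal sketch of that calculation, using illustrative figures rather than the report's actual data, might be:

```python
# Illustrative sketch of output-per-tester monitoring.
# The tests/testers figures below are hypothetical, not from the report;
# only the 1,550 target is taken from the text.
TARGET_TESTS_PER_TESTER = 1_550

def output_per_tester(tests_delivered: int, testers_employed: int) -> float:
    """Average number of tests delivered per tester in a year."""
    return tests_delivered / testers_employed

# Hypothetical example: 150,000 tests delivered by 110 testers.
avg = output_per_tester(150_000, 110)
shortfall = TARGET_TESTS_PER_TESTER - avg
print(f"Average output: {avg:.0f} tests per tester")
print(f"Shortfall against target: {shortfall:.0f} tests per tester")
```

Tracking this figure year on year, as the sketch suggests, would let the RSA see the 2006 improvement and the 2007–2008 fall-off as they occurred.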
Consistency of Driver Testing
Analysis of pass rates in the course of the examination found a high level of variation between pass rates in tests conducted by RSA testers and those in tests conducted by the contractor. The national average pass rate for drivers tested by RSA testers was 49%, compared with 62% for contract testers.
There was considerable variation in pass rates at test centres. Average pass rates by centre varied from 39% to 60% across RSA test centres and from 51% to 77% across contractor centres. While variation between centres can arise from the profile of candidates presenting for testing, it is important that the RSA validates the pass rates occurring at centres. The RSA had not conducted such validation procedures.
There was also considerable variation between results determined by individual testers, which ranged from average pass rates of 23% to 69% for tests conducted by RSA testers and from 37% to 83% for contract tests. 33 of the 50 RSA test centres had testers whose pass rate varied by more than ten percentage points from the average pass rate at the centre, compared with 40 of the 50 contractor's test centres. Review of individual tester results showed a pattern where some testers consistently passed or failed significantly more candidates than other testers operating from the same centres.
Overall, this pattern of results has implications for the consistency of the tests conducted by the RSA.
The RSA could make better use of its existing management information and, by improving its analysis and performance reporting, use that information to identify differences in driver testing results. It would be particularly important to compare results across test routes and analyse the pattern of faults observed by individual testers, which could inform any follow up action required including increased supervision and training of individual testers.
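The analysis described above, comparing each tester's pass rate with the average at their centre, is straightforward to automate. A minimal sketch, using hypothetical tester names and pass rates rather than the report's data, might be:

```python
# Illustrative sketch: flag testers whose pass rate deviates from their
# centre's average by more than a threshold (in percentage points).
# Tester names and rates below are hypothetical, not from the report.
from statistics import mean

def flag_outliers(tester_pass_rates: dict[str, float],
                  threshold: float = 10.0) -> list[str]:
    """Return testers whose pass rate differs from the centre average
    by more than `threshold` percentage points."""
    centre_avg = mean(tester_pass_rates.values())
    return [tester for tester, rate in tester_pass_rates.items()
            if abs(rate - centre_avg) > threshold]

centre = {"tester_A": 23.0, "tester_B": 48.0,
          "tester_C": 69.0, "tester_D": 52.0}
print(flag_outliers(centre))  # → ['tester_A', 'tester_C']
```

Routine reports of this kind could direct the increased supervision and training of individual testers that the examination recommends.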
The RSA policy on supervision of testers is that each RSA tester would have at least one supervised test per month and that at least 125 contract tests would be supervised in a month. The required level of supervision of RSA testers has not been achieved in recent years.
Driver tester training was low in 2007 as the RSA concentrated its efforts on tackling waiting lists and reducing waiting times. This was addressed towards the end of 2008 and early in 2009 when intensive refresher training was provided to all RSA driver testers.
Some recent measures taken by the RSA that have the potential to address the variations in pass rates include publication of standard procedures and guidelines in relation to the conduct of the test, publication of the up-to-date Rules of the Road and the introduction of an Approved Driving Instructor scheme.
The RSA does not monitor the unit cost of each test. The unit cost was estimated at €88 per test for the purpose of this examination.
The driving test fee remained unchanged at €38 per test from 1992 until April 2009, when it was increased to €75. In 2008, €42 million was incurred in conducting 470,000 tests. The revenue associated with the tests delivered was €18.2 million, resulting in a requirement for significant Exchequer funding to subsidise the difference.
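The subsidy arithmetic above is simple division and subtraction. As a sketch using the 2008 headline figures quoted in the report (note that straight division of these rounded figures gives a unit cost slightly above the €88 estimated for the examination, which will have been derived on a more precise cost base):

```python
# Sketch of the 2008 cost and subsidy arithmetic, using the headline
# figures quoted in the report.
total_cost = 42_000_000    # € incurred in conducting tests in 2008
tests_delivered = 470_000  # tests conducted in 2008
revenue = 18_200_000       # € fee income from tests delivered

unit_cost = total_cost / tests_delivered   # ≈ €89 per test
subsidy = total_cost - revenue             # Exchequer funding required

print(f"Unit cost: €{unit_cost:.2f} per test")
print(f"Exchequer subsidy required: €{subsidy:,}")
```

The gap of roughly €23.8 million between cost and fee income is the "significant Exchequer funding" the report refers to; the April 2009 fee increase to €75 would narrow, but not close, that gap at 2008 volumes.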
Ongoing monitoring of the unit cost of tests delivered would be useful in gauging overall efficiency and tracking the difference between the fee levels and cost, as well as providing a comparator with other jurisdictions and with the cost of outsourced tests.
The Department has responsibility for oversight of the RSA. While the Road Safety Authority Act, 2006, provided for the development of a service agreement between the Department and the RSA, this has not been done.
The examination concluded that the RSA succeeded in reducing waiting lists and waiting times for driving tests. However, divergence in pass rates generally, and the extent of variation between the pass levels awarded by individual testers and the average rates for the centres from which they operated, raised concerns in relation to the consistency of testing. The generation and use of relevant management information could help focus a continuous quality improvement drive.
Overall, in any such drive, the challenge is to maintain a service that conducts professional, independent testing of drivers in each individual case, while at the same time pursuing consistency of testing.