Transport Performance and the Data Clubs Approach
Richard Anderson
ESRC International Public Service Rankings
13th December 2005
Presentation structure
- Introduction and history of public transport benchmarking
- The CoMET, Nova and Bus Benchmarking methodology and framework
- Key Performance Indicator development and use
- Keys to successful benchmarking
Imperial College London Benchmarking
History of benchmarking at Imperial College London:
- 1982 - Hamburg/London productivity comparisons
- 1994 - Group of Five heavy metros formed
- 1996 - Community of Metros (CoMET) founded
- 1998 - Success of CoMET leads to formation of the Nova Group for small to medium-sized metros (now in its eighth annual phase)
- 2000 - National railway benchmarking (Germany, Italy, Spain)
- 2003 - UK Strategic Rail Authority benchmarking
- 2004 - Bus Benchmarking Group formed; Phase 1: August 2004 to July 2005
- 2005 - Bus Benchmarking Group Phase 2: July 2005 onwards
Which metros participate in Nova and CoMET?
Mexico City, Toronto, Montreal, New York, Glasgow, Newcastle, Dublin, Berlin, London, Moscow, Paris, Lisbon, Naples, Madrid, Shanghai, Tokyo, Taipei, Hong Kong (MTRC, KCRC), Singapore, Santiago, Rio de Janeiro, São Paulo, Buenos Aires
[Map: participants marked as Nova metros or CoMET metros]
Clear purpose of benchmarking groups has led to their success
- Benchmarking is not merely a comparison of data or the creation of league tables
- Essential to deliver benefits to all participants
- A forum to share experiences and exchange information
- Stimulates productive questions / identifies lines of inquiry
- Identifies best practices in operations and management
- Focus is on implementable results, performance improvement and strategy
- Information to support dialogue with government, regulators and other stakeholders
Outline of the benchmarking process
- Benchmarking group owned and run by the participants
- Project management, administration and analysis carried out by Imperial College London
- Annual cycle: a long-term approach to benchmarking
- Presidency of the group rotates annually
- Confidentiality agreement allows full data and information exchange within the group but not externally, overcoming political sensitivity
- Complementary to, and supported by, industry organisations
How does the benchmarking process work?
[Diagram: Key Performance Indicators covering efficiency, asset utilisation, financial effectiveness, safety, service quality and reliability; gap analysis rating the importance of information across metros; case studies (asset management, driver productivity, fare regulation); expert groups; formal and informal networking; leading to best practice implementation]
A system of metro Key Performance Indicators was developed, covering ALL dimensions of success

Background:
- B1 Network size and passenger volumes
- B2 Operated capacity km and passenger journeys
- B3 Car km and network route km

Asset utilisation:
- A1 Capacity km / route km
- A2 Passenger km / capacity km
- A3 Passenger journeys / station
- A4 Proportion of cars used in peak hour

Reliability / service quality:
- R1 Revenue operating car km between incidents
- R2 Car hours between incidents
- R3 Car hours / hour of train delay
- R4 Passenger hours of delay / passenger journeys
- R6 Passenger journeys on time / total passenger journeys
- R7 Trains on time / total trains

Efficiency:
- E1 Passenger journeys / total staff + contractor hours
- E2 Revenue car km / total staff + contractor hours
- E3 Revenue capacity km / total staff + contractor hours
- E4 Number of scheduled trains per year / driver

Financial:
- F1 Total commercial revenue / operating cost
- F2 Total cost / car km
- F3 Service operations cost & staff hours / car km
- F4 Maintenance cost & staff hours / car km
- F5 Administrative cost & staff hours / car km
- F6 Investment cost / car km
- F7 Total cost / passenger journey
- F8 Operations cost / passenger journey
- F9 Fare revenue / passenger journey
- F10 Average operating cost / station

Safety:
- S1 Total fatalities / total passenger journeys
- S2 Suicides / total passenger journeys
- S3 Medical conditions / total passenger journeys
- S4 Illegal activity / total passenger journeys
- S5 Accidents / total passenger journeys
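Most of the indicators above are simple ratios of raw annual totals. A minimal sketch of how such a KPI system might be computed, using invented figures for a single hypothetical metro (all numbers and field names below are illustrative assumptions, not CoMET/Nova data):

```python
# Illustrative raw annual totals for one hypothetical metro
# (invented numbers for demonstration only, not CoMET/Nova data).
raw = {
    "passenger_journeys": 950_000_000,
    "revenue_car_km": 180_000_000,
    "capacity_km": 25_000_000_000,
    "route_km": 210,
    "staff_and_contractor_hours": 45_000_000,
    "operating_cost_usd": 1_100_000_000,
    "commercial_revenue_usd": 1_250_000_000,
}

# Each KPI is a ratio of two harmonised raw measures, as in the system above.
kpis = {
    "A1 capacity km / route km": raw["capacity_km"] / raw["route_km"],
    "E1 journeys / staff+contractor hour": raw["passenger_journeys"] / raw["staff_and_contractor_hours"],
    "E2 revenue car km / staff+contractor hour": raw["revenue_car_km"] / raw["staff_and_contractor_hours"],
    "F1 commercial revenue / operating cost": raw["commercial_revenue_usd"] / raw["operating_cost_usd"],
    "F7 total operating cost / passenger journey": raw["operating_cost_usd"] / raw["passenger_journeys"],
}

for name, value in kpis.items():
    print(f"{name}: {value:,.2f}")
```

The arithmetic is trivial; the hard part, as the development process in the following slides stresses, is harmonising the definitions of the raw measures so that the same ratio means the same thing at every operator.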
Bus Benchmarking KPI development process - a long-term approach to benchmarking
[Diagram: inputs (existing indicators measured by group members; identification of benchmarking needs; experience from other benchmarking; balanced scorecard framework; international standards and guidelines) feed a draft KPI system; consultation with group members leads to the final KPI system]
- Development process: 2 to 3 years
- Significant effort in harmonisation of definitions
Principles of KPI systems
- Comprehensive and yet concise
- Internally consistent
- Externally relevant for benchmarking purposes
- Statistically reliable, with appropriate tolerances
- Based on identified data sources
- Well-structured, with the flexibility to change and evolve over time
- Benefits of measurement greater than the costs
Purpose and use of KPI comparisons
Structured KPI comparisons are used for:
- Direct comparisons: better understand differences between operators
- Internal motivation: set targets for improvement
- Identifying high-priority problems
- External use with stakeholders (when anonymised)
- Asking why performance is high, low or improving
- Stimulating productive questions / identifying lines of inquiry
- Building a time-series database: monitoring performance over time
- Supporting the pursuit of best practices
Use and Analysis of KPI Data
- Need to understand exogenous effects on performance and the limitations of direct comparisons
- KPIs can be normalised to control for external influences (e.g. wages, traffic speeds)
- Statistical analyses can be used to provide a greater understanding of results: regression analysis; non-parametric methods, e.g. Data Envelopment Analysis (DEA)
[Charts: car km between incidents causing delay, metros A-M, 1994-2002; radar chart comparing a metro against the group best and median on efficiency, capacity utilisation, cost, network utilisation, reliability and service quality]
The different sizes of the metros mean some form of normalisation is required, e.g. output per passenger journey or per car kilometre
[Charts: network length (km) and billion passenger journeys for each metro, grouped by region (Eu European, NA North American, SA South American, As Asian)]
Metro KPI reliability indicator: high variations
[Chart: car km (thousands) between incidents causing a delay of 5 minutes or more, 2004; values range up to 1,508; metros grouped by region. Key: Eu European, NA North American, SA South American, As Asian]
Time series data allows trends to be identified: who is implementing good practices, and what improvement is realistically achievable?
[Chart: car km ('000) between incidents causing delay of 5 minutes or more, by metro, 1994-2003]
Use of Purchasing Power Parity for comparison of international financial data eliminates market exchange rate volatility
[Chart: operating cost per passenger journey (USD, PPP, 2003) compared with 2003 market exchange rates, metros grouped by region. Key: Eu European, NA North American, SA South American, As Asian]
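The PPP adjustment is a different choice of divisor, shown here with invented figures (the metro names, local-currency costs, exchange rates and PPP factors are all illustrative assumptions): dividing by a PPP conversion factor instead of the market exchange rate expresses costs in USD of equivalent local purchasing power.

```python
# Invented figures: operating cost per passenger journey in local currency,
# with a market exchange rate and a PPP conversion factor, both expressed
# as local currency units per USD.
metros = {
    "Metro X": {"cost_local": 1.80, "market_fx": 1.10, "ppp": 0.95},
    "Metro Y": {"cost_local": 95.0, "market_fx": 110.0, "ppp": 75.0},
}

# Dividing by the PPP factor rather than the market rate removes
# exchange-rate volatility from the international comparison.
results = {}
for name, d in metros.items():
    results[name] = {
        "usd_market": d["cost_local"] / d["market_fx"],
        "usd_ppp": d["cost_local"] / d["ppp"],
    }
    print(f"{name}: {results[name]['usd_market']:.2f} USD at market rates, "
          f"{results[name]['usd_ppp']:.2f} USD at PPP")
```

For an economy whose currency is undervalued at market rates, the PPP figure is higher than the market-rate figure, so cross-country cost gaps are narrower and more stable than raw exchange-rate conversion suggests.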
Benefits from Benchmarking Case Studies (Metros)
- Driver productivity improvements by Singapore SMRT: reorganisation of drivers' shifts; 10% saved so far
- Station management rationalisation by Hong Kong MTR: 12% reduction in station staff in 2001
- Controlling fare evasion in Montreal: used to justify penalty fares to the local media
- Line capacity study: station stop times improved, giving a 6% increase in capacity on the Victoria Line
- Metros' use with government: London Underground to justify performance to the Mayor of London; Hong Kong MTR to argue for fare autonomy (2000, 2004)
A successful approach to benchmarking: setting expectations
- One-off benchmarking studies are rarely successful
- A long-term annual process helps achieve comparability, confidence and tangible benefits
- Achieving full comparability (KPIs) will take time
- Necessary to provide value to all participants
- Benchmarking has its limits
A successful approach to benchmarking
- The user steers the process
- Focus on implementable results: identify best practice / provide insights which add value; not overly theoretical
- Achieve confidence in comparability: confidentiality agreement; active involvement from members with supportive information and systems
- Supported and understood by top-level management
- Indicators used to stimulate productive questions and identify lines of enquiry
Contact Details
For further details please contact:
Richard Anderson
Managing Associate
Railway Technology Strategy Centre
Imperial College London, SW7 2BU
Tel: +44 20 7594 6092
Fax: +44 20 7594 6107
Email: richard.anderson@imperial.ac.uk