A rider's 'vital signs' report?

Post reporter Ann Scott Tyson notes in her Wednesday story on frustration over Metro escalators: "The performance of Metro escalators has continued to fall, according to a Metro 'vital signs' report issued this month. Escalator availability dropped to 89.6 percent, compared with the Metro goal of 93 percent." Let's look at the report and what it says about Metro riders' experience.

Interim General Manager Richard Sarles created the monthly performance scorecards to increase Metro's transparency and accountability. This is certainly a step in the right direction toward two goals we all applaud. And Sarles, knowing he had a limited time to accomplish anything before Metro names a permanent general manager, did a good thing by establishing the report. But it's worth noting the report's limitations as a measure of the riders' daily experience.

What's in the reports
They show you what Metro management views as top priorities and allow the riders to track changes in performance on those priorities from month to month. See the scorecard summary. Here are some highlights from the report presented at the Metro board's July 8 meeting:

Metrorail. From April to May, Metrorail's average on-time performance improved to 91 percent. The target is 95 percent. Metrorail defines "on time" during rush hour as a station arrival within two minutes of the scheduled headway between trains.
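As a rough illustration of how that definition might turn into a percentage, here is a minimal sketch; the scheduled headway, the sample gaps and the exact tolerance rule are assumptions for illustration, not Metro's actual method.

```python
# Minimal sketch: turn observed gaps between trains at a station into an
# on-time rate. The numbers and the tolerance rule are illustrative assumptions.

scheduled_headway = 6                      # scheduled minutes between rush-hour trains
observed_gaps = [6, 7, 9, 5, 6, 12, 6]     # minutes actually observed between arrivals

# Count a gap as "on time" if it falls within two minutes of the scheduled headway.
on_time = sum(1 for gap in observed_gaps if abs(gap - scheduled_headway) <= 2)

on_time_rate = 100 * on_time / len(observed_gaps)
print(f"On-time performance: {on_time_rate:.0f}%")   # 71% for this made-up sample
```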

A Metro chart shows that on-time performance for the individual lines tends to be fairly similar. On-time performance on the Blue Line tends to run slightly lower than on the other lines, and that was true in May. On-time performance on the Red Line was way off last summer but was matching that of the other lines by October, according to the Metro stats.

Metrobus. Three out of four Metrobuses were on schedule in May. Metro's target is to have eight of 10 buses on schedule. Metrobus defines "on schedule" as anything from two minutes early to seven minutes late. On-time performance is trending about as it was last year.

Escalators/elevators. Escalator and elevator availability declined slightly in May. In one of the scorecard's more complex calculations, Metro says that an average of 527 out of 588 escalators were in operation systemwide in May (about 89.6 percent, the availability figure cited above), compared with an average of 528 units available in April.

Metro describes this summer as a time of transition as it makes changes in how it does maintenance. As of July 1, Metro has taken over maintenance of all escalators and elevators, and has taken these further steps:

- This month, rapid response teams will begin to focus on repairs to escalators and elevators that have special reliability problems and are heavily used.
- Five maintenance employees who recently received certification as master technicians will focus on inspections to identify maintenance issues that could lead to breakdowns.
- Metro will use an audit program to review the quality of maintenance work and to require employee retraining when needed.

Limits of the overview
From a rider's point of view, there's the obvious problem with the transit authority defining what it wants to report on and then judging itself on those criteria. But there are other issues. These broad measures can capture only so much about the daily experience. Plus, we've moved on since May. One of the biggest complaints coming to us from riders is about the heat on some rail cars in June and July.

Also, the scorecard doesn't have a way of measuring the impact of persistent problems, such as the months-long experience riders have had with the two out-of-service escalators between the platform and the mezzanine at Bethesda. Nor will it reflect the intensity of a bad experience such as Monday's debacle at Dupont Circle, or track how many problems one rider might encounter in a single day.

How would you develop a scorecard based on rider impact? What measures would you use and how would you rate Metro's performance on those measures as of this summer?


By Robert Thomson  | July 14, 2010; 11:38 AM ET
Categories:  Metro  | Tags:  Dr. Gridlock, Metrobus, Metrorail  

Comments

Thanks for posting this. My impression is that Metro excludes from its analysis the complaints that are phoned or e-mailed in to them.
I think Metro needs to publicize these data to see what the most frequent complaints are about (for example, drivers using a cell phone while driving) and to monitor how these complaints change over time. This would be another way to measure improvements or things getting worse.
The phone calls are (presumably) documented as they are received, which is much better than relying on the survey results, because the survey results require people to rely on their overall memory and impressions of incidents from the past six months or year. When this is done, people misremember or forget things, whereas the complaints that are phoned or e-mailed in would rely on actual data instead of overall impressions.

Posted by: informedtraveller | July 14, 2010 1:02 PM | Report abuse

From Dr. Gridlock: informedtraveller, you're raising some great issues about using Frequently Made Complaints to measure service.

Metro does quarterly measures of Customer Satisfaction. They are small samples with results ranging from Extremely Satisfied to Extremely Dissatisfied. The most recent one, for the three months ending in March, shows a 73 percent satisfaction rate among Metrorail riders and a 77 percent rate among Metrobus riders. (I recall The Post's own survey showing quite high rates of satisfaction.)

Rates of complaints would be quite different and quite specific. And Metro would have a chance to check them out. So the result made available to the public need not be how many people thought the Red Line was too hot last month, but rather how many air conditioning units on the Red Line were busted last month.

I wonder if it would be possible to overcome the problems of anecdotal reporting. Some people complain a lot and others not enough. For example, I think the air conditioner complaints would go under-reported. Not enough people complain directly to Metro about hot cars.

Posted by: rtthomson1 | July 14, 2010 1:40 PM | Report abuse

Dr. Gridlock: At the very least, why doesn't Metro report statistics on the number of each type of complaint received via their online "Metro Customer Comment Form"? They go to the trouble of making the user specify the 'incident type,' yet I don't think I've ever seen any results from this information.

Posted by: mika_england | July 14, 2010 1:55 PM | Report abuse

From Dr. Gridlock: mika_england, that's not a bad idea about summarizing the online customer complaints. The Metro online form does allow for some fairly specific categories of complaints.

Still, to me, the most useful information on complaints would be refined to the point where we could at least tell if it was something Metro could fix. I'd at least want to filter out the vague ones that amount to "Metro, you stink." And ideally, it would be refined to major complaint categories that Metro could act on, like the broken air conditioners.

Otherwise, I think, we'd wind up looking at another survey of rider opinions about Metro conditions.

Posted by: rtthomson1 | July 14, 2010 2:47 PM | Report abuse

I wonder what "escalator availability" would look like if it included the escalators that are "secretly" out of service. For every escalator outage reported on the Metro website, there are 2-4 unacknowledged outages.

Posted by: jiji1 | July 14, 2010 3:47 PM | Report abuse

Metrorail riders want *reliability*. And Metro can measure that - it has data about every trip that riders take (start time and station, end time and station). It knows how long a trip should take *if* scheduled trains ran on time, and had enough space for everyone. So it can calculate percentage delays. For example, say that the average trip on Thursday, July 15, should have taken 20 minutes, and actually took 26. Then for scorecard purposes, the actual average travel time was 130% of scheduled trip time.

Posted by: JohnBroughton | July 15, 2010 5:13 PM | Report abuse
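A minimal sketch of the calculation JohnBroughton describes, using made-up trip records; the data layout and the numbers are assumptions for illustration, not Metro's actual trip data.

```python
# Sketch: actual average travel time as a percentage of scheduled travel time.
# Each record is (scheduled_minutes, actual_minutes) for one rider trip;
# all values here are invented for illustration.

trips = [
    (20, 26),
    (15, 15),
    (30, 33),
    (10, 14),
]

scheduled_total = sum(scheduled for scheduled, _ in trips)
actual_total = sum(actual for _, actual in trips)

# A result of 130 would mean trips took, on average, 30 percent longer than scheduled.
ratio = 100 * actual_total / scheduled_total
print(f"Actual travel time was {ratio:.0f}% of scheduled time")   # 117% for this sample
```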

The comments to this entry are closed.

 
 