Posted: Wed Aug 25, 2010 12:08 am Post subject: Acceptable Number Of Emergency RFCs?
We all know that emergency/late-notice RFCs are a bad thing, and the acceptable proportion will vary with your industry and environment. But generally speaking, is there a percentage of all changes above which alarm bells would start ringing? How high would your emergency change percentage have to go before it caused alarm or an OLA breach?
In a financial or nuclear environment the OLA might allow no more than 2% of changes to be emergency RFCs, while in a fast-changing, consumer-led market a figure of 20% might be acceptable. Continual improvement should drive down the number of emergency changes, and eventually an acceptable figure will emerge. My environment has to be responsive to markets and business need, and sits somewhere between these two figures.
I'd be interested to understand how people have set their acceptable emergency RFCs thresholds.
Joined: Oct 26, 2007 Posts: 295 Location: Calgary, Canada
Posted: Wed Aug 25, 2010 1:45 am Post subject:
Hmm... it's a good question, I think. First, though, I'd like to point out that there is a difference, or at least there should be, between truly emergency RFCs and late-notice ones (you know, the "failure to plan on your part is not a problem of mine" sort of thing).
Anyway... I can share what we did with one company that used to struggle with a high volume of emergency changes. It was a long process, because many other areas had to be involved, not just the change management process.
We looked at all the emergency change tickets for the previous two years to get an idea of frequency. With some investigative work we figured out how much it had cost the company to manage those emergency changes (overtime, off-hours work, equipment replacement, third-party consulting, etc.). The figure was fairly high.

Through further investigation we identified changes that were caused by poor training, improper execution of procedures, circumventing the process, and so on: basically, all the preventable ones. At that point we arrived at a number that we used as a benchmark for the following year, with improvements implemented in the trouble areas noted above. That gave us a breach number to monitor at the very least, though it should be reviewed every six months. I don't know how the company has progressed or whether they are holding to that benchmark (they're no longer our client), but that's the approach we took.
So to answer your question: it's probably hard to pick a number from somewhere else, because you wouldn't know whether it applies to your organization or not. It's best to do your own investigation, come to an agreed starting point, and adjust from there.
Joined: Mar 04, 2008 Posts: 1884 Location: Newcastle-under-Lyme
Posted: Wed Aug 25, 2010 1:50 am Post subject:
I would think that the acceptable figure would be at the point where the cost of reducing it outweighs the cost of tolerating it. I don't think dealing in percentages has any value here.
I don't fully understand the reference to OLA. Changes can come from a wide range of sources, many of whom will not be party to an OLA (e.g. customers, service management).
How does an OLA make meaningful commitments about emergency/late changes? Is it that your internal units don't raise change requests until after they have done the preparatory/development work? If so, then that is your issue: work done prior to change approval. _________________ "Method goes far to prevent trouble in business: for it makes the task easy, hinders confusion, saves abundance of time, and instructs those that have business depending, both what to do and what to hope."
William Penn 1644-1718
Define what you regard as a 'True' Emergency Change:
e.g. as a result of either Service Loss, Imminent Service Loss, Possible Service Loss, Unacceptable Service Degradation, Security Vulnerability.
Then measure the 'true' value of these over the last 12 months and work like hell to reduce it. I have seen forum posters say that 0% is the only acceptable level of emergency change, but in my opinion that is not practical, as you will always have CI failures, security advisories, etc.
As a personal guide: if you are over 15% I'd worry and not sleep much; over 10% I'd worry; under 10% there is still room for improvement; under 5% keep trying to reduce it, but it will get harder to do.
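For what it's worth, the personal guide above can be sketched as a small helper. This is just an illustration of those thresholds, not an ITIL standard; the function name and boundary handling are my own assumptions:

```python
def emergency_change_concern(emergency: int, total: int) -> str:
    """Map an emergency-change count to a rough concern level, using the
    personal thresholds suggested above (over 15%, over 10%, 5-10%, under 5%)."""
    if total <= 0:
        raise ValueError("total changes must be positive")
    pct = 100.0 * emergency / total
    if pct > 15:
        return "serious concern"
    if pct > 10:
        return "concern"
    if pct >= 5:
        return "room for improvement"
    return "keep reducing"

# Example: 12 emergency changes out of 90 total (~13.3%)
print(emergency_change_concern(12, 90))  # -> "concern"
```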
Urgent changes, i.e. those that are simply submitted too late by a PM (poor planning!) and because of project deadlines cannot wait until the next scheduled CAB, should be dealt with in other ways. Hint: senior management.
Reviewing the emergency change with the requestor's management has proven very successful, when done with the right approach, as they often have knowledge about the change, service, etc. that we do not.
Thanks for all your comments; it's always reassuring to see that other people's views are not too far from your own.
On a slightly different point: when producing statistics, do you include standard (routine or pre-approved) changes when calculating the emergency percentage figure?
For example, if you perform 100 changes a week (30 standard, 5 emergency and 65 normal via CABs), would the emergency figure be 5% (5 out of 100) or 7.1% (5 out of 70)?
I suppose to some degree it depends what you want your figures to show.
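The two ways of counting from the example above differ only in the denominator. A minimal sketch of both calculations (the function name and signature are my own invention):

```python
def emergency_pct(emergency: int, normal: int, standard: int,
                  include_standard: bool = True) -> float:
    """Emergency changes as a percentage of either all changes
    (standard included) or only non-standard changes."""
    denominator = emergency + normal + (standard if include_standard else 0)
    return 100.0 * emergency / denominator

# The example above: 100 changes/week -> 30 standard, 5 emergency, 65 normal
print(round(emergency_pct(5, 65, 30, include_standard=True), 1))   # -> 5.0
print(round(emergency_pct(5, 65, 30, include_standard=False), 1))  # -> 7.1
```

Excluding standard changes makes the figure more sensitive: since pre-approved changes are low-risk almost by definition, leaving them out stops a growing standard-change catalogue from flattering your emergency percentage.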
Joined: Sep 16, 2006 Posts: 3348 Location: London, UK
Posted: Wed Oct 13, 2010 7:46 pm Post subject:
The default answer from the Forum is: you did answer the question yourself. It actually depends on how you want to report the data.
Personally, I would break up the Emergency Changes by areas and show them against the grand total of changes for that area
Say you are dealing with a specific application, PeopleSoft for one.
In a month, you have 200 changes: 120 are data fixes, 80 are code fixes.
Of the 120 data fixes, 10 are emergency, while of the 80 code fixes, 2 are emergency changes.
What does that tell you as a CM? Not much directly, but you can infer that the education of the data-input users is weak, or that the instructions are not clear, as there are way too many data fixes to correct data.
I can also infer that the code for this particular application is pretty mature, i.e. all of the bugs have been worked out.
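The per-area breakdown above can be reproduced with a few lines of tallying. The change records here are made up to match the PeopleSoft example (10 of 120 data fixes and 2 of 80 code fixes emergency):

```python
from collections import defaultdict

# Hypothetical change records for one application: (area, is_emergency)
changes = ([("data fix", True)] * 10 + [("data fix", False)] * 110
           + [("code fix", True)] * 2 + [("code fix", False)] * 78)

totals = defaultdict(int)
emergencies = defaultdict(int)
for area, is_emergency in changes:
    totals[area] += 1
    if is_emergency:
        emergencies[area] += 1

for area in sorted(totals):
    pct = 100.0 * emergencies[area] / totals[area]
    print(f"{area}: {emergencies[area]}/{totals[area]} emergency ({pct:.1f}%)")
# -> code fix: 2/80 emergency (2.5%)
# -> data fix: 10/120 emergency (8.3%)
```

Seeing 8.3% emergency on data fixes against 2.5% on code fixes points the investigation at data entry, exactly the kind of inference described above.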
If I had a chart for the application showing that the number of code fixes, both non-emergency and emergency, increases after a new release and then drops off slowly, I could infer that the testing of the new version is poor. _________________ John Hardesty
ITSM Manager's Certificate (Red Badge)
Change Management is POWER & CONTROL. /....evil laughter