Change KPIs

Discuss and debate ITIL Change Management issues
viv121
ITIL Expert
Posts: 117
Joined: Fri Dec 14, 2007 7:00 pm

Tue Jan 10, 2012 6:58 am

Hello to All,

I've returned to this forum after almost a year. I was quite a regular earlier, but the last year gave me no opportunity to drop in on you guys.

I'm back with a couple of questions on Change KPIs, as I am revisiting them in my organization (new year resolution; bear with me for a month).

Definition of a failed change - I currently define it as a change which does not live up to Business expectations. However, I am being asked whether a change that is implemented exactly the way it should be implemented, yet fails to meet the Business deliverable, should count as a failed change. For example: the Business wants an emergency change to enhance a service to meet an external compliance requirement; no UAT could be performed, but all the steps agreed with the Business were carried out, and the Business is not happy with the outcome. Should that be a failed change?

OK, I can't remember the second question I had!

Thanks much


regards,

Vivek
"the only statistics you can trust are those you falsified yourself"
Winston Churchill
Diarmid
ITIL Expert
Posts: 1894
Joined: Mon Mar 03, 2008 7:00 pm
Location: Helensburgh

Tue Jan 10, 2012 9:14 am

Well I've just spent twenty minutes trying to locate the thread that dealt with this. I'm useless at finding things. So I'll just make up the answer again.

Because of the semantic problems involved, it is best not to have anything called a "failed change". And it is doubly, triply and quadruply best not to include more than one concept of failure in it if you do - that just makes it totally meaningless.

It is much better to have something like:

- changes that fail to deliver their promise (not likely to be the responsibility of Change Management)
- changes that fail to implement (either Change Management or Release Management likely to be at fault)
- changes that exceed their time or cost parameters (lots of people could have a hand in that, including the customers in some cases, and certainly including an event that it was reasonable not to anticipate)
- composite changes that deliver only part of their result (also broken down between "doesn't do what it says on the tin" and "didn't implement properly")
- changes that have to be backed out
- changes that adversely (and unexpectedly) impact other aspects of the service or other services
- changes rendered irrelevant by changing circumstances
- changes that lead to incidents
- changes that force other changes

Alright, these overlap and are not all particularly sensible or useful, but the idea is that if you say "we had three failed changes last quarter" you've told no one anything useful if one of them was because of a power failure (beyond your control, say), one was because you ran out of disc space (within your control, say) and the third was because a systems analyst had made a false assumption in the design (beyond your control unless it was the sort of thing that testing could find).

Basically if you want to measure two things then use two measurements.
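
To make that concrete, here is a minimal sketch in Python of what "two measurements" (or nine) might look like; the change records, field names and category labels are invented for illustration, not taken from any particular tool:

[code]
from collections import Counter

# Each change record carries explicit outcome flags instead of one
# catch-all "failed" flag; these categories are illustrative only.
changes = [
    {"id": "CHG-1001", "backed_out": False, "caused_incident": True,
     "missed_deliverable": False, "exceeded_time_or_cost": False},
    {"id": "CHG-1002", "backed_out": True, "caused_incident": False,
     "missed_deliverable": False, "exceeded_time_or_cost": True},
    {"id": "CHG-1003", "backed_out": False, "caused_incident": False,
     "missed_deliverable": True, "exceeded_time_or_cost": False},
]

# Report each failure mode as its own measurement.
tally = Counter()
for change in changes:
    for category, flagged in change.items():
        if category != "id" and flagged:
            tally[category] += 1

for category, count in sorted(tally.items()):
    print(f"{category}: {count} change(s) last quarter")
[/code]

Reported that way, "three failed changes" becomes "one backed out, one caused an incident, one missed its deliverable", and the reader can see which of those Change Management could actually have prevented.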

PS
welcome back viv121
"Method goes far to prevent trouble in business: for it makes the task easy, hinders confusion, saves abundance of time, and instructs those that have business depending, both what to do and what to hope."
William Penn 1644-1718
viv121
ITIL Expert
Posts: 117
Joined: Fri Dec 14, 2007 7:00 pm

Wed Jan 11, 2012 1:59 am

Brilliant. Thanks, Diarmid.
regards,

Vivek
"the only statistics you can trust are those you falsified yourself"
Winston Churchill
viv121
ITIL Expert
Posts: 117
Joined: Fri Dec 14, 2007 7:00 pm

Thu Jan 12, 2012 2:39 am

Hi All,

OK, I remember the second question that I had. We have an established set of KPIs based on a certain research paper. We have set targets for emergency changes, failed changes, disruptive changes, etc. I read in a CA paper that high-performing organizations keep emergency changes at 6% and disruptive/failed changes at 3%. Not sure if many would agree.

I would like to know whether it is good practice to establish these targets across all services in an organization. Some services are critical and others are not. For example, if I say that we should not have more than 3% disruptive changes for Staff Internet, can we also say the same target applies to Customer ATMs?

I believe it depends on my organization's needs what targets we want to set for critical and non-critical services. But does anyone know what the best industry targets are?
regards,

Vivek
"the only statistics you can trust are those you falsified yourself"
Winston Churchill
Diarmid
ITIL Expert
Posts: 1894
Joined: Mon Mar 03, 2008 7:00 pm
Location: Helensburgh

Thu Jan 12, 2012 7:17 am

Vivek,

these figures (6% and 3%) are nonsense as targets. You cannot derive detail-level targets from what other organizations do unless you can prove that the other organizations are identical to your own.

So you want to make the number or percentage of emergency changes a key indicator of the performance of your service management?

Step 1: count them over the last three months (or some period that provides sufficient data to be meaningful)
Step 2: count them over the previous three months (or period)
Step 3: compare the two counts as a first step towards determining how consistent the pattern is
Step 4: analyse all the data for causes and sort them out into useful categories, like:
- lack of forward planning
- unexpected hardware or software fault
- lack of trend monitoring
- sudden change in business environment

Your performance targets can be set from this. You may be able to make improvements in each category, but certainly not the same improvements and not to the same level of effectiveness.
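
If it helps, here is a rough sketch of steps 1 to 4 in Python; the period boundaries, cause labels and the shape of the change log are all my own assumptions, so treat it as an outline rather than a recipe:

[code]
from collections import Counter
from datetime import date

# Hypothetical extract from a change log: (date closed, change type, cause).
change_log = [
    (date(2011, 8, 22), "emergency", "lack of trend monitoring"),
    (date(2011, 10, 5), "emergency", "unexpected hardware or software fault"),
    (date(2011, 11, 17), "emergency", "lack of forward planning"),
    (date(2011, 12, 2), "normal", None),
    (date(2011, 12, 20), "emergency", "sudden change in business environment"),
]

def emergencies_between(start, end):
    return [c for c in change_log
            if c[1] == "emergency" and start <= c[0] < end]

# Steps 1 and 2: count emergency changes over two consecutive quarters.
current = emergencies_between(date(2011, 10, 1), date(2012, 1, 1))
previous = emergencies_between(date(2011, 7, 1), date(2011, 10, 1))

# Step 3: compare the counts to see how consistent the pattern is.
print(f"current quarter: {len(current)}, previous quarter: {len(previous)}")

# Step 4: sort the causes into categories; improvements are then aimed per category.
print(Counter(cause for _, _, cause in current))
[/code]

The point of step 4 is that the target you set for "lack of forward planning" can be far more ambitious than the one you set for "sudden change in business environment".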

The simple fact is that if you are in a volatile situation there will be more emergency changes than if you are in a stable situation. It is the nature of some businesses to be more volatile than others. If your business requires constant cutting edge IT then your systems will be less stable. Any organization going through a major transformation will be more likely to need emergency changes than at other times.

The criticality of a service is not a good guide to its need for emergency changes, but it can be a good idea to have separate targets for different services.

On the other hand, and especially if you have a smaller scale organization, it might be best to analyse emergency changes (and disruptive changes) to identify those that need not have been so, and to aim at the improvements that will make them less frequent.

Something I hinted at earlier: the incidence of emergency and disruptive changes is largely outside the control of the change management function, and therefore it cannot be part of the KPIs for change management, but rather for service management as a whole.

Finally, it is not wholly clear to me whether it is better to use percentages or numbers. Does the incidence of emergency (and disruptive) change increase in line with the overall incidence of change? I don't know. Nor do I know if there can be a theoretical statement to that effect or if it can be different in every organization. I'm inclined to favour numbers, because, in an extreme case, what if there were only two changes in a month and one was a genuine emergency and the other was inevitably disruptive?
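
To put rough numbers on that extreme case (both months invented purely for illustration):

[code]
# Two invented months showing why a percentage alone can mislead.
months = {
    "quiet month": {"total_changes": 2, "emergency_changes": 1},
    "busy month": {"total_changes": 200, "emergency_changes": 10},
}

for name, m in months.items():
    rate = 100 * m["emergency_changes"] / m["total_changes"]
    print(f"{name}: {m['emergency_changes']} emergency change(s), "
          f"{rate:.0f}% of all changes")
[/code]

The quiet month looks ten times worse by percentage (50% against 5%) despite having a tenth of the emergency changes, which is why I lean towards the numbers, or at least reporting both figures together.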

PS (well I did say finally above)
It is also relevant how you define "emergency". For example, an organization in a permanently volatile state may have developed (invested in) a very much more rapid response to change requests than a more staid organization and thus could have fewer emergency changes.
"Method goes far to prevent trouble in business: for it makes the task easy, hinders confusion, saves abundance of time, and instructs those that have business depending, both what to do and what to hope."
William Penn 1644-1718