The ITIL Community Forum :: Change Management
Change Risk Assessment

 
mtbchangelady
Newbie

Joined: Sep 27, 2007
Posts: 5
Location: Buffalo, NY

Posted: Sat Sep 29, 2007 12:38 am    Post subject: Change Risk Assessment

We implemented our new (ITIL compliant) Change Management system earlier this year and it seems to be well received. However, we're now finding that a larger than anticipated percentage of Changes falls into the "low risk" range and consequently does not require CAB review. In light of this, we are reassessing our Change Assessment questions to help the Change user arrive at an appropriate risk rating for each Change.

We currently have 4 criteria that determine if the Change is low, medium or high risk. They are:

1. Application/Component being impacted
2. Duration of downtime
3. Answer to question: What is the planned impact of the Change?
4. Answer to question: What is the complexity of the Change?

The change user is asked to take the following questions into consideration when answering "What is the planned impact of the Change?":

What is the impact to the staff?
Will a large amount of people/customers be impacted if the change fails?
Could there be a negative financial impact if the change fails?

The change user is asked to take the following questions into consideration when answering "What is the complexity of the Change?":

How many departments are dependent on supporting this Change?
How well has this been tested?
Is this a change to support a large project?
Will it be implemented in phases?
How long will it take to verify success of the implementation?

My struggle is to come up with better questions that can be answered individually, and a better scoring system behind the scenes, to (hopefully) accurately classify a Change as High, Medium or Low risk.
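A minimal sketch of the kind of behind-the-scenes scoring described above, in Python. Every answer option, weight and threshold here is an invented placeholder for illustration, not a value from this post:

Code:
# Illustrative scoring model: each criterion is answered on its own, each
# answer carries a point value, and the weighted total maps to Low / Medium /
# High. Every number below is a placeholder assumption.

ANSWER_POINTS = {
    "component_impacted": {"non-critical": 1, "important": 3, "critical": 5},
    "downtime_duration":  {"none": 0, "under_one_hour": 2, "over_one_hour": 4},
    "planned_impact":     {"single_team": 1, "department": 3, "enterprise": 5},
    "complexity":         {"routine": 1, "multi_team": 3, "untested": 5},
}

WEIGHTS = {"component_impacted": 1.0, "downtime_duration": 1.0,
           "planned_impact": 1.5, "complexity": 1.0}   # assumed weighting

def risk_rating(answers):
    """Map individually answered questions to a High / Medium / Low rating."""
    total = sum(ANSWER_POINTS[q][a] * WEIGHTS[q] for q, a in answers.items())
    if total >= 12:        # thresholds are illustrative only
        return "High"
    if total >= 6:
        return "Medium"
    return "Low"

print(risk_rating({"component_impacted": "critical",
                   "downtime_duration": "under_one_hour",
                   "planned_impact": "department",
                   "complexity": "multi_team"}))   # -> High (14.5 points)

With a table like this, each question is answered on its own and the overall rating falls out of the weighted total rather than the requester's gut feel.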
llondra
Newbie

Joined: Jul 25, 2007
Posts: 7

Posted: Thu Oct 04, 2007 6:02 am

I'm struggling through the same thing right now - developing a change risk assessment. I used an example questionnaire from the IEC site, provided by Ameren, and worked with our IT managers to adapt it for our use.

Our questionnaire asks 7 questions, and each question has various answers with an associated risk factor. After all the questions are answered and a risk factor selected for each, the risk factors are added up. The total sum of risk factors correlates to a final "Risk Rating" (Minimal, Low, Standard, Significant, Major).

We did some testing on already-implemented changes, comparing the ol' "yeah, that feels like a xxxx risk" method against our results from the worksheet. The resulting risk rating seemed to hold up.

It's very basic, but it's supposed to be. We've never done any kind of formalized risk assessment, so I very much wanted something quick and easy. We've had it in a "pilot program" that is due to end next week - we'll see how it goes. I'm hopeful....

Here is our list of questions:

1. How many services rely on the device(s) that will be impacted by this change?
2. What is the criticality of the above service(s)?
3. How many production users could be affected?
4. How will implementing this change affect production availability?
5. How important is the impacted device to production users during the planned implementation time?
6. Is coordination or testing required among other groups (e.g. IT team, developer, end user, etc.) to implement the change?
7. What is the estimated time to back out the change?
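For illustration only, the worksheet's "sum of risk factors maps to a Risk Rating" mechanic might look something like this in Python; the factor values and rating bands are placeholders, since the post doesn't give the real numbers:

Code:
# Sketch of the worksheet: seven questions, each answer has a risk factor,
# and the sum of the factors maps to a final Risk Rating. All factor values
# and band boundaries below are invented placeholders.

QUESTION_FACTORS = {
    "services_relying_on_device": {"one": 1, "several": 2, "many": 3},
    "criticality_of_services":    {"low": 1, "medium": 2, "high": 3},
    "production_users_affected":  {"few": 1, "some": 2, "most": 3},
    "effect_on_availability":     {"none": 0, "brief_outage": 2, "extended_outage": 3},
    "importance_during_window":   {"low": 1, "medium": 2, "high": 3},
    "cross_group_coordination":   {"no": 0, "yes": 2},
    "backout_time":               {"minutes": 1, "hours": 2, "days": 3},
}

RATING_BANDS = [(4, "Minimal"), (8, "Low"), (12, "Standard"),
                (16, "Significant"), (float("inf"), "Major")]

def worksheet_rating(answers):
    """Sum the selected risk factors and return the matching Risk Rating."""
    total = sum(QUESTION_FACTORS[q][a] for q, a in answers.items())
    for upper_bound, rating in RATING_BANDS:
        if total <= upper_bound:
            return rating

print(worksheet_rating({
    "services_relying_on_device": "several",
    "criticality_of_services": "high",
    "production_users_affected": "some",
    "effect_on_availability": "brief_outage",
    "importance_during_window": "medium",
    "cross_group_coordination": "yes",
    "backout_time": "hours",
}))   # -> Significant (15 points with these placeholder factors)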
mtbchangelady
Newbie

Joined: Sep 27, 2007
Posts: 5
Location: Buffalo, NY

Posted: Thu Oct 04, 2007 6:24 am

Thanks llondra,

The questions you've listed are pretty much the same ones we're going with, but we've broken them into categories:

Planned Impact of Change
Potential Impact of Change
Complexity of Change

Thanks for your feedback, I'll let you know how it goes!
JoePearson
Senior Itiler


Joined: Oct 13, 2006
Posts: 116
Location: South Africa

Posted: Thu Oct 04, 2007 9:51 pm

It sounds like you are concerned that these low risk changes (the larger than anticipated percentage) are not all really low risk - that the change users are being too optimistic. Is that right?

Developing more levels of questions around risk and impact can help honest and alert users arrive at more realistic assessments. But you may (I won't say will!) have some who want to hide the risk, or who simply aren't aware enough to assess it properly.

One way around this is to certify or accredit the change users - someone with a good track record would be entitled to put through low-risk changes, while others would have extra checks (possibly even a required CAB review) imposed until they proved reliable. Attendance at an internal training session could be one prerequisite (and is less sensitive than saying "we don't trust you yet").

I don't think I've seen an organisation apply this across all users, but I do know some that flag some change requesters/owners as being unreliable.
mtbchangelady
Newbie

Joined: Sep 27, 2007
Posts: 5
Location: Buffalo, NY

Posted: Mon Oct 15, 2007 11:17 pm

Joe,

You're right, I am concerned about the number of changes that are rated low risk in the eyes of the Owner when the CAB and I think otherwise.

"Flagging" the repeat offenders (or maybe "flogging" would be more appropriate!) and rewarding or "fast-tracking" the more conscientious Change Owners is a really interesting concept; one I probably already do subconsciously.

Thanks for the tip.
Skinnera
Senior Itiler


Joined: May 07, 2005
Posts: 121
Location: UK

Posted: Fri Jan 04, 2008 7:20 pm

For my organisation we have defined the following:

Risk: the likelihood of a Change being unsuccessful and causing a service outage.
A number of different factors will need to be considered when assessing the RISK of a Change to determine its category. These include the complexity, scope, testing, recovery and timing associated with the Change, but the general guidelines are as follows:

Low
    Recovery / Back Out plan known and tested
    Non-complex Change (e.g. server reboot, data purge)
    Change that replicates normal user activity
    Change tested successfully in an environment that fully replicates the live environment
    Significant history of successful implementation
    Successful implementation of Pilot & subsequent Phase 2 rollout (e.g. 10+ nodes)

Medium
    Recovery / Back Out plan untested
    Change to a component that has a history of risk
    Change to a non-critical component
    Change tested successfully, but where the environment does not fully replicate live (e.g. reduced size, etc.)
    Successful implementation of Pilot & subsequent Phase 1 rollout (e.g. 0-10 nodes)

High
    Recovery / Back Out plan not available
    Change validation dependent on client usage (i.e. cannot be checked until user load is applied)
    Change across multiple sites, platforms, services or networks
    A single Change delivering multiple fixes/builds/updates
    Change involving input from multiple support areas
    Change to a critical component (i.e. a component that the service will not work without)
    Change not tested (at all, or not to the maximum level of testing available)
    Pilot or first-time application in the live network
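One possible way to encode these guidelines as a quick checklist is sketched below in Python. The precedence rule (any High indicator wins, otherwise any Medium indicator, otherwise Low) is an assumption; the post lists the indicators but not how to combine them:

Code:
# Checklist encoding of the Low / Medium / High guidelines above. The
# indicator names and the "any indicator wins" precedence are assumptions.

HIGH_INDICATORS = {
    "no_backout_plan", "validation_needs_live_user_load", "multi_site_or_platform",
    "bundles_multiple_fixes", "multiple_support_areas", "touches_critical_component",
    "not_fully_tested", "pilot_or_first_live_use",
}
MEDIUM_INDICATORS = {
    "backout_plan_untested", "component_has_history_of_risk",
    "test_environment_not_fully_live_like", "small_phase1_rollout",
}

def guideline_risk(indicators):
    """Return High if any High indicator applies, else Medium, else Low."""
    if indicators & HIGH_INDICATORS:
        return "High"
    if indicators & MEDIUM_INDICATORS:
        return "Medium"
    return "Low"

print(guideline_risk({"backout_plan_untested", "multi_site_or_platform"}))  # -> High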
Skinnera
Senior Itiler


Joined: May 07, 2005
Posts: 121
Location: UK

Posted: Fri Jan 04, 2008 7:26 pm

mtbchangelady wrote:
"Flagging" the repeat offenders (or maybe "flogging" would be more appropriate!) and rewarding or "fast-tracking" the more conscientious Change Owners is a really interesting concept; one I probably already do subconsciously.
We are developing a 'credit based' system which says that if you meet or better our Change KPIs as an individual Change Owner, you get a 'trusted status' and can bypass CAB (though not my team's vetting of your Change), but if you don't meet them you come to CAB for every Change you raise.

The 'table' of who is good/bad is updated monthly, so you can go in and out of the trust list.

The idea has huge support across our organisation - now we just need to make the darn thing work!
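A hypothetical sketch of the monthly refresh behind such a 'credit based' system, in Python. The KPI used here (implementation success rate with a 95% target) is a placeholder; the post doesn't say which Change KPIs are actually measured:

Code:
# Monthly rebuild of the 'trusted' table: owners who meet or beat the KPI may
# bypass CAB (their Changes are still vetted); everyone else attends CAB for
# every Change. The KPI and target are illustrative placeholders.

from dataclasses import dataclass

@dataclass
class OwnerRecord:
    owner: str
    changes_raised: int
    changes_successful: int

KPI_SUCCESS_RATE = 0.95   # assumed target, not the poster's real KPI

def refresh_trust_list(records):
    """Return the set of Change Owners who currently meet or beat the KPI."""
    trusted = set()
    for r in records:
        rate = r.changes_successful / r.changes_raised if r.changes_raised else 0.0
        if rate >= KPI_SUCCESS_RATE:
            trusted.add(r.owner)
    return trusted

def needs_cab(owner, trusted):
    """Untrusted owners come to CAB for every Change they raise."""
    return owner not in trusted

trusted = refresh_trust_list([OwnerRecord("alice", 20, 20),
                              OwnerRecord("bob", 10, 8)])
print(needs_cab("alice", trusted), needs_cab("bob", trusted))  # False True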