Joined: Mar 12, 2005 Posts: 255 Location: Melbourne, Australia
Posted: Wed Jul 13, 2005 8:52 pm Post subject:
A little reflection and common sense can go some way to clarifying the appropriate controls placed on changes.
You mentioned classification - of course one option is to reflect the change control scope in the classification system you use for your RFCs.
So how do you get to where you can classify a 'change' appropriately?
First consider that every change is effectively 'tested' - it's a question of whether you are going to do the testing yourself in an environment set up for that purpose, or effectively get your users and customers to do the testing for you by making the change in the live environment. If that test fails, you will know about it. So the risk to the business is a factor in establishing the scope of your formal testing. Obviously risk can be assessed a number of ways - but always in terms of potential service disruption or degradation.
So, for example, if you are swapping a component for which there is redundancy, if it fails the alternative component will do the job, and there will be no impact. Such a change should be controlled, but you could decide to install and see, rather than test first.
In other cases you may decide the risk is negligible if the change procedures exist and are followed.
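The reasoning above could be sketched as a simple classification helper. This is purely illustrative - the categories, names, and rules are my own assumptions, not part of ITIL or any toolset:

```python
# Illustrative risk-based change classification (hypothetical helper).
# A change to a redundant component, or one with a proven procedure,
# may skip formal pre-testing; anything else that could disrupt a
# service gets a test plan first.

def classify_change(has_redundancy: bool, procedure_exists: bool,
                    service_impacting: bool) -> str:
    """Return a suggested control level for a proposed change."""
    if service_impacting and not has_redundancy:
        return "test-first"         # formal test plan before the change
    if has_redundancy or procedure_exists:
        return "install-and-check"  # post-change health check suffices
    return "test-first"

# Swapping a component that has a redundant alternative:
print(classify_change(has_redundancy=True, procedure_exists=True,
                      service_impacting=False))  # install-and-check
```

The point is not the specific rules but that the decision can be made consistently once the risk factors are recorded.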
In addition, there will not be scope in your test environment to support a sweeping policy covering all changes - do you have two identical networks, for example, one for testing? Basic infrastructure changes will not always be 'tested' before making the change. No technician or engineer worth their pay would install something and walk away without diagnostics - so 'testing' is simply a post-change health check.
Another case is swapping like for like in a simple break-fix situation: Replacing a monitor, or a blown switch, with an identical model. Does this require testing?
So I would say that not 'all' changes require testing. The cost in additional time and administrative overhead would be prohibitive.
IMHO the question that follows from this - what changes should be tested? - can only be answered with sufficient configuration management. You need to be able to bring the CIs that shouldn't be changed without testing under change control, and you can only do that effectively through status accounting, which is a key process in configuration management. Equally important is the identification and recording of the dependencies and relationships that collect CIs into systems (e.g., email) or technology domains (e.g., the LAN), and then into service inputs.
When a CI is brought 'in scope' of the CMDB, one of the attributes of its entry record would indicate whether changes must be formally tested before implementation, and what constitutes a controlled change - i.e. that a break-fix swap is allowed, but an upgrade, patch, or replacement must go through the CAB with a test plan.
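A minimal sketch of what such a CI record might look like - all field names and values here are invented for illustration, not drawn from any particular CMDB product:

```python
# Hypothetical CI record: the testing policy lives as attributes on the
# CMDB entry, so the required level of control can be looked up rather
# than argued case by case.
from dataclasses import dataclass, field

@dataclass
class ConfigurationItem:
    ci_id: str
    name: str
    test_before_change: bool                  # must changes be formally tested?
    allowed_without_cab: set = field(default_factory=set)

core_switch = ConfigurationItem(
    ci_id="CI-0042",
    name="Core LAN switch",
    test_before_change=True,
    allowed_without_cab={"break-fix swap"},   # like-for-like replacement only
)

def needs_cab(ci: ConfigurationItem, change_type: str) -> bool:
    """A break-fix swap may be pre-approved; an upgrade or patch goes to the CAB."""
    return change_type not in ci.allowed_without_cab

print(needs_cab(core_switch, "patch"))           # True
print(needs_cab(core_switch, "break-fix swap"))  # False
```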
Once you have a reasonable level of configuration management and status accounting in place you can apply another 'test' to deciding the appropriate level of change control (including testing). Ultimately it is CIs that are 'changed'. But a change to a specific CI may (or may not) propagate to effectively be a change to a system or service of which it is a component. Whether or not this 'propagation' occurs is another indicator of the appropriate scope of your change control.
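Given recorded CI-to-system relationships, the 'propagation' check amounts to walking the dependency graph upward from the changed CI. The relationship data below is invented for illustration:

```python
# Sketch of change propagation: find which systems and services a change
# to one CI could reach, using the relationships recorded in the CMDB.
from collections import deque

# changed CI -> the systems/services that depend on it (illustrative data)
depends_on_me = {
    "mail-server-01": ["Email"],
    "lan-switch-03": ["LAN"],
    "LAN": ["Email", "File sharing"],
}

def affected_services(ci: str) -> set:
    """Breadth-first walk up the relationship graph from a changed CI."""
    seen, queue = set(), deque([ci])
    while queue:
        node = queue.popleft()
        for parent in depends_on_me.get(node, []):
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return seen

print(sorted(affected_services("lan-switch-03")))  # ['Email', 'File sharing', 'LAN']
```

If the walk reaches a business service, the change warrants the higher level of control; if it reaches nothing, a lighter control may be appropriate.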
This is very useful, because in the end it is about services to the business. One of the primary reasons for change control is to protect the business. If you can assess the scope of changes in terms of the risk to, and impact on the business, you will be in a position to set the boundaries that determine the level of control you bring those changes under.