We have a homegrown CMDB whose data will eventually be ported to the ever-being-pushed-out "real" Service Management tool.
In any case, we have a CI category of virtual server (VS), be it LPAR, Linux Guest, MS Virtual Machine, or whatever. Currently we have Linux Guests residing within LPARs. The VS CI has a physical relationship to the physical machine or LPAR within which it resides.
Our VSs don't move around too often (yet), but you might consider creating a CI category comprising the physical servers between which a VS might move (similar to a Cluster CI category) and associating the VS with that CI.
Posted: Mon Nov 26, 2007 11:24 pm Post subject: Virtual machines and cmdb
When my most recent client embarked upon a server virtualisation project, we decided to keep it simple with respect to how these CIs were tagged.
From a CI perspective both the physical server and virtual servers were represented by CIs within the CMDB. A relationship was then built between the physical box CI and the virtual server CI.
Clearly, you cannot attach a physical tag to a virtual entity, so we used a modified version of the physical asset tag to tag the virtual box.
We did this because the most pragmatic way for us to identify a virtual box was by its hostname. However, we had a business rule in place stating that every CI must have a tag number, and another stating that every CI name in the CMDB should be unique, so a modified version of the physical CI tag seemed the way to go.
Here is an example of what we did:
Physical box - tag number 123456
Virtual server #1 - tag number 123456A
Virtual server #2 - tag number 123456B
Virtual server #3 - tag number 123456C
and so on...
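The suffix scheme above can be sketched in a few lines. This is just an illustration of the lettering convention described in the post; the function name and the 26-VM limit are my assumptions, not part of the original scheme.

```python
# Hypothetical sketch of the tag scheme above: each virtual server
# inherits its physical box's tag number plus a letter suffix (A, B, C...).
import string

def virtual_tag(physical_tag: str, index: int) -> str:
    """Return the tag for the index-th (0-based) VM on a physical box."""
    if not 0 <= index < len(string.ascii_uppercase):
        # Assumption: the scheme runs out after 26 VMs per physical box.
        raise ValueError("suffix scheme only covers 26 VMs per box")
    return physical_tag + string.ascii_uppercase[index]

tags = [virtual_tag("123456", i) for i in range(3)]
# tags == ["123456A", "123456B", "123456C"]
```

This keeps every CI name unique while making the host relationship visible in the tag itself, which was the stated goal.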
I'm sure that this approach won't suit everyone, but it worked for us.
The workaround you have provided might work when there is no linkage between discovery and the CMDB. Once you get to auditing CIs in the CMDB through a discovery process, you will have problems, since these tags will not be used by discovery. We have found through our extensive testing that each virtual instance has a serial number associated with it, just like a physical server. Use the serial number as your primary key in the CMDB so that once the discovery process recognises that virtual machine, it can do the auditing automatically based on the serial number. Just my 2 cents.
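The serial-number reconciliation suggested above can be sketched roughly as follows. The record shapes, field names, and serial values are invented for illustration; a real discovery tool would supply its own.

```python
# Hypothetical sketch: auditing the CMDB against discovery output,
# keyed on serial number rather than on hand-assigned tags.
# CMDB records, keyed by serial number (the primary key).
cmdb = {
    "VMware-42 1a": {"hostname": "app01", "tag": "123456A"},
    "VMware-42 1b": {"hostname": "app02", "tag": "123456B"},
}

# What the discovery process reported this run.
discovered = [
    {"serial": "VMware-42 1a", "hostname": "app01"},
    {"serial": "VMware-42 1c", "hostname": "app03"},  # not yet a CI
]

def audit(cmdb, discovered):
    """Split the discovery feed into new CIs and CIs no longer seen."""
    found = {d["serial"] for d in discovered}
    new = [d for d in discovered if d["serial"] not in cmdb]
    missing = [serial for serial in cmdb if serial not in found]
    return new, missing

new, missing = audit(cmdb, discovered)
# new     -> the VM with serial "VMware-42 1c" (create a CI for it)
# missing -> "VMware-42 1b" (flag the CI for investigation)
```

Because the key comes from the platform rather than from a naming convention, the audit keeps working even when VMs are renamed or moved.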
Just my 2 cents. We are using VMware for our virtualisation of servers, so we decided on the following:
The physical servers are each their own CI
The VMware clusters are each their own CI
The virtual servers are each their own CI
We created a relationship between the physical servers in a particular VMware cluster and that cluster.
We created a relationship between the virtual servers and the VMware clusters they run on.
With this solution, if a virtual server moved from one physical server to another within the same cluster, we still had a correct relationship.
If a virtual server moved from one cluster to another, we would change the relationship for that virtual server from the original VMware cluster to its new VMware cluster.
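The cluster-level relationship described above can be sketched as a small data model. All class, attribute, and CI names here are assumptions for illustration; the point is that a VM relates to the cluster, not to an individual host, so only a cross-cluster move touches the CMDB.

```python
# Minimal sketch of the cluster CI model described in the post.
class Cluster:
    """A VMware cluster CI with its two relationship lists."""
    def __init__(self, name: str):
        self.name = name
        self.physical_servers = []  # "member of cluster" relationships
        self.virtual_servers = []   # "runs in cluster" relationships

def move_vm(vm: str, src: Cluster, dst: Cluster) -> None:
    """Cross-cluster move: re-point the VM's cluster relationship."""
    src.virtual_servers.remove(vm)
    dst.virtual_servers.append(vm)

prod = Cluster("prod-cluster")
prod.physical_servers += ["esx01", "esx02"]
prod.virtual_servers.append("app01")

# An intra-cluster vMotion between esx01 and esx02 needs no CMDB change.
# A cross-cluster move is a single relationship update:
dr = Cluster("dr-cluster")
move_vm("app01", prod, dr)
```

The design choice is the one argued in the post: by relating VMs to the cluster, routine load balancing inside the cluster never invalidates the CMDB.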
This is the way we are working as well. It is efficient and it allows VMware to manage partitions dynamically. I am currently planning a report that identifies which VM runs on which host, and will either file that report with the cluster on a daily basis, or use it as a feed to automate the management of "runs on" supplemental relationships between a VM and its host. We sometimes experience issues due to conflicting demand levels between VMs, and keeping close track of what runs where is helpful. But the solution above, for me, is a must.
BR,
Deleting any CI can (and will) turn loads of related records into "orphans" and will destroy parts of your incident/service request/change history - which you need for trend analysis, amongst other things. Bad move, IMHO.
As to the original question, we're still fumbling in the dark there, too. Those VMs can be a headache in more than one way.
My original idea was like this: Make anything that can break or fail a CI first. Important details go into the CI as attributes.
Don't go over-granular. Make it easy to get a fast and correct helicopter view from different places/ITIL roles. So - use the KISS concept:
The services (from the catalogue) that are running on the virtual boxes are a CI. This helps the SD to tie incidents to services, not machines or infrastructure.
The virtual boxes are a CI, with attributes (service (from catalogue) running, OS, patch level, IP etc). They can move around and they can contain more than one service, hence each is a CI.
The physical box is a CI with attributes (hardware, current virtual boxes onboard, storage area/SAN location/config, etc)
The storage area is a CI (with attributes - connected servers, firmware level etc)
Tie all these together with two-way relations in a sensible way to get the connections - physical and logical, and service-wise. Think "visual network diagram".
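The "visual network diagram" of two-way relations above can be sketched as a simple graph. The CI names and the string-prefix convention here are invented examples; the traversal is what answers "what goes down when X breaks".

```python
# Sketch of two-way CI relations as an undirected graph.
from collections import defaultdict

relations = defaultdict(set)

def relate(a: str, b: str) -> None:
    """Record a two-way relation between two CIs."""
    relations[a].add(b)
    relations[b].add(a)

# Hypothetical CIs following the service -> VM -> box -> storage layers:
relate("service:webshop", "vm:app01")
relate("vm:app01", "physical:esx01")
relate("physical:esx01", "storage:san01")

def impacted(ci: str) -> set:
    """Everything reachable from a failing CI via its relations."""
    seen, stack = {ci}, [ci]
    while stack:
        for other in relations[stack.pop()]:
            if other not in seen:
                seen.add(other)
                stack.append(other)
    return seen - {ci}

# A SAN failure surfaces the box, the VM and, crucially for the Service
# Desk, the catalogue service running on it.
hit = impacted("storage:san01")
```

At the right granularity this is exactly the helicopter view argued for above: one traversal from the broken CI gives the impact list, service included.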
Doing this at the right granularity level will give you a clear view of what goes on (or down) when things break or changes are applied.
Still not done with the thinking, though - this may or may not be our way to do it. And at the end of the day: can the toolset (to be used eventually) handle this way of thinking?
Cheers /Richard