Posted: Tue Apr 03, 2007 9:54 am Post subject: Re: Great discussion but did the original question get answered
What I am finding out is that the solutions available today all require a huge amount of customization and the hiring of a support team to perform the customization and ongoing maintenance. Is this really the case?
Well, most of the "big" tools on the market are as you describe; however, there are a few solutions that are lighter in terms of cost, complexity and staffing needs.
Based on recommendations from "colleagues" I trust, I am currently considering the Infra tool as a solution for a customer of mine. I have not investigated deeply enough to make any recommendation, but it seems interesting in several respects.
You can find information here: http://www.infra.co.uk/
This is for information only and should not be considered advertising or a recommendation for the product. _________________ JP Gilles
Didn't see this posting of yours. A couple of things struck me that may be worthwhile considering, given that your existing solution covers infrastructure, networks and so on, rather than a traditional service desk.
a. Have you thought of linking netViz (from CA) to your existing and new databases? It's a technique my company uses to automate the drawing of diagrams such as WAN, LAN, SAN, rack, service and path views, among other visualisation techniques. It's how we currently draw service maps from Peregrine.
b. If you're after something more hardware-centric, a standard service desk or ITIL CMDB is of limited use. We designed a physical-level "CMDB" to do things of more interest to infrastructure people: space, cabling, power, networks, spare ports, workflow, storage, VLANs, IP addresses etc. We link it to netViz for automated rack layouts, network diagrams and so on (there is a rough schema sketch at the end of this post).
c. If you just want logical mapping, a standard ITIL CMDB can help, but such tools are not well suited to networks and other systems where there is complexity and parallel paths. Hence we have software that analyses PSC and others to produce visualisations from the CMDB data. (We normally use Access as an interface layer to PSC, as PSC reporting is limited.)
Trying to find an alternative to your custom database will be difficult, as it covers a number of technologies. I would be interested in feedback on what we have developed. Typically we work with organisations of 1,000+ servers.
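Purely to illustrate what I mean by a physical-level "CMDB" (the table and column names below are made up for the post, not our actual design), a minimal sketch in Python/SQLite might look like this:

Code:
# Rough sketch of a physical-level "CMDB" schema, for illustration only.
# Table and column names are invented examples, not a production design.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE rack (
    rack_id    INTEGER PRIMARY KEY,
    room       TEXT,
    total_u    INTEGER,          -- rack height in U
    power_feed TEXT              -- e.g. 'A+B'
);
CREATE TABLE device (
    device_id  INTEGER PRIMARY KEY,
    rack_id    INTEGER REFERENCES rack(rack_id),
    name       TEXT,
    start_u    INTEGER,          -- lowest U position occupied
    height_u   INTEGER
);
CREATE TABLE port (
    port_id    INTEGER PRIMARY KEY,
    device_id  INTEGER REFERENCES device(device_id),
    label      TEXT,
    vlan       INTEGER,
    ip_address TEXT,
    cabled_to  INTEGER REFERENCES port(port_id)   -- NULL = spare port
);
""")

# Example query: spare ports per device -- the sort of question
# infrastructure people ask that a purely logical ITIL CMDB can't answer.
spares = con.execute("""
    SELECT d.name, COUNT(*) AS spare_ports
    FROM port p JOIN device d ON d.device_id = p.device_id
    WHERE p.cabled_to IS NULL
    GROUP BY d.name
""").fetchall()
print(spares)

It is exactly this kind of rack/port/cable data that we feed into netViz to generate the rack layouts and network diagrams automatically.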
Posted: Fri Apr 06, 2007 2:19 am Post subject: Service Center 6 & CMDB module
Thank you everyone for your replies. I am trying to gather data from users who have experience implementing the CMDB module from Service Center 6. My company purchased the module a while back and is now looking at moving our current CMDB, which is a SQL database with an MS Access front-end, into the Service Center CMDB. We are trying to get an idea of what the pain will be and whether the out-of-the-box CMDB module is usable.
Posted: Mon Apr 30, 2007 6:08 pm Post subject:
Dear all, here is my 5p's worth on the CMDB debate.
But first a disclaimer: I've never implemented a CMDB according to ITIL (although there is so much confusion over the CMDB that I suspect no-one has ;-) ), so these are simply my observations based on 15 years' worth of ICT projects. BTW most of the projects I've done are for large organisations, i.e. 150,000+ seats in 40+ countries.
1) Monolithic dBases rarely work.
Every part of every organisation maintains data in its own systems, fine-tuned for running its own processes. If you try to create a central dB it will simply end up copying most of that data, and then the data will be maintained in two or more places at once. Once that happens the data will diverge, errors will multiply, and chaos will ensue.
Moreover it will be impossible to force everyone to use your monolithic dBase in all but the smallest organisations. Real-world corporate politics, interdepartmental fighting, and ongoing religious wars will thwart its implementation. HR and Finance are particularly impossible.
2) All databases have errors
Much of the data in dBases is simply incorrect. Sometimes this is due to administrative lag (i.e. a change happens and the dBase is updated later), but mostly it is just wrong. When you have the same data in multiple places you start getting different errors in each copy, and the overall error rate is even higher.
3) Most data is incorrectly maintained
Once data gets into a database it is hardly ever thrown away, rarely has an identified owner, and never has a defined lifetime. So it hangs around forever, and will eventually start to smell bad.
4) Dealing with missing data
Whilst checking the data that is in the dBase is possible (albeit usually poorly done), checking what is not there is much harder. There are discovery systems that can find items, but in my experience they are not very good (utter crap). They put far too much spurious data into the system, and you end up spending as much time culling data as you would have spent entering it.
5) Most data models are too complex
Almost every data model I've seen has two functions: one to demonstrate how clever the designers are, and the other to capture every possible last bit of data that someone might conceivably need, ever.
The result is that time to market is too slow, and most of the information is never entered, let alone used. Relationships are impossible to work out, and you become very tense and irritable.
I reviewed one database used for incident management and noticed that at least ten special fields had been created to deal with unusual occurrences. These fields were filled in in only 30 cases out of over 3 million records (roughly one record in every hundred thousand)!
They could have used a 'notes' field for these cases, and handled the unusual incidents by procedure instead.
6) Cross-checking of data rarely happens
I have become convinced that the only way to ensure an acceptable error rate in your dBase is for every piece of data to have some independent cross-check defined that allows you to verify not only what is there, but also what is not.
Think hard about what checks you can do. For example, if you have an outsourced MPLS network, cross-check your router database against the invoices from the supplier. In my experience suppliers can cock up all sorts of stuff, but their invoices are generally uncannily accurate.
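For that router-versus-invoice example, the cross-check can be as crude as comparing two lists of serial numbers. A trivial sketch (the file names and CSV columns here are invented purely for illustration):

Code:
# Minimal sketch of cross-checking a router database against supplier
# invoices. File names and CSV column names are invented for illustration.
import csv

def serials(path, column):
    """Collect the non-empty serial numbers from one CSV column."""
    with open(path, newline="") as f:
        return {row[column].strip()
                for row in csv.DictReader(f)
                if row[column].strip()}

cmdb     = serials("router_db_export.csv", "serial_no")
invoiced = serials("supplier_invoice_lines.csv", "serial_no")

print("In the dBase but not being billed (decommissioned, or just wrong):")
print(sorted(cmdb - invoiced))

print("Being billed but missing from the dBase (the data you never knew was missing):")
print(sorted(invoiced - cmdb))

The second list is the important one: it catches the items that are not in your dBase at all, which no amount of checking the dBase itself will ever find.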
So in summary
I have always thought the CMDB was meant to be a meta-database, i.e. it tells you where the data you want is held. So identify the minimum dataset you can get away with, find out where it lives, and who looks after it.
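In practice that meta-database can start out as something very modest: a register of datasets, each recording its authoritative source and its owner, rather than a copy of the data itself. A toy sketch (the entries are made up):

Code:
# Toy sketch of a CMDB as a meta-database: it records where each dataset
# is authoritatively held and who owns it, instead of copying the data.
# Entries are made up for illustration.
DATA_REGISTER = {
    "routers":      {"source": "network team's router database", "owner": "Network Ops"},
    "servers":      {"source": "asset register (SQL)",           "owner": "Data Centre Ops"},
    "applications": {"source": "application portfolio tool",     "owner": "App Support"},
}

def where_is(dataset):
    entry = DATA_REGISTER[dataset]
    return f"{dataset}: held in {entry['source']}, maintained by {entry['owner']}"

print(where_is("routers"))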
That said, much of the data you'll need is held in "dBases" that are not remotely accessible, e.g. spreadsheets (don't get me started on ***king spreadsheets), so you may need to import it - BUT DO NOT BE TEMPTED to start maintaining it in your imported database. EVER!
Use the DNS model, i.e. there is only ever one authoritative source, which can be cached in many places, but is only ever changed in one place.
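Illustratively, the cache-but-never-edit rule might look like this (the file, table and column names are invented, and the point is simply that the local copy is only ever rebuilt wholesale, never patched row by row):

Code:
# Sketch of the DNS-style rule: the local copy of someone else's data is
# only ever refreshed wholesale from the authoritative source, never
# edited here. File and column names are invented for illustration.
import csv, sqlite3

def refresh_people_cache(con, csv_path):
    """Drop and rebuild the cached copy; individual records are only ever
    changed by the owning team, in their own system."""
    con.execute("DROP TABLE IF EXISTS cached_people")
    con.execute("CREATE TABLE cached_people (emp_id TEXT PRIMARY KEY, name TEXT, dept TEXT)")
    with open(csv_path, newline="") as f:
        rows = [(r["emp_id"], r["name"], r["dept"]) for r in csv.DictReader(f)]
    con.executemany("INSERT INTO cached_people VALUES (?, ?, ?)", rows)
    con.commit()

# Usage: refresh_people_cache(sqlite3.connect("cmdb_cache.db"), "hr_extract.csv")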
And finally: be kind to those poor souls who use spreadsheets to maintain data. Explain in a friendly way how spreadsheets are great for playing with data, reporting, graphing and all that good stuff, but point out that they are not databases. Anything, even Access, is better.
Note to above: if you find anyone who is building pseudo-relational databases by linking multiple spreadsheets call the men in white coats immediately. They are dangerous and should not be approached.