Auto Discovery
ITIL Forum Index -> Configuration Management
mnp123
Itiler


Joined: Dec 26, 2007
Posts: 24

Posted: Thu Jan 17, 2008 2:19 am    Post subject: Auto Discovery

Hello everyone. Has anyone implemented an auto discovery tool in their environment? I am trying to find out how often an auto discovery tool should scan CIs (daily/weekly/monthly, etc.). We are having performance issues with our auto discovery tool when it scans the environment on a daily basis. I understand the performance issue could be because the hardware we gave it is too small or because we are scanning too many CIs, but is there any industry guideline w.r.t. auto discovery tools?
Thanks
mnp123 Laughing
Timo
Senior Itiler

Joined: Oct 26, 2007
Posts: 295
Location: Calgary, Canada

Posted: Thu Jan 17, 2008 2:32 am

Hi there,

When I had just started working for my last company I was put on a project concerned with implementing config management. At the time it was decided that we were going to run auto discovery to find everything in our environment and then populate the CMDB with that data. Would you like to take a wild guess how that went? Smile Yeah, lots and lots of absolutely unusable data and an estimated 3 months to sift through all of it.

To me, you need to start from the top, i.e. your config mgmt process, in order to determine the requirements for auto discovery. Does your CMDB have provision for reconciliation, validation and synchronization of data? How do you ensure you don't end up with duplicates? Are these processes automated? For example, if you run your auto discovery daily, you will need to make sure that any discrepancies between what's in the CMDB and what's out there in the real world can be addressed immediately. That means you will need the ability to identify gaps, properly escalate them, oversee the corrective action and validate the corrections.
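
To make that reconciliation point concrete, here is a minimal sketch (in Python, with illustrative field names rather than any particular CMDB product's schema) of the kind of comparison a regular discovery run needs to feed: which discovered devices are not yet registered, which CMDB records were never seen, and where duplicates crept in.

Code:

# Minimal reconciliation sketch. The record layout (a "serial" key) is an
# assumption for illustration, not any vendor's API.

def normalise(serial: str) -> str:
    """Normalise a serial number so the same device matches across sources."""
    return serial.strip().upper()

def reconcile(discovered: list[dict], cmdb: list[dict]) -> dict:
    seen = {}
    duplicates = []
    for ci in discovered:
        key = normalise(ci["serial"])
        if key in seen:
            duplicates.append(ci)          # same device reported twice
        else:
            seen[key] = ci
    cmdb_keys = {normalise(ci["serial"]) for ci in cmdb}
    return {
        "unregistered": [ci for key, ci in seen.items() if key not in cmdb_keys],
        "missing": [ci for ci in cmdb if normalise(ci["serial"]) not in seen],
        "duplicates": duplicates,
    }

if __name__ == "__main__":
    found = [{"serial": "abc-1", "name": "srv01"}, {"serial": "ABC-1", "name": "srv01"}]
    recorded = [{"serial": "ABC-1", "name": "srv01"}, {"serial": "XYZ-9", "name": "srv02"}]
    gaps = reconcile(found, recorded)
    print(len(gaps["unregistered"]), len(gaps["missing"]), len(gaps["duplicates"]))  # 0 1 1

Each of the three buckets then needs the escalation and correction steps described above; the comparison itself is the easy part.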

Point is, start with your process and see what your requirements are w.r.t. CIs. The tool is irrelevant. There are plenty of tools that can do data discovery. It's what you do with that information and how you use it that will determine how successful your auto discovery process is.

Thanks,

Michael
UKVIKING
Senior Itiler

Joined: Sep 16, 2006
Posts: 3318
Location: London, UK

Posted: Thu Jan 17, 2008 6:21 am

In my not so humble opinion, auto discovery tools are a waste of time, money and effort.

1 - You should have an NMS tool already - a network monitoring tool that monitors the devices that you have told it to monitor

2 - If you have IT equipment connected together - someone must manage, install, rack, de-rack, update, and power the equipment on and off

Use those people to do the physical inventory

Compare that to the NMS inventory

The other thing about auto discovery tools: if you have systems and network devices, you SHOULD know where they are (physically) and how to get to them ... so why have an auto discovery tool?

Also - if you set the AD scope wrong you could end up discovering the internet
_________________
John Hardesty
ITSM Manager's Certificate (Red Badge)

Change Management is POWER & CONTROL. /....evil laughter
dboylan
Senior Itiler

Joined: Jan 03, 2007
Posts: 189
Location: Redmond, WA

Posted: Thu Jan 17, 2008 12:24 pm

An auto discovery tool can be used to initially populate a CMDB (good luck), but it shouldn't then be used to push further updates. Once you have established your CMDB, your environment should be in a Controlled State. All changes to attributes of CIs should be driven from approved Change Requests. Any change that occurs outside of a Change Request is something that needs to be investigated as a potential bypass of procedure.

You might want to use the auto discovery tool as an audit tool to verify that the Change Management process is being complied with, but that is all.
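
As a rough illustration of that audit-only use (a sketch with assumed field names, not any specific CMDB or change-management product), discovered attribute changes can be cross-checked against approved change requests and anything unmatched flagged for investigation:

Code:

from datetime import datetime, timedelta

def unauthorised_changes(changes: list[dict], approved_rfcs: list[dict],
                         window: timedelta = timedelta(days=2)) -> list[dict]:
    """Return discovered changes with no approved RFC covering the same CI
    within the allowed time window around the change."""
    suspicious = []
    for change in changes:
        covered = any(
            rfc["ci_id"] == change["ci_id"]
            and abs(rfc["approved_for"] - change["detected_at"]) <= window
            for rfc in approved_rfcs
        )
        if not covered:
            suspicious.append(change)   # potential bypass of procedure
    return suspicious

if __name__ == "__main__":
    now = datetime(2008, 1, 17, 9, 0)
    changes = [{"ci_id": "srv01", "attribute": "memory_gb", "detected_at": now}]
    rfcs = [{"ci_id": "srv02", "approved_for": now}]
    print(unauthorised_changes(changes, rfcs))   # srv01 change has no matching RFC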

Don
mnp123
Itiler

Joined: Dec 26, 2007
Posts: 24

Posted: Fri Jan 18, 2008 3:48 am

I really appreciate all of your feedback. I agree with all of you: start with the process and not with the tool. I also agree that auto discovery tools are not yet at a maturity level where they provide data on a consistent basis, especially for large networks. When I took over this project it was too late, and the auto discovery tool had been sold as a one-stop solution instead of a process. Auto discovery tools do provide good information, but in complex environments they need too much maintenance/monitoring of discoveries.
Are any of you using an auto discovery tool, and how often are you scanning your network?
Cotswolddave
Itiler

Joined: Mar 23, 2007
Posts: 35
Location: UK

Posted: Fri Jan 25, 2008 10:38 am

Size can be a problem - especially for discovery tools.

I've been working with some larger orgs with over 100,000 desktops and they find most auto-discovery tools unsuitable in practice. A couple of reasons (some of which echo John Hardesty's viewpoints):

1. If you allow an auto-discovery tool to populate a CMDB or asset system directly, it will bypass any controls you have to validate data, i.e. you may discover contractors' PCs, demo kit and real devices, but fail to identify where your asset processes went wrong. So most mature companies only allow discovery tools to update a central repository where a device already exists and has other manually entered data.

2. The technique of gathering everything into a discovery database, which you then query against software lookup tables, becomes unusable in big environments. The lookup tables are often out of date and need to be manually set for bespoke apps. So the current technique is to gather only minimal data - name, OS, memory and a few others - and anything else is driven by you for a specific reason. You decide what you want, then set a query for the discovery tool to run. In one place where I came across this, they would run a query for, say, Oracle instances and get an answer across 85,000 desktops and servers in a day without killing the network or the host platform. A few weeks later it would be something else. They would compare the results with the asset database to work out what they didn't get a response from, and from that a confidence level (a rough sketch of that comparison follows below).

3. Deleting entries from an auto-discovery tool's database is often a manual process - so if you have nothing to compare the results with, your indicated population will always increase as devices are replaced, with the result that it won't be believed. It can help you verify what you believe you should have, but if your job depended on giving an accurate answer for licensing or asset values to a boss - then start looking for another position now, as auto-discovery can only ever be part of the asset management system.
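
A rough sketch of that query-and-compare step, assuming the asset database and the query results can both be reduced to sets of hostnames (names and fields here are illustrative, not any product's schema):

Code:

def coverage(asset_db_hosts: set[str], responders: set[str]) -> dict:
    """Compare a targeted discovery query against the asset database and
    derive a rough confidence level from how many known assets responded."""
    expected = {h.lower() for h in asset_db_hosts}
    answered = {h.lower() for h in responders}
    silent = expected - answered          # assets that never responded
    unknown = answered - expected         # responders missing from the asset DB
    confidence = len(expected & answered) / len(expected) if expected else 0.0
    return {"confidence": confidence, "silent": silent, "unknown": unknown}

if __name__ == "__main__":
    assets = {"srv01", "srv02", "pc0001", "pc0002"}
    replied = {"SRV01", "pc0001", "lab-demo-3"}      # lab-demo-3 not in the asset DB
    result = coverage(assets, replied)
    print(f"confidence {result['confidence']:.0%}")  # 50% of known assets replied
    print(sorted(result["silent"]), sorted(result["unknown"]))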

Dave
milligna
Itiler

Joined: Oct 27, 2008
Posts: 34

Posted: Tue Oct 28, 2008 2:38 pm

There's a great book by Larry Klosterboer from IBM about Configuration Management. He says that the first thing you should do when approaching a Configuration Management initiative is define the Scope (what types or categories of CIs to capture), the Span (what subset of those CI categories to capture) and the Granularity (what attributes of those CIs to capture).

When planning a Configuration Management System, there's no reason that the Span of CIs couldn't be limited to "systems that are discoverable", with the understanding that anything that isn't discoverable is going to be too costly to manage manually. That's not to say that you can't have manually managed items in your CMS, but you should be clear about how accurate those CI records are (when they were last audited, for example). When your discovery capabilities increase (let's say an SOE was deployed to all the older desktops, which allows them to be seen accurately by the discovery system), you can increase the Span of CIs that can be controlled via discovery tools.

My ideal CMS has a set of "bubbles" defining how each subset of CI categories is controlled. Let's say we are able to discover Windows XP PCs in the environment and to find out what software is installed on them via SMS; then we have a set of data that is being controlled. We take the discovered data, filter out any information that doesn't actually support the services we provide, and introduce a "bare-bones" set of all discovered XP PCs into the CMS with a rating that says how much control we have over these CIs and how verifiable the information is (very much so in this case - the discovery tool tells us every day whether the PCs have changed or not; we get desktop support to investigate unexpected PCs appearing on the network, or we contact the last known user of a PC if it disappears from the CMS for a defined period of time).

Let's say discovery can't find Windows 2000 PCs, but we have done an audit of the w2k boxes, and we have a rough set of information. Again filter out that which is too costly to maintain (likely to be a LOT of information) and focus on that which supports the service. In a case like this you might only want serial number, location, PC name and owner. You also want to tag these CIs as having a lower rate of accuracy and have different audit/verification procedures for these types of CIs (regular physical audits too expensive? Then you may only be able to rely on manual updates from your Desktop/Service Desk agents). Put a timer on your CIs - start the clock from the last time the CI data was verified, and record what the source was. Do you trust Desktop more than the Service Desk (because they actually eyeball the thing)? Then set the timer for the less trusted source to a shorter period of time. Plan additional verification activities around your CI validity counter.
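
A minimal sketch of that "timer on your CIs" idea, assuming each CI records when it was last verified and by which source, with shorter timers for less trusted sources (the source names and intervals are invented for illustration):

Code:

from datetime import datetime, timedelta

REVERIFY_AFTER = {
    "auto_discovery": timedelta(days=7),    # confirmed daily, re-checked weekly
    "desktop_support": timedelta(days=90),  # someone actually eyeballed the box
    "service_desk": timedelta(days=30),     # reported over the phone, expires sooner
}

def overdue(cis: list[dict], now: datetime) -> list[dict]:
    """Return CIs whose verification timer has expired."""
    return [ci for ci in cis
            if now - ci["last_verified"] > REVERIFY_AFTER[ci["source"]]]

if __name__ == "__main__":
    now = datetime(2008, 10, 28)
    fleet = [
        {"name": "W2K-0042", "source": "service_desk",
         "last_verified": datetime(2008, 8, 1)},
        {"name": "XP-0100", "source": "auto_discovery",
         "last_verified": datetime(2008, 10, 27)},
    ]
    for ci in overdue(fleet, now):
        print(ci["name"], "needs re-verification")   # only W2K-0042 is overdue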

When you have the ability to discover your w2k boxes, then expand your bubble to include these CIs. Your sphere of Configuration Management begins to expand as your capabilities expand.

One of the biggest problems I find is that CMDBs/CMSs are too densely populated and have no controls defined. It's great that you know all of this stuff about your environment, but how does having it listed in your CI help you perform support activities? Do you really need to know exactly how many disk partitions there are at this point in time if you can't keep the CMDB up to date and your engineers will just check on the server to confirm anyway?
Diarmid
Senior Itiler

Joined: Mar 04, 2008
Posts: 1884
Location: Newcastle-under-Lyme

Posted: Tue Oct 28, 2008 6:56 pm

milligna,

While the mechanics of your proposal may be sound and the concept of control is very important, I can't help feeling that you have omitted the most important element and perhaps confused the issue.

The key to determining the scope, span and granularity and the first criterion to apply is importance to service management and thus to the business. That is a very good reason for not limiting your span to systems that are auto discoverable. (I take you to mean auto discoverable, because in the wider sense anything undiscoverable effectively does not exist.)

Everything of value to the business is too costly to fail to manage. If it is too costly to manage then it must not be ignored; it must be eliminated or replaced or fixed.

Even as a "you must start somewhere" approach, the only starting point is importance to the business, not ease of obtaining information. By all means build a discovery hierarchy, but subordinate it to a value hierarchy.
_________________
"Method goes far to prevent trouble in business: for it makes the task easy, hinders confusion, saves abundance of time, and instructs those that have business depending, both what to do and what to hope."
William Penn 1644-1718
milligna
Itiler

Joined: Oct 27, 2008
Posts: 34

Posted: Thu Oct 30, 2008 9:42 am

Hi Diarmid,

I agree that the decision to implement control procedures should be focussed on the business need, and that there are often times where Configuration Management can't be done via auto-discovery. However, as a wise man once said, "as soon as the data is in your CMDB it is out of date". Verification and control via auto discovery is a lot easier and cheaper to manage in the long term, and reaps greater rewards in terms of supporting Service Management activities, than approaches that cannot be automated.

As you intimate, the question needs to be asked of the business: here is the cost in time/effort/resources vs the risk of manual procedure failure vs the benefit of having somewhat accurate CI data. If the business agrees to assume the cost and risk of manually managed CIs, then that's awesome, but my experience of organisational understanding of Configuration Management is that they expect the data to be introduced to the CMDB and then kept up to date in a controlled manner with minimal effort. That may be attainable if appropriate control mechanisms and culture are enforced organisation-wide.

I also agree with using the metric of "number of CIs/attributes being managed manually" to drive service improvement. You can demonstrate the cost of Configuration Management control activities quite neatly by showing how many attributes or CIs your support staff are expected to verify outside of automated discovery activities. Verifying automated data is more a case of regularly auditing a random sample of the discovered data, whereas manually managed fields should be verified at every available opportunity - whenever a service desk agent takes a call, whenever a server engineer logs onto a server, during regular manual audits, etc. These costs need to be demonstrated to the business.
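
A small sketch of that metric, assuming each CI attribute records how it is maintained (the field names and method labels are illustrative):

Code:

from collections import Counter

def maintenance_breakdown(cis: list[dict]) -> Counter:
    """Count attributes by the method used to keep them up to date."""
    methods = Counter()
    for ci in cis:
        for attribute, method in ci["attributes"].items():
            methods[method] += 1
    return methods

if __name__ == "__main__":
    cmdb = [
        {"name": "srv01", "attributes": {"serial": "discovered", "owner": "manual"}},
        {"name": "hub-site-a", "attributes": {"serial": "manual", "location": "manual"}},
    ]
    counts = maintenance_breakdown(cmdb)
    print(counts["manual"], "manually maintained attributes")    # 3
    print(counts["discovered"], "automatically maintained")      # 1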

Cheers,
Kev
UKVIKING
Senior Itiler

Joined: Sep 16, 2006
Posts: 3318
Location: London, UK

Posted: Thu Oct 30, 2008 6:38 pm

My issues with auto discovery tools are as follows:

Scope - either too narrow or too wide
Agent / agentless - some AD tools require an agent on the devices
IT gear specific - some do network equipment better than servers, PCs etc
Protocol based or not - SNMP polling and MIBs may make SNMP-based tools better at gathering some data
Firewall interference - some business networks close all ports except those necessary
IP ranges - RFC1918 address space issues and other network architecture rules
Data files and extraction - how easily can the data be exported and compared to what is in the CMDB

I like AD tools as an aid in verifying the environment - but that is it

For me, if I am tracking network devices from a CMDB point of view, my first point of contact regarding information on the network devices is...

the team that is responsible for implementing new network gear - for new gear
the team that is responsible for managing existing gear - for everything else

Surely the network team knows what kit they are managing, where it is and what it does
_________________
John Hardesty
ITSM Manager's Certificate (Red Badge)

Change Management is POWER & CONTROL. /....evil laughter
Diarmid
Senior Itiler

Joined: Mar 04, 2008
Posts: 1884
Location: Newcastle-under-Lyme

Posted: Thu Oct 30, 2008 9:18 pm

Kev,

My point is that to say "there are often times where Configuration Management can't be done via Auto-discovery" is less appropriate than to say that sometimes auto-discovery can aid configuration management.

In other words, auto-discovery has nothing to do with determining what you need in your CMDB and should have little impact on how you design it.

The major processes around the quality of the content of your CMDB are the processes that interface with it, plus the application of audit both to those processes and to the CMDB itself; it is only in the last of these that auto-discovery has a significant role. However, even here you need much more, because you need to verify that relationships are correct, and these can involve mapping to services, user populations, service and support functions etc., little of which can be auto-discovered with confidence.



I fear you slightly misunderstood my reference to business. You do not ask the business to fund Configuration Management; you agree with the business what services they require and to what quality and at what cost. How you deliver is up to you. If you cannot meet the quality requirements within the cost constraints, then you negotiate (or market test your service).

For configuration management you determine what you need to do to meet these requirements and then you devise cost-effective and efficient ways to do it.

If it is cost effective to deploy auto-discovery tools to help with this, then that is what you do. Either way, you have to balance risk and cost within known constraints.
_________________
"Method goes far to prevent trouble in business: for it makes the task easy, hinders confusion, saves abundance of time, and instructs those that have business depending, both what to do and what to hope."
William Penn 1644-1718
milligna
Itiler

Joined: Oct 27, 2008
Posts: 34

Posted: Fri Oct 31, 2008 2:04 pm

Hi Diarmid,

I see where you're coming from. The only organisations I've worked for so far where I've been concerned with ITSM practices have been outsourcers with less focus on Config Mgmt in their agreements with their customers. I've not yet implemented a Config mgmt initiative from scratch - I've inherited existing infrastructure where the Config Mgmt documentation will say something like "all configuration items are 100% accurate", and when the data was populated they basically took everything that the customer gave them and threw it into the CMDB.

One example I'm thinking of is a set of network devices for which data wasn't fed to us when we took over support and for which data is not autodiscoverable, but due to the wording of the agreement the customer is putting pressure on us to "perform configuration management" on these devices. The problem is that the devices are spread around the state (one per regional site) and the customer is not going to pay us for an audit, nor can we really afford to wear the cost of an audit for these devices ourselves. How have other people here addressed that disparity in Configuration Management? How do you get the scope of your configuration management activities under control once a precedent has been set? Auto discovery is a good way of drawing boundaries around what can and can't be done by config mgmt in the absence of any further resources being spent on the config mgmt initiative.

I know if I were implementing from scratch, I would be very clear about what was and what wasn't under config management control to start with (using the concept of "the config management bubble" I spoke about earlier), and I would be very clear about what activities would take place to verify or update a particular category of CIs. If the cost of performing audits or automating discovery is out of the question for these network devices, for example, then I'd probably say that the control procedures for these devices are limited to ad-hoc verification by the Service Desk when a call is logged, or something along those lines. If there is some data that is going to be very manual and difficult to obtain, but it is vital to the business, then we'd allocate more resources to procedures for managing this data. That way the effort expended on configuration management activities is matched to the requirements and risks of the business when compared to the cost of maintenance.

Kev.
UKVIKING
Senior Itiler

Joined: Sep 16, 2006
Posts: 3318
Location: London, UK

Posted: Fri Oct 31, 2008 7:03 pm

milligna

Let us take your network devices scenario.

Who manages them? If you do, then you have access to the machines from an admin point of view.

The information about the devices can be gathered by the network team.

Start slow:

device name
type
ip data
interfaces

Surely you would have contact details for the physical location -

voila! you have made a start
_________________
John Hardesty
ITSM Manager's Certificate (Red Badge)

Change Management is POWER & CONTROL. /....evil laughter
Diarmid
Senior Itiler

Joined: Mar 04, 2008
Posts: 1884
Location: Newcastle-under-Lyme

Posted: Fri Oct 31, 2008 8:13 pm

milligna wrote:
One example I'm thinking of is a set of network devices for which data wasn't fed to us when we took over support and for which data is not autodiscoverable, but due to the wording of the agreement the customer is putting pressure on us to "perform configuration management" on these devices. The problem is that the devices are spread around the state (one per regional site) and the customer is not going to pay us for an audit, nor can we really afford to wear the cost of an audit for these devices ourselves. How have other people here addressed that disparity in Configuration Management? How do you get the scope of your configuration management activities under control once a precedent has been set? Auto discovery is a good way of drawing boundaries around what can and can't be done by config mgmt in the absence of any further resources being spent on the config mgmt initiative.


Sounds like there was something lacking in due diligence when you took over the service. I don't quite understand what you mean by the customer requires you to do configuration management. The most obvious customer of Configuration Management is the Service Manager and his delegated Incident, Problem, Change, etc. managers; and of course the Infrastructure Manager can be an important customer.

If you are delivering services through these devices then you obviously need to manage their continuing effectiveness and reliability. Configuration Management helps with this by maintaining information as to their characteristics, maintenance and life expectancy, relationship to other components and to specific services. The cost of doing the configuration management is a cost to the service, especially including the provision and maintenance of the devices.

There is always risk of failure. This would raise costs in regard to repair or replacement as well as loss or reduction of service. The decision to make is at what level of confidence the risk balances the cost of inspection. From your present position, what is the risk from your lack of detailed information about the devices? How risk averse is your customer? How damaging to your contract is a breakdown? It is the same equation that led to the Wichita Lineman.

Like John says, you can make a start easily enough.
_________________
"Method goes far to prevent trouble in business: for it makes the task easy, hinders confusion, saves abundance of time, and instructs those that have business depending, both what to do and what to hope."
William Penn 1644-1718
milligna
Itiler

Joined: Oct 27, 2008
Posts: 34

Posted: Mon Nov 03, 2008 9:33 am

Hi guys,

The devices we're talking about in this example are 8 port hubs and the comms guys tell me that they can't be discovered with what we have available to us at the moment. We know the location and that there is a hub there. My understanding is that we don't expressly manage these hubs because they don't have remote access capabilities, but we COULD be managing these hubs if the customer upgraded them to something a bit smarter. We have been asked to include these as CIs in the CMDB despite not having details - I don't really like having generic "placeholder" CIs - HUB-LOCATIONNAME or whatever - I think CIs should be more unique and specific than that.

At the moment if one of these hubs fails the remote site is left off the network until a replacement can be sourced - and as we don't manage these "dumb" devices, the risk to the service we operate is minimal - I don't believe we would get penalised for it. The risk to the customer is higher, which is why they are pushing us to solve the problem for them.

Anyway, I think you're right about who the real customer is, Diarmid - the people managing the service would be the most appropriate people to decide what they need to know about and what they don't. Just including a CI because it exists, without having the means to control or verify the data, doesn't sit right with me... especially if the customer is likely to turn around, complain about the validity of the data and say we're not "doing" Config Management "properly".

I'd much rather have a continuum of Configuration Management validity - I think some fields in the CI that record when it was last verified and from which source would be a good start. Even better would be something which tracks the update source of each attribute - for example, a desktop support agent verifying a serial number is considered less trustworthy than an autodiscovered source, but more trustworthy than a Service Desk agent on the phone.

I'd like to see CIs given a validity rating based on when the CI was last updated and from which source - that way the validity of each CI is measured on a continuum, and the various service management processes that use config data can be better informed when evaluating CIs.
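
A minimal sketch of that validity continuum, assuming each attribute stores its last verification time and source; the trust weights and decay rate are invented for illustration, not an ITIL formula:

Code:

from datetime import datetime

SOURCE_TRUST = {
    "auto_discovery": 1.0,
    "desktop_support": 0.7,   # eyeballed the device
    "service_desk": 0.4,      # reported over the phone
}

def validity(source: str, last_verified: datetime, now: datetime,
             half_life_days: float = 90.0) -> float:
    """Score in [0, 1]: source trust, halved every half_life_days of age."""
    age_days = (now - last_verified).total_seconds() / 86400
    return SOURCE_TRUST[source] * 0.5 ** (age_days / half_life_days)

if __name__ == "__main__":
    now = datetime(2008, 11, 3)
    print(round(validity("auto_discovery", datetime(2008, 11, 2), now), 2))  # close to 1.0
    print(round(validity("service_desk", datetime(2008, 5, 1), now), 2))     # around 0.1

Processes that consume the data can then set their own thresholds, e.g. change planning might require a higher score than a quick service desk lookup.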

One pain point for me in the past has been from Server Engineers not using server data in the CMDB because they don't trust the data there!

Kev