Friday, February 27, 2015

Governance Series - How to create a Single Source of Truth in SharePoint

I was in the SharePoint Community site last night and someone asked about maintaining a single source of truth.  One way to do it is to have a central repository for documentation; in SharePoint we would use a Records Center.  But it raises a good point: how do you distribute those documents out to the team sites where they are actually used and manipulated?  The answer is simple: we create a new type of document library that can contain only links to documents.  We template it and include it in all our site templates in place of a regular document library.  In addition, we could add form and workflow functionality that would let users upload files and have them automatically routed to the document repository; but that is for another time.

So how do we do this?  First we are going to need two sites, each with a document library: one we will use as the source, the other for the links.  In my example I will start with the "Linking" site, but it really doesn't matter which site comes first, only that the document exists in the repository before you create a link to it.

Linking Site

Ok, so we have created a new site; let's say it uses the team site template.  With a team site (and most other templates) you get a document library named "Documents".  I am going to modify that library and make it a "Link Library".  Now don't confuse this with the "Links" app, which is just a list of hyperlinks.  At the end, I will add the steps to save it as a "Link Library" template so it can be reused.

Creating a Link Library from a Document Library

  1. In the Document Library Click the Library tab then Click "Library Settings"
  2. In the Settings page, Click "Advanced Settings"
  3. In Advanced Settings, under "Content Types", "Allow management of content types?", Select "Yes", then scroll down and Click "OK"
  4. Under "Content Types", Click on "Add from existing site content types"
  5. Scroll through the list and Select the "Link to a Document" content type, Click "Add" and then Click "OK"
  6. Under "Content Types" Click "New Button Order and Default Content Type"
  7. Change "Link to a Document" from "2" to "1" and Click "OK"
  8. Under "Content Types" Click "Document"
  9. Under "Document" Click "Delete this content type"
  10. When Prompted, Click "OK" to delete the content type
Your library is now configured to create links to documents rather than hold the documents themselves.  I have not addressed the issue of the Upload button: it is simple enough to hide it, or, for a complete solution, you can create a workflow that moves uploaded files to a designated location in the repository; to me that is the much better option.  One caveat I want to point out is that "Link to a Document" uses a URL, not a navigation pane, so you need to know where the document you are linking to is located.  The advantage is that this lets you cross site collections and even farms, but it does not provide a great user experience.
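
If you would rather script this conversion than click through the UI, here is a minimal PowerShell sketch for an on-premises farm.  The site URL is a placeholder and the library is assumed to be the default "Documents" library; adjust both for your environment.

    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    # Placeholder URL - point this at your linking site
    $web  = Get-SPWeb "http://sharepoint/sites/team"
    $list = $web.Lists["Documents"]

    # Step 3: allow management of content types
    $list.ContentTypesEnabled = $true
    $list.Update()

    # Steps 4-5: add the "Link to a Document" content type from the site collection
    $linkCT = $web.AvailableContentTypes["Link to a Document"]
    $list.ContentTypes.Add($linkCT) | Out-Null

    # Steps 8-10: delete the default "Document" content type, leaving
    # "Link to a Document" as the only (and therefore default) choice
    $list.ContentTypes["Document"].Delete()
    $list.Update()
    $web.Dispose()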

Repository Site

The Repository site, like the Linking site, contains a document library; unlike the Linking site, however, we are not going to manipulate it in the same way.  Instead, we are going to configure it to fit our purpose, which is the storage of documents.  Now I am not going to tell you how to do that, because every organization is different, but some things you might look at doing include setting up folder containers to classify uploaded documents, creating retention and disposition workflows and, of course, version control; a small scripted example follows.
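
As one concrete example, this short PowerShell sketch turns on version control for the repository's library; the URL and the retention limits are assumptions you would tune to your own retention policy.

    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    # Placeholder URL - point this at your repository site
    $web  = Get-SPWeb "http://sharepoint/sites/repository"
    $list = $web.Lists["Documents"]

    # Enable major and minor (draft) versions with assumed retention limits
    $list.EnableVersioning            = $true
    $list.EnableMinorVersions         = $true
    $list.MajorVersionLimit           = 10   # keep the last 10 major versions
    $list.MajorWithMinorVersionsLimit = 5    # keep drafts for the last 5 majors
    $list.Update()
    $web.Dispose()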

Test out the solution and let me know what you think.  Based on writing this, stay tuned; I think I like the idea of showing you the Nintex workflows that would go along with this.

Follow me on twitter @DavidRMcMillan and @DevFactoPortals.  Feedback and suggestions are appreciated and encouraged.

Saturday, February 21, 2015

A Tale of Two Governances - Part 1 Health Benchmark

When you read about governance, it is often focused on what I call foundational governance.  In the case of information technology (see my definition) we focus on foundational information governance, or the way we intend to use our information within the organization.  This, however, is only one of the two parts of the governance actually needed to manage our information.  The second portion, which is often not considered governance at all, encompasses the processes and procedures needed to maintain the systems that manage and transmit information.  I refer to this as operational governance; it consists of the structure, policies and procedures that ensure a stable and consistent information management solution.

Operational Governance

The governance of sustainment processes within an organization is typically hit and miss.  Most organizations will have some type of backup and recovery process, but how many have a process for the creation of sites in an ECM like SharePoint?  Now don't get me wrong: some organizations are very dutiful in creating what they perceive as the processes needed to maintain and administer their systems.  The problem is that many do not, and those that do don't necessarily cover everything they need.  As a consultant I come into organizations that are experiencing pain, usually in the governance of their solutions; my job is to determine the gaps and remediate them.  One of the best ways to evaluate gaps in the operational governance of a solution (regardless of the technology) is to interview the administrators and key business users, perform a health assessment of the system and make recommendations on best practice based on the gaps; in some cases we then remediate those gaps as a final step.  These steps quickly identify what exists and what does not, and help me understand the technical skills of the administrative team.

In this first part we will walk through finding our current state; then, in a future post, we will look at the rest of the operational governance that should be considered to ensure a properly sustained environment.

The Interviews

The first step in the process is the interviews.  In a SharePoint solution I like to sit down with the farm administrators, the site collection administrators and the service desk manager.  These three groups or persons can provide insight into pain points and into items that take up a significant portion of their daily activity.  Here are a few questions I will typically ask and why I ask them.  It is also important that you are clear with them about your purpose: as a consultant coming in, they may perceive you as critiquing them on their job, but you are there to help them be heard and to fix their pain.

In other solutions you may have different roles; as long as you can extract the pain points and issues with the solution, your interviews can be with whoever can best provide the answers.

Farm Administrators

Farm administrators are your best source of information when it comes to issues with operational governance.  They know the solution better than anyone else and have to deal with everything and anything that goes wrong.  Often it is easiest to just sit down over coffee and a notebook, ask them what is wrong with the solution and what they would fix, then sit back, let them vent and take notes; but I like to have a plan, so I typically compile a list of questions to ask beforehand (let me know if you have some good questions and I can add them).
  1. Do you have anything that maps out your daily routine? This is asked to establish the existence of a "Run Book" or standard operating procedures (SOPs).
  2. Do you have any tickets assigned to you that are more than 30 days old? If yes, what are those tickets and what is preventing you from closing them?  This helps identify not only gaps in knowledge, but also pain areas in architecture or process.  There is often an in-depth conversation about cause and what they would like to see happen to resolve these issues.
  3. Are there any issues that keep recurring or that never really go away?  This provides insight into pain areas where they may have a workaround, or where they have decided to perform something a specific way and it is not working.  This is another area where we will have additional conversations about how they think it should be.
  4. Do you have any performance issues with the current farm?  If yes, do you know the cause, and have you researched a solution? Performance issues point to problems with the farm's architecture and/or configuration that may be hampering the solution and preventing it from performing as intended.  They also help gauge knowledge level and root-cause problem-solving capabilities.
  5. Which group or groups are the most active on your farm? This identifies who to interview from a site collection administrator perspective, concentrating on the site collections that are the most active and most in need of support.
  6. Do you have remote offices that access the farm?  How good is their connection? Do you get performance tickets from those offices? Remote connectivity is often an issue; identifying where these connections occur and whether there are issues up front will save you time and effort.  Follow the premise that it is easier to ask the question than to search for the answer: tools are great, but the farm administrator will have insight the tools can't provide.
Notice I didn't ask questions like how many farms there are, which servers are on each farm, or the number and size of content databases.  These can be asked, but typically you know those things before you begin the engagement, and even if you don't, reports from SPRAP or any other health assessment tool will clearly give you all this information.  At my office we have developed our own health assessment tool to answer all the farm questions and touch over 100 different areas of the farm.  I have included the areas in my post, What should I Check With a Health Assessment?, and would love any feedback you have on the points and questions.  With your help I can make it the most complete health assessment list available.

Once the farm administrator interviews are complete, we can move on to the Site Collection Administrator questions.  Site Collection Administrators have less knowledge of the configuration, but provide a direct point of contact with your key stakeholders.

Site Collection Administrators

Based on question 5 above, you should have an idea of which Site Collection Administrators are needed for this portion of the questions.  In smaller organizations the Site Collection Administrators may be the Farm Administrators; you should be able to figure that out quickly when beginning the engagement.  Site Collection Administrators are a SharePoint solution's first line of direct contact and problem solving in the business; they are the most likely to know what the users want changed and which issues recur the most from a user experience perspective.

  1. Do you have anything that maps out your daily routine? This serves a different purpose than with Farm Administrators; here you are looking for what takes up most of their day.  If they don't have it mapped out, sit down with them and ask what a typical day looks like.  They may have trouble providing it, so another approach is to ask them to log their activities for a couple of days, recording what they are working on.  You can then review it and confirm whether the tasks are typical.
  2. Do you have any requests from your business users you have not been able to fulfill?  If yes, what has prevented you from fulfilling them?  This will often identify issues with configuration, policy or knowledge level, use it as a sounding board to ensure the architecture meets the business needs.
  3. Are there any issues that keep recurring or that never really go away?  This provides insight into pain areas where they may have a workaround, or where they have decided to perform something a specific way and it is not working.  This is another area where we will have additional conversations about how they think it should be.
  4. If you could change anything about the solution what would you change?  Site Collection Administrators often have good feedback on improvements specific to user experience and functionality, make note of the changes, then identify them as future state requests for remediation and road mapping.
Remember, these questions are really meant to draw out the pain points and issues with the environment.  You may hear the same answer from many different people; that should raise the importance of the issue.  Some of the answers may be symptoms of a deeper problem, and it will be your job to determine that before attempting to remediate it.

Service Desk Manager

The Service Desk Manager can provide you with tangible numbers on where issues are occurring, open tickets and the typical complaints users have made about the system.  They corroborate what has been discussed with the farm and site collection administrators and provide additional insight and the numbers behind the importance of the issues that have been identified.

  1. Can you provide a report of tickets opened for SharePoint in the last 6 months?  This should provide ticket count, time to close and total percentage of tickets for each category.
  2. What are the main complaints your team hears in regards to SharePoint?  The Service Desk is the first line for support, so they hear most of what the users like and dislike about the solution.
  3. What would you change about SharePoint if you could?  This is an open ended question and should elicit conversation on improvements and pain that they feel from their environment.
Remember, the questions above are a starting point; you want to draw out their experience of the pain.  In some cases it might be better to talk directly to the business units, but always remember these interviews are about gaining insight into issues with the environment.

Health Assessment

As mentioned above, the Health Assessment portion is usually done through a tool that compiles all the information about the environment.  It analyzes your solution and provides feedback on all areas that need to be considered.  Please refer to What should I Check With a Health Assessment? for the actual check points, and complete it in whatever manner you wish.


Report and Remediation
From the interviews and health assessment, a report of gaps and issues with the design can be created and presented to organizational decision makers.  From the report you will also be able to identify the criticality and, with discussion, the priority of the issues involved.  Use this information to build a remediation plan that includes each issue, its criticality and priority, the proposed solution and the effort needed to resolve it; then sit down with the decision makers and work out the remediation plan to resolve the issues.  The plan should provide a timeline for each resolution and the resource allocation needed to resolve it.


Next Part
In the next part of this series, we will look at other parts of your operational governance and what it takes to ensure your environment has the operational governance it needs.  Feel free to read my other posts and follow me on Twitter: @DavidRMcMillan and @DevfactoPortals.


Tuesday, February 17, 2015

What should I check with a Health Assessment?

When you perform a health assessment of a SharePoint farm, you need to check everything you have and compare it to patterns and practices.  In some cases you will come across limits (supported maximums) and boundaries (hard limits) for certain settings; your goal should be to ensure you are well within any limits and to have a plan in place to keep your settings within the standards and practices as they relate to your farms.


The purpose of this blog post is to give you a guide to the physical attributes of your solution and what you need to check.  I do not talk about tools in this post, but I suggest you employ a tool for your health assessment because it provides a consistent, repeatable approach to your solution's health.


I will not be too verbose in this post, but will concentrate on the areas that one of my cohorts, Kevin Cole (follow him on Twitter), a Microsoft Certified Master of SharePoint 2010 and brilliant technical mind, and I came up with.  I have broken the areas down into 11 different sections and will briefly talk about what you need to know in each, so let's get to it.


The Check Points

As I mentioned, you can check these things manually, but it will be time consuming.  There are many tools available to perform these checks; we use PowerShell, which allows us to create our health reports regularly and consistently (a bare-bones example follows).  I have not gone in depth into any of these items, but I will add to and modify this list if you provide feedback.  This is a work in progress, but as far as I know it is the only checklist I have found to date that covers the whole farm.
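
To give you an idea of the approach, here is a bare-bones sketch of the kind of PowerShell we start from; the output path is a placeholder, and a real tool adds a function for each check point below.

    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    # Placeholder output location for the report
    $report = "C:\HealthReports\Farm-$(Get-Date -Format yyyyMMdd).csv"

    # Servers and their roles (Servers check points 1 and 2)
    Get-SPServer |
        Select-Object Name, Role, Status |
        Export-Csv -Path $report -NoTypeInformation

    # Farm build number, the starting point for the patch-level checks
    (Get-SPFarm).BuildVersion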


Servers

  1. Determine the servers being used in the farm: Server identification is needed to understand the resources you are working with and to identify gaps in architecture
  2. Determine the roles of each server in the farm: The role tells you what the server is doing and on which tier of the farm architecture the server resides.
  3. Draw the logical diagram of the farm: A list of servers and their roles is difficult for the average user to understand, a graphical representation makes it easier for everyone to understand.
  4. Gather the number of processors, type and if they are dedicated or shared (VM) for each server: Knowing the allocated processing power helps identify processing shortfalls that may cause performance issues.
  5. Gather the RAM and whether it is dedicated or shared (VM) for each server: Knowing the allocated RAM helps identify when disk caching will occur and identify performance issues.
  6. Gather the total and available storage for each server (Physical and SAN): Understanding your storage and any limitations will ensure you don't run into a situation that has you scrambling to add storage.  In addition, configuration of swap drives, etc. can affect performance.
  7. Gather the type, current capacity, allocated and maximum capacity of the SAN: Knowing the SAN capacity will help with determining current capacity and planned growth. The type of SAN will help identify any RBS provider issues or determine what is needed to implement RBS, if it has not been implemented.
  8. Determine the hardware lifecycle for server infrastructure: Understanding how old each server is and when it is planned to be replaced allows for a proper perspective when identifying which servers are underpowered for the current environment or for future growth.
  9. Determine the patch levels of the server OS and all dependent services: Identifying any outstanding patches will identify any risks to the stability of the OS and the services SharePoint relies upon and may identify possible security exploits.
  10. Determine patching schedule and outage windows for the solution: Patching schedules and outage windows are important to the health of the servers, allowing for proper maintenance without the risk of causing a disruption. Determine if and when patching is performed, when the outage window occurs and how long it lasts.
  11. Determine the SQL Server version and patch level: Knowing your SQL Server version and patch level will help you identify issues with performance and may identify security holes.  In addition, the SQL Server version affects some feature availability and limitations, depending on your farm.
  12. RBS SQL Server Configuration: Storing BLOBs in the database can consume large amounts of file space and expensive server resources. RBS efficiently transfers the BLOBs to a dedicated storage solution of your choosing, and stores references to them in the database. This frees server storage for structured data, and frees server resources for database operations.
  13. RBS BLOB Threshold: Setting the right size threshold will ensure a balance between processing needed to offload large files and your content database size.
  14. SAN Configuration: A misconfigured SAN can cause increased latency and other issues to RBS, SharePoint and SQL Server.
  15. Storage Provider Configuration: Using the correct storage provider (and correct version) for your SAN will improve performance. 
  16. SAN Capacity: Ensure your future storage needs do not exceed the current capacity, check for the current utilization and available storage as well as the ability to expand storage hardware if needed.
  17. SharePoint RBS Configuration: Ensure your farm is configured correctly for RBS.
  18. BLOB caching setup: Disk-based caching is extremely fast and eliminates the need for database round trips if it is configured properly.
  19. RAM Utilization: Ensure your farm servers are not over utilized.
  20. CPU Utilization: Ensure your farm servers are not over utilized.
  21. User Profile import filters:  Are service accounts and disabled accounts filtered out?
  22. User profile synchronization schedule: Find the right balance for the sync. 
  23. Portal super reader and super user accounts setup: Verify they are set properly and that the membership is correct. 
  24. Office Web Apps cache: It is recommended to isolate the content database used for the Office Web Apps cache, so that cached files do not contribute to the size of the "main" content database(s) for the Web application.
  25. OWA service apps: Ensure the Apps are running on correct server roles.
  26. Web apps: Ensure Web apps are not running in ASP.NET debug mode in production.
  27. Farms: Record the number of Farms and purpose of each.
  28. Web Apps: Ensure Web apps are configured correctly.
  29. Content Databases: Ensure proper content database sizes and configuration.
  30. Site Collections: Ensure properly sized and organized site collections.
  31. Custom Features: Review and record the Custom Features, where they are used, their intended purpose and proper installation and activation.
  32. Custom Apps: Review and record all custom apps installed on the farm, their intended use and where they are being used.
  33. Custom Web Parts: Review and record where any custom web parts are being used and that they are working properly.
  34. Environments: Record and ensure the environments are synchronized and consistent with each other and that they are being used for their intended purpose.
  35. Environment Patching: Check environments for consistent patching (build numbers) between all environments
  36. SQL Naming: Ensure SQL Servers are using SQL Aliases, not computer names or CNAMES
  37. DNS: Ensure host records defined for the SQL Aliases
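
A few of the farm-level check points above (27 through 30) can be gathered with a couple of lines of PowerShell; treat this as a sketch, not a full assessment.

    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    # Check point 29: content database sizes (approximate) and site counts
    Get-SPContentDatabase |
        Select-Object Name,
            @{n='SizeGB';e={[math]::Round($_.DiskSizeRequired / 1GB, 1)}},
            CurrentSiteCount

    # Check point 30: storage used by each site collection
    Get-SPSite -Limit All |
        Select-Object Url, @{n='StorageMB';e={[math]::Round($_.Usage.Storage / 1MB)}}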
     

Platform

  1. Page file on a separate drive from the OS, SharePoint and logs
  2. Does storage meet the farm's needs (current vs. projected)?
  3. Are there large files being stored in document repositories?
  4. Record the number and size of files
  5. Is there a change management process in place?


Logs

  1. Check Application log for errors
  2. Check System log for errors
  3. Check ULS log for errors/ critical / warnings
  4. Check IIS logs for 503 error pages
  5. Check IIS logs for slow (>200ms) loading pages
  6. Check IIS logs for Active Directory Latency (304 not modified with excessive load times)
  7. Check IIS logs for dead links (404 errors)
  8. Check Requests per second count from IIS logs
  9. Check log locations (SharePoint/IIS should be on a secondary drive)
  10. Check for unrestricted growth
  11. Check log drive capacity/utilization
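
Several of these checks lend themselves to PowerShell.  This sketch covers the event log errors and a rough 404 scan; it assumes the default IIS log location and W3C log format, so adjust the path and site ID for your servers.

    # Check points 1-2: errors in the Application and System logs (last 7 days)
    Get-EventLog -LogName Application -EntryType Error -After (Get-Date).AddDays(-7)
    Get-EventLog -LogName System -EntryType Error -After (Get-Date).AddDays(-7)

    # Check point 7: rough scan for 404s in the IIS logs (the status code is
    # one of the space-delimited W3C fields, so this pattern is approximate)
    Select-String -Path "C:\inetpub\logs\LogFiles\W3SVC1\*.log" -Pattern " 404 " |
        Select-Object -First 20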


Solution Integrity

  1. Old SSP Site removed (for in place upgrades)
  2. Check Supported Limits for Managed path counts
  3. Check Supported Limits for Content DB sizes
  4. Check Supported Limits for List item counts
  5. Check for deleted pages in navigation
  6. Check for unused content sources in the search crawl
  7. Check Health Analyzer rules
  8. Check patch levels for all content databases
  9. Check for orphaned site collections
  10. Check for broken site collections
  11. Check for broken my sites
  12. Check for missing web part references (Error web part detected)
  13. Any Sites running in UI Compatibility Mode (2007 or 2010)
  14. Check code quality process for stress testing
  15. Check code quality process for load testing
  16. Check code quality process for security testing (each role)
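
For check point 4, this sketch walks the farm looking for lists over the 5,000-item list view threshold; it enumerates every web, so on a large farm expect it to run for a while.

    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    # Flag lists over the 5,000-item list view threshold
    Get-SPSite -Limit All | ForEach-Object {
        foreach ($web in $_.AllWebs) {
            $web.Lists |
                Where-Object { $_.ItemCount -gt 5000 } |
                Select-Object @{n='Web';e={$web.Url}}, Title, ItemCount
            $web.Dispose()
        }
    }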


Continuity

  1. Is backup being performed? 
  2. Review backup process
  3. Is the disaster recovery plan tested and reviewed annually? 
  4. Ensure Central Admin is redundant.
  5. Is disaster recovery farm on another site? 
  6. Virtual machines distributed properly across physical hosts for disaster protection?
  7.  Check for role redundancy for Web front ends
  8.  Check for role redundancy for Application Servers
  9.  Check for role redundancy for Database
  10.  Check for Service redundancy 
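
Role and service redundancy (check points 7 through 10) can be approximated by counting where each service instance is online; anything running on a single server is a potential single point of failure.

    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    # Services online on only one server are redundancy gaps
    Get-SPServiceInstance |
        Where-Object { $_.Status -eq "Online" } |
        Group-Object TypeName |
        Where-Object { $_.Count -lt 2 } |
        Select-Object Name, Count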

Security 

  1. Check for Extra ISA Firewall rules.
  2. Check SSL Use // IPSEC
  3. Are MySites hosted on a dedicated web application?
  4. Is the farm admin able to manage the service accounts?
  5. Ensure the farm account is not used for other services.
  6. The farm account should not be in the local administrators group except during install or patching.
  7. Ensure external access uses SSL.
  8. Kerberos Configuration (SPN's configured properly)
  9. Ensure the proper number of service accounts:
    SP 2007: 3
    SP 2010: 5
    SP 2013: up to 16 service and 3 server.
  10. Ensure My Sites are configured with secondary site collection owners.
  11. Ensure farm admin and service accounts are not permitted interactive logon.
  12. Ensure the proper service accounts are used for the proper services.
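
For the service account checks, a quick inventory of the managed accounts and the identity behind each application pool is a good starting point; this sketch only lists them, and the judgment about whether they are the proper accounts is still yours.

    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    # Check point 9: the managed (service) accounts registered in the farm
    Get-SPManagedAccount | Select-Object Username

    # Check point 12: which account runs each service application pool
    Get-SPServiceApplicationPool | Select-Object Name, ProcessAccountName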

Database

  1. Check content databases within limits.
  2. Check transaction log sizes.
  3. Check for excessive free space. // shrink db
  4. Trim audit logs to reduce content db size.
  5. Check the maximum degree of parallelism (SharePoint 2013 requires MAXDOP = 1).
  6. Ensure database auto growth sizes set properly.
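
Assuming the SQL Server PowerShell module (SQLPS) is installed, the MAXDOP check is one query away; the server name below is a placeholder for your SQL alias.

    Import-Module SQLPS -DisableNameChecking

    # Check point 5: SharePoint 2013 requires MAXDOP = 1 on its SQL instance
    Invoke-Sqlcmd -ServerInstance "SQLALIAS" -Query "SELECT name, value_in_use FROM sys.configurations WHERE name = 'max degree of parallelism';"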

Information Architecture

  1. Verify: universal site taxonomy.
  2. Check maximum site depth.
  3. Check maximum site width
  4. Check for a high number of role assignments on individual items.
  5. Check for a high number of unique permissions.
  6. Check content growth projections.
  7. Check for a high number of sites sharing a content database.
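
Check points 4 and 5 come down to finding objects with broken permission inheritance.  This sketch flags webs and lists with unique role assignments; the URL is a placeholder, and scoping it to one site collection keeps the run time sane.

    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    # Placeholder URL - scan one site collection for unique permissions
    $site = Get-SPSite "http://sharepoint/sites/team"
    foreach ($web in $site.AllWebs) {
        if ($web.HasUniqueRoleAssignments) { "Web:  $($web.Url)" }
        $web.Lists |
            Where-Object { $_.HasUniqueRoleAssignments } |
            ForEach-Object { "List: $($web.Url)/$($_.Title)" }
        $web.Dispose()
    }
    $site.Dispose()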

Branding

  1. Are there any custom master pages?
  2. Are the custom master pages or page layouts working properly?
  3. Are all images / styles / etc checked in and published?

Customization

  1. What WSP Solutions are deployed?
  2. Are any InfoPath forms deployed?
  3. Check for Invalid / missing Feature counts.
  4. Ensure assemblies are compiled in release mode not debug mode.
  5. Which solutions are 3rd party?
  6. Which solutions are in house?
  7. Check solution utilization (Where, activation locations, actual usage)
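
The WSP inventory (points 1 and 4 through 6) starts with a one-liner; deciding which solutions are third-party versus in-house still takes a human.

    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    # Check point 1: every WSP in the solution store and its deployment state
    Get-SPSolution |
        Select-Object Name, Deployed, DeployedServers, ContainsGlobalAssembly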

Search

  1. Check crawl logs for any errors or warnings.
  2. Check crawl schedules.
  3. Check crawl running time versus crawl interval.
  4. Check for successful crawls and crawl failures.
  5. Check search service account configuration.
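
The crawl checks can be pulled straight from the search service application; this sketch lists each content source with its current crawl state and last completed crawl.

    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    # Check points 2-4: content sources, crawl state and last completion
    $ssa = Get-SPEnterpriseSearchServiceApplication
    Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssa |
        Select-Object Name, CrawlState, CrawlCompleted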


I realize there may be some repetition above, but the purpose of this is to help you ensure a healthy environment.  If you have any questions, additions or modifications, please comment and I will make updates.  Please follow me on twitter @DavidRMcMillan and @DevFactoPortals.  I look forward to making this a resource any admin can use.

Thursday, February 5, 2015

Nintex Workflow Complexity

Some of the feedback I received after my recent post Why would I buy Nintex? was about how to gauge the complexity of your workflow.  One way is to actually map out the process and see how much work is involved in automating it.  The problem with mapping the process first is that you have gone a long way down the road before you even know whether it is viable to re-engineer (although I recommend mapping even when you are not doing workflow automation).  The other way, which I want to go through today, is a simple method to quickly evaluate whether a workflow is simple, medium or complex.


The other question users often have is: how do I know if there is value in automating a process?  I will go through each of these and hopefully provide some insight that makes your life easier, or at the least clarifies something you had a question about.

Part I - Gauging Complexity

Now, like my previous post, I am going to do some basic math that you can apply to calculate the level of complexity.  First I want to define those levels; then you will know the ballpark budget to map and create the workflow in question.


Before I get into defining the levels of complexity, I want to point out that the majority of the effort in process automation is the actual mapping and redesign, not the programming, even though the programming is the direct cost while the other efforts are not.


As I mentioned above, process mapping should be an existing function within your business units; it allows you to see your processes and quickly identify areas for improvement.  If it is not being done regularly, you may have a lot of work ahead of you.  The good news is you can, and should, take an iterative approach: record the process and come back to refine it as needed; processes are often moving targets until they are mapped and set as a standard.  There is really only one rule you can rely on with an undocumented process: the process will change!  So map it as soon as you can, communicate it to everyone and make modifications as necessary.


A simple rule for gauging the effort: mapping and re-engineering a process typically takes 60-80% of the total time needed to arrive at a final automated workflow that meets your needs.


Ok, so let's talk about complexity.  As I mentioned above, we can use some simple buckets (if you will) for the complexity of the process automation: a simple workflow takes 0-40 hours of effort to create, a medium-complexity workflow takes 41-120 hours and a complex workflow takes more than 120 hours.  These numbers are based on the use of Nintex; you can multiply them by 2-3 times for C# development, depending on how good your developer is.


Complexity of the Workflow

Ok, so now that we have an idea of the size of the buckets, we can do the math that helps gauge which bucket our proposed workflow fits in.  If we look at a process from a high level, we can ask and answer a few questions that will help us gauge the complexity.  I will explain why each is important before we calculate.


  1. How many users in the organization does this workflow affect (meaning how many are going to use it)? N
  2. How many people/business units are going to need to interact with this process? I
  3. Roughly how many steps from beginning to end do you perceive? S
  4. Roughly how many results are possible? R
  5. How many different systems are involved? V
Ok, so we have some variables and here is why they are important.


  1. N represents the number of employees (in multiples of 5000) using the workflow.  The larger the number, the greater the impact on complexity; a good rule of thumb is that workflow complexity doubles for every 5000 users who interact with it, while anything under 5000 has no effect.
  2. I is the number of interactions; for each person or business unit that interacts with the workflow you can expect roughly 5 additional actions in the workflow.
  3. S is the total number of perceived steps; for each of these steps you can expect roughly 5 actions in the workflow.
  4. R is the number of results, which in a workflow are different paths.  It is easiest to think of it as a multiplier, where one or two outcomes is the base case and each additional result effectively doubles the actions.
  5. V is another complexity multiplier; whenever we interact with any system outside SharePoint, we double the complexity.
So here is what the formula looks like:
  1. N = Answer to Q1/5000, Round up
  2. I = Answer to Q2 * 5
  3. S = Answer to Q3 * 5
  4. R = If Answer to Q4 is 1 or 2, then 1 otherwise Answer to Q4 - 1
  5. V = Answer to Q5 (there is always at least SharePoint)
X = (I+S)NRV
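
Here is the formula as a small PowerShell function (the function name and parameters are my own, purely for illustration); it just encodes the five rules above and returns the rough hours.

    function Get-WorkflowComplexityHours {
        param(
            [int]$Users,         # Q1: users who will use the workflow
            [int]$Interactions,  # Q2: people/business units involved
            [int]$Steps,         # Q3: perceived steps
            [int]$Results,       # Q4: possible results
            [int]$Systems        # Q5: systems involved (at least 1: SharePoint)
        )
        $N = [math]::Ceiling($Users / 5000)
        if ($N -lt 1) { $N = 1 }            # under 5000 users has no effect
        $I = $Interactions * 5              # ~5 actions per interaction
        $S = $Steps * 5                     # ~5 actions per step
        $R = if ($Results -le 2) { 1 } else { $Results - 1 }
        $V = $Systems
        ($I + $S) * $N * $R * $V            # X = (I+S)NRV, rough Nintex hours
    }

    # The vacation leave example below: 250 users, 3 interactions, 6 steps,
    # 2 results, 2 systems -> 90 hours, a medium complexity workflow
    Get-WorkflowComplexityHours -Users 250 -Interactions 3 -Steps 6 -Results 2 -Systems 2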


Now let's use an example: a company of 250 people wants a vacation leave workflow.  The workflow needs approval from the requestor's direct Manager, with notification of the result sent to the requestor and HR.  The vacation leave workflow also needs to poll the Accounting system to determine the number of days available for leave and then deduct the amount when approved.  From this scenario we can calculate the complexity as:


Q1. There are 250 People
Q2. There are 3 interactions (Requestor, Manager and HR)
Q3. There are 6 steps
    1. Make the request
    2. Poll the accounting system
    3. Send to the Manager
    4. Deduct the Leave
    5. Notify HR
    6. Notify the Requestor
Q4. There are 2 results.
Q5. There are 2 systems (SharePoint and Accounting)


I = 3*5 = 15
S = 6*5 = 30
N = 250/5000 = 0.05, Rounded up = 1
R = 1
V = 2


X = (15+30)*1*1*2 = 90


90 is the rough number of hours to create your workflow using Nintex.  Putting that in our buckets, this is a medium-complexity workflow, which I would expect given the interaction with the accounting system.  I personally would use the upper end of the bucket for medium- and low-complexity workflows, and would perform a proper mapping for anything that looks complex.  Be prepared, however, for changes in interaction, and re-evaluate any workflow each time there is a change in scope, as scope changes affect complexity, especially where the multipliers are involved.

Part II - Gauging Value

The second part of this is: how do I gauge the value of automating a workflow?  You can do time and motion studies and determine the actual time spent on tasks, or you can use rough numbers again.  When using rough numbers here, be optimistic about how quickly people perform the process today; that gives you a pessimistic (conservative) value for automation.


In part one we asked how many users interact with the process; this number will be used again as a base multiplier.  We will then ask two new questions:
  1. How much time does someone currently spend doing this task? Remember to be optimistic: ask a small sample and take the average of the lowest three numbers as your result.
  2. How often do they need to do this task? Again, ask a small sample, average the three largest occurrence counts and express the result as a yearly amount.
Now we simply calculate:


N = Number of people
T = Time spent currently (in Hours)
O = Task Occurrences/Year


X = NTO
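
And a matching sketch for the value side, again with hypothetical names: given the before and after times and the development hours from Part I, it returns the current annual hours, the hours saved per year and a rough payback period.

    function Get-AutomationPayback {
        param(
            [int]$People,          # N: people performing the task
            [double]$HoursBefore,  # T: hours per occurrence today
            [double]$HoursAfter,   # expected hours per occurrence once automated
            [double]$PerYear,      # O: occurrences per person per year
            [double]$DevHours      # estimated development effort (from Part I)
        )
        $current = $People * $HoursBefore * $PerYear   # X = NTO
        $future  = $People * $HoursAfter  * $PerYear
        $savings = $current - $future                  # hours saved per year
        [pscustomobject]@{
            CurrentHoursPerYear = $current
            SavedHoursPerYear   = $savings
            PaybackYears        = [math]::Round($DevHours / $savings, 2)
        }
    }

    # The leave request example below: 250 people, 0.5h today, 0.25h automated,
    # once a year, 90 development hours -> 62.5 hours saved, ~1.44 year payback
    Get-AutomationPayback -People 250 -HoursBefore 0.5 -HoursAfter 0.25 -PerYear 1 -DevHours 90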


The result is the number of man-hours per year spent on this process.  You can then compare that to the estimated savings in time spent and the cost of development; here is a continuation of our example above.


As above, when we ask these questions we get the following answers: currently our people spend an average of half an hour filling out and submitting a leave request, carrying it around for approvals and checking the system for available leave.  Based on the small sample, people do this once a year.


So now let's do the math:


X = 250*0.5*1 = 125


So currently it costs the company 125 man-hours a year for leave requests.  We can expect the workflow to reduce the time for a leave request to roughly 15 minutes (note I am now being pessimistic in the other direction, taking the maximum time it should take).  If I calculate this, I can gauge how long it will take before I make my money back.


X = 250*0.25*1 = 62.5


So we can expect to halve the total man-hours by automating this workflow.  Against the 90 hours we incurred above to develop it, that means we will not positively affect the bottom line until well into the second year (90/62.5 = 1.44 years to ROI).  Is that worth it?  I guess it depends on you...


Feel free to follow me on twitter @DavidRMcMillan or @DevFactoPortals or become a member of this blog.  Thanks, feedback is appreciated and encouraged!



Tuesday, February 3, 2015

Properly Defining Information Technology

Information Technology

Not what you expected, or was it...

The definition of Information Technology has become corrupted over time as people have had different ideas of what it means.  If you looked up Information Technology, you would expect the definition to be something like,
"The management and distribution of information through the use of technology.",
after all, that sounds right...


If you go to Wikipedia, Information Technology is defined as,
"Information technology (IT) is the application of computers and telecommunications equipment to store, retrieve, transmit and manipulate data, often in the context of a business or other enterprise.
The term is commonly used as a synonym for computers and computer networks, but it also encompasses other information distribution technologies such as television and telephones. Several industries are associated with information technology, including computer hardware, software, electronics, semiconductors, internet, telecom equipment, e-commerce and computer services."



This seems like a good definition, though it seems a bit much for the management of information.  If we go to other sources for the definition, you begin to realize how different the definitions have become.  Merriam-webster.com defines Information Technology as,
"the technology involving the development, maintenance, and use of computer systems, software, and networks for the processing and distribution of data"


Now you may say that is not a lot different from Wikipedia or my original simplified definition, but let's examine it again.  If I were to say Library Sciences are a part of information technology (which in reality parts of them are), does that fit into these definitions?  If we look at my simplified definition, we can say "Yes"; after all, Library Sciences are about the organization of information, and that organization is in itself an applied technology.


Now look at the other definitions: how did we move from technology to computer systems specifically? Simple, we applied the way it is done today... Is that correct? Absolutely not.  The typewriter was information technology, yet these definitions could never apply to it.  Microfiche is another technology left out of the modern definition, and there are many, many more.


Now, can anyone tell me when the modern computer era began?  You could say 1981 (the IBM PC), or you might say 1977 (the Apple II), or even earlier with the kit computers.  Now let me ask you: how old is the oldest information technology company?  You might say 135 years (IBM traces its roots to 1880) or 138 years (Bell Telephone, established in 1877); heck, you might even say 3700+ years (the first postal systems), and in truth you could argue that too.  Remember, it is using technology to manage and distribute: writing is technology, horseback riding is technology, heck, shoes are technology.  I still remember "sneaker net", running disks from one computer to another.


So how can we fix it?  Simple: remove any reference to the way we currently perform information technology.  The term may have come with the computer revolution, but the definition doesn't need to be that narrow.


Follow me on twitter @DavidRMcMillan or @DevFactoPortals; my goal is to make information technology better, one person at a time.  This post was intended to support an upcoming post on Information Governance, so stay tuned.