Saturday, March 21, 2015

Using Document Sets to manage document based processes

The right tool for the right job

In SharePoint, I use a couple of tools to manage the processes within my organization.  In the case of process automation, my default is to use Nintex, automating the process where it makes sense and improving the user experience wherever possible.  When I have a process that is document based, I will employ document sets, managed metadata and content types to automate the document provisioning for the process, and then employ Nintex for additional functionality.

Today, I want to write mainly about Document Sets and how you can use them to make document management easier within your SharePoint ecosystem.  One of the greatest difficulties we have in document and records management is the classification of content and the application of metadata; it is the holy grail, so to speak, of document management and is an area where many third-party tools exist to perform auto-classification.  In reality, with Document Sets, Managed Metadata and Content Types, and the support of Nintex workflow and forms, we can do almost anything you need for document classification.

Content Types

I first want to talk about Content Types.  Why? For one, a Document Set is itself a Content Type; for another, part of the reason we use Document Sets is to manage Content Types within a container.  So, let's get to it...
What is a Content Type?  A Content Type is a piece of reusable content that has a predefined selection of attributes (metadata) assigned to it.  Think of it like a document template, because in reality, that is what most people use content types for.  Something many users do not realize, however, is that with content types you can build a taxonomy (a hierarchical structure) of content and metadata, and that this taxonomy is always present, because every content type must have a parent.  Understand that when we deal with content in SharePoint, the expectation is that an Information Architecture will be designed and created, and Content Types and Metadata are the tools for doing it.  So let's quickly go through Content Type creation so it makes a bit more sense.  The following steps were captured in Office 365, but they are the same for SharePoint on-premises.

Creating a Content Type

  1. Go to Site Settings
  2. Choose Site content types
  3. In the Site content types screen, you will see all the content types applicable to the site, arranged into groups.  At the very top you will see the Create button, Click it.
  4. Create the Content Type
Notice that when we created the Content Type, it asked us for the Parent Content Type?  That is the implicit taxonomy I was referring to.  We can choose any content type as a parent and, like real parents, their attributes are passed on to their children; we can then choose what is important, making the information architecture and the result much easier to manage.  That even includes document templates.
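For those who prefer to script this rather than click through Site Settings, here is a minimal PowerShell sketch using the server object model (on-premises only; the site URL, content type name and group are placeholders I made up, not anything from the steps above):

    # Create a child content type under the built-in Document content type
    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
    $web    = Get-SPWeb "http://intranet/sites/demo"
    $parent = $web.AvailableContentTypes["Document"]
    $ct     = New-Object Microsoft.SharePoint.SPContentType($parent, $web.ContentTypes, "Project Document")
    $ct.Group = "Custom Content Types"
    $web.ContentTypes.Add($ct) | Out-Null    # the child inherits all of the parent's columns
    $web.Dispose()

The same inheritance applies here as in the UI: the new content type starts with everything its parent has, and you add only what is important to it.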

Now, this post is about Document Sets, so that is all I am going to say about Content Types for now; stay tuned, though, as in future posts I will provide more insight into Content Types and how to use them.

Document Sets

Now you may be wondering what a Document Set is.  Well, simply put, it is a folder within SharePoint that is used to apply shared metadata to all the objects within that folder.  So, for example, let's say you have a project; the project has a project name, a project manager, a project number, a sponsor, a status and perhaps a region or other metadata that tells you about the project.  Some of this information, like the Project Name and Project Number, you would want to associate with your documents.  Other information, like the Project Manager and Sponsor, is not that important at the document level and doesn't need to be associated.  A Document Set lets you decide, when you configure it, what is assigned and what is not.

Another feature of the Document Set is the ability to assign the specific content types that can be created within it, much like you would do in a document library or list.  You can pick and choose what can and cannot be created within the document set.  Each content type within the document set has two portions of metadata: the shared metadata from the document set and the content type metadata that was assigned when you created the content type.  Now you may ask, should I add the document set metadata I want to share to each content type I am going to use?  The answer is no; they are independent, and your document set management will be easier if you don't do that extra work.

Finally, Document Sets (and this is the reason for the name) allow you to automatically provision a group of documents as a set.  In other words, every time you click New > [Document Set Name], it will create a set of however many documents you decide, all ready for collaboration and all with the document set metadata already assigned.  Okay, so enough about all the things we can do with a document set; we created one when looking at content types, so let's see what it can do.  (And if you ever need to provision sets from script, see the short sketch below.)
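A quick aside before we continue: document set instances can also be created from script, which is handy when provisioning is driven by a workflow.  A minimal sketch using the server object model; the URL, library, content type and column names are placeholders:

    # Provision a new Document Set with its shared metadata already applied
    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
    [void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Office.DocumentManagement")
    $web   = Get-SPWeb "http://intranet/sites/projects"
    $list  = $web.Lists["Project Documents"]
    $ct    = $list.ContentTypes["Project Set"]           # your Document Set content type
    $props = @{ "Category" = "Engineering"; "Status" = "Active" }
    [Microsoft.Office.DocumentManagement.DocumentSets.DocumentSet]::Create(
        $list.RootFolder, "Project 1234", $ct.Id, $props, $true) | Out-Null
    $web.Dispose()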

Using a Document Set

I am going to pick up where we left off in the last example when we clicked OK.  If you closed out of it, not to worry; you can get back to the document set by going to Site Settings, choosing Site content types and then clicking on the content type you originally created.

  1. Create all the metadata for the Document Set
    I will add three pieces of Metadata for the demo, Category, Status and Keywords, all from the Core Document Columns Group.
  2. My intent is that Category and Keywords are both metadata fields that are important to the documents that exist in the document set, while Status is only important to the document set itself.  So now I need to go into the Document Set settings (under Settings) and modify the Document Set.
  3. Now to describe the Document Set settings and what each section does: I have labelled an image and will go through each section, its purpose and what you need to do.  The only one out of order is Shared Columns; I want to go through that section first.  (A scripted equivalent follows the list.)
    1. Shared Columns -  As I mentioned, I wanted to have Category and Keywords included with all my documents in the set, this is where I do that.  Checked columns are shared to the contents of the document set, unchecked are not.
    2. Allowed Content Types - This is where you choose the content types that are available for creation under the New button within the set.
    3. Default Content - This is where you add the documents (not templates) that will automatically be added to every new document set created.
    4. Welcome Page Columns - Document Sets have a custom welcome page layout, at the top of the page is a folder icon, project name, description and any additional columns you want to add.
    5. Welcome Page - The Welcome page can be customized by you, allowing for additional web parts and content.
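If you want to script the same settings, the DocumentSetTemplate class covers most of the sections above.  A short sketch using the server object model; the content type and column names match my demo, so treat them as placeholders:

    # Configure shared columns and allowed content types for a Document Set
    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
    [void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Office.DocumentManagement")
    $web = Get-SPWeb "http://intranet/sites/projects"
    $ct  = $web.ContentTypes["Project Set"]
    $tpl = [Microsoft.Office.DocumentManagement.DocumentSets.DocumentSetTemplate]::GetDocumentSetTemplate($ct)
    $tpl.SharedFields.Add($web.Fields["Category"])     # shared to the contents
    $tpl.SharedFields.Add($web.Fields["Keywords"])     # shared; Status stays at the set level
    $tpl.AllowedContentTypes.Add($web.ContentTypes["Project Document"].Id)   # placeholder content type
    $tpl.Update($true)                                 # push the changes down
    $web.Dispose()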
You now have enough information about Document Sets; start playing with them and you will quickly see ways to use them in your organization.  Some additional functionality can also be realized through the use of Nintex forms and workflows and through the use of Managed Metadata.  Stay tuned for future posts on how we can use metadata to create taxonomies, folksonomies and vocabularies, to further add value and structure to our solutions through proper information architecture.

I would be happy to answer questions about this or anything else to do with SharePoint, Information Governance and Information Architecture.  Comments and feedback, good or bad, are appreciated; if you like what you see, follow my blog or follow me on Twitter, LinkedIn and Facebook.

Monday, March 16, 2015

Quantum Power Transmission

Preamble

I was reading an article on NASA testing non-propellant drives for space flight, which still require power.  For intra-solar-system flight that shouldn't be a problem; we have the sun, which for all intents and purposes is a limitless power supply when using solar cells.  But what about when you extend beyond the limits of solar power?  Do we use the power to accelerate to a point and then let momentum carry us to the next star system?  Or do we use another source that can sustain the power indefinitely?
I have not seen anyone write about this idea before, and I thought I would get it out there in the hope that someone brighter than me can either build it up or tear it down; either way, as long as there is discussion and I get to hear it, I am happy.  I am not a physicist, I am a computer geek, so I am not privy to academia or the world of quantum physics; I am merely a person who thought he should write down an epiphany...

The Premise

The premise for quantum power transmission is based on quantum entanglement, specifically the use of entangled particle pairs.  Experimental physicists have been able to demonstrate quantum states on objects as large as buckyballs.



From Maxwell's equations we know that moving a conductor through a magnetic field induces an electric current; this is the premise of an electrical generator and, run in reverse, an electric motor.  In an electric generator, mechanical power (from wind, water, a hand crank, etc.) is converted to electrical power (whether direct or alternating current) by spinning a rotor within or around a stator.  In electrical power generation this is typically done on a very large scale.

The Application

Now imagine if you could create nano-generators and nano-motors where the rotor of each is the entangled partner of the other.  When we apply electrical power to the motor, its rotor would spin, causing the entangled pair, the rotor of the generator, to spin as well; this would create an electrical power transmission system across limitless space.  In addition, due to the nano-scaling of the system, it could be more flexible to design into a spacecraft and should be substantially more stable thanks to a large number of independent generators and motors operating in parallel.

The Questions

  1. Has someone already thought of this?
  2. Would any of this cause the rotors to lose their quantum state?
  3. Is the theory sound?
  4. Can someone smarter than me expand on this?
Thanks to any and all who read this post; I realize it is nothing like my other posts, but I just had to write it down.  Please comment and provide feedback, whether good or bad, I appreciate it.  Follow me on Twitter if you want to learn about SharePoint and Information Governance: @DavidRMcMillan and @DevFactoPortals.

Wednesday, March 11, 2015

Office Delve, the answer to your unified search prayers

What is Office Delve?

The easiest way to explain Office Delve is to say it is a unified search center for Office 365, but in reality it is much more than that.  Delve is all about collaboration, and it learns from your usage (and your organization's usage) to create a relevance hierarchy that works best for you.  The more everyone does in Delve, by viewing, editing and sharing each other's documents, the more useful Delve will be for all of you.  What you see in your views in Delve is different from what your colleagues see in theirs, because it is tailored specifically to you.

What does Delve look like?

One of the biggest advantages of Delve over traditional search is the user experience.  Delve is about letting you find what you are looking for and making it easy to see the information.  Delve provides a user interface that is both intuitive and understandable for users.


Delve shows recent activity when it first loads and provides the ability to see, search, group and explore documents, e-mails and content within your Office 365 ecosystem.

How can I and my team get the most out of Delve?

Remember, Delve is adaptive, so the best way to get Delve working for you is by working with Delve.  Delve doesn't modify access to anything, so you need to ensure that your documents are somewhere within the Office 365 ecosystem, like SharePoint Online or OneDrive for Business.  You also need to make sure documents you want to share are accessible to the people you intend to share them with, and finally, just like any search, you still need to concentrate on good metadata behind the scenes.  Delve is adaptive, but it can't read your mind; think of it as an extension of your standard search functionality.  A good Information Architecture (IA) will make Delve work better; Delve only makes the user experience (UX) portion easier to manage.

For additional information on getting the most out of Delve and making your documents accessible, check out the Microsoft Support article on it: http://devfac.to/1C7aByT

Update:

In an update this week (March 21, 2015), Delve has added additional content to the search results; you can now see Yammer and web results in the result sets.  The web results are based on what you and your coworkers have been navigating to.  It is quite nice to have it all consolidated.

Follow me on Twitter: @DavidRMcMillan and @DevfactoPortals or on Facebook at https://www.facebook.com/moss.adventures.  Feedback and sharing are always appreciated and encouraged.

 

Thursday, March 5, 2015

What is the difference between a hybrid cloud and a blended cloud?

The "Hybrid" Buzz

You know, when Microsoft released their hybrid cloud offering for Azure, I thought that was great; after all, hybrid has become a buzzword in recent years, with hybrid electric cars and such.  But as time went on I started to realize hybrid wasn't a good word for what they were offering with Azure on-premises and cloud integration.  I am not saying the offering isn't good, far from it; it is a great approach to transitioning the enterprise to cloud infrastructure and provides something Amazon (AWS) is sorely lacking and has been slow in adopting.  My issue is with the terminology and what it means: the term hybrid is going to end up causing confusion as true hybrid cloud offerings are implemented, so let me explain.


What is a Hybrid?

Merriam-Webster.com defines "Hybrid" as
: an animal or plant that is produced from two animals or plants of different kinds
: something that is formed by combining two or more things


The second definition is the one that applies to vehicles and technology.  In short, it is taking two "things" and combining them.  Just like in the first definition, the result is a single new entity formed from two similar things.  In the case of hybrid cars, we took the technology from one type of car, a gas (or diesel) powered engine, and combined it with the electric motor from an electric car.  The result is not gas and it is not electric; instead it uses both, as a single combined system.  The gas engine is smaller than the original, so is the electric motor, and the two are inseparable.


Why #hybridcloud is not hybrid

Now, the definition above talks about combining two technologies to make a single result, but in reality that doesn't happen in a hybrid cloud solution.  Instead, each component, cloud and on-premises, remains intact, and additional functionality is added to make them appear seamless.  They are not a single entity, but a blending of the two systems.  Now, blending is a good thing, not a bad one, and if we call it a blended environment, it will reduce confusion with actual hybrid cloud solutions.  Another reason I would not consider it a hybrid is that the two are different technology bases.  Apples and oranges cannot be combined to form a hybrid, because they are not similar, but a tiger and a lion can be, creating a liger, because they are very similar and in the same family of animals.  Cloud and on-premises share some similarities, but they are too different to be combined into a hybrid.


There is a hybrid cloud!

Hybrid cloud solutions do exist, and they are going to become more and more prevalent; that is the reason I want to differentiate now rather than later.  When organizations begin implementing private cloud solutions, they will still need some components to exist in the public cloud.  These environments will not be intended to operate independently and will use parts from both "cloud" offerings, making a hybrid environment.  This is the next stage in the transition to the cloud for large enterprises, as it offers the control they still desire over where data resides, but abstracts the servers from the hardware.


So what about this "Blended Cloud"?

Well, a blended cloud is a seamless integration of cloud computing components into your on-premises environment.  The two exist independently and are combined using tools like ADFS and VPN to make the result as seamless as possible.  In reality, the only way you can tell the difference as a user is by the latency (the time it takes to respond) you experience.  This latency can be mitigated, however, through the use of caching devices, like StorSimple, that analyze the traffic and cache the most commonly used information on the local area network (LAN).


Conclusion

Now, I know a few of you out there are saying it is still hybrid (namely my friends at Microsoft), which is fine; I just feel differentiating between them will make for fewer headaches as time goes on.  Thanks for reading, please provide your comments and feedback, and follow me on Twitter @DavidRMcMillan and @DevFactoPortals.

Friday, February 27, 2015

Governance Series - How to create a Single Source of Truth in SharePoint

I was in the SharePoint Community site last night and someone asked about maintaining a single source of truth.  One way to do it is to have a central repository for documentation; in SharePoint we would use a record center.  But it brings up a good point: how would you distribute those documents out to the team sites where they are actually used and manipulated?  The answer is simple: we create a new type of document library that can only contain links to documents.  We template it and save it as part of all our site templates instead of a regular document library.  In addition, we could add form and workflow functionality that would allow users to upload files but automatically route them to the document repository; that, however, is for another time.

So how do we do this?  First we are going to need two sites with two document libraries: one we will use as the source, the other as the link.  In my example I will start with the "Linking" site first, but it really doesn't matter which site comes first, only that the document exists in the repository prior to creating a link.

Linking Site

Ok, so we created a new site; let's say it uses a team site template.  With a team site (and most sites) you get a document library named "Documents".  I am going to modify that library and make it a "Link Library".  Now don't confuse this with the "Links" app, which is just a list of hyperlinks.  At the end, I will add the steps to turn it into a "Link Library" template so it can be reused.

Creating a Link Library from a Document Library

  1. In the Document Library Click the Library tab then Click "Library Settings"
  2. In the Settings page, Click "Advanced Settings"
  3. In Advanced Settings, under "Content Types", "Allow management of content types?", Select "Yes", then scroll down and Click "OK"
  4. Under "Content Types", Click on "Add from existing site content types"
  5. Scroll through the list and Select the "Link to a Document" content type, Click "Add" and then Click "OK"
  6. Under "Content Types" Click "New Button Order and Default Content Type"
  7. Change "Link to a Document" from "2" to "1" and Click "OK"
  8. Under "Content Types" Click "Document"
  9. Under "Document" Click "Delete this content type"
  10. When Prompted, Click "OK" to delete the content type
Your library is now configured to create links to documents rather than hold the documents themselves.  I have not addressed the issue of the Upload button; it is simple enough to hide it, or, for a complete solution, you can create a workflow that moves uploaded files to a designated location in the repository, which to me is the much better option.  One caveat I want to point out is that "Link to a Document" uses a URL, not a navigation pane, so you need to know where the document you are linking to is located.  The advantage is that this allows you to cross site collections and farms; the drawback is that it does not provide a great user experience.
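If you need to repeat this on many libraries before templating it, the same configuration can be scripted.  A minimal PowerShell sketch using the server object model (on-premises; the site URL and library name are placeholders):

    # Turn an ordinary document library into a "Link Library"
    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
    $web  = Get-SPWeb "http://intranet/sites/team"
    $list = $web.Lists["Documents"]
    $list.ContentTypesEnabled = $true                         # step 3
    $link = $web.AvailableContentTypes["Link to a Document"]
    $list.ContentTypes.Add($link) | Out-Null                  # steps 4-5
    $list.ContentTypes["Document"].Delete()                   # steps 8-10; button order is then moot
    $list.Update()
    $web.Dispose()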

Repository Site

The Repository site, like the Linking site, contains a document library; unlike the Linking site, however, we are not going to manipulate it in the same way.  Instead, we are going to configure it to fit our purpose, which is the storage of documents.  Now, I am not going to tell you how to do that, because everyone is different, but some things you might look at doing include setting up folder containers to classify documents that are uploaded, creating retention and disposition workflows and, of course, version control.

Test out the solution and let me know what you think.  Based on writing this, stay tuned; I like the idea of showing you the Nintex workflows that would go along with this.

Follow me on twitter @DavidRMcMillan and @DevFactoPortals.  Feedback and suggestions are appreciated and encouraged.

Saturday, February 21, 2015

A Tale of Two Governances - Part 1 Health Benchmark

When you read about governance, it is often focused on what I call foundational governance.  In the case of information technology (see my definition), we focus on foundational information governance, or the way we intend to use our information within the organization.  This, however, is only one of the two parts of the governance needed for the management of our information.  The second portion, which is often not considered governance at all, encompasses the processes and procedures needed to maintain the systems that are used to manage and transmit information.  I refer to this as operational governance, and it consists of the structure, policies and procedures needed to ensure a stable and consistent information management solution.

Operational Governance

The governance of the sustainment processes within an organization is typically hit and miss.  Most organizations will have some type of backup and recovery process, but how many have a process for the creation of sites in an ECM like SharePoint?  Now don't get me wrong, some organizations are very dutiful in creating what they perceive as the processes needed to maintain and administer their systems; the problem is that many do not, and those that do don't necessarily get everything they need.  As a consultant, I come into organizations that are experiencing pain, usually in the governance of their solutions; my job is to determine the gaps and remediate them.  One of the best ways to evaluate gaps in the operational governance of a solution (regardless of the technology) is to interview the administrators and key business users, perform a health assessment of the system and make recommendations on best practice based on the gaps; in some cases we would then move to remediate those gaps as a final step.  These steps quickly identify what exists and what does not, and help me understand the technical skills of the administrative team.

In this first part we will walk through finding our current state, then in a future post we will look at the rest of the operational governance that should be considered to ensure a properly sustained environment.

The Interviews

The first steps in the process are the interviews.  In a SharePoint solution I like to sit down with the farm administrators, the site collection administrators and the service desk manager.  These three groups or persons can provide insight into pain and into items that take up a significant portion of their daily activity; here are a few questions I will typically ask and why I ask them.  It is also important that you are clear with them about your purpose; as a consultant coming in, they may perceive you as critiquing them on their job, but you are there to help them be heard and to fix their pain.

In other solutions, you may have different roles, as long as you can extract the pain and issues for the solution, your interviews can be with whoever can best provide the answers.

Farm Administrators

Farm administrators are your best source of information when it comes to issues with operational governance.  They know the solution better than anyone else and have to deal with anything and everything that goes wrong.  Often it is easiest to just sit down with a coffee and a notebook, ask them what is wrong with the solution and what they would fix, then sit back, let them vent and take notes; but I like to have a plan, so I typically compile a list of questions to ask beforehand (let me know if you have some good questions and I can add them).
  1. Do you have anything that maps out your daily routine? This is asked to first establish the existence of a "Run Book" or standard operating procedures (SOP).
  2. Do you have any tickets assigned to you that are more than 30 days old? If yes, what are those tickets and what is preventing you from closing them?  This will help identify not only gaps in knowledge, but also pain areas in architecture or process.  There is often an in depth conversation into cause and what they would like to see happen to help resolve these issues.
  3. Are there any issues that keep recurring or that never really go away?  This provides insight into pain areas where they may have a work around or an area where they have decided to perform something a specific way and it is not working.  This is another area we will have additional conversations about how they think it should be.
  4. Do you have any performance issues with the current farm?  If yes, do you know the cause, and have you researched a solution?  Performance issues identify problems with the farm's architecture and/or configuration that may be hampering the solution and preventing it from performing as intended.  They also help gauge knowledge level and root-cause problem-solving capabilities.
  5. Which group or groups are the most active on your farm? This will identify who to interview from a site collection administrator perspective, concentrating on the site collections that are the most active and the most need of support.
  6. Do you have remote offices that access the farm?  How good is their connection?  Do you get performance tickets from those offices?  Remote connectivity is often an issue; identifying where these connections occur and whether there are issues up front will save you time and effort.  Follow the premise that it is easier to ask the question than to search for the answer; tools are great, but the farm administrator will have insight the tools can't provide.
Notice I didn't ask them questions like how many farms, the servers on the farm, the number of content databases and their size.  These can be asked, but typically you know those things before you begin the engagement and even if you don't, reports from SPRAP or any other health assessment tool will clearly give you all this information.  At my office we have developed our own health assessment tool to answer all the farm questions and to touch over 100 different areas in the farm.  I have included the areas in my post, What should I Check With a Health Assessment? and would love any feedback you have on the points and questions.  With your help I can make it the most complete health assessment list available.

Once the farm administrator interviews are complete, we can move on to the Site Collection Administrator questions.  Site Collection Administrators have less knowledge of the configuration, but provide a direct point of contact with your key stakeholders.

Site Collection Administrators

Based on question 5 above, you should have an idea of which Site Collection Administrators are needed for this portion of the questions.  In smaller organizations, the Site Collection Administrators may be the Farm Administrators; you should be able to figure that out quickly when beginning the engagement.  Site Collection Administrators are a SharePoint solution's first line of direct contact and problem solving in the business; they are the most likely to know what the users want changed and which issues recur the most from a user experience perspective.

  1. Do you have anything that maps out your daily routine?  This serves a different purpose than with Farm Administrators; here you are looking for what is taking up most of their day.  If they don't have it mapped out, you should sit down with them and ask what a typical day looks like.  They may have trouble providing it, so another approach is to ask them to log their activities for a couple of days, recording what they are working on.  You can then review it and confirm whether the tasks are typical or not.
  2. Do you have any requests from your business users you have not been able to fulfill?  If yes, what has prevented you from fulfilling them?  This will often identify issues with configuration, policy or knowledge level, use it as a sounding board to ensure the architecture meets the business needs.
  3. Are there any issues that keep recurring or that never really go away?  This provides insight into pain areas where they may have a work around or an area where they have decided to perform something a specific way and it is not working.  This is another area we will have additional conversations about how they think it should be.
  4. If you could change anything about the solution what would you change?  Site Collection Administrators often have good feedback on improvements specific to user experience and functionality, make note of the changes, then identify them as future state requests for remediation and road mapping.
Remember, these questions are really meant to draw out the pain points and issues with the environment.  You may hear the same answer from many different people; that should raise the importance of the issue.  Some of the answers may be symptoms of a deeper problem; it will be your job to determine that before attempting to remediate it.

Service Desk Manager

The Service Desk Manager can provide you with tangible numbers on where issues are occurring, open tickets and typical complaints users have made about the system.  They corroborate what has been discussed with the farm and site collection administrators and will provide additional insight, plus the numbers behind the importance of certain issues that have been identified.

  1. Can you provide a report of tickets opened for SharePoint in the last 6 months?  This should provide the ticket count, time to close and total percentage of tickets for each category.
  2. What are the main complaints your team hears in regards to SharePoint?  The Service Desk is the first line for support, so they hear most of what the users like and dislike about the solution.
  3. What would you change about SharePoint if you could?  This is an open ended question and should elicit conversation on improvements and pain that they feel from their environment.
Remember, the questions above are a starting point; you want to draw out their experience of the pain.  In some cases it might be better to talk directly to the business units, but always remember these questions are about gaining insight into issues with the environment.

Health Assessment

As mentioned above, the Health Assessment portion is usually done through a tool that compiles all the information about the environment.  It analyzes your solution and provides feedback on all the areas that need to be considered.  Please refer to What should I Check With a Health Assessment? for the actual check points, and complete it in whatever manner you wish.


Report and Remediation
From the interviews and health assessment, a report of gaps and issues with the design can be created and presented to organizational decision makers.  From the report you will also be able to identify the criticality and, with discussion, the priority of the issues involved.  Use this information to build a remediation plan that includes each issue, its criticality and priority, the solution to the issue and the effort needed to resolve it; then sit down with the decision makers and work out the remediation plan to resolve the issues.  The plan should provide a timeline for each resolution and the resource allocation needed to deliver it.


Next Part
In the next part of this series, we will look at other parts of your operational governance and what it takes to ensure your environment has the operational governance it needs.  Feel free to read my other posts and follow me on Twitter: @DavidRMcMillan and @DevfactoPortals.


Tuesday, February 17, 2015

What should I check with a Health Assessment?

When you perform a health assessment of a SharePoint farm, you need to check everything you have and compare it to patterns and practices.  In some cases you may come across limits (supported maximums) and boundaries (hard limits) for certain settings; your goal should be to stay well within any limits and to have a plan in place to keep your settings within the standards and practices as they relate to your farms.


The purpose of this blog post is to give you a guide to the physical attributes of your solution and what you need to check.  I do not talk about tools in this post, but I suggest you employ a tool for your health assessment because it provides a consistent, repeatable approach to your solution's health.


I will not be too verbose in this post, but rather will concentrate on the areas that one of my cohorts, Kevin Cole (follow him on twitter at ), a Microsoft Certified Master of SharePoint 2010 and brilliant technical mind, and I came up with.  I have broken the areas down into 11 different sections and will briefly talk about what you need to know in each, so let's get to it.


The Check Points

As I mentioned, you can check these things manually, but it will be time consuming; there are many tools available to perform these checks.  We use PowerShell, which allows us to produce our health reports regularly and consistently (a few small sketches of this approach follow the lists below).  I have not gone in depth into any of these points, but I will add to and modify the list if you provide feedback.  This is a work in progress, but as far as I know it is the only checklist found to date that covers the whole farm.


Servers

  1. Determine the servers being used in the farm: Server identification is needed to understand the resources you are working with and to identify gaps in architecture
  2. Determine the roles of each server in the farm: The role tells you what the server is doing and on which tier of the farm architecture the server resides.
  3. Draw the logical diagram of the farm: A list of servers and their roles is difficult for the average user to understand, a graphical representation makes it easier for everyone to understand.
  4. Gather the number of processors, type and if they are dedicated or shared (VM) for each server: Knowing the allocated processing power helps identify processing shortfalls that may cause performance issues.
  5. Gather the RAM and whether it is dedicated or shared (VM) for each server: Knowing the allocated RAM helps identify when disk caching will occur and identify performance issues.
  6. Gather the total and available storage for each server (Physical and SAN): Understanding your storage and any limitations will ensure you don't run into a situation that has you scrambling to add storage.  In addition, configuration of swap drives, etc. can affect performance.
  7. Gather the type, current capacity, allocated and maximum capacity of the SAN: Knowing the SAN capacity will help with determining current capacity and planned growth. The type of SAN will help identify any RBS provider issues or determine what is needed to implement RBS, if it has not been implemented.
  8. Determine the hardware lifecycle for server infrastructure: Understanding how old each server is and when it is planned to be replaced allows for a proper perspective when identifying which servers are underpowered for the current environment or for future growth.
  9. Determine the patch levels of the server OS and all dependent services: Identifying any outstanding patches will identify any risks to the stability of the OS and the services SharePoint relies upon and may identify possible security exploits.
  10. Determine patching schedule and outage windows for the solution: Patching schedules and outage windows are important to the health of the servers, allowing for proper maintenance without the risk of causing a disruption. Determine if and when patching is performed, when the outage window occurs and how long it lasts.
  11. Determine the SQL Server version and patch level: Knowing your SQL Server version and patch level will help you identify issues with performance and may identify security holes.  In addition, the SQL Server version affects some feature availability and limitations, depending on your farm.
  12. RBS SQL Server Configuration: Storing BLOBs in the database can consume large amounts of file space and expensive server resources. RBS efficiently transfers the BLOBs to a dedicated storage solution of your choosing, and stores references to them in the database. This frees server storage for structured data, and frees server resources for database operations.
  13. RBS BLOB Threshold: Setting the right size threshold will ensure a balance between processing needed to offload large files and your content database size.
  14. SAN Configuration: A misconfigured SAN can cause increased latency and other issues to RBS, SharePoint and SQL Server.
  15. Storage Provider Configuration: Using the correct storage provider (and correct version) for your SAN will improve performance. 
  16. SAN Capacity: Ensure your future storage needs do not exceed the current capacity, check for the current utilization and available storage as well as the ability to expand storage hardware if needed.
  17. SharePoint RBS Configuration: Ensure your farm is configured correctly for RBS.
  18. BLOB caching setup: Disk-based caching is extremely fast and eliminates the need for database round trips if it is configured properly.
  19. RAM Utilization: Ensure your farm servers are not over utilized.
  20. CPU Utilization: Ensure your farm servers are not over utilized.
  21. User Profile import filters:  Are service accounts and disabled accounts filtered out?
  22. User profile synchronization schedule: Find the right balance for the sync. 
  23. Portal super reader and super user accounts setup: Verify they are set properly and that the membership is correct. 
  24. Office Web Apps cache: It is recommended to isolate the content database used for the Office Web Apps cache, so that cached files do not contribute to the size of the "main" content database(s) for the web application.
  25. OWA service apps: Ensure the Apps are running on correct server roles.
  26. Web apps: Ensure Web apps are not running in ASP.NET debug mode in production.
  27. Farms: Record the number of Farms and purpose of each.
  28. Web Apps: Ensure Web apps are configured correctly.
  29. Content Databases: Ensure proper content database sizes and configuration.
  30. Site Collections: Ensure properly sized and organized site collections.
  31. Custom Features: Review and record the Custom Features, where they are used, their intended purpose and proper installation and activation.
  32. Custom Apps: Review and record all custom apps installed on the farm, their intended use and where they are being used.
  33. Custom Web Parts: Review and record where any custom web parts are being used and that they are working properly.
  34. Environments: Record and ensure the environments are synchronized and consistent with each other and that they are being used for their intended purpose.
  35. Environment Patching: Check environments for consistent patching (build numbers) between all environments
  36. SQL Naming: Ensure SQL Servers are referenced using SQL aliases, not computer names or CNAMEs
  37. DNS: Ensure host records defined for the SQL Aliases
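A number of the points above can be gathered in a few lines of PowerShell; here is a small sketch (run it on a farm server with farm permissions, and note the WMI query only covers the server you run it on):

    # Inventory sketch: servers, roles, farm build and database sizes
    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
    Get-SPServer | Select-Object DisplayName, Role, Status         # points 1-2
    (Get-SPFarm).BuildVersion                                      # farm build / patch level
    Get-WmiObject Win32_OperatingSystem |
        Select-Object Caption, Version, LastBootUpTime             # OS level, this server only
    Get-SPDatabase |
        Select-Object Name, @{n='SizeGB';e={[math]::Round($_.DiskSizeRequired/1GB,1)}}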
     

Platform

  1. Page file on a separate drive from the OS, SharePoint and logs
  2. Does storage meet the farm's needs (current vs. projected)?
  3. Are there large files being stored in document repositories?
  4. Record the number and size of files
  5. Is there a change management process in place?


Logs

  1. Check Application log for errors
  2. Check System log for errors
  3. Check ULS log for errors/ critical / warnings
  4. Check IIS logs for 503 error pages
  5. Check IIS logs for slow (>200ms) loading pages
  6. Check IIS logs for Active Directory Latency (304 not modified with excessive load times)
  7. Check IIS logs for dead links (404 errors)
  8. Check Requests per second count from IIS logs
  9. Check log locations (SharePoint/IIS should be on a secondary drive)
  10. Check for unrestricted growth
  11. Check log drive capacity/utilization
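Several of the IIS checks above lend themselves to a quick script.  A rough sketch, assuming the default W3C log layout; the path is a placeholder and the column positions vary with your logging configuration, so confirm them against the #Fields header first:

    # Scan an IIS log for 503s, dead links and slow pages
    $log   = "D:\Logs\IIS\W3SVC1\u_ex150305.log"
    $lines = Get-Content $log | Where-Object { $_ -notmatch '^#' }
    foreach ($line in $lines) {
        $f = $line -split ' '
        $uri = $f[4]; $status = $f[10]; $timeTaken = [int]$f[13]   # adjust to your #Fields
        if     ($status -eq '503')  { "503: $uri" }
        elseif ($status -eq '404')  { "404 (dead link): $uri" }
        elseif ($timeTaken -gt 200) { "Slow ($timeTaken ms): $uri" }
    }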


Solution Integrity

  1. Old SSP Site removed (for in place upgrades)
  2. Check Supported Limits for Managed path counts
  3. Check Supported Limits for Content DB sizes
  4. Check Supported Limits for List item counts
  5. Check for deleted pages in navigation
  6. Check for unused content sources in the search crawl
  7. Check Health Analyzer rules
  8. Check patch levels for all content databases
  9. Check for orphaned site collections
  10. Check for broken site collections
  11. Check for broken my sites
  12. Check for missing web part references (Error web part detected)
  13. Any Sites running in UI Compatibility Mode (2007 or 2010)
  14. Check code quality process for stress testing
  15. Check code quality process for load testing
  16. Check code quality process for security testing (each role)


Continuity

  1. Is backup being performed? 
  2. Review backup process
  3. Is the disaster recovery plan tested and reviewed annually? 
  4. Ensure Central Admin is redundant.
  5. Is disaster recovery farm on another site? 
  6. Virtual machines distributed properly across physical hosts for disaster protection?
  7. Check for role redundancy for web front ends
  8. Check for role redundancy for application servers
  9. Check for role redundancy for the database tier
  10. Check for service redundancy

Security 

  1. Check for Extra ISA Firewall rules.
  2. Check SSL Use // IPSEC
  3. Are MySites hosted on a dedicated web application?
  4. Is the farm admin able to manage the service accounts?
  5. Ensure the farm account is not used for other services.
  6. Farm account should not be in local administrators group unless doing install or patch.
  7. Ensure external access uses SSL.
  8. Kerberos Configuration (SPN's configured properly)
  9. Ensure the proper number of service accounts:
    SP 2007: 3
    SP 2010: 5
    SP 2013: up to 16 service and 3 server.
  10. Ensure My Sites are configured with secondary site collection owners.
  11. Ensure farm admin and service accounts are not permitted interactive logon.
  12. Ensure the proper service accounts are used for the proper services.

Database

  1. Check content databases within limits.
  2. Check transaction log sizes.
  3. Check for excessive free space. // shrink db
  4. Trim audit logs to reduce content db size.
  5. Check for maximum degree of parallelism.
  6. Ensure database auto growth sizes set properly.
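The content database checks can be scripted against the farm directly; a minimal sketch:

    # Compare content database sizes and site counts to your limits
    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
    Get-SPContentDatabase | Select-Object Name,
        @{n='SizeGB';   e={[math]::Round($_.DiskSizeRequired/1GB,1)}},
        @{n='Sites';    e={$_.CurrentSiteCount}},
        @{n='MaxSites'; e={$_.MaximumSiteCount}}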

Information Architecture

  1. Verify a universal site taxonomy.
  2. Check maximum site depth.
  3. Check maximum site width.
  4. Check for a high number of role assignments on individual items.
  5. Check for a high number of unique permissions.
  6. Check content growth projections.
  7. Check for a high number of sites sharing a content database.

Branding

  1. Are there any custom master pages?
  2. Are the custom master pages or page layouts working properly?
  3. Are all images / styles / etc checked in and published?

Customization

  1. What WSP Solutions are deployed?
  2. Are any InfoPath forms deployed?
  3. Check for Invalid / missing Feature counts.
  4. Ensure assemblies are compiled in release mode not debug mode.
  5. Which solutions are 3rd party?
  6. Which solutions are in house?
  7. Check solution utilization (Where, activation locations, actual usage)

Search

  1. Check crawl logs for any errors or warnings.
  2. Check crawl schedules.
  3. Check crawl running time versus crawl interval.
  4. Check for successful crawls and crawl failures.
  5. Check search service account configuration.
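The content source and schedule checks are easy to script as well; a minimal sketch, assuming a single Search service application:

    # List content sources with their state and last crawl times
    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
    $ssa = Get-SPEnterpriseSearchServiceApplication
    Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssa |
        Select-Object Name, CrawlState, CrawlStarted, CrawlCompleted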


I realize there may be some repetition above, but the purpose of this list is to help you ensure a healthy environment.  If you have any questions, additions or modifications, please comment and I will make updates.  Please follow me on twitter @DavidRMcMillan and @DevFactoPortals.  I look forward to making this a resource any admin can use.

Thursday, February 5, 2015

Nintex Workflow Complexity

Some of the feedback I received after my recent post Why would I buy Nintex? was about how to gauge the complexity of your workflow.  One way is to actually map out the process and see how much work there is in automating it.  The problem with mapping out the process is that by then you have gone a long way down the road without knowing whether it is even viable to re-engineer (although I recommend mapping even if you are not doing workflow automation).  The other way, which I want to go through today, is a simple method to quickly evaluate a process so you know whether it is a simple, medium or complex workflow.


The other question users often have is: how do I know if there is value in automating a process?  I will go through each of these and hopefully provide some insight that makes your life easier or at least clarifies something you had a question about.

Part I - Gauging Complexity

Now, like my previous post, I am going to do some basic math that you can apply to calculate the level of complexity.  First I want to define those levels; then you will know a ballpark budget to map and create the workflow in question.


Before I get into defining the levels of complexity, I want to point out that the majority of the effort in process automation is the actual mapping and redesign, not the programming, even though the programming is the direct cost while the other efforts are not.


As I mentioned above, process mapping should be an existing function within your business units; it allows you to see your processes and quickly identify areas for improvement.  If it is not being done regularly, you may have a lot of work ahead of you.  The good news is you can, and should, take an iterative approach.  What I mean is that you can record the process and come back to it as needed to refine it; processes are often moving targets until they are mapped and set as a standard.  There is really only one rule you can rely on with an undocumented process, and that is: the process will change!  So map it as soon as you can, communicate it to everyone and make modifications as necessary.


A simple rule of thumb: mapping and re-engineering a process typically takes 60-80% of the total time needed to arrive at a final automated workflow that meets your needs.


Ok, so let's talk about complexity.  As I mentioned above, we can use some simple buckets (if you will) for the complexity of the process automation.  A simple workflow takes 0-40 hours of effort to create, a medium-complexity workflow takes 41-120 hours and a complex workflow takes more than 120 hours.  These numbers are based on the use of Nintex; you can multiply them by 2-3 times for C# development, depending on how good your developer is.


Complexity of the Workflow

Ok, so now that we have an idea of the size of the buckets, we can do the math that will help gauge which bucket our proposed workflow fits in.  If we look at a process from a high level, we can ask and answer a few questions that will help us gauge the complexity.  I will explain why each is important before we calculate.


  1. How many users in the organization does this workflow affect (meaning how many are going to use it)? N
  2. How many people/business units are going to need to interact with this process? I
  3. Roughly how many steps from beginning to end do you perceive? S
  4. How many results are possible (roughly)? R
  5. How many different systems are involved? V
Ok, so we have some variables and here is why they are important.


  1. N represents the number of employees (in multiples of 5000) utilizing the workflow; the larger the number, the greater the impact on complexity.  A good rule of thumb is that workflow complexity doubles for every 5000 users who interact with it, but anything fewer than 5000 should not affect it.
  2. I is the number of interactions, for each person/ business unit that interacts with the workflow you can expect roughly 5 additional actions in the workflow.
  3. S is the total number of perceived steps; for each of these steps you can expect roughly 5 actions in the workflow.
  4. R is the number of results; in a workflow these are different paths.  It is easiest to think of R as a multiplier on the number of actions, where one or two outcomes is the base case and each additional result effectively adds another multiple.
  5. V is another complexity multiplier; whenever we interact with a system outside SharePoint, we are doubling the complexity.
So here is what the formula looks like:
  1. N = Answer to Q1/5000, Round up
  2. I = Answer to Q2 * 5
  3. S = Answer to Q3 * 5
  4. R = If Answer to Q4 is 1 or 2, then 1 otherwise Answer to Q4 - 1
  5. V = Answer to Q5 (there is always at least SharePoint)
X = (I+S)NRV
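If you want to play with the numbers, here is the same estimate as a small PowerShell function (the function name is mine, not part of any product):

    function Get-WorkflowComplexity {
        param([int]$Users, [int]$Interactions, [int]$Steps, [int]$Results, [int]$Systems)
        $n = [math]::Ceiling($Users / 5000)                   # N
        if ($n -lt 1) { $n = 1 }                              # at least 1
        $i = $Interactions * 5                                # I
        $s = $Steps * 5                                       # S
        $r = if ($Results -le 2) { 1 } else { $Results - 1 }  # R
        ($i + $s) * $n * $r * $Systems                        # X, rough hours of effort
    }

Running Get-WorkflowComplexity -Users 250 -Interactions 3 -Steps 6 -Results 2 -Systems 2 returns 90, matching the worked example below.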


Now if we use an example of a company of 250 people wanting to do a vacation leave workflow.  The workflow needs approval from their direct Manager and notification sent to the requestor and HR of the result.  The vacation leave needs to poll the Accounting system to determine the number of days available for leave and then deduct the amount when approved.  From this scenario we can calculate the complexity as:


Q1. There are 250 People
Q2. There are 3 interactions (Requestor, Manager and HR)
Q3. There are 6 steps
    1. Make the request
    2. Poll the accounting system
    3. Send to the Manager
    4. Deduct the Leave
    5. Notify HR
    6. Notify the Requestor
Q4. There are 2 results.
Q5. There are 2 systems (SharePoint and Accounting)


I = 3*5 = 15
S = 6*5 = 30
N = 250/5000 = 0.05, Rounded up = 1
R = 1
V = 2


X = (15+30)*1*1*2 = 90


90 is the rough number of hours to create this workflow using Nintex.  If we put that in our buckets, we would call this a medium-complexity workflow, which I would expect due to the interaction with the accounting system.  I personally would use the upper end of the bucket for medium- and low-complexity workflows and would perform a proper mapping for anything that appears complex.  Be prepared, however, for changes in interaction, and re-evaluate any workflow each time there is a change in scope, as it affects complexity, especially when dealing with the multipliers.

Part II - Gauging Value

The second part of this is: how do I gauge the value of automating a workflow?  You can do time-and-motion studies and determine the actual time spent on tasks, or you can use rough numbers again.  When using rough numbers in this case, be optimistic about how quickly people perform the process today; that will give you a conservative value for automation.


In part one we asked how many users interact with the process; this number will be used again as a base multiplier.  We will then ask two new questions:
  1. How much time does someone currently spend doing this task?  Remember to be optimistic: ask a small sample and take the average of the three lowest numbers as your result.
  2. How often do they need to do this task?  Again, ask a small sample, average the three largest occurrence counts and express the result as a yearly amount.
Now we simply calculate:


N = Number of people
T = Time spent currently (in Hours)
O = Task Occurrences/Year


X = NTO


The result is the number of man-hours per year spent on this process.  You can then compare that to the estimated savings in time spent and the cost of development; here is a continuation of our example from above.


When asked the questions above, we get the following answers: currently our people spend an average of half an hour filling out and submitting a leave request, carrying it around for approvals and checking the system for available leave.  Based on the small sample, people do this once a year.


So now let's do the math:


X = 250*0.5*1 = 125


So currently it costs the company 125 man-hours a year for leave requests.  We can expect the workflow to reduce the time for a leave request to roughly 15 minutes (note I am being pessimistic the other way now, taking the maximum time it should take).  If I calculate this, I can then gauge how long it will take before I make my money back.


X = 250*0.25*1 = 75


So we can expect to cut the total man-hours for this process from 125 to 75, a saving of 50 hours per year.  Against the 90 hours we incurred above to develop the workflow, that means we will not positively affect the bottom line until the second year (90/50 = 1.8 years to break even).  Is that worth it?  I guess that depends on you...
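Putting part I and part II together in the same sketch form (numbers from the example above):

    $before = 250 * 0.5  * 1     # 125 man-hours/year today
    $after  = 250 * 0.25 * 1     # 75 man-hours/year once automated
    $saved  = $before - $after   # 50 man-hours/year saved
    90 / $saved                  # ~1.8 years to recover the 90 development hours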


Feel free to follow me on twitter @DavidRMcMillan or @DevFactoPortals or become a member of this blog.  Thanks, feedback is appreciated and encouraged!



Tuesday, February 3, 2015

Properly Defining Information Technology

Information Technology

Not what you expected, or was it...

The definition of Information Technology has become corrupted over time as people have had different ideas about what it means.  If you looked up Information Technology, you would think the definition would be something like,
"The management and distribution of information through the use of technology.",
after all that sounds right...


If you go to Wikipedia, Information Technology is defined as,
"Information technology (IT) is the application of computers and telecommunications equipment to store, retrieve, transmit and manipulate data, often in the context of a business or other enterprise.
The term is commonly used as a synonym for computers and computer networks, but it also encompasses other information distribution technologies such as television and telephones. Several industries are associated with information technology, including computer hardware, software, electronics, semiconductors, internet, telecom equipment, e-commerce and computer services."



This seems like a good definition, though it is a bit much for the management of information.  If we go to other sources, you will begin to realize how different the definitions have become.  Merriam-Webster.com defines Information Technology as,
"the technology involving the development, maintenance, and use of computer systems, software, and networks for the processing and distribution of data"


Now you may say that is not a lot different from Wikipedia or my original simplified definition, but let's examine it again.  If I were to say the Library Sciences are a part of information technology (and in reality parts of them are), do they fit into these definitions?  If we look at my simplified definition, we can say "Yes"; after all, Library Science is about the organization of information, and that organization is itself an applied technology.


Now look at the other definitions: how did we move from technology to computer systems specifically?  Simple, we applied the way it is done today...  Is that correct?  Absolutely not.  The typewriter was information technology, yet these definitions could never apply to it.  Microfiche is another technology left out of the modern definition, and there are many, many more.


Now, can anyone tell me when the modern computer era began?  You could say 1981 (the IBM PC), or you might say 1977 (the Apple II), or go even older with the kit computers.  Now let me ask how old the oldest information technology company is.  In that case you might say over 130 years (IBM's roots reach back to the 1880s) or 138 years (Bell Telephone, established in 1877); heck, you might even say 3700+ years (the first postal systems), and in truth, you could argue that too.  Remember, this is about using technology to manage and distribute information: writing is technology, horseback riding is technology, heck, shoes are technology; I still remember "sneaker net", running disks from one computer to another.


So how can we fix it?  Simple: remove any reference to the way we currently perform information technology.  The term may have come with the computer revolution, but the definition doesn't need to be that narrow.


Follow me on twitter @DavidRMcMillan or @DevFactoPortals; my goal is to make information technology better, one person at a time.  This post was intended to support another post on Information Governance, so stay tuned.

Friday, January 30, 2015

Why would I buy Nintex?

In my blog post ECM Governance - Post 5, I mentioned that I should post about Nintex and how to determine an ROI for the purchase; I bet anyone who knows me never thought I would get a post out this soon...

So, first things first, let's talk about Nintex.

First, I want to point out that I do lead a large SharePoint practice in Western Canada and we are a Nintex Platinum partner.  That said, I am not driven by sales here; I am driven by a piece of software that can make your life and your SharePoint solution better.

Nintex is a company, not a product.  In reality, Nintex offers several products, but when SharePoint people (like me) talk about "Nintex" they are referring to two products (Workflow and Forms) that are often bundled together as a BPM solution for SharePoint.  These are the products I am going to examine, and hopefully I can explain how you can gain value and save money in their use.

What does Nintex do that SharePoint doesn't?

First Workflow...

Now, this is a question I get a lot; the answer used to be simple, but it has recently changed.  I used to say that Nintex doesn't do anything you can't already do with SharePoint; now I have to say it doesn't do a lot more, but it does add some functionality when it comes to integration.  So now you are saying: if it doesn't add much, why would I buy it?  Well, the answer is simple.  While SharePoint offers extensive workflow capabilities out of the box, you need to be a developer, or employ one, to leverage them.  Out of the Box (OOTB), SharePoint needs either SharePoint Designer or Visual Studio to create anything but the most basic workflows; the workflow development tools are there, but unless you know C#, I don't see you creating workflows anytime soon.

Nintex changes all that: it takes what is a programming interface in SharePoint and makes it a graphical interface, then adds in all the parts that take a lot of effort to program, like auditing, tracking and performance monitoring.  The first advantage this gives is that it allows developers to mentor power users in the creation and maintenance of their own workflows (though this still takes time and effort, because it still follows programming logic).  It also allows your developers to reduce the time to create a workflow by a factor of three (from my experience).  Finally, everyone can see the workflow and each step as it executes, tracking the time for each step and the decisions made, and auditing each step in real time.

Then Forms...

Until recently, I would answer that Forms provides a nicer interface than the Microsoft tool (InfoPath) and an easier way to brand your forms consistently, but since Microsoft announced the deprecation of InfoPath, Nintex Forms no longer has a Microsoft equivalent to compare against, making it (or another third party tool) a requirement if you want to customize the form user experience.

Ok, so how can we calculate an ROI?

Now, I am going to simplify the math I am using: I will first go through my basic assumptions and a rough (overestimated) cost for Nintex (Forms and Workflow), then you can see where the break even should be.

Assumptions

  • Nintex costs $15,000 USD per Web Front End (WFE) = C
  • You have two WFE servers (for load balancing and redundancy) = S
  • The average C# workflow will cost $10,000 to develop (from my experience it is usually more) = W
  • Nintex reduces your development time by a factor of 2 (as mentioned above, my development team has typically realized a factor of 3) = f
  • We will not account for the value added by Forms or by power users who learn to develop workflows.

The Formula

Where X = the number of workflows to break even

XW = CS + (XW/f)
10000X = 30000 + 5000X
5000X = 30000
X = 6

The teacher always said, "Show your work", lol.  Now realize that by overestimating the cost of Nintex, underestimating the cost of a C# workflow and underestimating the factor of improvement, we arrive at a worst case scenario of 6 business processes before you begin saving money; in reality, most of my clients reach break even somewhere between three and four workflows.  I guess the question to you is: how many workflows do you have that could be automated, and is there any advantage to them being automated?
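For anyone who wants to let the computer show its work, here is a minimal Python sketch of the same break-even formula; the figures are just the assumptions above, so swap in your own numbers.

# Break-even for Nintex: XW = CS + XW/f, solved for X (a minimal sketch).
C = 15000  # Nintex cost per Web Front End (USD)
S = 2      # number of WFE servers
W = 10000  # average cost of a C# workflow (USD)
f = 2      # development-time reduction factor with Nintex

# XW = CS + XW/f  =>  X = CS / (W - W/f)
X = (C * S) / (W - W / f)
print("Workflows to break even:", X)  # prints 6.0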

If you want to find out more about Nintex, you can go to their website:  http://en-us.nintex.com/

If you want to follow me on twitter I am @DavidRMcMillan or @DevFactoPortals, feedback is always appreciated, good, bad or indifferent. 

Thursday, January 29, 2015

ECM Governance - Post 5

I think it is official: I suck at blogs.  I promise to try to post more (not saying I will), but one thing you can be sure of is a tweet when I do.  I seem to be getting worse at this rather than better; blog posts always end up low on my priority list and the time between them seems to increase, not decrease.  Today I want to finish off the definitions of the different principles, and then we can move on from there, hopefully in a more timely manner.

Collaboration Principles

Collaboration principles are principles that describe the way in which we plan on using the Team, Project and other collaboration site capabilities of the solution. It should include things like the site design, which lists and libraries are standard, the security roles used in collaboration, Content Types for templates and anything else you want to control in the collaboration portion of your solution.

Collaboration principles are about the control of who can do what, where and when. They provide a foundation for collaborative interaction and as such are always used in your solution in some way. As an example of a Collaboration principle, I have included "Collaboration sites will inherit the top menu navigation from the parent site", which encompasses the fact that collaboration sites are more loosely controlled than portal sites and that control revolves more around the user experience.  Later, when we examine the application of policies, we will see that as we move down the hierarchy from portals to collaboration sites and then to My Sites, we change from highly structured, restrictive control to looser, user-enabled control, but more about that later.

Collaboration sites will inherit the top menu navigation from the parent site
Principle
Collaboration sites will inherit the top menu navigation from the parent site.

Implication
To maintain a consistent user experience, the same top menu is used throughout all collaboration sites; Site Administrators will have no control over navigation.


Business Process Principles

Business Process Management is a core function that every ECM needs, and SharePoint is no exception.  The one caveat I have to add is that SharePoint sucks for creating workflows and forms; thank goodness they deprecated InfoPath.  Nintex does a far better job in the forms area and makes it possible for power users (with training and mentorship) to create some complex workflows.  The developer will never disappear for the complex workflows, but their work is minimized, providing a better return for the organization.  Come to think of it, the ROI of a tool like Nintex seems like a good topic for another blog post, stay tuned for that too, lol.

Anyway, regardless of the tools used, BPM needs some structure around how workflows, integrations and forms are created, where they are stored, how they are executed and anything else you think needs a boundary around it.  In this example, I have created a principle for the management of alerts and notifications, which falls under business process management.  The principle is intended to reduce the overhead of managing alerts and notifications by ensuring users are responsible for managing their own.  Coupled with this principle are other principles around training for users and principles for notification and alert creation.

Each user is responsible for the management of alerts and e-mail notifications
Principle
Each user is responsible for the management of their own alerts and e-mail notifications.
Implication
While anyone can assign alerts to others, each user is responsible for the maintenance of their own alerts and notifications. Each user needs to ensure they receive the notifications they need.


Esthetic/Site Design Principles

Now, I changed the name of this one; many ECM governance practitioners will call it user experience, but for me, it is more than just the user experience: it encompasses the brand, navigation, search, style guides, master pages, XSL transformations and anything else that affects the look and feel of the solution, including site and page templates.  These rules are often the most important because they directly affect the user experience and adoption; no one likes an ugly site.

I remember a developer who once worked for me; he had recently come from China, and I asked him if he could brand our product SharePoint site.  He said "yes" and, with a slight head bow, immediately set to work.  He ended up spending the evening at home working on it, so when I came in the next morning, he proudly presented me with a SharePoint team site branded in bright red and gold, or, as his cohorts called it, the ketchup and mustard brand, very much like the 1980s McDonald's I grew up loving... but alas, there was no Hamburglar anywhere on the site.

That little story illustrates why this is so important: every user has a different idea when it comes to look and feel, and if I had simply used a governance principle that said his brand had to align with the corporate style guide, there would never have been an issue; well, maybe.  I have included a couple of examples in this case: one identifies the importance of user experience, the other ensures no one thinks they can make up their own brand.

Prefer Findability over Authoring Convenience
Principle
Ensure that “findability” governs design decisions – optimize metadata and site configuration to provide the best value for the end-users, not just the content contributor.
Implication
In situations where design trade-offs must be considered (more metadata versus less, information above or below “the fold”, duplicating links in multiple places), decisions should be made to make it easier for end users rather than content contributors. “Findability” means designing sites so that important information is easily visible and that navigational cues are used to help users easily find key information. It also means using metadata to improve accuracy of search results. Both the “browse” and “search” experience for users will guide design decisions in initial site development and modification over time.

All publishing and collaboration sites will be consistently branded
Principle
All publishing and collaboration sites will be consistently branded.
Implication
In order to maintain consistency in the look and feel of the intranet, standardized brands will be used for collaborative and publishing sites and will not be modifiable by the site owners.

Content Principles

Where the Esthetic principles are the most important principles for user experience and adoption, content principles are about making the solution fit for purpose; after all, this is an ECM we are talking about (emphasis on the "C"). In reality, my experience has proven that the content principles will outnumber all the other principles combined. Why? Because every content type will need principles to define it. Whether we are talking about a metadata field, a vocabulary, a taxonomy, a document, a list, an image, a template page or an alert, it is all content, and that means principles that affect any part of the system will probably affect the content principles.

Because content principles are such a vast area, I have included several examples to help you get started, but understand that even in the beginning, this category is the main part of a governance document. Some of these you have already seen, which reinforces my point about overlap; I am sure you can see how they apply to more than one category.

All content is owned
Principle
All content must have a clearly identified “owner”.
Implication
Users need to know who to contact if content on a site is out-of-date or inaccurate. The content owner is accountable for all the content in a site and for ensuring it is up-to-date. Each site should have a clearly defined owner that is visible on the main page of each site.

Maintain a single source of truth
Principle
All content exists in only one location.
Implication
This means that the official version of a document is posted once by the content owner. For the reader’s convenience, users may create a link to the official copy of a document from any site, but should not post a “convenience copy”. Users should not post copies of documents to their personal hard drives or My Site web sites if they are already on a site.

In situations where some documents or records need to be available offline due to a very slow or inconsistent connection to the SharePoint sites, SharePoint Workspace can be used to make these records available offline.

Use built-in versioning
Principle
Edit documents in place. Do not download or make copies for editing, if possible.
Implication
Version control will be enabled in document libraries where prior versions need to be retained during document creation or editing. If prior versions need to be retained permanently for legal purposes, “old” versions of documents should be stored as records. Documents should be edited in place rather than deleted and added again, so that document links created by other users will not break.
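If you want to enforce that configuration rather than just document it, versioning can also be switched on programmatically. Below is a hedged Python sketch against the SharePoint REST API for an on-premises farm; the site URL, library title and credentials are placeholders, and an Office 365 tenant would need a different authentication mechanism.

import requests
from requests_ntlm import HttpNtlmAuth  # pip install requests-ntlm (on-premises NTLM auth)

SITE = "https://sharepoint.example.com/sites/projects"  # hypothetical site
LIBRARY = "Documents"                                   # hypothetical library title

session = requests.Session()
session.auth = HttpNtlmAuth("DOMAIN\\svc_account", "password")  # placeholder credentials
session.headers.update({"Accept": "application/json;odata=verbose"})

# Write operations need a form digest from the contextinfo endpoint.
digest = session.post(SITE + "/_api/contextinfo").json()[
    "d"]["GetContextWebInformation"]["FormDigestValue"]

# MERGE the library settings to enable major and minor (draft) versions.
response = session.post(
    SITE + "/_api/web/lists/getbytitle('" + LIBRARY + "')",
    json={"__metadata": {"type": "SP.List"},
          "EnableVersioning": True,       # major versions
          "EnableMinorVersions": True},   # draft versions
    headers={"Content-Type": "application/json;odata=verbose",
             "X-RequestDigest": digest,
             "X-HTTP-Method": "MERGE",
             "IF-MATCH": "*"})
response.raise_for_status()  # expect 204 No Content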

Sponsors/Owners are Accountable
Principle
Site Sponsors/Owners are accountable, but everyone owns the responsibility for content management.
Implication
All content that is posted to a site and shared by more than a small team will be governed by a content management process that ensures content is accurate, relevant, and current. Site Sponsors/Owners are responsible and accountable for content quality and currency, and for archiving old content on a timely basis, but site users are responsible for making Site Sponsors/Owners aware of content that needs updating.

Business Intelligence Principles

Business Intelligence principles encompass the use and presentation of BI data, reports, dashboards, charts, graphs and KPIs.  They are intended, as with any other principle, to provide both consistency and ease of management, with the latter being very important.

In this example, we define a principle that ensures we are controlling access to the BI tools, which require an enterprise license in SharePoint.

The Business Intelligence Centre can only be accessed by the BI User role
Principle
Only the BI User Role will have access to the BI Centre
Implication
To control the Enterprise license in SharePoint, the BI Centre exists in its own web application and can only be accessed by users who have the proper licensing.  The BI User role has been configured to ensure compliance with the Microsoft licensing model.
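To make a principle like this auditable, you could periodically reconcile the role's membership against your licence list. The Python sketch below is purely illustrative: it lists the members of a SharePoint group over the REST API, with the URL, group name and credentials as placeholders for an on-premises farm.

import requests
from requests_ntlm import HttpNtlmAuth  # pip install requests-ntlm

BI_SITE = "https://bi.example.com"  # hypothetical BI Centre web application
BI_GROUP = "BI Users"               # hypothetical group backing the BI User role

session = requests.Session()
session.auth = HttpNtlmAuth("DOMAIN\\auditor", "password")  # placeholder credentials
session.headers.update({"Accept": "application/json;odata=verbose"})

# Enumerate group members so they can be reconciled against Enterprise CALs.
url = BI_SITE + "/_api/web/sitegroups/getbyname('" + BI_GROUP + "')/users"
for user in session.get(url).json()["d"]["results"]:
    print(user["Title"], "-", user.get("Email") or "no e-mail")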

So anyway, this one took a long time, so I made up for it with more content.  I am trying to get my MVP, so I may need to blog a lot more; it doesn't do a lot of good knowing a bunch of stuff if I am not going to share it with the world.  Got to change the IT world, one client or reader at a time.

P.S. Follow me on twitter, @DavidRMcMillan; I will post articles and where I will be speaking there.

Wednesday, October 8, 2014

ECM Governance - Post 4

Guiding Principles continued

Hi Everyone,

Last post I had a pretty short entry, and it took a long time for me to get it out; unfortunately, work sometimes gets in the way of my sharing of information, and as you can see, I haven't posted in some time due to client constraints.  Anyway, this post I want to continue the descriptions of the different categories of guiding principles I have outlined for a SharePoint implementation (realize these are not the only categories; some may need to be added or some may not be needed, depending on the purpose of the solution being provided).

Last post we reviewed the General and Security principle categories; this post we will look at the following categories:
  • Document Management Principles
  • Publishing Principles

So let's get to it...

Document Management Principles

Document Management principles are principles that have to do with the way in which our solution should manage and control the creation and modification of documents in the system.  The reason it is not referred to here as records or information management is really a matter of scope. Document Management principles need to encompass records and information management (RIM), but they should also encompass content that is not part of the document or information strategy.  We are not going to rewrite the RIM (if one exists), but rather we will try to capture the principles behind why the architecture and strategy were implemented.  In addition, we will define principles for content that has not been defined in the RIM (like SharePoint lists) and for configuration-specific items, like the version control, check-in/check-out and draft version rules.

The Document Management Principles are not meant to replace the RIM, but are meant to tie the RIM strategy and governance into the governance of the ECM solution (remember, keep it simple and understandable).  In reality, the document management principles are probably the most numerous principles you will have in your ECM governance plan, as most of ECM is about the management of documents and content.  As an example of a document management principle, I have my number one rule for all governance (your governance plan will fail without it), "All Content is Owned", as outlined below.  Notice it is a principle that encompasses more than just records or documents, addressing all types of content; yet it still applies as a document management principle because it affects the way people will work with documents and records.
All Content is Owned
Principle
All content must have a clearly identified “owner”.

Implication
Users need to know who to contact if content on a site is out of date or inaccurate. The content owner is responsible for all the content in a site and for ensuring it is up to date. Each site should have a clearly defined owner that is visible on the main page of each site.

Publishing Principles

Publishing principles are principles that describe the way in which we plan on using the intranet and published site capabilities of the solution.  They should encompass plans for language variations, the way we review and publish content and any other principles that may affect the configuration of the publishing and audience components of the solution.

Publishing principles are about the control of who sees what and when; they provide a foundation for portals and as such are only used when a portal solution is required.  As an example of a publishing principle, I have included "All portal and department site content is reviewed and approved prior to being published", which encompasses the fact that portal and department sites are highly controlled and all changes are reviewed and approved prior to being made available to users.

All portal and department site content is reviewed and approved prior to being published
Principle
All content changes for publishing sites must be reviewed and approved before the changes can be published.
Implication
Content owners are responsible for updates, while site approvers are accountable for the correctness and appropriateness of the content being published.

Next post we will complete the definitions of the categories for principles, and from there we will move on to other important governance topics, like who does what in governance implementation and how we can make principles work within an organization.