Workflow Manager fails with "Trusted provider is missing" error

I encountered this problem at a customer site where one of our workflow-heavy applications stopped functioning.

The error in the ULS logs looked like:

00000003-0000-0ff1-ce00-000000000000  trusted provider is missing

It turned out to be a security change to the IIS application for the "Workflow Management Site", where the authentication providers now included

ASP.NET Impersonation: Enabled

along with other changes to that IIS application.


This customer runs SharePoint 2013 Enterprise on-premises with NTLM only, in Windows integrated mode, no SSL.

After narrowing the issue down to an authentication problem, I set up a new Workflow Manager (WFM) instance with the out-of-the-box configuration:

ASP.NET Impersonation: Disabled

Check all the settings below; restoring them fixes the issue after you restart the application pool.

See below.

Best of Luck.






SharePoint 2013 TypeError: Unable to get property 'replace' of undefined or null reference

You get the following error when you attempt to edit list items or list views.

TypeError: Unable to get property 'replace' of undefined or null reference

TypeError: Unable to get property '_events' of undefined or null reference.

Problem:

This happened after an automatic Windows Update installed the January 2016 SharePoint Server cumulative updates.

Solution:

First, run the SharePoint Products Configuration Wizard on the SharePoint host with administrative rights.

Second, install the January 2016 SharePoint Foundation cumulative update (I know that was not required before, but for some odd reason this requirement is back again).

You can get the download from here.

After the installation, re-run the SharePoint Products Configuration Wizard; the update may leave the server in an unstable state (some components take the update and some don't, which is why you need to run the wizard).

Best of luck

PS: This is déjà vu from a similar issue in 2007 …. no more comments.

SharePoint ULS Log analysis using ELK (ElasticSearch LogStash and Kibana)


A solution for log access in multi-tenant systems.

By: George Gergues



SharePoint is a large platform that is always growing and changing, and as with any large application platform that hosts many components, that complexity is manifested in the platform log (the ULS logs) and in log management. There are several ways to trace and debug issues using those logs, but tooling in this area has not kept up with the pace of SharePoint.

The other major complexity is multi-tenancy. As the cost to own and operate a single SharePoint farm has risen, many companies have started to offer multi-tenant farm systems, chiefly Microsoft's Office 365, along with other hosting vendors in the same space. Multi-tenancy turns the cost of owning into a cost to lease or operate, but it takes away platform-specific facilities such as the audit log and the platform log, because those log containers are not partitioned at the tenancy level: they are singletons per host in the case of the ULS log, and per content database in the case of the audit log table.


The Solution

In this post we propose the ELK stack (ElasticSearch, LogStash, and Kibana) as a tool to ingest ULS logs, leverage fast indexing and search capabilities, and make use of the event-correlation features gained by aggregating logs and other sources.

Such a tool, used by system engineers and hosting staff, can be very useful in responding to first-level support incidents, or in relaying events to the end user without compromising security.

Document-level links: the system engineer can share a document-level link to an event to show its full details.

The tool can also show trends in events and repeated problems or cyclical patterns, providing a better picture of farm-level performance and common problems.

With a little development, you can expose a search portal that lets end users perform tenant-level searches directly, without exposing the full log to the end user or tenant admin.

The next few sections show in detail how to build and customize such a system.

What is ELK

ELK = [ E + L + K ]

ELK is the combination of three open-source solutions to three problems; merged together, they form an excellent platform for large-log and large-dataset analytics. All three systems communicate over REST.

E: ElasticSearch:

An index and search server based on the Apache Lucene open-source project, storing document-based items (JSON objects). The storage is clusterable and highly redundant through shards and multiple distributed nodes. ElasticSearch is built on top of Lucene and runs inside a JVM.

L: LogStash

is a log capture and processing framework that works in a capture → process → store cycle per event on the log source. It is an open-source project that has evolved considerably in the past year and now offers a long list of filters, modules, and plugins. LogStash is very powerful for constructing schema'd JSON documents from unstructured text inside log files using grok filters. It is written mainly in Ruby and runs inside a JVM using JRuby.

K: Kibana

is a Node.js-based data visualization layer, tightly integrated with the ElasticSearch index, that can build charts and dashboards representing the indexed data very quickly. It makes use of the fast REST API and is very responsive in the browser, even with large datasets. The Kibana configuration is stored as JSON documents in the ElasticSearch index, along with all the chart and dashboard definitions it produces.
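Since everything in the stack communicates over REST, even a raw lookup against the index is straightforward. The sketch below builds a search request for all events carrying a given correlation id; note that the index name `uls` and the field name `correlationid` are assumptions that must match your own LogStash output and grok pattern configuration.

```python
import json

# Sketch of an ElasticSearch REST lookup for ULS events by correlation id.
# The index name "uls" and the field "correlationid" are assumptions; they
# must match your LogStash output and grok pattern configuration.
def build_correlation_query(host, correlation_id, index="uls"):
    """Return the URL and JSON body for a _search request."""
    url = "http://%s:9200/%s/_search" % (host, index)
    body = {
        "query": {"match": {"correlationid": correlation_id}},
        "sort": [{"@timestamp": {"order": "asc"}}],  # oldest events first
        "size": 100,
    }
    return url, json.dumps(body)

url, body = build_correlation_query(
    "localhost", "53fed7f1-cf35-1253-0000-000050f7b00c")
print(url)
```

Sending the body to the URL with any HTTP client (against a running ElasticSearch node) returns the matching events as JSON documents.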

The Challenge

  1. Multitenancy

    As mentioned in the introduction, multitenancy brings a lack of access to the platform ULS logs, due to security restrictions at the OS filesystem level. The only way SharePoint surfaces application and system exceptions is via custom application error pages, and they are very obscure: each failure is assigned a session or transaction id (a UUID) called the correlation Id, and that, along with the timestamp, is the only piece of information the user gets.

Example of Correlation Id “53fed7f1-cf35-1253-0000-000050f7b00c”

You cannot give all tenants access to the shared logs, as they may contain information about custom applications or extensions; they might also expose other tenants' content or exceptions, depending on the debug level.

  2. Log Size and Log Cycling

    The other problem is log sizes and log cycling. As with any large enterprise deployment, you are required to maintain some weeks of log history, either on the same system or offline. The minute those files leave the system, they become harder to manage and/or correlate.

  3. Multi-Log Events

    Some events span the end of one log file and continue into the next, which some of the current tools, such as ULS Viewer, cannot handle easily. With a system like this, the log parsing is already done and the queries are much faster.

  4. Aggregate Log Sources

    Using such a system, an engineer can show events from multiple aggregated sources and analyze events that started in, or were caused by, other dependencies while manifesting only in SharePoint, where the error occurs. Easy log-aggregation candidates include each host's IIS server logs, along with Windows event logs and SQL Server logs.

ULS Logs

The Unified Logging System (ULS log) is the standard logging format used for SharePoint farms.

The location is configurable by the admin, but a default installation points to the SharePoint hive: C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\{14|15|16}\LOGS\*.log

Process: the host executable running and causing the event.

TID: the thread id of the process on that host.

Area: the component or application generating the event.

Event ID: the internal category id per application.

The files rotate every 30 minutes by default.
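To make the fields above concrete, here is a minimal sketch of parsing one tab-delimited ULS line into those fields. The sample line is hypothetical; real entries can vary slightly (for example, a trailing asterisk on the timestamp of a continued event), which is exactly why the grok pattern later in this post has to accommodate format variations.

```python
# Minimal sketch of parsing one tab-delimited ULS log line into a dict.
# The sample line below is hypothetical; real entries may vary slightly.
FIELDS = ["timestamp", "process", "tid", "area", "category",
          "eventid", "level", "message", "correlation"]

def parse_uls_line(line):
    """Split a ULS line on tabs and map the columns to field names."""
    parts = [p.strip() for p in line.rstrip("\n").split("\t")]
    return dict(zip(FIELDS, parts))

sample = ("02/19/2016 14:11:08.71\tw3wp.exe (0x1A2C)\t0x22F0\t"
          "SharePoint Foundation\tGeneral\t8nca\tMedium\t"
          "Application error when access /Pages/default.aspx\t"
          "53fed7f1-cf35-1253-0000-000050f7b00c")
entry = parse_uls_line(sample)
print(entry["level"], entry["correlation"])
```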

SharePoint Application Error Page

The SharePoint custom error page for 2010, 2013 and Office 365 looks like this.

Only a correlation Id is visible to users, and they are expected to report it to the support technician.

This can be very frustrating for a developer, as you may need to wait hours, and in some cases days, before you get an answer.
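Without a system like the one described here, a support engineer typically falls back to scanning the raw log folder by hand for the reported id. A hedged sketch of that manual workflow (the log directory path is a placeholder):

```python
import glob
import os

# Brute-force search of a ULS log folder for a reported correlation id.
# This is the manual workflow that the ELK setup below replaces; the
# directory passed in is a placeholder, not a fixed SharePoint path.
def find_correlation(log_dir, correlation_id):
    hits = []
    for path in sorted(glob.glob(os.path.join(log_dir, "*.log"))):
        with open(path, errors="replace") as fh:
            for line in fh:
                if correlation_id in line:
                    hits.append((os.path.basename(path), line.rstrip()))
    return hits

for fname, line in find_correlation(r"C:\Logs",
                                    "53fed7f1-cf35-1253-0000-000050f7b00c"):
    print(fname, line[:120])
```

On a busy farm this scan is slow and must be repeated per server, which is the pain point the indexed ELK approach removes.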

Setting up ELK

  1. Install the Java Runtime (JRE).
  2. Download the packages (zip files) for LogStash, ElasticSearch, and Kibana, and place each in a separate folder.
  3. Edit the configuration file of each package for your environment (development configuration).

For simplicity, we opted to use the file input module in our limited development environment, but you can stream the log files from multiple sources using any input plugin; Lumberjack is the most commonly used log-streaming plugin.
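As a sketch of step 3, a minimal logstash.conf for this development setup might look like the following. The file path, the patterns directory, and the index name are assumptions for a single-box environment, and SP_ULS_FMT1 refers to the custom grok pattern for the ULS format shown later in this post.

```
input {
  # Tail the ULS logs from the SharePoint hive (path is an assumption;
  # adjust for your hive version and configured log location).
  file {
    path => "C:/Program Files/Common Files/Microsoft Shared/Web Server Extensions/15/LOGS/*.log"
    start_position => "beginning"
  }
}

filter {
  grok {
    # SP_ULS_FMT1 is our custom pattern, stored in the patterns directory.
    patterns_dir => ["./patterns"]
    match => { "message" => "%{SP_ULS_FMT1}" }
  }
}

output {
  # Ship the parsed events to the local ElasticSearch node.
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "uls"
  }
}
```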




Details on the GROK filter syntax

    The grok filter is the core component: it tries to understand the schema of your unstructured log text and generates a schema-based object to be stored, indexed, and later queried.


There are a few variations in the log file format and its date/timestamp, which makes it a bit of a burden to construct a pattern that matches all the values.


Sample Pattern

SP_ULS_FMT1 (?<sptimestamp>%{MONTHNUM}/%{MONTHDAY}/%{YEAR}%{HOUR}:%{MINUTE}:%{SECOND}\*?*)%{SPACE}%{PROG:sp_process}\(%{BASE16NUM:sp_pid}\)%{SPACE}%{BASE16NUM:sp_tid}%{SPACE}\t+%{DATA:sp_area}%{SPACE}\t+%{DATA:sp_category}%{SPACE}\t+%{DATA:sp_eventid}\t+%{SPACE}%{WORD:severity}%{SPACE}%{DATA:sp_eventmessage}%{SPACE}%{UUID:correlationid}%{SPACE}

You can see the full configuration script in the GitHub repo.

Building the Dashboard: Kibana Visualization

The Kibana configuration is fairly simple and is all constructed through the UI. The configuration of each element is stored in ElasticSearch as an index named .kibana.

The dashboard is made up of smaller components, mainly charts and widgets, each targeting a specific measurement against the index.

The raw data behind each element can be displayed and extracted as a CSV file.

A severity chart is simply a vector of the unique values in the SP_SEVERITY field, describing the event severity level.
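Under the hood, a chart like that maps to a simple terms aggregation against the index. A hedged sketch of the request body follows; the field name `severity` is an assumption and must match whatever name your grok pattern assigns to that column.

```python
import json

# Sketch of the ElasticSearch terms aggregation behind a severity chart.
# The field name "severity" is an assumption; it must match the name
# your grok pattern gives that column.
severity_agg = {
    "size": 0,  # only the bucket counts are needed, not the documents
    "aggs": {
        "by_severity": {
            "terms": {"field": "severity"}
        }
    },
}
print(json.dumps(severity_agg))
```

POSTing this body to the index's _search endpoint returns one bucket per severity level with its event count, which is exactly the vector the chart renders.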

The whole dashboard is dynamically rendered and filtered via the queries you select.


This sample shows the event correlation id along with the severity level, the total count of events that occurred, and the components involved.







Full Configuration scripts

You can find the full configuration scripts to get you started here.


ELK runs inside JVM instances (Java Virtual Machines), so you are bound by the limits of each particular instance. As with any Java-based process, you can tweak the free-memory and heap-allocation parameters to get the best throughput, yet optimal performance is achieved by clustering the services, and that is where shard storage shines.


  1. Aggregate log sources in dashboards

    As a natural next enhancement for this project, aggregate the IIS logs of all servers, the Windows event logs, and the SQL Server logs, as well as firewall and/or load balancer logs.

  2. Security

    1. You can start implementing some security over this system (not done in our sample system). Initially, users have no direct access to the search API, only to Kibana (the visualization layer); that is the first measure you can take.
    2. You can implement an IIS or Apache reverse proxy to add authentication and authorization with any access control you need.
    3. You can use the commercial products (Shield and Marvel) from Elastic.co.
  3. Scalability

    This system is designed to be highly distributed and available, with multiple nodes and clusters (just keep the Elastic nodes in the same geography to reduce latency). You can have multiple hosts serving the LogStash role, the Kibana role, or all of them.


  1. Elasticsearch :
  2. LogStash :
  3. Kibana :
  4. ELK guides :
  5. Guides on using Kibana :
  6. Grok Debug Tool :
  7. The full configuration scripts:

SharePoint audit in action

The Triangle SharePoint User Group (TriSPUG) met on Tuesday, 03 JUN 2014, at the Microsoft building in Durham.
I finally finished the presentation, the slides, and the code samples on CodePlex.


Please find attached

The code samples are here


Let me know if you have any questions.


Best of luck





Having Multiple UserProfile Sync services

In an architecture I worked on, I was isolating two service groups: I wanted to separate a service group serving some SharePoint applications from the others, with security, information leakage, and other concerns in mind (such as having their own Managed Metadata service, profile service, etc.).

The problem was having two or more User Profile Synchronization services, while many might say "why?", "you don't need another one", or "it can't be done".

Simply put, user profiles are always tied to the My Site host, and that is a global configuration setting on the service application (your first provisioned UPS application).

The chosen design:

You can have multiple User Profile Sync services, but each must reside on its own SharePoint host within your farm.

You cannot have multiples on the same host: the sync service is simply a Forefront Identity Manager engine (codename "Geneva"), and given its complexity, I would not even think about running two of them on the same host.

The main problem :

After declaring success on this front and having the two sync service applications, each with multiple, different AD connections, I noticed that the sync worked fine for all sites and content, yet the farm backup job failed sporadically with this error in the log.

[8/21/2013 9:11:08 PM] FatalError: Object SP-USER-PROFILE-SERVICE failed in event OnBackup.

For more information, see the spbackup.log or sprestore.log file located in the backup directory.

SPDuplicateObjectException: An object of the type Microsoft.Office.Server.Administration.ProfileSynchronizationUnprovisionJob

named “ProfileSynchronizationUnprovisionJob” already exists under the parent Microsoft.SharePoint.Administration.SPTimerService named “SPTimerV4”. Rename your object or delete the existing object.

The main solution:

The cause of the problem was an incremental backup that took longer than usual and ran into the AD sync cycle of one of the sync services.

The solution is simple: stop both sync services, start both sync services (on each respective server), and move the backup window a few hours away.

That solved my problem. Hope it solves yours.

Best of Luck

Error with User Profile services


The server encountered an unexpected error in the synchronization engine:

“BAIL: MMS(268): eafam.cpp(1510): 0x80230304 (The image or dimage already has an attribute with that name.)

BAIL: MMS(268): eafam.cpp(901): 0x80230304 (The image or dimage already has an attribute with that name.)

BAIL: MMS(268): eafam.cpp(1013): 0x80230304 (The image or dimage already has an attribute with that name.)

BAIL: MMS(268): amexec.cpp(1701): 0x80230304 (The image or dimage already has an attribute with that name.)

BAIL: MMS(268): amexec.cpp(2086): 0x80230304 (The image or dimage already has an attribute with that name.)

BAIL: MMS(268): eaf.cpp(1417): 0x80230304 (The image or dimage already has an attribute with that name.)

BAIL: MMS(268): eaf.cpp(657): 0x80230304 (The image or dimage already has an attribute with that name.)

ERR: MMS(268): synccoreimp.cpp(5266): 0x80230304 – export-flow failed 0x80230304

BAIL: MMS(268): synccoreimp.cpp(5267): 0x80230304 (The image or dimage already has an attribute with that name.)

BAIL: MMS(268): synccoreimp.cpp(4858): 0x80230304 (The image or dimage already has an attribute with that name.)

BAIL: MMS(268): synccoreimp.cpp(10873): 0x80230304 (The image or dimage already has an attribute with that name.)

BAIL: MMS(268): synccoreimp.cpp(10557): 0x80230304 (The image or dimage already has an attribute with that name.)

BAIL: MMS(268): synccoreimp.cpp(2545): 0x80230304 (The image or dimage already has an attribute with that name.)

ERR: MMS(268): synccoreimp.cpp(6483): 0x80230304 – MV to CS synchronization failed 0x80230304: [{F81CD149-ADC9-4720-89E2-E9CBD2CE39A9}]

BAIL: MMS(268): synccoreimp.cpp(6486): 0x80230304 (The image or dimage already has an attribute with that name.)

ERR: MMS(268): syncmonitor.cpp(2515): SE: Rollback SQL transaction for: 0x80230304

MMS(268): SE: CS image begin

MMS(268): SE: CS image end

Forefront Identity Manager 4.0.2450.34″

The Microsoft article does not actually describe the problem or the solution, yet it is very simple.

The FIM engine (the sync engine referenced in the message) is Forefront Identity Manager.

The problem: one or more properties are mistakenly overwritten in temporary storage by the sync engine.

In my case (and the most common one), it was two AD attribute mappings, both with the Import direction:

AD attribute "Title" → SP profile "Title"

AD attribute "Title" → SP profile "Job Title"

Solution: if the Microsoft fix does not do it for you (it didn't work for me):

  1. Remove both mappings.
  2. Run a full profile sync.
  3. Add the first mapping and run a full profile sync (monitor for errors).
  4. Add the second mapping and run a full profile sync (monitor for errors).

Thank you Google ….. and Microsoft.

Best of luck.

InfoPath SharePoint FormServer error 5566

Error code 5566 is very common; when you hit it, it looks like this:

“ An error occurred querying a data source.

Click OK to resume filling out the form. You may want to check your form data for errors.
 Hide error details
 System.Xml.XmlException: There are multiple root elements. Line 2, position 2 ……………………..  “

Code 5566 typically appears when performing cross web-service calls.

The problem is more of a server architecture issue (on a single-server farm configuration you may not see it).

The Root causes

  1. Name resolution
  2. Certificate validation errors
  3. UAG or any URL filter or traffic-parsing engine (F5 BIG-IP and the like)

The cause can be one or all of the above.

To understand the problem, you need to understand how InfoPath handles this type of traffic.

  1. The client (C) requests a form operation from the form server (S).
  2. S reads the template from the same server, the document library, or storage.
  3. S builds a temporary in-memory map, for the current user, of the form rules and code for the duration of the session.
  4. S executes the operation (read, update, or new form).
  5. C renders in the InfoPath client or the browser (thin client).
  6. S terminates the session.

Where things break

At steps 1, 2, 3 and 4

  1. [Step 1] Problem: C resolves the server as [IP x.y.z.w] but S resolves it as a different IP; the server encounters a template or form load error but does not report it to the user. Solution: try browsing the data connection URLs from the server itself and check whether you encounter any problem; resolve accordingly. (In some cases the internal DNS record does not match the proper configuration; use a hosts file entry to manually force the session to the same server.)
  2. [Step 1] Problem: you are behind a proxy or a load-balanced farm. Solution: configure it so that server sessions are bound to a single server for the same client.
  3. [Step 1] Problem: you are using a public name and an internal name via AAM. Solution: make sure you resolve the correct IP inside and outside the proxy/firewall; see item 1.
  4. [Step 2] Problem: S can't load the form or the template. Solution: this should not cause 5566 (the error is usually more descriptive) if your proxy configuration is correct, but where the proxy configuration is wrong it shows up as error 5566.
  5. [Step 3] Problem: you have dynamic links for services that get compiled at load time. Solution: debug this by loading the form on the same server.
  6. [Step 3] Problem: you are using SSL certificates. Solution: make sure your server can validate the certificate, or disable certificate validation.
  7. [Step 4] Problem: a read, new, or update operation fails. Solution: check the on-load rules and see whether they trigger other web service or list connections that cause the issue; handle as in item 1 above.
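For the name-resolution cause, a quick way to compare how a host name resolves from different machines is a sketch like the one below; run it on both the client and the form server, and a difference between the two results points at the mismatch described in item 1. The host name passed in is a placeholder.

```python
import socket

# Resolve a host name to an IPv4 address, returning None on failure.
# Run this on both the client and the InfoPath form server: if the two
# results differ, you have found the name-resolution mismatch of item 1.
def resolve_host(name):
    try:
        return socket.gethostbyname(name)
    except socket.gaierror:
        return None

print(resolve_host("localhost"))
```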

Best of Luck