Integrating SharePoint With Other Systems: Avoiding Common Pitfalls

Common Integration Challenges

When integrating SharePoint with external systems and applications, several common challenges can arise that impede interoperability and data flows between the platforms. Understanding these potential pitfalls upfront enables developers and architects to design robust integrations that avoid these problems.

Managing Different APIs and Protocols

SharePoint provides a wide array of APIs across its stack, from lower-level storage and data access APIs to specialized search, taxonomy, and business connectivity services APIs higher in the architecture. Integrating with other apps and databases often involves mapping between entirely different API paradigms and communication protocols between the source and destination platforms.

For example, pulling data from a MySQL database into SharePoint may require translating from MySQL’s SQL query interface into SharePoint’s Client Object Model or REST API for issuing create, read, update, and delete operations against SharePoint lists. This adds layers of complexity and potential breakage points that must be designed and coded carefully when building integrations.
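
As a rough illustration of that mapping, the sketch below (C#) reads rows from MySQL with the MySql.Data provider and posts each one to a SharePoint list through the REST API. The connection string, site URL, list name, column names, and token handling are all placeholders for whatever the real integration uses.

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;
using MySql.Data.MySqlClient;

class MySqlToSharePointSync
{
    // Placeholder values; real connection strings, URLs, and tokens belong in
    // configuration or a secret store.
    const string MySqlConnectionString = "Server=dbhost;Database=crm;Uid=sync;Pwd=...;";
    const string SiteUrl = "https://contoso.sharepoint.com/sites/sales";

    static async Task Main()
    {
        var rows = new List<(string Name, string Region)>();

        // 1. Query the source system through its own API (the MySQL ADO.NET provider).
        using (var conn = new MySqlConnection(MySqlConnectionString))
        {
            await conn.OpenAsync();
            using var cmd = new MySqlCommand("SELECT name, region FROM accounts", conn);
            using var reader = await cmd.ExecuteReaderAsync();
            while (await reader.ReadAsync())
                rows.Add((reader.GetString(0), reader.GetString(1)));
        }

        // 2. Translate each row into a SharePoint REST call against the target list.
        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
            "Bearer", Environment.GetEnvironmentVariable("SP_ACCESS_TOKEN"));

        foreach (var (name, region) in rows)
        {
            // Field names ("Title", "Region") must match the target list's columns;
            // some SharePoint versions also require an odata __metadata type entry.
            var content = new StringContent(
                $"{{\"Title\":\"{name}\",\"Region\":\"{region}\"}}", Encoding.UTF8, "application/json");

            var response = await http.PostAsync(
                $"{SiteUrl}/_api/web/lists/getbytitle('Accounts')/items", content);
            response.EnsureSuccessStatusCode();
        }
    }
}
```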

Ensuring Secure Access Between Systems

Opening up SharePoint data and services for external access introduces new attack surfaces and opportunities for unauthorized parties to access sensitive organizational data. Security controls must be implemented carefully to authenticate valid users and applications, encrypt data in transit, prevent injection attacks, and validate all inputs into SharePoint from other systems.

Common threats that can compromise integrations include cross-site request forgery, cross-site scripting, bot attacks, and injection of malicious SQL queries. Security best practices dictate leveraging native authentication frameworks like OAuth 2.0 and SAML 2.0 for federated identity and single sign-on across integrated apps.

Avoiding Performance Bottlenecks

Unoptimized data flows and API call patterns can lead to severe performance issues that degrade both SharePoint and the connected systems. Common culprits include excessive per-item API calls inside loops instead of batched operations, unthrottled database write activity that floods queues, and chatty communication patterns between systems.

Caching, throttling, indexing, pagination, async jobs, message queues, and similar performance optimization techniques should be applied when designing integration architectures to smooth out peak loads.
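
One common mitigation is batching: SharePoint’s client object model (CSOM), for example, queues operations locally and sends them in a single round trip when ExecuteQuery is called. A minimal sketch, assuming the Microsoft.SharePoint.Client package, a placeholder site URL, and a hypothetical “Tasks” list:

```csharp
using Microsoft.SharePoint.Client;

// Authentication is omitted here; ctx.Credentials or an OAuth access token
// would be configured for the placeholder site below.
using (var ctx = new ClientContext("https://contoso.sharepoint.com/sites/projects"))
{
    List list = ctx.Web.Lists.GetByTitle("Tasks");

    // Queue 50 item creations locally instead of issuing 50 separate requests.
    for (int i = 0; i < 50; i++)
    {
        ListItem item = list.AddItem(new ListItemCreationInformation());
        item["Title"] = $"Imported record {i}";
        item.Update();              // queued, nothing sent yet
    }

    ctx.ExecuteQuery();             // one batched round trip to SharePoint
}
```

On the REST side, the OData $batch endpoint serves the same purpose of collapsing many calls into a single request.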

Best Practices for Integration

Keeping the above challenges and risks in mind, below are several recommended best practices to employ when planning and implementing SharePoint integrations.

Use SharePoint Web Services for Programmatic Access

For server-side application integrations that need to access SharePoint programmatically, leverage native SharePoint REST APIs and client object models. These provide structured methods for reading and modifying SharePoint content and metadata in a permissions-aware manner.

The client APIs handle authentication, structure requests properly, and translate responses to and from SharePoint’s internal data representation. This simplifies coding while providing greater speed and reliability than attempting to interface directly with the underlying SQL database.
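
As an illustration, a minimal CSOM read against a hypothetical “Announcements” list might look like the sketch below; the results come back trimmed to the caller’s permissions, and authentication setup is omitted.

```csharp
using System;
using Microsoft.SharePoint.Client;

using (var ctx = new ClientContext("https://contoso.sharepoint.com/sites/intranet"))
{
    // Credentials or an access token would be attached to ctx here.
    List list = ctx.Web.Lists.GetByTitle("Announcements");
    ListItemCollection items = list.GetItems(CamlQuery.CreateAllItemsQuery(100));

    // Request only the fields needed, then execute a single round trip.
    ctx.Load(items, col => col.Include(i => i["Title"], i => i["Modified"]));
    ctx.ExecuteQuery();

    foreach (ListItem item in items)
        Console.WriteLine($"{item["Title"]} (modified {item["Modified"]})");
}
```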

Implement Token-Based Authentication

Rather than exposing credentials directly across systems, applications accessing SharePoint remotely should implement the OAuth 2.0 authorization framework with bearer tokens. After a user or service authenticates to SharePoint and is issued a time-limited access token, that token accompanies subsequent requests and grants access only to the resources it was scoped to.

This avoids storage and transmission of usernames and passwords while still enabling authorized access controls across apps and environments.
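
The sketch below shows the general shape of an OAuth 2.0 client credentials token request with placeholder tenant, client, and scope values; depending on how the app registration is configured, the identity provider may require a certificate credential rather than a shared secret.

```csharp
using System.Collections.Generic;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

static class SharePointTokenClient
{
    // Placeholder endpoint and credentials; real values come from configuration
    // or a secret store.
    const string TokenEndpoint = "https://login.microsoftonline.com/{tenant-id}/oauth2/v2.0/token";

    public static async Task<string> GetAccessTokenAsync(HttpClient http)
    {
        var form = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["grant_type"] = "client_credentials",
            ["client_id"] = "{client-id}",
            ["client_secret"] = "{client-secret}",
            ["scope"] = "https://contoso.sharepoint.com/.default"
        });

        HttpResponseMessage response = await http.PostAsync(TokenEndpoint, form);
        response.EnsureSuccessStatusCode();

        // The JSON response carries access_token and expires_in; cache the token
        // and refresh it shortly before expiry rather than requesting one per call.
        using JsonDocument doc = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
        return doc.RootElement.GetProperty("access_token").GetString();
    }
}
```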

Load Test Integrations Before Deployment

Full end-to-end integration testing under simulated loads across expected data volumes and API call frequencies should occur before deploying to production environments. This will reveal performance issues, timeouts, throttling needs, caching optimization opportunities and other integration pain points under realistic operating conditions.

Tools like Apache JMeter can execute scripted multi-user test loads, and CI/CD services such as Microsoft Azure Pipelines can run those suites automatically, exercising the entire SharePoint backend stack with all interconnected systems functioning together.
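
Dedicated tools remain the right choice for realistic load modeling, but a small script can smoke-test an endpoint early. The sketch below, with a placeholder site URL and an access token read from an environment variable, fires 50 concurrent reads and reports rough latency figures; watch for HTTP 429 responses, which indicate throttling.

```csharp
using System;
using System.Diagnostics;
using System.Linq;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

// Fires 50 concurrent GETs at a placeholder list endpoint and reports latency.
// This is only a smoke test; tools like JMeter model realistic user behavior.
var http = new HttpClient();
http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
    "Bearer", Environment.GetEnvironmentVariable("SP_ACCESS_TOKEN"));

var url = "https://contoso.sharepoint.com/sites/sales/_api/web/lists/getbytitle('Accounts')/items?$top=10";

var results = await Task.WhenAll(Enumerable.Range(0, 50).Select(async _ =>
{
    var sw = Stopwatch.StartNew();
    using var response = await http.GetAsync(url);
    sw.Stop();
    return (Status: (int)response.StatusCode, Ms: sw.ElapsedMilliseconds);
}));

Console.WriteLine($"avg {results.Average(r => r.Ms):F0} ms, max {results.Max(r => r.Ms)} ms, " +
                  $"errors/throttles {results.Count(r => r.Status >= 400)}");
```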

Optimizing Data Flows

Below are key data and API optimization techniques for flowing information efficiently across SharePoint integrations while minimizing latency and resource usage.

Employ Caching and Throttling

Adding caching proxies and gateways strategically within long-running data flows prevents the same data from moving repeatedly and unnecessarily across connections. This might entail caching reference or master datasets close to the applications that consume them, reducing data retransmission.

Complementary throttling ensures any one application doesn’t overwhelm another with requests exceeding sustainable levels. This helps maintain responsiveness across interconnected apps.
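
One possible shape for this in application code uses an in-memory cache for a reference dataset and a semaphore to cap concurrent calls into SharePoint; the cache lifetime, concurrency limit, and the fetch helper below are illustrative placeholders. The same idea scales up to dedicated caching tiers or gateways when multiple applications share the data.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

class ReferenceDataProvider
{
    static readonly MemoryCache Cache = new MemoryCache(new MemoryCacheOptions());

    // Allow at most 5 concurrent calls into SharePoint from this process.
    static readonly SemaphoreSlim Throttle = new SemaphoreSlim(5);

    public async Task<string[]> GetDepartmentsAsync()
    {
        // Serve repeat requests from the cache for 15 minutes instead of
        // re-fetching the same reference data across the integration.
        return await Cache.GetOrCreateAsync("departments", async entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(15);

            await Throttle.WaitAsync();
            try
            {
                return await FetchDepartmentsFromSharePointAsync();  // placeholder call
            }
            finally
            {
                Throttle.Release();
            }
        });
    }

    // Hypothetical helper standing in for a real REST or CSOM lookup.
    Task<string[]> FetchDepartmentsFromSharePointAsync() =>
        Task.FromResult(new[] { "Marketing", "Sales", "Engineering" });
}
```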

Choose Pull Over Push Where Applicable

Having SharePoint “pull” data periodically from external systems often makes more sense than constantly pushing out real-time notifications or batched updates in high-throughput situations. Pulling data on an interval provides natural throttling and retrieves information only as needed, rather than reacting to a constant stream of notifications.

This helps consuming apps deal with high-velocity data by ingesting it only at their own controlled pace. Webhooks and event handling may still play a role in capturing intermediate updates between pull windows.
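
A simple pull loop might look like the sketch below, which requests only records modified since the last successful sync; the endpoint, the Modified filter, and the five-minute interval are illustrative placeholders.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

var http = new HttpClient();
// An Authorization header would be attached here, as in the earlier examples.
DateTime lastSync = DateTime.UtcNow.AddHours(-1);   // initial watermark (placeholder)

while (true)
{
    DateTime pullStarted = DateTime.UtcNow;

    // Ask only for items modified since the last successful pull.
    string url = "https://contoso.sharepoint.com/sites/sales/_api/web/lists/getbytitle('Accounts')/items" +
                 $"?$filter=Modified ge datetime'{lastSync:yyyy-MM-ddTHH:mm:ssZ}'";

    using (HttpResponseMessage response = await http.GetAsync(url))
    {
        if (response.IsSuccessStatusCode)
        {
            string json = await response.Content.ReadAsStringAsync();
            // A ProcessChanges(json) step would map the payload into the consuming system.
            lastSync = pullStarted;   // advance the watermark only after success
        }
    }

    // The polling interval acts as natural throttling for the consumer.
    await Task.Delay(TimeSpan.FromMinutes(5));
}
```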

Set Up Indexing for Faster Queries

For SharePoint farms dealing with millions of records and objects, indexing search and content crawl databases is essential for fast lookups and retrievals across architectures. Well-indexed content sources, metadata stores, libraries, and lists massively improve response times for search-driven apps.

Index and metadata partitioning may be needed so one app’s activities don’t block another’s, and index caching can help avoid expensive re-indexing operations when data changes frequently.
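
Individual list columns used in frequent filters can also be flagged as indexed through the client object model; a minimal sketch, assuming a hypothetical “Department” column on an “Accounts” list:

```csharp
using Microsoft.SharePoint.Client;

using (var ctx = new ClientContext("https://contoso.sharepoint.com/sites/sales"))
{
    // Credentials or an access token would be attached to ctx here.
    List list = ctx.Web.Lists.GetByTitle("Accounts");
    Field field = list.Fields.GetByInternalNameOrTitle("Department");

    ctx.Load(field);
    ctx.ExecuteQuery();

    if (!field.Indexed)
    {
        field.Indexed = true;   // queries filtering on Department can now use the index
        field.Update();
        ctx.ExecuteQuery();
    }
}
```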

Testing and Troubleshooting

Rigorous testing and monitoring procedures that establish baselines and standards for normal operation speed up troubleshooting and reduce mean time to repair when issues inevitably emerge, especially under peak conditions. Common techniques include:

Validate All Inputs and Outputs

Coding defensively by checking for NULLs, data types and formats, input lengths, expected ranges, and other validation rules helps prevent bad data from crashing workflows or corrupting databases. Similarly, escaping special characters in output contexts such as CSV exports prevents injection attacks.

Proxy tools like Fiddler enable inspecting requests and responses to baseline normal patterns and detect anomalies. Unit testing frameworks validate the modular pieces of an integration incrementally as it is built.
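
Two small helpers in that spirit are sketched below: one validates an inbound record against example rules, the other escapes values for CSV export, including guarding against formula injection in spreadsheet apps. The field names and limits are illustrative.

```csharp
using System;

static class IntegrationGuards
{
    // Example input validation: reject obviously bad records before they reach
    // a workflow or database write. The length and range rules are illustrative.
    public static bool IsValidAccount(string name, string region, int? headcount)
    {
        if (string.IsNullOrWhiteSpace(name) || name.Length > 255) return false;
        if (string.IsNullOrWhiteSpace(region)) return false;
        if (headcount is < 0 or > 1_000_000) return false;
        return true;
    }

    // Example output escaping for CSV export: quote fields, double embedded
    // quotes, and prefix formula-like values so spreadsheet apps don't execute them.
    public static string EscapeCsvField(string value)
    {
        value ??= string.Empty;
        if (value.StartsWith("=") || value.StartsWith("+") ||
            value.StartsWith("-") || value.StartsWith("@"))
        {
            value = "'" + value;
        }
        return "\"" + value.Replace("\"", "\"\"") + "\"";
    }
}
```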

Log Activity for Auditing Issues

Pervasive logging across apps, platforms, and infrastructure tracks processing and data flow forensically for both real-time monitoring and historical auditing. Log aggregation platforms like Splunk index immense volumes of machine data to uncover latent issues or pinpoint sources during troubleshooting.

Propagating unique transaction IDs between systems enables transaction-level tracing, including timing and metadata, across the integration. Robust error handling should raise exceptions and notifications to facilitate a fast response.
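
For example, a caller can attach a correlation ID to every outbound request and include the same ID in its own log entries so a single transaction can be traced end to end; the header name and console logging below are illustrative conventions, not SharePoint requirements.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

static class TracedSharePointClient
{
    public static async Task<string> GetWithTracingAsync(HttpClient http, string url)
    {
        // One ID per logical transaction, propagated to SharePoint and logged locally.
        string correlationId = Guid.NewGuid().ToString();

        using var request = new HttpRequestMessage(HttpMethod.Get, url);
        request.Headers.Add("X-Correlation-ID", correlationId);   // conventional header name

        Console.WriteLine($"{DateTime.UtcNow:o} [{correlationId}] GET {url}");
        using HttpResponseMessage response = await http.SendAsync(request);
        Console.WriteLine($"{DateTime.UtcNow:o} [{correlationId}] status {(int)response.StatusCode}");

        response.EnsureSuccessStatusCode();   // failures surface as exceptions for alerting
        return await response.Content.ReadAsStringAsync();
    }
}
```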

Check Permissions and Access Rules

Seemingly random application errors may trace back to permissions misconfigurations or overly tight access rules that work under test conditions but fail when accessing real-world data volumes in production. Examining identity management, authentication policies, and access control lists helps determine the necessary minimum permissions.

Bastion hosts and controlled test accounts isolate access changes from production environments when experimenting to find the right balance between security and availability across interconnected systems.

Example Code Snippets

Below are several simplified code examples demonstrating common SharePoint integration tasks:

C# Web Service Call

```csharp
// Posts a new item to the "Links" list using a previously acquired OAuth access token.
// sharepointURL, accessToken, and jsonPayload are assumed to be defined elsewhere;
// with odata=verbose, jsonPayload must include the list item's __metadata type.
var client = new WebClient();
client.Headers[HttpRequestHeader.Authorization] = "Bearer " + accessToken;
client.Headers[HttpRequestHeader.ContentType] = "application/json;odata=verbose";
client.UploadString(sharepointURL + "/_api/web/lists/getbytitle('Links')/items", "POST", jsonPayload);
```

PowerShell Script for Throttling

```powershell
# Simple throttle: keep at most 100 concurrent background jobs calling SharePoint.
while ($true) {
    $currentJobs = (Get-Job -State Running | Measure-Object).Count

    if ($currentJobs -gt 100) {
        # Too many jobs in flight; back off before queuing more work.
        Start-Sleep -Seconds 15
    }
    else {
        Start-Job -ScriptBlock {
            # Call SharePoint service
        }
    }
}
```

CAML Query with Indexing

```xml
<!-- Filters on a Department column that should be flagged as indexed, so the query
     can seek on the index instead of scanning the whole list. Field names here are
     illustrative. -->
<View>
  <Query>
    <Where>
      <Eq>
        <FieldRef Name="Department" />
        <Value Type="Text">Marketing</Value>
      </Eq>
    </Where>
    <OrderBy>
      <FieldRef Name="Modified" Ascending="FALSE" />
    </OrderBy>
  </Query>
  <RowLimit>100</RowLimit>
</View>
```
