Friday, April 16, 2010

Error in Site Data WebService

Just recently I was trying to track down an indexer issue on MOSS 2007 in the Shared Services Provider (SSP). The error was "Error in the Site Data Web Service" at the location http://<ssp web host>/ssp/admin/Content.


After digging around in the logs I found an error relating to the call GetWebDefaultPage. After pointing SharePoint Designer at this Content sub site (which hosts the BDC entity display template pages) in the SSP host, I found that there was no default page (default.aspx).
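If you want to confirm the failing call in your own logs, a quick command-line search of the ULS logs (assuming the default 12 hive log location) is something like:

  findstr /s /i /c:"GetWebDefaultPage" "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\LOGS\*.log"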


You can do 4 things to solve this problem.



  1. Ignore it

  2. Create a crawl rule to skip this site completely

  3. Create an empty Default.aspx in the site

  4. Remove the SSP host from the Local SharePoint Sites Content source.


I am not convinced Option 2 is a good idea as it could lead to strange search behaviour, especially with BDC applications.


In our environment, we decided on Option 4 as we had no need to index the SSP at the time.


I hope this helps.


Nate


Tuesday, April 13, 2010

SharePoint 2007 and Event Id: 6482 and 6398

I recently ran into 2 strange errors when deploying a SharePoint server. As it happens, these errors occurred just after deploying the Adobe x64 iFilter pack.

Event ID: 6482 - Application Server Administration job failed for service instance Microsoft.Office.Server.Search.Administration.SearchServiceInstance (GUID).

Reason: Exception from HRESULT: 0x80040D1B

And

Event ID: 6398

The Execute method of job definition Microsoft.Office.Server.Search.Administration.IndexingScheduleJobDefinition (GUID) threw an exception. More information is included below.

Exception from HRESULT: 0x80040D1B

As a result, I was unable to navigate to the Search Settings in the default SSP.

It turns out that I had 2 duplicate entries in the registry for the Adobe PDF file type.
Check out the registry key:
HKLM\Software\Microsoft\Office Server\12.0\Search\Applications\GUID\Gather\Portal_Content\Extensions\ExtensionList
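To see what is registered under that key, you can list the values from the command line (the GUID is a placeholder for your SSP's search application GUID):

  reg query "HKLM\Software\Microsoft\Office Server\12.0\Search\Applications\<GUID>\Gather\Portal_Content\Extensions\ExtensionList"

Any extension that appears twice (pdf, in my case) is a candidate for cleanup.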


I removed one of them and we were off again.

This will happen when you add the file type by hand in the registry and then also add it as a File Type in the SSP admin site.

I hope it helps.

Wednesday, August 20, 2008

SharePoint Backup

Backing up SharePoint can be a daunting task.  There are so many different options, and sometimes more than one option is required.  For example, MOSS native backup (STSADM -o backup) plus a file system backup of the 12 Hive.

So what is the best option?  This is a common question.  Microsoft has released a wonderful white paper on backup, located here: http://technet.microsoft.com/en-us/library/cc262129.aspx (which Ivan Wilson, a colleague of mine, reviewed).

Working in the consulting world of IT, it is not always possible to implement a new backup solution (such as System Centre Data Protection Manager) at a customer's site.  As such, the lowest common denominator, MOSS native backup, is always the safest option.

Utilising STSADM -o backup is an easy way to achieve a good, solid backup (and a repeatable restore).  You can perform full and differential backups, which can be scripted and then scheduled to run via Task Scheduler, as in the sketch below.
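As a rough sketch (the backup share is a placeholder, and the account running the job needs rights to run STSADM and back up the databases), a scheduled batch file might look something like this:

  REM mossbackup.cmd - MOSS native farm backup (full or differential)
  REM run from the 12 hive BIN directory, or add it to the PATH
  cd /d "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN"

  REM weekly full backup
  stsadm -o backup -directory \\backupserver\mossbackups -backupmethod full

  REM daily differential (schedule as a separate task with this line instead)
  REM stsadm -o backup -directory \\backupserver\mossbackups -backupmethod differential

Schedule the full and differential variants as separate tasks in Task Scheduler.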

The other advantage of the native backup is that it will place the Search database and the Full Text Index in a consistent state before backing them up.  The command will pause any currently running crawls, perform the backup and then resume any paused crawls.  This is extremely important, especially if your MOSS solution is an Enterprise Search solution.  (What's the point of a Search solution whose recovery process requires a week of full crawl indexing!)

One thing to watch out for when using the native backup mechanism (besides some of the pointers in the document) is your database size and available disk space.

During the first part of the native backup, the command will query the SQL Server and retrieve the database file sizes for all of the databases that are being backed up.  It will then total these values and query the available disk space at the backup location.  If there is not enough space available, it will not continue with the backup.

Why is this a problem?  Well, if you size your databases to a fixed maximum size (whether that is recommended is a different story), the query returns the physical database size and not the amount of space actually used.  Take, for example, a content database sized to a maximum of 500GB of which only 10MB is currently used.  In this case the backup will first check that there is 500GB of free space at the target backup location before it will continue, regardless of the fact that only 10MB is actually in use.
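To see how big that gap is in your own farm, you can compare the allocated size with the space actually used for each database; a quick check with sqlcmd (the server and database names here are placeholders) looks something like this:

  sqlcmd -S SQLSERVER\OFFICESERVERS -E -d WSS_Content -Q "EXEC sp_spaceused"

The database_size figure is roughly what the backup pre-check sums up, while the reserved and unallocated figures show how much is really in use, which is a better guide to how much room the backup itself will need.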

Friday, July 18, 2008

The Elusive Crawl Rule

I have been working on this rather large Enterprise Search project for a little while now. For some background, we are using Microsoft Search Server 2008 to index content where the metadata is contained in databases and the full text content is contained on network shares.

The project is at the point where we are bringing on extra content sources, such as the traditional file share content. In order for this content to be indexed, our default content access account will need to be granted read access to the file shares. At this point, my customer asked me where the password for the content account was stored.

Hmm, interesting question I thought. So we began hunting down where this information is stored.

Step 1: SQL Profiler. Create a crawl rule with a custom account and see what is logged. Lo and behold: nothing.

Step 2: Registry. Success. It turns out that the crawl rules, content sources and basically everything related to the index process is located in the registry.

Registry Root: HKLM\Software\Microsoft\Office Server\12.0\Search\Applications\<SSP GUID>\
Content Sources: ...\Gather\Portal_Content\Content Sources
Crawl Rules: ...\Gather\Portal_Content\Sites\...
Credential Cache: ...\Gathering Manager\Secrets\...
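If you want to take a snapshot of this configuration before poking at it (for example, before deleting a crawl rule to see what changes), exporting the Applications key is straightforward; the GUID and output path below are placeholders:

  reg export "HKLM\Software\Microsoft\Office Server\12.0\Search\Applications\<SSP GUID>" C:\Temp\ssp-search-config.reg

Note that the values under Secrets are encrypted, so the export is useful as a reference or rollback point rather than a way to read the credentials.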

So initially I thought, why the registry? In fact the answer makes quite a lot of sense. As you can only have a single indexer for an SSP, it makes sense to place this information in the registry where it can be locked down. From a security point of view, you must compromise the server first to get access to the registry, and then you would need to try and decrypt the secrets. Performance is also better, as it is a local registry read.