Friday, March 13, 2015

GCD Rollback

FileNet configuration is saved in the GCD database. If you have made a change in FileNet that corrupts the configuration, e.g. an LDAP update from FileNet EM, chances are you will not be able to log in again. Since the configuration data is stored as a BLOB in the database, it is not easy to update the existing value. The best option is to roll back to the last good state.

How to rollback

Option 1

If you have a database backup you can get it restored. This depends on the scheduled database backup frequency. Most of the time this works, as chances are that configuration changes are not that frequent.

Option 2

FileNet also maintains the change audit history in the FNGCD table. You can roll back to the previous good state using SQL updates. This document provides the steps in detail.

Friday, October 17, 2014

Performance Data

Extract Performance Data
FileNet System Dashboard retrieves real-time performance data from the Content Engine and Process Engine. System Dashboard provides five views: Summary, Details, Clusters, Alerts and Reports.
The following steps outline how to export the performance data to a CSV or text file.
1. Create a new report using System Dashboard.
   a. Open or create a cluster
   b. Open or create a report template
   c. Edit the report and add the template
   d. Save the report, e.g. report.xml
2. Use the Archive Manager utility to export the data.
   a. java -jar archiver.jar -h -d report cluster.xml
      - cluster.xml is the XML file saved from System Dashboard
      - report is the report directory
   b. This creates a file with the naming pattern "<servername>.<port>@<date-time>"
Sample cluster.xml:

<?xml version='1.0' encoding='UTF-8'?>
3. Export the archive in CSV format.
   a. A sample export source file is provided with System Dashboard 5.0.0 (../Dashboard/samples/
   b. Compile it to exportUtil.jar

Run the following command:

java -jar exportUtil.jar -o out.csv -t report.xml -r Report1 <servername>.<port>@<date-time>

report.xml – the saved report template; it can contain multiple reports
Report1 – the name of the report within report.xml
out.csv – the CSV output file
<servername>.<port>@<date-time> – the file exported in step 2

Friday, August 29, 2014

WAS 8.0

This error occurs on WAS 8.0 Fix Pack 8. You will see a login exception for a single sign-on solution.
It is fixed in WAS 8.0 Fix Pack 9.

Caused by: java.rmi.AccessException: CORBA NO_PERMISSION 0x49424300 No; nested exception is:
        >> SERVER (id=6f1e8c1d, host=server1) TRACE START:
        >>    org.omg.CORBA.NO_PERMISSION:  vmcid: 0x49424000  minor code: 300  completed: No
        >>       at ... (remaining stack frames omitted)

Friday, May 30, 2014

Add Multiple Documents in a single transaction (Batch)

If you have a requirement to upload multiple documents as part of a single transaction, you can use the com.filenet.api.core.UpdatingBatch API.

With this API you first create a batch object that allows adding multiple documents to the batch. Once the batch is complete you call updateBatch() to commit the transaction. If there is any error the transaction is rolled back, and in that case no documents are added.

The API returns BatchItemHandle objects that carry the document IDs and any exception. You can iterate over them and get the ID of each document that was added.

One use case is when you want to add all the documents in a directory. Instead of the client making a call per document in a loop, you can expose an API that takes dir_path as input. This also enables easier error handling on the client side.

If you have storage with a retention policy, e.g. SnapLock, the documents are still not added if an error occurs in the middle of the transaction. SnapLock uses a staging area that allows deletion before the commit is made.

API Usage

createUpdatingBatchInstance(Domain domain, RefreshMode refresh)

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

import com.filenet.api.collection.ContentElementList;
import com.filenet.api.constants.RefreshMode;
import com.filenet.api.core.BatchItemHandle;
import com.filenet.api.core.ContentTransfer;
import com.filenet.api.core.Document;
import com.filenet.api.core.UpdatingBatch;
import com.filenet.api.exception.EngineRuntimeException;

// create the update batch
UpdatingBatch ub = UpdatingBatch.createUpdatingBatchInstance(this.getDomain(), RefreshMode.REFRESH);

// add documents in a loop;, null) queues the pending object and
// returns a BatchItemHandle (null = no property filter).
// pendingDocuments here stands for documents already created and checked in elsewhere.
for (Document doc : pendingDocuments) {, null);

// execute the batch; all documents are committed in a single transaction

// get the list of document IDs and check for exceptions
List<DocumentMetadata> documentList = getDocumentList(ub);

public static List<DocumentMetadata> getDocumentList(UpdatingBatch ub) {
    List<DocumentMetadata> documentList = new ArrayList<DocumentMetadata>();
    Iterator<BatchItemHandle> it = ub.getBatchItemHandles(null).iterator();
    // iterate over the batch items
    while (it.hasNext()) {
        BatchItemHandle batchItem =;
        if (batchItem.hasException()) {
            // this item failed; log the exception and skip it
            EngineRuntimeException thrown = batchItem.getException();
        // get the document object
        Document doc = (Document) batchItem.getObject();
        // document metadata bean
        DocumentMetadata docMetadata = new DocumentMetadata();
        // add the id to the bean
        // mostly a single content element per document
        ContentElementList docContentList = doc.get_ContentElements();
        Iterator iter = docContentList.iterator();
        while (iter.hasNext()) {
            ContentTransfer ct = (ContentTransfer);
            // set the file name of the document in the bean
        // add the bean (id + file name) to the list
    return documentList;

Friday, May 9, 2014

FileNet Deployment Manager

P8 Assets Migration across environments

There are two ways data can be migrated across environments in Content Engine

1. Using Enterprise Manager (Export/Import)

2. Using Deployment Manager

Most of the time we use the Enterprise Manager export/import option to move P8 assets across environments. This works fine if the migration involves a small data set and few environments. However, if you have multiple environments and need to keep the data in sync, Deployment Manager is the better option. This guide will help you get started.

Using Deployment Manager

Deployment Manager is installed as part of the Content Engine client installation.


As detailed in the guide, the first step is to add the details of every environment. This should include both the source and destination environments.


Add the Content Engine connection details for each environment and test the connection.

Object Stores

Once you have added the environment details, the next step is to map all object stores that will be used for export/import. Save the configuration. The location of the configuration file (XML) is relative to the Deployment Manager working folder.

Export Assets (Source Environment)

You should perform the following on the source environment. Right-click Export Manifest, then click New Export Manifest. Give the export a name and save.

Double-click the export manifest, then click the '+' symbol (Add Assets to the export manifest) to add the assets you need to export. If you are moving documents that are 'unfiled', Deployment Manager provides an option to select unfiled documents. After you click unfiled documents, wait for some time; it may take a while to load all the documents. Once you have all the required assets, add them to the export manifest and save.

Note: There is no way to filter the unfiled documents using a date range. You might run into memory issues if you have a large set of documents.

You can filter the export using the export options (edit include options). These are the same options as in the Enterprise Manager export.

Once all the assets are retrieved, right-click on the export manifest and click Export. This creates the export data set. You can create multiple export files for the same object store or from different object stores.

Note:  This export manifest is not compatible with FileNet Enterprise Manager export manifest.


Import Assets (Destination Environment)

In this step we first convert the assets from the source to the destination environment. The conversion copies the data over with the new configuration. The converted assets are then imported into the destination environment.

Object Stores Mapping

If your source and destination have many object stores, you can do selective mapping. Deployment Manager does not have an option to ignore an object store mapping. If you don't want a source object store to map to a destination object store, you can either map it to itself or leave the destination object store empty.

Security Mapping

Deployment Manager will map source and destination security principals based on the CE connection. If your source and destination have the same LDAP users and groups, the mapping is done by the tool. You don't need a mapping file, as is required by Enterprise Manager.

Deployment Manager will display the mapping and list the security users/groups that did not match. It also provides a UI to select a different user/group for the mapping. Once you have fixed the mapping you can save the mapping file. This mapping file will be used for asset conversion and import.

Note: If you find a large number of mismatched security assets, it is better to update the file manually instead of through the UI. The best way is to map one of the security users/groups from the screen UI and save it, then open the configuration XML file and map the rest of the security principals manually. With these kinds of user/group mismatches, the best option is usually to map all mismatches to the same user (p8admin).

Deployment Manager also has a feature to import documents with their system properties. This includes the 'timestamp' of when the document was created and the original owner of the document. This is not possible via Enterprise Manager. To achieve it you should also enable the required security on the destination object store (object store security -> add the user with the right to update system properties).

Perform the import of documents into the destination environment in this sequence:

1. Convert Assets

The assets are converted with the new configuration. This step actually copies the data into a new directory with that configuration.

2. Analyze

This step validates the new configuration before the actual import.

3. Import

This is the final step. It imports the data from the converted assets into the target object store.