Thursday, August 21, 2008

7 Documentum Jobs

Job Descriptions
1. DMClean: removes deleted and orphaned objects from the Docbase.
2. DMFilescan: removes deleted and orphaned content files from the file system.
3. LogPurge: removes server and session logs from the Docbase and file system.
4. ConsistencyChecker: runs 77 referential integrity checks on the Docbase.
5. UpdateStats: updates database table statistics and repairs fragmented tables.
6. QueueMgt: deletes dequeued Inbox items from the Docbase.
7. StateOfDocbase: produces a report of the repository environment and statistics about object types.

Difference between Simple and Advanced Search

In Webtop, the DQL issued by Simple Search and the DQL issued by Advanced Search are different. For example, if I search for a document called "always on" by entering it as such in the Simple Search text box, the following DQL is issued:

SELECT ALL r_object_type,r_modify_date,r_object_id,r_lock_owner,i_vstamp,owner_name,r_version_label,i_is_reference,score,a_content_type,object_name,r_is_virtual_doc,r_link_cnt,r_content_size FROM dm_document SEARCH TOPIC ' ("*always*","*on*")'


For the Advanced search, if I carry out the same search the following DQL is issued:

SELECT ALL r_object_type,r_lock_owner,r_object_id,owner_name,i_is_reference,a_content_type,object_name,r_is_virtual_doc,r_link_cnt,lower(object_name) AS lowerobjname,r_content_size,r_version_label,r_modify_date FROM dm_document WHERE FOLDER('/Your Folder',DESCEND) AND lower(object_name) = lower('always on') ORDER BY 10 ASC,3 ASC


The difference between the two searches is by design. Simple Search matches the search words against indexed text and properties such as Filename, Descriptive name, Category and Author. Spaces or commas can be used to separate text/keywords, and Webtop will search for all separated chunks of text using Verity queries, such as AND or OR. For more information regarding Simple Search, please refer to the "Using a Simple Search" section of the Webtop 5.2x User Guide.

In our example, a client searching for "always on" would be considered to be searching for keywords "always" AND "on". This is a non-issue with Advanced Search as it is a less ambiguous search.


Another example:
If I have a document called "NEW Vacation Request Form.doc" in my home cabinet, I may carry out the following searches:
1) Search for "NEW Vacation Request Form.doc" --> returns the document (full object_name)
- SEARCH TOPIC ' ("*NEW Vacation Request Form.doc*")'

2) Search for "NEW Vacation Request Form" --> returns the document (partial object_name)
- SEARCH TOPIC ' ("*NEW Vacation Request Form*")'

3) Search for NEW Vacation Request Form (no quotes) --> does not return the document, because the phrase is split into separate keywords
- SEARCH TOPIC ' ("*NEW*","*Vacation*","*Request*","*Form*")'

ACCRUE selects documents that include at least one of the search elements you specify, e.g. (computer, laptop); the more search elements that are present, the higher the score. As you can see, ACCRUE is used to search content, not object_name.

The suggested workaround is to enclose the words in double quotes in Simple Search.
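To make the tokenizing behavior concrete, here is a small Python sketch that approximates how a Simple Search string becomes a Verity SEARCH TOPIC clause. This is an illustrative approximation, not Webtop's actual code: unquoted input is split on spaces/commas into separate wildcard terms (combined via ACCRUE), while a double-quoted string stays a single phrase.

```python
import re

def search_topic_clause(query: str) -> str:
    """Roughly mimic how Webtop Simple Search builds a Verity SEARCH TOPIC
    clause (illustrative sketch): a double-quoted query stays one phrase;
    an unquoted one is split on spaces/commas into separate wildcard terms."""
    if len(query) > 1 and query.startswith('"') and query.endswith('"'):
        terms = [query[1:-1]]
    else:
        terms = re.split(r"[\s,]+", query.strip())
    body = ",".join('"*{}*"'.format(t) for t in terms)
    return "SEARCH TOPIC ' ({})'".format(body)

# Unquoted: four separate keywords, so the document is not matched by name.
print(search_topic_clause("NEW Vacation Request Form"))
# Quoted workaround: one phrase matching the full object_name.
print(search_topic_clause('"NEW Vacation Request Form"'))
```

Running the first call reproduces the ACCRUE-style clause shown in example 3 above; the second reproduces example 2.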

Remove select Object from DocApp/Type

“Remove select Object from DocApp/Type” removes the object from the DocApp, not from the Docbase. If you want to remove objects from the Docbase, check out the DocApp in Documentum Application Builder (DAB), select the object, and go to the menu item Edit > Delete object(s) from Docbase.

Or

TBO objects reside in the Docbase in the location "/System/Modules/TBO" with the object type "dmc_module". You can uninstall the TBOs by destroying your custom TBO objects in the Docbase:

select r_object_id, r_object_type from dm_sysobject where folder('/System/Modules/TBO');

User is getting UCF_E_SPECIFY_APPLICATION: A valid application does not exist

Symptoms

The user gets "UCF_E_SPECIFY_APPLICATION: A valid application does not exist" when trying to view or edit a specific file, even though the application exists on the machine.

Cause

There are no registry entries for viewing and editing files with the specific extension.

Resolution
The Win32 Native Library used by UCF to identify and launch applications on the client relies on entries in the Windows Registry to discover file-type/application associations. The sequence for identifying which application to use for a given type is as follows:
• Look for entries in HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts matching the type
• For each sub-key found, the program identifier and name are recorded (e.g. HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts\.asp\OpenWithProgids\aspfile)
• For each of the sub-keys, the program identifier is searched for in HKEY_CLASSES_ROOT (e.g. HKEY_CLASSES_ROOT\aspfile)
• If there is a shell sub-key, then the 'shell\open\command' and 'shell\edit\command' values are used for launching the application. If only one is available, it is used for both viewing and editing.
For an example of a key that works, check HKEY_CLASSES_ROOT\Excel.Sheet.8\shell\Open\command.
So, as an immediate solution, the user can choose a viewer/editor in the WDK client application, or provide sub-keys under the HKEY_CLASSES_ROOT\aspfile registry entry to supply the shell\open\command and shell\edit\command values.
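The lookup sequence above, including the "only one command available" fallback, can be sketched in Python. A plain dict stands in for HKEY_CLASSES_ROOT here; the real UCF Win32 library reads the registry, and the keys and commands below are made-up examples.

```python
# Dict standing in for HKEY_CLASSES_ROOT; keys and commands are hypothetical.
REGISTRY = {
    r"HKCR\Excel.Sheet.8\shell\Open\command": r'EXCEL.EXE /e "%1"',
    r"HKCR\aspfile\shell\open\command": r'notepad.exe "%1"',
    # note: aspfile has no shell\edit\command value
}

def launch_commands(progid):
    """Return (view_cmd, edit_cmd) for a program identifier; when only one
    of open/edit is present it is used for both, mirroring the rule above."""
    open_cmd = (REGISTRY.get(r"HKCR\%s\shell\open\command" % progid)
                or REGISTRY.get(r"HKCR\%s\shell\Open\command" % progid))
    edit_cmd = REGISTRY.get(r"HKCR\%s\shell\edit\command" % progid)
    if open_cmd and not edit_cmd:
        edit_cmd = open_cmd          # only open available: used for both
    if edit_cmd and not open_cmd:
        open_cmd = edit_cmd          # only edit available: used for both
    if not open_cmd:
        # nothing registered: this is the UCF_E_SPECIFY_APPLICATION case
        raise LookupError("UCF_E_SPECIFY_APPLICATION: no application for " + progid)
    return open_cmd, edit_cmd

print(launch_commands("aspfile"))
```

When no sub-key exists at all, the sketch raises the same error the user sees, which is why adding the registry entries (or picking a viewer/editor in the WDK client) resolves the issue.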

How do ACLs work on folders

This has nothing to do with folder security, which is a different topic altogether.

A permission set, or ACL, on a folder controls whether that folder is displayed in the Desktop and Intranet clients. It has nothing to do with whether a user can make changes to the documents within the folder; that is determined by the ACLs assigned to each document. Folders are visible to all users with BROWSE permission, and a user with NONE cannot see the folder from their client. A folder does not protect the documents inside it: users with access to those documents can still search for, find, and change them.

Groups Vs Roles

Introduction

Whether you're a developer new to the Web Development Kit (WDK) or you're a system administrator trying to be efficient in your Documentum user management, you may have wondered why there are both groups and roles. This article explores the differences between them, and provides some guidance on how to decide when to use a group or a role.

In the Beginning, there were Groups

Groups are a convenient way of aggregating individual users, thereby simplifying the process of managing permissions. Specifically, groups can be used in the following ways:

In conjunction with an Access Control List (ACL) to assign object-level permissions to all members of the group

As the owner of a document to allow all members of the group to have ownership

As the performer of a workflow task to allow that task to be delivered to all members of the group (or optionally the first to receive it)

The Content Server Fundamentals manual provides the following example of using groups to control permissions: you might set up a group called engr and assign version permission to the engr group in an ACL applied to all engineering documents. All members of the engr group then have version permission on engineering documents.

Groups contain the following properties:

Name: the name of the Docbase group

Class: the type of group (added in Content Server 5 - more on this later)

Email address: the address for the group. If no value is entered, the group email address defaults to the group name.

Owner: the name of the Docbase owner who owns this group (most groups are owned by the Docbase owner, but normal users can have their own groups as well, although this feature is rarely used)

Administrator: specifies a user or group, in addition to a superuser and the group owner, who can modify the group (this allows a manager to maintain group membership without having system administration privileges to the rest of the system)

Alias set: the default alias set for the group (used in certain cases to resolve an Alias Set's scope)

Group is Global: indicates that the group is displayed only in the governing Docbase of a federation

Description: a user-friendly description of the group

Is Private: defines whether the group is private or public. Public groups are visible to all users, while a private group is visible only to the owner of the group.

And to make your user management even more flexible, a group can belong to another group, facilitating a hierarchical user structure within which individual users and groups can be combined in the same group.
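Since groups can nest, the set of users a group effectively contains is the transitive closure of its membership. A small Python sketch (with hypothetical group names) makes that flattening explicit:

```python
# Hypothetical groups: engr contains one user plus a nested group.
GROUPS = {
    "engr": ["alice", "hw_team"],
    "hw_team": ["bob", "carol"],
}

def effective_members(group, groups):
    """Flatten a (possibly nested) group into its individual users,
    guarding against membership cycles."""
    members, seen = set(), set()

    def walk(g):
        if g in seen:
            return
        seen.add(g)
        for m in groups.get(g, []):
            if m in groups:
                walk(m)          # member is itself a group: recurse
            else:
                members.add(m)   # member is an individual user

    walk(group)
    return members

print(sorted(effective_members("engr", GROUPS)))
```

So granting version permission to engr in an ACL effectively grants it to alice, bob and carol, which is exactly the convenience the Content Server Fundamentals example describes.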

Along Came Roles

The release of Content Server 5 ushered in the era of roles. A role is a special type of group. Prior to roles, there was only one type of group, now called a standard group. This type of group is used to assign object-level permissions and participation in workflows as described above.

Version 5 added two new types of groups, a role group and a domain group:

Role group: this type of group is assigned a particular role within a client application domain. As with standard groups, a role group may contain a set of users, other groups, or both. A role group is created by setting the group_class attribute to role and the group_name attribute to the role name.

Domain group: this represents a particular client domain. A domain group contains a set of role groups, corresponding to the roles recognized by the client application. Note that if you create a role as a domain, it is listed in the Groups list page, not the Roles list page.

Since a role is a specialized group, roles contain the same nine properties as groups, with restrictions on two of the properties (name and class) as discussed above.

Summary of the Differences

We've been focusing on the fact that roles are just a special type of group. So let's summarize by highlighting the differences:

Groups are used for object permissions; roles are used for application or function permissions.

Again we turn to the Content Server Fundamentals manual for an example of using roles and domains to control function permissions. Suppose you write a client application called report_generator that recognizes three roles: readers (users who read reports), writers (users who write and generate reports), and administrators (users who administer the application). To support the roles, you create three role groups, one for each role. The group_class is set to role for these groups, and the group names are the names of the roles: readers, writers, and administrators. Then, create a domain group by creating a group whose group_class is domain and whose group name is the name of the domain. In this case, the domain name is report_generator. The three role groups are the members of the report_generator domain group.

When a user starts the report_generator application, the application is responsible for examining its associated domain group and determining the role group to which the user belongs. The application is also responsible for ensuring that the user performs only the actions allowed for members of that role group. Content Server does not enforce client application roles, since this responsibility is delegated to the application via the domain group.

Once the roles exist in your Docbase, they can be reused across applications. Just create a new domain group and add any of the existing role groups (readers, writers, and administrators in the example above) to the new domain group.
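Because Content Server leaves role enforcement to the client, the application's job at startup reduces to a lookup: find the domain group, then find which of its role groups the user belongs to. The following Python sketch models that check for the report_generator example (the dicts are a hypothetical stand-in for repository group data):

```python
# Hypothetical repository state for the report_generator example:
# group_class per group, and direct members per group.
GROUP_CLASS = {"report_generator": "domain",
               "readers": "role", "writers": "role", "administrators": "role"}
MEMBERS = {"report_generator": ["readers", "writers", "administrators"],
           "readers": ["dave"], "writers": ["erin"], "administrators": ["frank"]}

def role_of(user, domain):
    """Determine which role group within a domain group a user belongs to.
    As noted above, the client application (not Content Server) performs
    this check and enforces the resulting function permissions."""
    if GROUP_CLASS.get(domain) != "domain":
        raise ValueError(domain + " is not a domain group")
    for role in MEMBERS.get(domain, []):
        if GROUP_CLASS.get(role) == "role" and user in MEMBERS.get(role, []):
            return role
    return None  # user holds no role in this application

print(role_of("erin", "report_generator"))
```

Reusing the roles in another application is then just a matter of pointing a new domain group at the same role groups.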

Roles in Web Publisher

Web Publisher is already configured with the roles that are defined in the Web Publisher doc app:

wcm_content_author_role

wcm_content_manager_role

wcm_web_developer_role

wcm_administrator_role

Web Publisher uses these roles to determine which features are available to the user. For example, content authors are not allowed to see cabinets and folders, while content managers can. Content managers are not allowed to access administrative features, while administrators can. These out-of-the-box roles are ready to be used within any Web Publisher application, and they can be extended if needed. Plus, you can create new roles as needed.

Roles in WebTop

Webtop and WDK components can be configured to use any role that is defined in the associated Docbase. If no roles are configured in the Docbase, or if the Docbase is pre-5.1, Webtop defaults to using the client capability model in which four client capability levels can be set as client_capability attributes on the dm_user object in the Docbase:

consumer

contributor

coordinator

administrator

If you create role groups in the Docbase, you can create roles named consumer, contributor, coordinator, and administrator. Your custom roles can contain these roles (or vice-versa), so that Webtop and WDK components will not need to be reconfigured for your custom roles.

What can cause the error: "DM_DOCBROKER_E_CONNECT_FAILED"?

ID: esg5033
SOLUTION

You may see the following errors when trying to log into the Docbase: the Docbase is up, but no client can connect to it. IAPI on the server machine can also give the same errors:

[DM_DOCBROKER_E_NETWORK_ERROR]error: "An error occured performing a network operation: (Unknown error code 112 (_nl_error_ = 10061)). Network specific error: (Winsock error: connection refused; server probably not running)."

[DM_DOCBROKER_E_CONNECT_FAILED]error: "Unable to connect to DocBroker. Please check your dmcl.ini file for a correct host. Network address: (INET_ADDR: family: 2, port: 1489, host: 199.82.105.56 (199.82.105.56, c7526938))."

[DM_DOCBROKER_E_REQUEST_FAILED_E]error: "The Docbroker request failed."

The customer with the problem has the following environment:

1. Docpage server is on a Solaris box.

2. No DNS service used.

3. The server.ini file indicates that the server projects to machine named 'ods_dms'

4. Client connects from an NT machine and uses the server machine's IP address instead of the hostname in its dmcl.ini file:

[DOCBROKER_PRIMARY]
host = 199.82.105.56
(tried specifying both the IP address and the hostname, but both failed with the same errors)

5. From the client machine, telnet and ping using the server machine's IP address 199.82.105.56 both succeed.

Upon running 'dmqdocbroker -i' on the server machine and obtaining the docbase map, the following was output:

Docbroker network address : INET_ADDR: 02 5d1 ac10490a ods_dms 127.0.0.1

where ods_dms is the host machine name.

Checking the /etc/hosts file, the customer only has 1 entry:

127.0.0.1 localhost

RESOLUTION:

1. Add a second entry to the /etc/hosts file for the actual IP address of the host machine, since 127.0.0.1 is only the local loopback address. In this case the new entry looks like:

199.82.105.56 ods_dms loghost

The above entry should exist by default if the OS had been set up properly. In this case, we suspect the OS had not been properly set up, causing the second entry to be missing.

2. Kill the docbase server process with Unix kill -9. Shutting down using the standard dm_shutdown_ods_dms script fails because IAPI cannot connect to the docbase at all.

3. Run dm_stop_docbroker to shutdown the docbroker process

4. Run dm_launch_docbroker to bring up the docbroker

5. Run dmqdocbroker to verify that the docbroker picks up the correct entry from the /etc/hosts file. The correct output looks like:

Docbroker network address : INET_ADDR: 02 5d1 ac10490a ods_dms 199.82.105.56

6. Run dm_start_ods_dms to start up the docbase.

Now all client connections succeed, because the docbroker's docbase map contains the correct network address.
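The root cause above is easy to test for: the hosts file mapped the hostname only to the 127.0.0.1 loopback, so the docbroker published an unusable address. A small Python sketch of that check (parsing /etc/hosts-style text, sample entries taken from this case):

```python
def resolves_externally(hosts_text, hostname):
    """Scan /etc/hosts-style text for an entry that maps `hostname` to a
    non-loopback address -- the condition the docbroker needs in order to
    publish a usable network address in its docbase map."""
    for line in hosts_text.splitlines():
        fields = line.split("#", 1)[0].split()  # drop comments, tokenize
        if len(fields) >= 2 and hostname in fields[1:]:
            if not fields[0].startswith("127."):
                return True
    return False

broken = "127.0.0.1 localhost\n"
fixed = broken + "199.82.105.56 ods_dms loghost\n"
print(resolves_externally(broken, "ods_dms"), resolves_externally(fixed, "ods_dms"))
```

With only the loopback line present the check fails, matching the bad docbase map output; after adding the second entry it passes, matching the corrected output.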

How do I correct the errors and warnings reported by this consistency checker?

ID: esg29545
Related Bugs:
SOLUTION

More specifically, the question is: what is the impact of the following errors that still exist, and how do we clean them up?

Here are the steps to clean them up:

NOTE: PLEASE ENSURE THAT YOU HAVE A DATABASE BACKUP WHICH YOU CAN USE TO RECOVER THE SYSTEM IN THE UNLIKELY EVENT OF A CORRUPTION

You must log on to the database as the Docbase owner and run the SQL queries:

--- QUERY---: select a.r_object_id as p1, a.i_chronicle_id as p2 from dm_sysobject_s a where a.i_chronicle_id <> '0000000000000000' and not exists (select * from dm_sysobject_s b where b.r_object_id = a.i_chronicle_id)

WARNING CC-0023: Sysobject with r_object_id '09006e7880082141' references a non-existent i_chronicle_id '09006e788008211f'

WARNING CC-0023: Sysobject with r_object_id '09006e788008214b' references a non-existent i_chronicle_id '09006e788008211f'

Problem:

For some reason the root version no longer exists. Normally, we can delete this object because it no longer has a root document.

Steps:

- Dump the objects, check which versions they are, and examine the whole version tree

- In SQL, change i_chronicle_id to the lowest remaining version and make that version the root


In SQL

SQL> update dm_sysobject_s set i_chronicle_id='<whichever is the lowest version>' where i_chronicle_id='09006e788008211f';

SQL> commit;
OR

SQL>delete from dm_sysobject_r where r_object_id ='09006e7880082141'

SQL>delete from dm_sysobject_s where r_object_id ='09006e7880082141'

--- QUERY---: select a.r_object_id as p1, a.i_antecedent_id as p2 from dm_sysobject_s a where a.i_antecedent_id <> '0000000000000000' and not exists (select * from dm_sysobject_s b where b.r_object_id = a.i_antecedent_id)

WARNING CC-0024: Sysobject with r_object_id '09006e7880082141' references a non-existent i_antecedent_id '09006e788008211f'

Problem:

In this particular case the object referenced by i_antecedent_id (the same missing i_chronicle_id as above) is gone, or will be deleted by the step above.

Steps:

- Update i_antecedent_id to 16 zeros


SQL> update dm_sysobject_s set i_antecedent_id='0000000000000000' where i_antecedent_id='09006e788008211f';

SQL> commit;

--- QUERY---: select a.r_object_id as p1, a.r_workflow_id as p2 from dmi_workitem_s a where not exists (select b.r_object_id from dm_workflow_s b where b.r_object_id = a.r_workflow_id)

WARNING CC-0043: dmi_workitem object with r_object_id '4a006e7880000119' references non-existent dm_workflow object with id '4d006e788000010c'

WARNING CC-0043: dmi_workitem object with r_object_id '4a006e788000011e' references non-existent dm_workflow object with id '4d006e788000010e'


Problem:


The workitem is pointing to a non-existent workflow instance, probably related to a bug.


Steps:


- Dump the dmi_workitem object and set r_workflow_id to 16 zeros

- Get the r_queue_item_id value, set the queue item's item_id to 16 zeros and its delete_flag to true (this also resolves warning CC-0042)

- dm_QueueMgt will clean up the queue item; then destroy the dmi_workitem object


Workitem:

API> fetch,c,4a006e7880000119


API> set,c,4a006e7880000119,r_workflow_id

SET> 0000000000000000


API> save,c,4a006e7880000119


Queue_item


API> fetch,c,1b006e788001d911


API> set,c,1b006e788001d911,item_id

SET> 0000000000000000


API> set,c,1b006e788001d911,delete_flag

SET> 1


API> save,c,1b006e788001d911


Workitem:


API> fetch,c,4a006e7880000119


API> destroy,c,4a006e7880000119



--- QUERY---: select a.r_object_id as p1, a.r_workflow_id as p2 from dmi_package_s a where not exists (select b.r_object_id from dm_workflow_s b where b.r_object_id = a.r_workflow_id)

WARNING CC-0045: dmi_package object with r_object_id '49006e7880000116' references non-existent dm_workflow object with id '4d006e788000010c'

WARNING CC-0045: dmi_package object with r_object_id '49006e7880000118' references non-existent dm_workflow object with id '4d006e788000010e'


Problem:


dmi_package is pointing to a non-existing workflow


Steps:


- Set the r_workflow_id to 16 zeros

- Destroy the dmi_package object


API> fetch,c,49006e7880000116


API> set,c,49006e7880000116,r_workflow_id

SET> 0000000000000000


API> save,c,49006e7880000116


API> fetch,c,49006e7880000116


API> destroy,c,49006e7880000116



--- QUERY---: select ws.r_object_id as p1, pr.r_component_id as p2 from dm_workflow_s ws, dm_workflow_r wr, dmi_package_s ps, dmi_package_r pr where ws.r_object_id = wr.r_object_id AND wr.r_act_state != 2 AND pr.r_component_id > '0000000000000000' AND ws.r_runtime_state IN (1,3) AND ws.r_object_id = ps.r_workflow_id AND wr.r_act_seqno = ps.r_act_seqno AND ps.r_object_id = pr.r_object_id AND not exists (select a.r_object_id from dm_sysobject_s a where a.r_object_id = pr.r_component_id)

WARNING CC-0046: dm_workflow object with r_object_id '4d006e7880002501' references non-existent sysobject with r_component_id '09006e788000dffd'

WARNING CC-0046: dm_workflow object with r_object_id '4d006e7880000d09' references non-existent sysobject with r_component_id '09006e7880006e60'


Problem:


The r_component_id of the dmi_package references a sysobject that no longer exists.


Steps:


- Set r_component_id and r_component_chron_id (the chronicle ID of the object identified at the corresponding index position in r_component_id) to 16 zeros and destroy the package

- Or set r_component_id to an existing sysobject, with its chronicle id in r_component_chron_id


API> retrieve,c,dmi_package where r_workflow_id='4d006e7880002501'

...

490a636280000900

API> set,c,490a636280000900,r_component_id

Set >0000000000000000

...

Ok

API> set,c,490a636280000900,r_component_chron_id

Set >0000000000000000

...

Ok

API> save,c,490a636280000900

...

Ok



OR



API> set,c,490a636280000900,r_component_id

Set >090a636280004c85

...

Ok

API> set,c,490a636280000900,r_component_chron_id

Set >090a636280004655

...

Ok

API> save,c,490a636280000900

...

Ok



Check ACLs with non-existent users

WARNING CC-0007: ACL object with r_object_id '45006e5880000507' has a non-existent user 'test1'

WARNING CC-0007: ACL object with r_object_id '45006e5880000d00' has a non-existent user 'test1'


Problem:


The ACL contains a non-existent user in the r_accessor_name attribute.


Steps:


- Dump the acl

- Check the index for the non-existing user

- Using the remove API call, remove the user at that index, and make sure you also remove the same index from the r_accessor_permit, r_accessor_xpermit and r_is_group attributes. On Content Server 4.2.x there is no r_accessor_xpermit.


In API:


API> fetch,c,450a636280002512

...

Ok

API> dump,c,450a636280002512

...

USER ATTRIBUTES


object_name : peoplesoft_acl

description : peoplesoft permission set

owner_name : Miguel_Test52

globally_managed : F

acl_class : 0


SYSTEM ATTRIBUTES


r_object_id : 450a636280002512

r_is_internal : F

r_accessor_name [0]: dm_world

[1]: dm_owner

[2]: dmadmin

[3]: test1

r_accessor_permit [0]: 3

[1]: 7

[2]: 7

[3]: 7

r_accessor_xpermit [0]: 0

[1]: 0

[2]: 3

[3]: 3

r_is_group [0]: F

[1]: F

[2]: F

[3]: F

r_has_events : F


APPLICATION ATTRIBUTES



INTERNAL ATTRIBUTES


i_is_replica : F

i_vstamp : 1


In this case the index 3 contains the user that we need to remove (test1)


API> remove,c,450a636280002512,r_accessor_name[3]

...

Ok

API> remove,c,450a636280002512,r_accessor_permit[3]

...

Ok

API> remove,c,450a636280002512,r_accessor_xpermit[3]

...

Ok

API> remove,c,450a636280002512,r_is_group[3]

...

Ok

API> save,c,450a636280002512

...

Ok

API> dump,c,450a636280002512

...

USER ATTRIBUTES


object_name : peoplesoft_acl

description : peoplesoft permission set

owner_name : Miguel_Test52

globally_managed : F

acl_class : 0


SYSTEM ATTRIBUTES


r_object_id : 450a636280002512

r_is_internal : F

r_accessor_name [0]: dm_world

[1]: dm_owner

[2]: dmadmin

r_accessor_permit [0]: 3

[1]: 7

[2]: 7

r_accessor_xpermit [0]: 0

[1]: 0

[2]: 3

r_is_group [0]: F

[1]: F

[2]: F

r_has_events : F


APPLICATION ATTRIBUTES



INTERNAL ATTRIBUTES


i_is_replica : F

i_vstamp : 2
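The key point in the API session above is that the repeating ACL attributes are parallel lists: the same index must be removed from every one of them, or the permits and flags shift out of alignment. A Python sketch of that invariant, using the ACL from the dump:

```python
def remove_accessor(acl, user, has_xpermit=True):
    """Remove `user` from a dict of parallel repeating ACL attributes,
    deleting the SAME index from every list, as the sequence of remove
    API calls above does (pass has_xpermit=False for CS 4.2.x, which
    has no r_accessor_xpermit)."""
    i = acl["r_accessor_name"].index(user)
    attrs = ["r_accessor_name", "r_accessor_permit", "r_is_group"]
    if has_xpermit:
        attrs.append("r_accessor_xpermit")
    for attr in attrs:
        del acl[attr][i]
    return acl

# Values taken from the ACL dump above; test1 sits at index 3.
acl = {"r_accessor_name": ["dm_world", "dm_owner", "dmadmin", "test1"],
       "r_accessor_permit": [3, 7, 7, 7],
       "r_accessor_xpermit": [0, 0, 3, 3],
       "r_is_group": ["F", "F", "F", "F"]}
remove_accessor(acl, "test1")
print(acl["r_accessor_name"])
```

After the call, every list is three entries long, matching the second dump in the session above.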





WARNING CC-0059: The dm_sysobject with id '0900f6ae80002980' references a non-existent policy object with id '46004a1a80003dae'


Either update the r_policy_id for this object to 0000000000000000, or insert a stub row for the missing policy object:


SQL> insert into dm_policy_s values('46004a1a80003dae',2,NULL,'0000000000000000',NULL);


SQL> commit;



--- QUERY---: select a.name,a.s_index_attr from dm_type_s a where not exists(select b.r_object_id from dmi_index_s b where b.r_object_id=a.s_index_attr)


WARNING CC-0074: Type object for type 'dm_folder' references a non-existent dmi_index object for _s table with r_object_id '0000000000000000'

WARNING CC-0074: Type object for type 'dm_document' references a non-existent dmi_index object for _s table with r_object_id '0000000000000000'

WARNING CC-0074: Type object for type 'dm_note' references a non-existent dmi_index object for _s table with r_object_id '0000000000000000'

WARNING CC-0074: Type object for type 'dmi_dist_comp_record' references a non-existent dmi_index object for _s table with r_object_id '0000000000000000'

WARNING CC-0074: Type object for type 'dm_query' references a non-existent dmi_index object for _s table with r_object_id '0000000000000000'

WARNING CC-0074: Type object for type 'dm_script' references a non-existent dmi_index object for _s table with r_object_id '0000000000000000'

WARNING CC-0074: Type object for type 'dm_smart_list' references a non-existent dmi_index object for _s table with r_object_id '0000000000000000'

WARNING CC-0074: Type object for type 'dm_procedure' references a non-existent dmi_index object for _s table with r_object_id '0000000000000000'


Problem:


The type is pointing to a non-existent index in dmi_index.



This is a harmless warning you can ignore. Bug #65798 is logged on this issue and fixed in Content Server 5.3. If you really don't want to see this warning, you can fix it as follows:


EXAMPLE;


WARNING CC-0074: Type object for type 'dm_folder' references a non-existent dmi_index object for _s table with r_object_id '0000000000000000'


Steps:


- Confirm the value of the index by doing the following in SQLplus:


SQL> select S_INDEX_ATTR from dm_type_s where name='dm_folder';


S_INDEX_ATTR

----------------

1f001a9880000142


- Check the indexes for this type:


SQL> col index_name format a20

SQL> col column_name format a20

SQL> col column_position format 99999999990

SQL> select index_name,column_name,column_position from user_ind_columns where table_name like 'DM_FOLDER%';


INDEX_NAME COLUMN_NAME COLUMN_POSITION

-------------------- -------------------- ---------------

D_1F001A9880000143 R_OBJECT_ID 1

D_1F001A9880000143 I_POSITION 2

D_1F001A9880000015 R_FOLDER_PATH 1

D_1F001A9880000016 I_ANCESTOR_ID 1

D_1F001A9880000016 R_OBJECT_ID 2

D_1F001A9880000142 R_OBJECT_ID 1


6 rows selected.


SQL>


As you can see, the index name matches "D_" plus the value of S_INDEX_ATTR (the last row in the example).


You need to check if this index exists in dmi_index_s and _r


SQL> select count(*) from dmi_index_s where r_object_id='1f001a9880000142';


COUNT(*)

----------

1


SQL> select count(*) from dmi_index_r where r_object_id='1f001a9880000142';


COUNT(*)

----------

1


SQL>


The object should not exist in either of these tables; if it exists in one of them, you must delete it:


SQL>delete from dmi_index_s where r_object_id='1f001a9880000142';


SQL>delete from dmi_index_r where r_object_id='1f001a9880000142';


OR


- Set S_INDEX_ATTR to 16 zeros, then recreate the index through the API with the unique attribute R_OBJECT_ID as stated above. Then update S_INDEX_ATTR in dm_type_s to the correct value (the index name without the D_ prefix), and that will fix the inconsistency.
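The naming relationship used in the checks above is simple enough to capture in two helper functions (a sketch of the convention shown in the SQL*Plus example, using the ids from that example):

```python
def index_table_name(s_index_attr):
    """Database index name corresponding to a dmi_index object id:
    'D_' plus the upper-cased id, per the SQL*Plus example above."""
    return "D_" + s_index_attr.upper()

def s_index_attr_from(index_name):
    """Inverse mapping: strip the D_ prefix and lower-case the id to get
    the value to write back into dm_type_s.s_index_attr."""
    return index_name[2:].lower()

print(index_table_name("1f001a9880000142"))
```

This is why, when repairing S_INDEX_ATTR, you take the database index name and drop the D_ prefix to get the correct attribute value.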

STEPS FOR CHANGING THE INSTALL OWNER PASSWORD

1. Log in to the appropriate Content Server box using telnet.

NOTE: since our application is load balanced, we need to make this change on all three Content Servers.

2. Stop all the process and services (follow the start/stop document).
3. Stop the Index Server & Index Agents.

NOTE: Stop all the server-side Documentum processes/services, e.g. Docbrokers, docbases, method servers, thumbnail server, etc. If the 2DAM docbase resides on the same Content Server, it is safer to also stop the DTS/MTS boxes.

4. In the /lapps/docadm location, type the command “passwd”.
5. It will ask for the current password; enter it and press Enter.
6. It will ask for the new password; type the new password and press Enter.
7. Repeat steps 1 to 6 for the other Content Servers.
8. Restart all the Documentum server-side processes as per the start/stop document on all three Content Servers.
9. Edit DSCredentials.xml in the cabinets/gdm/customizations folder by logging in through DA.
10. Modify it with the new installation owner password.

NOTE: The new GDM environments (the GDM Clean and GDM_PT docbases) do not require updating DSCredentials.xml, since the installation owner account credentials are not hard coded there. But to be safe, please check when this operation is performed.

11. Update business.xml using DA in the cabinets/gdm/ location (if available).

NOTE: The new GDM environments (the GDM Clean and GDM_PT docbases) do not require updating business.xml, since the installation owner account credentials are not hard coded there. But to be safe, please check when this operation is performed.


12. Repeat steps 8 and 9 for all 3 applications.

NOTE: log in to each docbase using DA and perform the operations (if applicable).

13. Do a complete health check of all 4 applications.
14. Select any of the default jobs and check its status by running it manually.

Note: the Consistency Checker job is the safe option for verifying the password change, as it won't create/modify/update/delete any objects.

NOTE: As we are unable to test the LDAP job in PSIC, we need to make sure that it is running in production when we perform this operation.

Stop and Restart Application process

Sequence of Events to Stop the System
1. Log into Windows hosts to stop all of the rendition agent services (MTS/DTS)
2. Log into Index Agent host, to the Admin console for each agent, and Stop the indexing process for each agent in the console
http://:9081/IndexAgent1/login.jsp
Click on the Agent Status action link named “Stop”.
3. From the command line of the primary Index Server host, stop the Index Agent
a. $ cd $DOCUMENTUM/share/IndexAgents
b. $ ./shutdownAllIndexAgents.sh
4. From the command line of the secondary Index Server, stop the FAST index server
a. $ cd $DOCUMENTUM/fulltext/IndexServer/bin
b. $ ./shutdown.sh
c. To verify the shutdown, you can run the “nctrl systatus” command.
d. To clear the DMCL cache, navigate to “/tmp/dmcl” and run the “rm -rf *” command
5. From the command line of the primary Index Server, stop the FAST index server
a. $ cd $DOCUMENTUM/fulltext/IndexServer/bin
b. $ ./shutdown.sh
c. To verify the shutdown, you can run the “nctrl systatus” command.
d. To clear the DMCL cache, navigate to “/tmp/dmcl” and run the “rm -rf *” command
6. From the WebLogic management console, stop all of the running applications
a. WebDAV
b. DA
c. DAM
d. Webtop
e. To clear the DMCL cache, navigate to “/tmp/dmcl” and run the “rm -rf *” command
7. On each of the 3 content servers, stop the following IN THIS ORDER:
a. Java Method server
i. $ cd $DM_HOME/tomcat/bin/
ii. $ ./shutdown.sh
b. Thumbnail server
i. $ cd $DM_HOME/thumbsrv/bin
ii. $ ./dm_thumbsrv_stop.csh
c. Repository
i. $ cd $DOCUMENTUM/dba
ii. $ ./dm_shutdown_<2DAM>.sh
iii. $ ./dm_shutdown_.sh
d. The last things to stop on each of the 3 content servers are the Docbrokers
i. $ cd $DOCUMENTUM/dba
ii. $ ./dm_stop_Docbroker
1. This stops the Docbroker on port 1489
iii. $ ./dm_stop_DocbrokerP1
1. This stops the Docbroker on port 1490
e. To clear the DMCL cache, navigate to “/tmp/dmcl” and run the “rm -rf *” command
8. Log into the SunOne web servers and stop the services for your applications
9. With all repository services stopped, the servers and database services can be safely stopped/restarted if necessary.

Sequence of Events to Start the System
The process for starting the application is basically to execute the steps above in reverse order.
1. Ensure database services and all hosts have been restarted and are running
a. Try to connect to the database with sqlplus as the repository owner
2. Log into the SunOne Web servers and start the services for your applications
3. On each of the 3 content servers, start the following IN THIS ORDER
a. Docbrokers
i. $ cd $DOCUMENTUM/dba
ii. $ ./dm_launch_Docbroker
1. This starts the Docbroker on port 1489
iii. $ ./dm_launch_DocbrokerP1
1. This starts the Docbroker on port 1490
b. Repositories
i. $ cd $DOCUMENTUM/dba
ii. $ ./dm_start_<docbase_name>.sh
c. Method Server
i. $ cd $DM_HOME/tomcat/bin
ii. ./startup.sh
iii. Verify this by hitting the method server URL at http://<host>:9080/DmMethods/servlet/DoMethod
d. Thumbnail Server
i. $ cd $DM_HOME/thumbsrv/bin
ii. $ nohup ./dm_thumbsrv_start.csh &
iii. Verify this by hitting the Thumbnail Server URL at https://<host>:8443/thumbsrv/getThumbnail?format=msw8
1. You can substitute other format values here, such as jpeg or bmp
4. On the 3 WebLogic application server hosts, log in and start all of the applications from the command line. (NOTE: If the server itself has been restarted, you will need to log in and start the WebLogic admin console as well.)
a. $ wls
b. $ ./startda.sh
c. $ ./start.sh
d. You can verify that all applications have started correctly on each WLS by logging into the WLS admin console.
e. Within the WLS admin console, perform a first-time garbage collection on all the managed servers by navigating to the “Monitoring” > “Performance” tab and clicking the “Force Garbage Collection” button.
5. From the command line of the primary Index Server, start the FAST index server
a. $ cd $DOCUMENTUM/fulltext/IndexServer/bin
b. $ ./startup.sh
c. To verify the start, you can run the “nctrl systatus” command and see that all processes are “running”
6. From the command line of the secondary Index Server, start the FAST index server
a. $ cd $DOCUMENTUM/fulltext/IndexServer/bin
b. $ ./startup.sh
c. To verify the startup, you can run the “nctrl systatus” command.
7. From the command line of the primary Index Server host, start the Index Agent
a. $ cd $DOCUMENTUM/share/IndexAgents
b. $ ./startupAllIndexAgents.sh
8. Log into the Index Agent host, open the Admin console for each agent, and start the indexing process for each agent in the console
i. http://<host>:9081/IndexAgent1/login.jsp
ii. Click the Agent Status action link named “Start”.
9. Log into Windows hosts to start all of the rendition agent services (MTS/DTS)
10. From within DA or applications themselves, verify complete environment functionality with the test cases provided.

Monday, August 11, 2008

Failed to load preferences, Cannot locate file - dfc.properties

ID: esg82728

SOLUTION

Symptoms

While uninstalling the existing Index Agent on Solaris machine, an error dialog box is displayed:

"Failed to load preferences: java.io.FileNotFoundException: Cannot locate file - dfc.properties"

Cause

Environment variable DFC_DATA set in dmadmin's profile seems to cause this problem. All the other environment variables seemed to be correct.

Resolution

When DFC_DATA was commented out in the dmadmin's profile, the problem disappeared.

Component 'MSCOMCTL.OCX'

Component 'MSCOMCTL.OCX' or one of its dependencies not correctly registered: a file is missing or invalid
MSCOMCTL.OCX error screenshot:

How to Correct or fix this dependency error:
1. First search your local drive for MSCOMCTL.OCX to see if it is missing. The path to the file is typically: C:\WINDOWS\system32 if you are running Windows XP
2. If the file is missing you can download it HERE
3. Once downloaded, click the MSCOMCTL.exe and extract the file to your C:\WINDOWS\system32 directory

Resolution: Install the Microsoft ActiveX Control Pad (setuppad.exe)

How can I change the docbase (repository) owner password?

You will need to do the following steps:

- Stop the Docbase (repository)
- Back up the file dbpasswd.txt under /Documentum/dba/config/<docbase_name>/
- Modify the dbpasswd.txt file with the new database password in plain text (without encryption, e.g. abraxis1234)
- Re-encrypt the dbpasswd.txt file:

cd $DM_HOME/bin (for example, C:\Documentum\product\5.3\bin)

dm_encrypt_password -docbase <docbase_name> -rdbms <new_password> -encrypt

- Start the Docbase (repository).
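A minimal sketch of assembling that command line, assuming -docbase takes the repository name and -rdbms takes the new plain-text password (the helper and placeholder values are hypothetical):

```python
# Sketch: assemble the dm_encrypt_password command line shown above.
# Assumption: -docbase takes the repository name, -rdbms the new plain-text
# password. The values below are hypothetical placeholders.

def build_encrypt_command(docbase, new_password):
    return ["dm_encrypt_password",
            "-docbase", docbase,
            "-rdbms", new_password,
            "-encrypt"]

cmd = build_encrypt_command("MY_DOCBASE", "abraxis1234")
print(" ".join(cmd))
```

Building the argument list rather than a single string avoids shell-quoting problems if the password contains special characters.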

Login Page is not displaying

Installed DCM and DA on the app server machine, but the login page is not displayed; the browser status bar only shows “Done”.
Solution: Update the classpath in weblogic.cmd and restart WebLogic.

Unable to instantiate WordPerfect MPI: Can't get object clsid from progid

Symptoms

When running Document Transformation Services, the following message can be observed in the CTS log file located under Program Files\Documentum\CTS\logs:

INFO [ main] CTSPluginHandlerImpl - Unable to instantiate the following MP: com.documentum.cts.plugin.wordperfect.WordPerfectPlugin
com.documentum.cts.plugin.common.CTSPluginException: Can't get object clsid from progid
Cause Exception was: Unable to instantiate WordPerfect MPI: Can't get object clsid from progid
com.jacob.com.ComFailException: Can't get object clsid from progid
at com.jacob.com.Dispatch.createInstanceNative(Native Method)
at com.jacob.com.Dispatch.(Dispatch.java:160)
at com.jacob.activeX.ActiveXComponent.(ActiveXComponent.java:54)



Cause

This issue will appear if WordPerfect is not installed on the DTS server. By default, the plugin for WordPerfect is included in the CTSPluginService.xml file located under Program Files\Documentum\CTS\config.



Resolution
- Take a backup of CTSPluginService.xml.
Navigate to Program Files\Documentum\CTS\config and open CTSPluginService.xml with a text editor. Locate the plugin entry that references com.documentum.cts.plugin.wordperfect.WordPerfectPlugin.

Comment out that entry using standard XML comment syntax (<!-- ... -->).

Restart the CTS services. The issue should be resolved.

Sunday, August 10, 2008

Oracle Error - shared memory realm does not exist

Error:
ORA-01034: Oracle not available
ORA-27101 : shared memory realm does not exist

- Restart the Oracle service in services.msc on the Oracle server machine

Change in Data directory

- Copy the file store from the source to the target environment
.... copy the data folder into the target data folder
- Update the filestore paths in dm_location_s

select file_system_path from dm_location_s

D:\Documentum\data\ABRAXIS_PRD\content_storage_01
D:\Documentum\data\ABRAXIS_PRD\thumbnail_storage_01
D:\Documentum\data\ABRAXIS_PRD\streaming_storage_01
D:\Documentum\data\ABRAXIS_PRD\replicate_temp_store
D:\Documentum\data\ABRAXIS_PRD\replica_content_storage_01

Update the above paths to the target environment paths

restart the docbase
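The path rewrite can be sketched as a simple prefix replacement. The source root comes from the listing above; the target root (E:\...) is a hypothetical example:

```python
# Sketch: rewrite dm_location_s file_system_path values from the source
# data root to the target data root. The target root is a hypothetical example.

SOURCE_ROOT = r"D:\Documentum\data\ABRAXIS_PRD"
TARGET_ROOT = r"E:\Documentum\data\ABRAXIS_PRD"

def retarget(path, source_root=SOURCE_ROOT, target_root=TARGET_ROOT):
    """Swap the source data root for the target data root in one path."""
    if not path.startswith(source_root):
        raise ValueError("unexpected path: " + path)
    return target_root + path[len(source_root):]

print(retarget(r"D:\Documentum\data\ABRAXIS_PRD\content_storage_01"))
# -> E:\Documentum\data\ABRAXIS_PRD\content_storage_01
```

Each rewritten value would then go into an UPDATE against dm_location_s before restarting the docbase.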

Update aek.key

1. Shutdown the docbase being moved on the target server
2. Create a copy of the AEK file (aek.key) in \Documentum\dba\secure directory on the target server and rename it (e.g., aek.bak). This is just in case something happens to the file and we need to rollback.
3. From sql on the database:
update dm_docbase_config_s set i_crypto_key = ' ';
4. From sql:
SQL> select r_object_id from dmi_vstamp_s where i_application = 'dm_docbase_config_crypto_key_init';
5. delete from dmi_object_type where r_object_id = 'returned r_object_id from above';
6. SQL> commit;
7. SQL> delete from dmi_vstamp_s where r_object_id = 'returned r_object_id from step above'
8. SQL> commit;
9. From sql on the database
update dm_docbase_config_s set i_ticket_crypto_key = ' ';
10. from sql:
SQL> select r_object_id from dmi_vstamp_s where i_application = 'dm_docbase_ticket_config_crypto_key_init';
11. delete from dmi_object_type where r_object_id = 'returned r_object_id from above';
12. SQL> commit;
13. SQL> delete from dmi_vstamp_s where r_object_id = 'returned r_object_id from step above'
14. SQL> commit;
15. To re-encrypt the dbpasswd.txt file for the moved docbase, navigate to the <drive>:\documentum\product\5.3\bin directory in a command window.
16. From a command prompt, enter the following:

dm_encrypt_password -docbase <docbase_name> -rdbms <new_password> -encrypt
17. You will need to reboot the entire system - Content Server, PDF Aqua, Queue Manager, and applications - and remove all caches once you have moved it.
18. Startup the docbase service
19. You might have to repeat some of the steps in this process, especially for i_ticket_crypto_key. I was unable to find it the first time, but after trying to start the docbase once and running through the instructions again, I found the key and was able to delete it.

Cloning an Environment

Migration/ Cloning an environment


1. Create a new docbase with the same Repository name, repository Id on the target server
2. Have your DBA create an empty Oracle/SQL schema with the same docbase name in your target environment
3. Copy the file store from the source environment into the new 'Data' folder
4. Import Oracle Schema from the source environment into the target server
5. Update aek.key
6. If the path has to change, update the dm_location_s entries
7. Restart the target docbase

Saturday, August 9, 2008

How to create a new file store by using Documentum Administrator

File store creation comprises two parts: first, creating a file location; second, creating the file store and associating it with that location.



To create a file location through Documentum Administrator, follow this procedure.



1 Create the folder physically on your content server for file store.

2 Login to your Documentum Administrator instance.

3 Select the Docbase

4 Expand the Administration tab.

5 Select the Storage option under Administration



Click on File and select New Location

Provide a name for your new location

Select the path where you physically created the folder to store the documents.

Leave the Path type as “directory” and the Security type as “publicopen”.

Click OK to create the new location for the file store.



Follow the procedure below to create the new file store.



Select File and click the New File Store option

Give your new file store a name

Select the location you just created from the drop-down list.

Click OK.



Run the following API commands to reinitialize the docbase connection:

API>reinit,c

API>flush,c,persistentcache

API>flush,c,persistentobjcache

Dump and Load process for a file

Dump Script:



create,c,dm_dump_record
set,c,l,file_name
D:\Abraxisdump\testdata\CDMSFileDump.dmp
set,c,l,include_content
T
append,c,l,type
dm_document
append,c,l,predicate
object_name=' '
save,c,l
getmessage,c





Load Script:



create,c,dm_load_record
set,c,l,file_name
D:\AbraxisLoad\testdata\CDMSFileDump.dmp
save,c,l
getmessage,c

Friday, August 8, 2008

Why do I see the error "[DM_API_E_NOTYPE]error: Type name 'dm_application' is not a valid type" while installing a DocApp using DAB?

Below is an excerpt of the error message seen in the docapp installer log.

ERROR: Installation cannot proceed.

DfException@1d0:: THREAD: main; MSG: [DM_API_E_NOTYPE]error: "Type name 'dm_application' is not a valid type."; ERRORCODE: 100; NEXT: null

ERROR: Installation cannot proceed.

DfException@1d0:: THREAD: main; MSG: [DM_API_E_NOTYPE]error: "Type name 'dm_application' is not a valid type."; ERRORCODE: 100; NEXT: null
at com/documentum/fc/server/session/DfiSessionDMCL.dmAPIGet (DfiSessionDMCL.java)
at com/documentum/fc/client/DfSession.apiGet (DfSession.java)
at com/documentum/fc/client/DfPIntObjectL. (DfPIntObjectL.java)
at com/documentum/fc/client/DfObjectCache.createPIntObject (DfObjectCache.java)
at com/documentum/fc/client/DfObjectCache.newObject (DfObjectCache.java)
at com/documentum/fc/client/DfSession.newObject (DfSession.java)
at com/documentum/ApplicationManager/DfApplication.newApplication (DfApplication.java)
at com/documentum/ApplicationInstall/DfAppInstallerUtilities.createLivePackage (DfAppInstallerUtilities.java)
at com/documentum/ApplicationInstall/DfAppInstaller.createEnvironment (DfAppInstaller.java)
at com/documentum/ApplicationInstall/DfAppInstaller.startInstall (DfAppInstaller.java)

ABORT TRANSACTION


RESOLUTION:
This error occurs because the dm_application object type is not installed in the Documentum environment. This means the 'headstart.ebs' script failed or was not executed.

Check the following path on the Content Server:

\documentum\dba\config\<docbase_name>

Search for the 'headstart.out' file.
Open the file, look for creation of dm_application type.

If the 'headstart.out' file is not listed, this means the script wasn't executed.

Run the 'headstart.ebs' script. Follow the instructions as listed in the Content Server documentation for executing a script.

Once this is executed, the dm_application type will be created.

Re-run the DocApp installer; the DocApp installation will now succeed.

How can I manually run headstart.ebs ?

Content Server (CS) version 5.1 and higher uses a new version of headstart.ebs, which replaces headstart.dql from earlier versions of eContent Server (eCS).

If for some reason you need to run this script manually, include all 15 arguments to the dmbasic command.

From the command line (Unix or Windows) type the following:

dmbasic -fheadstart.ebs -eInstall -docbaseName <docbase_name> -docbaseUserPassword <password> -docbaseDescrption <description> -documentumHome $DM_HOME -dataHome $DOCUMENTUM/data -dbaHome $DOCUMENTUM/dba -configureHome $DM_HOME/install -shareHome $DOCUMENTUM/share -hostName <host_name> -osType <os_type> -localeLanguage en -smtpServerName <smtp_server> -email <email_address> -loginUsername <login_username>
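As a sketch, the argument list can be assembled programmatically. The flag names follow the command line above; the helper function and all parameter values are hypothetical placeholders:

```python
# Sketch: assemble the dmbasic command for running headstart.ebs manually.
# Flag names follow the command shown above; all values are hypothetical.

def build_headstart_command(params):
    """params: list of (flag, value) pairs appended after the fixed prefix."""
    cmd = ["dmbasic", "-fheadstart.ebs", "-eInstall"]
    for flag, value in params:
        cmd += [flag, value]
    return cmd

params = [
    ("-docbaseName", "MY_DOCBASE"),
    ("-docbaseUserPassword", "secret"),
    ("-documentumHome", "$DM_HOME"),
    ("-dataHome", "$DOCUMENTUM/data"),
    ("-dbaHome", "$DOCUMENTUM/dba"),
]
print(" ".join(build_headstart_command(params)))
```

Keeping the flag/value pairs in a list makes it easy to verify that all 15 arguments are present before running the command.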

Why RPC 116 error and/or Authentication failure errors on CS 5.3 SP4 and SP5?

Symptoms

5.3 SP4 and SP5 Clients will see "RPC 116 error" and/or "Authentication failure".

This error is only seen with trace -otrace_authentication and -oticket_trace options:
Start-AuthenticateUserByTicket:UserLoginName(testuser1),
TICKET TRACE: dmLoginTicketMgr::VerifyTicket() : encodedBuffer = DM_TICKET=AAAAAgAAAOQAAAAKAAAAFUeUZYRHlGawAAAAOGxzY21zAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEZyYW4gU2Nod2lldHprZQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAGxzY21zAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAGx2Y21zMDEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAGNlaWxpbmcxMQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAM0pyOTdRTDJOakFUbVhwak15Y21CWDdLbG5zYnM2aUVUU1pXUy9kbHdwWjlzcllmaGIrNFZnPT0=
TICKET TRACE: dmLoginTicketMgr::LoadTicket() : Login ticket successfully loaded into dmLoginTicket struct.
TICKET TRACE: Failed to verify login ticket because user name mismatch: ticket.m_userName=tsestuser2, userName=testuser1
End-AuthenticateUserByTicket:
failure

Cause

If a ticketed session has timed out, the user recorded in the generated ticket can differ from the user who actually uses the ticket to establish the connection. Also, setting the wrong password in the server's session causes subsequent server reconnects to fail when the client session times out.

Resolution

This issue is resolved in CS 5.3 SP6; otherwise an engineering patch request must be submitted.

Determining if you are having this issue:
Perform the following 2 tests with tracing turned on.

# To enable ticket trace
API> apply,c,NULL,SET_OPTIONS,OPTION,S,ticket_trace,VALUE,B,T

# To disable ticket trace, after the tests are completed.
API> apply,c,NULL,SET_OPTIONS,OPTION,S,ticket_trace,VALUE,B,F
Send us the content server log file with ticket trace info.

DESCRIPTION OF UNIT TESTING TO BE PERFORMED TO VERIFY THE CHANGE:

This is an example, do not use the ticket generated in this example for your test.
Just use the api commands listed.

Test 1:
=====
Run IAPI to connect to docbase as super user.
Make sure the login ticket timeout value is set to 5 minutes in serverconfig object.
Make sure the DMCL session timeout value is also set to 5 minutes.

Generate a login ticket for any non-super user, say tuser1 (make sure tuser1's password is different from that of super user).

API> getlogin,c,tuser1
...
DM_TICKET=AAAAAgAAAOQAAAABAAAAAUcxydlHMcsFAAAAOHZtY3MwMV81MzVfb3JhMTBnMjAzAAAAAAAAAAAAAAAAAHR1c2VyMQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHJycHZtaW5kZXgwMQAAAAAAAAAAAAAAAAAAAAAAAAAAAHZtY3MwMV81MzVfb3JhMTBnMjAzAAAAAAAAAAAAAAAAAFJSUFZNSU5ERVgwMQAAAAAAAAAAAAAAAAAAAAAAAAAAAG1hbmFnZXIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAaUkzL25xYXZPZzdMWEY4WWVNajg0bVpNQVpyZ3BJT1JGcXZIY0w0V1h1S1oyMDRsY2x2ZnNnPT0=

Dump login ticket to see expiration date.

API>dumploginticket,c,DM_TICKET=AAAAAgAAAOQAAAABAAAAAUcxydlHMcsFAAAAOHZtY3MwMV81MzVfb3JhMTBnMjAzAAAAAAAAAAAAAAAAAHR1c2VyMQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHJycHZtaW5kZXgwMQAAAAAAAAAAAAAAAAAAAAAAAAAAAHZtY3MwMV81MzVfb3JhMTBnMjAzAAAAAAAAAAAAAAAAAFJSUFZNSU5ERVgwMQAAAAAAAAAAAAAAAAAAAAAAAAAAAG1hbmFnZXIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAaUkzL25xYXZPZzdMWEY4WWVNajg0bVpNQVpyZ3BJT1JGcXZIY0w0V1h1S1oyMDRsY2x2ZnNnPT0=
...
LOGIN TICKET DUMP
==========================================
Version : 5.3 (ticket version 2)
Scope : global
Sequence Number : 0000000001
Single Use : No
Create Time : Wed Nov 07 06:21:13 2007
Expiration Time : Wed Nov 07 06:26:13 2007
User : tuser1
Password : *********
Domain : rrpvmindex01
Server : vmcs01_535_ora10g203
Docbase : vmcs01_535_ora10g203
Host : RRPVMINDEX01
Connect to docbase as "tuser1" using the newly generated ticket.

API>connect,vmcs01_535_ora10g203,tuser1,DM_TICKET=AAAAAgAAAOQAAAABAAAAAUcxydlHMcsFAAAAOHZtY3MwMV81MzVfb3JhMTBnMjAzAAAAAAAAAAAAAAAAAHR1c2VyMQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHJycHZtaW5kZXgwMQAAAAAAAAAAAAAAAAAAAAAAAAAAAHZtY3MwMV81MzVfb3JhMTBnMjAzAAAAAAAAAAAAAAAAAFJSUFZNSU5ERVgwMQAAAAAAAAAAAAAAAAAAAAAAAAAAAG1hbmFnZXIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAaUkzL25xYXZPZzdMWEY4WWVNajg0bVpNQVpyZ3BJT1JGcXZIY0w0V1h1S1oyMDRsY2x2ZnNnPT0=
...
s1

********************************************************
IMPORTANT:
Wait for at least 10 minutes to make sure the ticket has expired and the DMCL client's connection to the docbase/content server has timed out
********************************************************

Then, try to re-connect to content server after client session is timed out and ticket is expired, by trying to create a dm_document object.

API> create,s1,dm_document
...

Make sure you do not get the following error messages; you should get a new object ID after the "create" command is issued.

[DM_API_E_NOTYPE]error: "Type name 'dm_document' is not a valid type."
[DM_SESSION_E_START_FAIL]error: "Server did not start session. Please see your system administrator or check the server log.
Error message from server was:
[DM_SESSION_E_RPC_ERROR]error: "RPC error 116 occurred: Unknown error code 116 (_nl_error_ = 0). Extended network error: 0"
[DM_SESSION_E_AUTH_FAIL]error: "Authentication failed for user tuser1 with docbase vmcs01_535_ora10g203.""


Test 2:
=====
Run IAPI as super user to create "TestMethod" dm_method object.
-------------------------------------------------------
create,s0,dm_method
set,s0,l,object_name
TestMethod
set,s0,l,method_verb
"sh" "/testmethod.sh"
set,s0,l,method_type
program
set,s0,l,trace_launch
1
save,s0,l
-------------------------------------------------------
Create file testmethod.sh in /tmp directory. Contents of testmethod.sh is as follows:
-------------------------------------------------------
#!/bin/sh -xvf
# print date time stamp to output file
date > /tmp/testmethod_output.txt
------------------------------------------------------
Now connect to docbase as non-super user "tuser1"
Generate a login ticket for "tuser1" (for himself)

API> getlogin,c
...
DM_TICKET=AAAAAgAAAOQAAAABAAAAAkcxy85HMcz6AAAAOHZtY3MwMV81MzVfb3JhMTBnMjAzAAAAAAAAAAAAAAAAAHR1c2VyMQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHJycHZtaW5kZXgwMQAAAAAAAAAAAAAAAAAAAAAAAAAAAHZtY3MwMV81MzVfb3JhMTBnMjAzAAAAAAAAAAAAAAAAAFJSUFZNSU5ERVgwMQAAAAAAAAAAAAAAAAAAAAAAAAAAAG1hbmFnZXIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAaUkzL25xYXZPZzdMWEY4WWVNajg0dUw0R0tzcG94TVZkYW9sdnFHcmZzaFJ6VmUzNTFPRmJBPT0=

Dump the login ticket to see ticket detail

API>dumploginticket,c,DM_TICKET=AAAAAgAAAOQAAAABAAAAAkcxy85HMcz6AAAAOHZtY3MwMV81MzVfb3JhMTBnMjAzAAAAAAAAAAAAAAAAAHR1c2VyMQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHJycHZtaW5kZXgwMQAAAAAAAAAAAAAAAAAAAAAAAAAAAHZtY3MwMV81MzVfb3JhMTBnMjAzAAAAAAAAAAAAAAAAAFJSUFZNSU5ERVgwMQAAAAAAAAAAAAAAAAAAAAAAAAAAAG1hbmFnZXIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAaUkzL25xYXZPZzdMWEY4WWVNajg0dUw0R0tzcG94TVZkYW9sdnFHcmZzaFJ6VmUZNTFPRmJBPT0=
...
LOGIN TICKET DUMP
==========================================
Version : 5.3 (ticket version 2)
Scope : global
Sequence Number : 0000000002
Single Use : No
Create Time : Wed Nov 07 06:29:34 2007
Expiration Time : Wed Nov 07 06:34:34 2007
User : tuser1
Password : *********
Domain : rrpvmindex01
Server : vmcs01_535_ora10g203
Docbase : vmcs01_535_ora10g2
Host : RRPVMINDEX01
API> quit

Quit out of API. This is to make sure the next IAPI DMCL connection pool does not contain any "tuser1" entries.
Now run IAPI and connect as super user, say "dmadmin".

Then connect to docbase as "tuser1" using the login ticket generated from the previous IAPI run.
API>connect,vmcs01_535_ora10g203,tuser1,DM_TICKET=AAAAAgAAAOQAAAABAAAAAkcxy85HMcz6AAAAOHZtY3MwMV81MzVfb3JhMTBnMjAzAAAAAAAAAAAAAAAAAHR1c2VyMQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHJycHZtaW5kZXgwMQAAAAAAAAAAAAAAAAAAAAAAAAAAAHZtY3MwMV81MzVfb3JhMTBnMjAzAAAAAAAAAAAAAAAAAFJSUFZNSU5ERVgwMQAAAAAAAAAAAAAAAAAAAAAAAAAAAG1hbmFnZXIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAaUkzL25xYXZPZzdMWEY4WWVNajg0dUw0R0tzcG94TVZkYW9sdnFHcmZzaFJ6VmUzNTFPRmJBPT0=
...
s1
After the new session is established, run "apply DO_METHOD" to launch "TestMethod".

API> apply,s1,,DO_METHOD,METHOD,S, TestMethod
...
q0

Check the "/temp" directory, see if output file testmethod_output.txt is created successfully, and the output file "testmethod_output.txt" contains the date timestamp info.

Run the following commands to make sure the "TestMethod" is run successfully.
Make sure you do NOT see errors like this:
API> next,c,q0
...
OK
API> dump,c,q0
...
USER ATTRIBUTES
result : 0
process_id : 0
launch_failed : T
method_return_val : 0
os_system_error : No Error Message Available
timed_out : F
time_out_length : 60
SYSTEM ATTRIBUTES
APPLICATION ATTRIBUTES
INTERNAL ATTRIBUTES
API> getmessage,s1,3
...
[DM_METHOD_E_ASSUME_USER_UV]error: "Your method named (Method2) failed to execute because the assume user process could not validation your user credentials. Assume User Process returned (-11=DM_CHKPASS_BAD_LOGIN)."
API>

If testing proves it is a ticket timeout issue: this issue is resolved in CS 5.3 SP6.
Since this support note was written prior to the release of SP6, please request an engineering hot fix:

CS_5.3_SP5_BUG_148355_WINDOWS_ORACLE_HOTFIX.zip
CS_5.3_SP5_BUG_148355_WINDOWS_SQL_HOTFIX.zip
ContentServer_aix_oracle_5.3SP5_bug_148355.tar.gz
ContentServer_solaris_oracle_5.3SP5_bug_148355.tar.gz

Please provide the following information:
1. Content server version.
2. OS platform and version.
3. RDBMS info.
4. Results of the tests above.

Manual steps to uninstall CTS

The following steps are required to perform a manual uninstall of a Content
Transformation Services product
1. Stop the CTS and CTS Agent Services in the Windows Services dialog.
2. Use Windows Add/Remove Programs to uninstall Documentum Content
Transformation Services.
3. Use Windows Add/Remove Programs to uninstall Documentum DFC.
4. Restart the host.
5. Delete the following folders and all their contents:
• C:\Documentum
• C:\Program Files\Documentum
6. Remove any remaining Documentum products using Windows Add/Remove
Programs.
7. Open Windows regedit.
8. Navigate to: HKEY_LOCAL_MACHINE\SOFTWARE.
9. Delete the Documentum entry.
10. Reboot the host.
11. Log in to the CTS configured repository as an administrator user using DAM,
Webtop, or DA.
12. Delete the Media Server folder, located in the \System cabinet (select all children
and all versions when prompted during delete).
13. Navigate to \System\Applications.
14. Delete the CTSTransformRequest and MediaProfile folder (select All Objects, All
Versions, All Descendants when prompted).
15. Run the following two DQL statements against the repository, in this order:
a. delete cts_instance_info object

You can now start a new installation of any Content Transformation Services product
(such as DTS or MTS).

DTS Installation - Premature end of file

Symptom:
During the installation of DTS you notice a failure "DiWACTSTransformXML failed! - Premature end of file",
and in the setuperror.log an error like this:
(Apr 20, 2007 8:08:12 AM), Setup.product.install, com.documentum.install.shared.common.error.DiException,
err, An exception occured in: DiWACTSTransformXML, beanID: transformMediaServerServiceXml1 -
DiWACTSTransformXML failed! - Premature end of file.; For more detailed information, see the error log:
C:\Program Files\Documentum\CTS\server_install\setupError.log
(Apr 20, 2007 8:08:12 AM), Setup.product.install, com.documentum.install.shared.common.error.DiException,
err, ; Line#: -1; Column#: -1
javax.xml.transform.TransformerException: Premature end of file.
at org.apache.xalan.transformer.TransformerImpl.fatalError(TransformerImpl.java:739)
at org.apache.xalan.transformer.TransformerImpl.transform(TransformerImpl.java:715)
at org.apache.xalan.transformer.TransformerImpl.transform(TransformerImpl.java:1129)
at org.apache.xalan.transformer.TransformerImpl.transform(TransformerImpl.java:1107)
at com.documentum.install.shared.common.services.xml.DiXMLUtils.transform(DiXMLUtils.java:181)
at
com.documentum.install.cts.common.services.xsl.DiCTSXslTransformServices.transform(DiCTSXslTransformServices
.java:71)
at
com.documentum.install.cts.common.beans.wizard.action.DiWACTSTransformXML.execute(DiWACTSTransformXML.java:9
7)
at com.installshield.wizard.StandardWizardListener.execute(Unknown Source)
at com.installshield.wizard.StandardWizardListener.currentBeanChanged(Unknown Source)
at com.installshield.wizard.Wizard$RunThread.run(Unknown Source)

Resolution:
Check the C:\Program Files\Documentum\CTS\config folder to see if any of the XML files there are empty (0 KB). If so, take one of the backup copies of that XML file (in the same folder, with extension .bak.00) and rename it to the original XML file name. This should allow you to continue with the installation.
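The check-and-restore described above can be sketched in Python. The function and the idea of scanning the whole config folder are mine; the .bak.00 naming follows the resolution above:

```python
# Sketch: detect 0-byte XML files in the CTS config folder and restore them
# from their .bak.00 backups, as the resolution above describes.
# Run it against your actual install's config directory.
import os
import shutil

def restore_empty_xml(config_dir):
    restored = []
    for name in sorted(os.listdir(config_dir)):
        if not name.lower().endswith(".xml"):
            continue
        path = os.path.join(config_dir, name)
        backup = path + ".bak.00"
        if os.path.getsize(path) == 0 and os.path.exists(backup):
            shutil.copyfile(backup, path)  # keep the backup, fix the original
            restored.append(name)
    return restored
```

Copying (rather than renaming) the backup keeps the .bak.00 file intact in case the restore has to be repeated.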

How to test DTS Adlib

Goal
How to test the Adlib installation, or test a particular document.

Resolution
The steps to test the Adlib installation, or to test a particular document are outlined as follows:

1) Copy the attached files to C:\Temp on the DTS/RPTS server.

2) Open the WSA Sample Application from:
Start > All Programs > Adlib > Exponent Web Service Adaptor > Exponent WSAS Sample

3) Click the 'Submit Job Ticket' tab.

4) Click the Browse button next to the XML Job Ticket field.

5) For a DTS install, select C:\Temp\DTS_jobticket.xml.
For an RPTS install, select C:\Temp\RPTS_jobticket.xml.

6) Click 'Validate' to ensure the file is valid (a success message is displayed in the window).

7) Click 'AddJob()' under the Validate button.

The status of the job will be printed in the "Web Service Response" window. Click the 'GetJobStatus()' button to view the progress. A PDF file should be created in C:\Temp if the job is successful.

Documentum Search Audit Trails

Q) How do I collect audit trails of searches performed by users?
Documentum does not capture audit events for searches. However, search statistics can be captured with a custom solution and used to identify frequently used keywords and tune the search engine to provide accurate results.

The statistics can also be used for creating management reports if needed.

Design Approach:
1. Create a new persistent object type ("sp_search_log") to store search log information
2. Customize the search component behavior class's onRenderEnd() method to create a new "sp_search_log" object
3. Save the object before displaying the JSP

Alternate Approaches:
1. Use JDBC to capture the information in a database table. Complicated approach involving opening database connections.
2. Create custom audit trails to create dm_audittrail objects. I have not yet studied the implications of this.

CREATING A NEW TYPE to store Search Logs:
CREATE TYPE "sp_search_log"
( "r_search_id" ID,
"userid" CHAR(10),
"userdisplayname" CHAR(200),
"deptcode" CHAR(6),
"keyword" CHAR(100) REPEATING,
"location" CHAR(250) REPEATING,
"attrib_namevalue" CHAR(250) REPEATING,
"starttimeofsearch" DATE,
"endtimeofsearch" DATE,
"noofresults" INT,
"noofvieweddocs" INT
) WITH SUPERTYPE NULL PUBLISH

OUTPUT OF DQL > new_object_ID 030004d2800001b9

ALTER TYPE "sp_search_log" DROP_FTINDEX ON "userid"
ALTER TYPE "sp_search_log" DROP_FTINDEX ON "userdisplayname"
ALTER TYPE "sp_search_log" DROP_FTINDEX ON "deptcode"
ALTER TYPE "sp_search_log" DROP_FTINDEX ON "location"
ALTER TYPE "sp_search_log" DROP_FTINDEX ON "attrib_namevalue"

Use this DQL to drop a field if needed:
ALTER TYPE "sp_search_log" DROP "Field-Name" PUBLISH

Use this DQL to add a new field later if needed:
ALTER TYPE "sp_search_log" ADD "New-Field-Name" DATE PUBLISH
Note:
- "attrib_namevalue" CHAR(250) REPEATING will be used to store the parameters from advanced search in the form date=22/01/2006, etc.
- If the user runs a phrase search like "new york", it is stored as one keyword. If new york is used without quotes, it is stored as two keywords.
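The quoted-phrase rule described in the note can be sketched with Python's shlex, which splits on whitespace while keeping quoted phrases together. This illustrates the storage rule only; it is not Documentum's actual query parser:

```python
# Sketch of the keyword-splitting rule from the note above: a quoted phrase
# ("new york") becomes one stored keyword, unquoted words become separate
# keywords. shlex performs a quote-aware split.
import shlex

def split_keywords(query):
    return shlex.split(query)

print(split_keywords('"new york" hotels'))  # -> ['new york', 'hotels']
print(split_keywords('new york'))           # -> ['new', 'york']
```

Each resulting token would be appended to the repeating "keyword" attribute of the sp_search_log object.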

SAMPLE JAVA SOURCE CODE FRAGMENTS
Note: This code is meant to prove the concept and may not be the best approach for performance.
A better approach could be to store "starttimeofsearch" in an instance variable, then create and save the sp_search_log object only once, after the search operation is completed.

public class SearchEx extends com.documentum.dam.search.SearchEx
implements IControlListener, IDfQueryListener, Observer,
IReturnListener, IDragDropDataProvider, IDragSource, IDropTarget
{
private boolean m_loggedToDB = false;
private boolean m_loggedNoOfResultsToDB = false;
private boolean m_isFirstCall = true;public void onInit(ArgumentList args)
{
System.out.println(”## Inside custom search”);
String strQuery = args.get(”query”);
System.out.println(”## strQuery: ” + strQuery);
super.onInit(args);
}

public void onRenderEnd()
{
super.onRenderEnd();

if(m_loggedToDB == false && m_isFirstCall==true) {
createSearchLogObject();
m_isFirstCall = false;
}

if(m_loggedToDB == true && m_isFirstCall==false && m_loggedNoOfResultsToDB==false) {
updateSearchLogObject();
}
}

private void createSearchLogObject(){
String objectId = null;

IDfSession sess = this.getDfSession();

String userid = “Not found”;
try {
userid = sess.getLoginUserName();
System.out.println(”### userid: ” + userid);

String queryDesc = getQueryDescription();
System.out.println(”### queryDesc: ” + queryDesc);

IDfPersistentObject searchLog =
(IDfPersistentObject)sess.newObject(”sp_search_log”);
searchLog.setString(”userid”, userid);
searchLog.setString(”userdisplayname”, userid);
//searchLog.setString(”deptcode”, “DEPT_CODE GOES HERE”);

IDfTime timeNow = new DfTime();
searchLog.setTime(”starttimeofsearch”, timeNow);
searchLog.setInt(”noofresults”,-1);
setNewValuesForAttribute(”keyword”, queryDesc, ” “, searchLog);

String searchLocations = getSearchSources();
setNewValuesForAttribute(”location”, searchLocations, “,”, searchLog);

searchLog.save();

m_NewSearchLogObjectId = searchLog.getObjectId().getId();
System.out.println(”************ Saved Search Log ************” + objectId);
m_loggedToDB = true;
} catch (DfException e) {
e.printStackTrace();
}

}

private void updateSearchLogObject() {
    System.out.println("### Updating the record");

    Datagrid datagrid = (Datagrid) getControl("doclistgrid",
        com.documentum.web.form.control.databound.Datagrid.class);
    // Get the total number of results available from the underlying DataHandler.
    // Note that a value of -1 indicates that the DataHandler does not support result counting.
    int noOfResults = datagrid.getDataProvider().getResultsCount();
    System.out.println("Datagrid noOfResults: " + noOfResults);
    if (noOfResults != -1) {
        IDfSession sess = this.getDfSession();

        IDfClientX clientx = new DfClientX();
        try {
            IDfPersistentObject searchLog = (IDfPersistentObject) sess.getObject(
                clientx.getId(m_NewSearchLogObjectId));

            IDfTime timeNow = new DfTime();
            searchLog.setTime("endtimeofsearch", timeNow);
            searchLog.setInt("noofresults", noOfResults);

            searchLog.save();
            m_loggedNoOfResultsToDB = true;
            System.out.println("************ Updated Search Log ************");
        } catch (DfException e) {
            e.printStackTrace();
        }
    }
}

private void setNewValuesForAttribute(String attributeName,
        String queryString, String delimiter, IDfPersistentObject obj) throws DfException {

    StringTokenizer st = new StringTokenizer(queryString, delimiter);
    while (st.hasMoreTokens()) {
        obj.appendString(attributeName, st.nextToken());
    }
}

Thumbnail Server and McAfee Anti-virus Port Clash

I recently installed Documentum Thumbnail Server on a Windows box and ran into a strange problem. The Thumbnail Server showed a status of "Started" in the Windows Services console, but DAM refused to show any thumbnails.

To test if the Thumbnail server was running correctly, I used IE to hit this URL: http://localhost:8081/thumbsrv/getThumbnail?

If the Thumbnail Server was running all right, IE should have displayed a default document icon. Instead I saw the logs of my McAfee anti-virus. This meant that McAfee was using port 8081, which is the default port used by the Thumbnail Server.

Fix:

Since the Thumbnail Server uses Tomcat internally, I had to change the Tomcat connector port to 8082.

a) Using Notepad, open server.xml in D:\Documentum\product\5.3\thumbsrv\container\conf
b) Find the Connector element and change its port attribute from 8081 to a free port; I used 8082 successfully.
c) Restart the Thumbnail Server
d) Test using http://localhost:8082/thumbsrv/getThumbnail?

Then update the configuration of the thumbnail file store to change its base_url attribute:
a) Open Documentum Administrator (DA)
b) Look for the file store "thumbnail_store_01"
c) View its properties and update the base URL to use the new port
d) Restart the docbase
e) Open DAM and test whether thumbnails are being displayed correctly
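For reference, the connector section of server.xml looks roughly like the sketch below. This is illustrative only; the exact attribute set varies with the Tomcat version bundled with your Thumbnail Server release, and 8082 is simply the free port I chose in step b:

```xml
<!-- Thumbnail Server HTTP connector: the port attribute defaults to 8081.
     Change it to a port not already claimed by another service. -->
<Connector port="8082"
           maxThreads="150"
           enableLookups="false"
           acceptCount="100"
           connectionTimeout="20000"
           disableUploadTimeout="true"/>
```

Whatever port you pick here must match the port in the file store's base_url, otherwise DAM will still fail to render thumbnails.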

Backup vs Archive

Backup
Secondary copy of information
Used for recovery operations
Improves availability by enabling application to be restored to a specific point in time
Typically short-term (weeks or months)
Data overwritten on periodic basis (monthly)
Not useful for compliance

Archive
Primary copy of information
Available for information retrieval
Adds operational efficiencies by moving fixed/unstructured data out of the operational environment
Typically long-term (months, years, decades)
Data retained for analysis or compliance
Useful for compliance

Registered Tables in Documentum

Registered tables are tables in the underlying database that have been registered in Documentum so they can be accessed using DQL. Basically, registered tables are used when an application needs to access RDBMS data from within Documentum. Either a table or a view can be registered. The scenario where I have most often used registered tables is providing value assistance for object attributes. Without going into too much detail here, value assistance is a list of values that a client program (such as Webtop or a custom WDK application) displays at runtime for an object attribute. A user can select a value from this list (or, if allowed, add a new one to it). You can set the value assistance for an attribute using DAB (Documentum Application Builder).

As mentioned above, you can register either a table or a view. Registered tables are stored as dm_registered objects in the repository; dm_registered extends dm_sysobject, and the r_object_id of this type always starts with 19. The following table lists the attributes of dm_registered:

Name                 Datatype     Single/Repeating   Description
column_count         integer      Single             Number of columns in the table
column_datatype      string(64)   Repeating          Datatypes of the columns
column_length        integer      Repeating          Lengths of the columns that have a string datatype
column_name          string(64)   Repeating          Names of the columns in the table
group_table_permit   integer      Single             RDBMS table permit level assigned to the registered table's group
is_key               boolean      Repeating          Indicates whether an index is built on the column
owner_table_permit   integer      Single             RDBMS table permit level assigned to the registered table's owner
synonym_for          string(254)  Repeating          Name of the table in the underlying RDBMS (can be an Oracle table synonym, or an MS SQL Server or Sybase table alias)
table_name           string(64)   Single             Name of the table
table_owner          string(64)   Single             Name of the owner of the RDBMS table (the person who created the RDBMS table)
world_table_permit   integer      Single             RDBMS table permit level assigned to the world

To register a table you must either own it or have superuser privileges. Since the dm_registered object is linked to the /System cabinet, you also need write permission on the /System cabinet (this applies only if folder security is enabled in the repository).

You cannot version a dm_registered object. Also, changes made to the underlying table are not automatically reflected in the dm_registered object, so if the structure of the table or view changes, you must first unregister it and then register it again.

How to Register a Table?
Use the following DQL to register a table:

REGISTER TABLE [owner_name.]table_name (column_def {,column_def}) [[WITH] KEY (column_list)] [SYNONYM [FOR] 'table_identification']

This DQL returns the r_object_id of the newly created dm_registered object. Here owner_name is the name of the table owner, table_name is the name of the RDBMS table, and column_def defines the columns in the registered table.

Each column_def has the syntax: column_name datatype [(length)]. The valid datatypes are float, double, integer, int, char, character, string, date, and time.

A length must be specified for the character, char, and string datatypes.

column_list identifies the columns in the table on which indexes have been built; multiple columns are separated with commas. table_identification is the name of the table in the database. Example:

REGISTER TABLE hr.users (first_name CHAR(30), last_name CHAR(40), emp_id INT) KEY (emp_id)



Granting Rights
You need to grant users permission to access a registered table. The table permit values are:

0 (None): No access
1 (Select): The user can retrieve data from the registered table
2 (Update): The user can update existing data in the registered table
4 (Insert): The user can insert new data into the registered table
8 (Delete): The user can delete rows from the registered table

The values are additive: if a user needs update and insert permissions, the value is 2 + 4 = 6. Note that the repository owner must hold the same level of permission in the underlying database in order to grant it to users. Granting full permission (15) to users in the above example:

UPDATE dm_registered OBJECT SET world_table_permit = 15 WHERE object_name = 'users';

UPDATE dm_registered OBJECT SET owner_table_permit = 15 WHERE object_name = 'users';

UPDATE dm_registered OBJECT SET group_table_permit = 15 WHERE object_name = 'users';
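The additive permit arithmetic above is just bit flags, and can be sketched in plain Java. This is an illustration only; the constant and method names are mine, not part of the DFC:

```java
public class TablePermits {
    // Table permit bits, as documented above
    public static final int NONE   = 0;
    public static final int SELECT = 1;
    public static final int UPDATE = 2;
    public static final int INSERT = 4;
    public static final int DELETE = 8;

    /** Combine individual permits into a single permit value. */
    public static int combine(int... permits) {
        int value = NONE;
        for (int p : permits) {
            value |= p;
        }
        return value;
    }

    /** Check whether a combined permit value includes a given permit. */
    public static boolean allows(int value, int permit) {
        return (value & permit) == permit;
    }

    public static void main(String[] args) {
        System.out.println(combine(UPDATE, INSERT));                 // prints 6
        System.out.println(combine(SELECT, UPDATE, INSERT, DELETE)); // prints 15
        System.out.println(allows(6, DELETE));                       // prints false
    }
}
```

So world_table_permit = 15 in the DQL above corresponds to Select + Update + Insert + Delete.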




How to Unregister a Table?

Use the following DQL to Unregister a Table.

UNREGISTER [TABLE] [owner_name.]table_name

Here owner_name is the name of the table owner and table_name is the name of the RDBMS table. You must be the owner of the table or a superuser to do this.


Accessing Data from Registered Table

Just as in an RDBMS, you can access a registered table using the following syntax:

Select [ATTRIBUTES] from dm_dbo.[REGISTERED_TABLE_NAME] where [CLAUSE]

Operations such as update and delete also use the same syntax as ordinary SQL; the only difference is prefixing dm_dbo. to the table name.

Example:

SELECT first_name, last_name, emp_id FROM dm_dbo.users;
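From DFC code, a registered table is queried like any other DQL source. A minimal sketch, assuming an established IDfSession named session and the hr.users registration from earlier (session setup and error handling omitted):

```java
import com.documentum.fc.client.DfQuery;
import com.documentum.fc.client.IDfCollection;
import com.documentum.fc.client.IDfQuery;

IDfQuery query = new DfQuery();
query.setDQL("SELECT first_name, last_name, emp_id FROM dm_dbo.users");
IDfCollection results = query.execute(session, IDfQuery.DF_READ_QUERY);
try {
    while (results.next()) {
        System.out.println(results.getString("first_name") + " "
                + results.getString("last_name"));
    }
} finally {
    // Always close the collection to release server-side resources
    results.close();
}
```

The user executing the query still needs the appropriate table permit (Select, in this case) on the registered table.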

Sysadmin and Super User

Sysadmin

Create, alter, and drop users and groups
Create, modify, and delete system-level ACLs
Grant and revoke Create Type, Create Cabinet, and Create Group privileges
Create types, cabinets, and printers
Manipulate workflows or work items, regardless of ownership
Manage any object’s lifecycle
Set the a_full_text attribute

The Sysadmin privilege does not override object-level permissions

Super User

Perform all the functions of a user with Sysadmin privileges
Unlock objects in the repository
Modify or drop another user’s user-defined object type
Create subtypes that have no supertype
Register and unregister another user’s tables
Select from any underlying RDBMS table regardless of whether it is registered or not
Modify or remove another user’s groups or private ACLs
Create, modify, or remove system ACLs
Grant and revoke Superuser and Sysadmin privileges
Grant and revoke extended privileges
View audit trail entries

Documentum Vs Sharepoint

1. Sharepoint 2007 is tightly integrated with Office 2007. Documentum has some light integration with Office through Webtop Application Connectors. Documentum has stronger integration with other authoring applications including Dreamweaver, QuarkXPress, and Adobe InDesign.
2. Sharepoint provides various mechanisms to access and modify content when offline (e.g. Outlook, Access, etc.). Documentum only supports offline editing if you install the Documentum Desktop application.
3. Sharepoint 2007 supports rights management with Office 2007 natively. Documentum requires you to install Information Rights Manager to have this feature.
4. Both Documentum and Sharepoint provide the ability to create custom object types. However, Sharepoint's object model does not seem to support object inheritance.
5. Lifecycle features (eg applying actions, defining entry criteria, applying lifecycle to multiple documents, etc) is more extensive in Documentum than in Sharepoint.
6. Documentum's security model is more extensive than Sharepoint's. Documentum has extended permissions that allow users to perform specific functions (e.g. change ownership, change state, change permissions, etc.).
7. All objects in Documentum are secured using the security model. In Sharepoint only certain objects can be secured (eg web site, list, folders, documents, etc).
8. Content can only be published to a Sharepoint site; publishing content outside of the MOSS repository requires custom coding. Content can be published to any website using Documentum Site Caching and Site Delivery Services. Documentum also has portlets for various portal vendors that allow those portals to access content stored in the Documentum repository.
9. The obvious: Sharepoint only runs on Windows using Microsoft SQL Server. If your enterprise standard is UNIX/Linux or Oracle/DB2, then Sharepoint is not a valid option. Documentum is OS and database agnostic and is supported on various OS and database configurations.
10. The next obvious – Sharepoint is built on ASP.NET; thus, customizations are done via .NET framework. Documentum is built on DFC, which is built on Java. You should consider the development and support skills of your staff when considering which system to choose.
11. Content storage: Sharepoint stores content within the SQL Server database, which allows it to use SQL Server's native search capabilities; it also means that backup of content depends solely on backing up the database. Documentum stores content on a file system and content metadata in a database. This architecture allows for a multi-server, single-docbase configuration. Since content is stored on the file system, you can also create a mixed storage architecture composed of SAN, NAS, RAID, tape, etc.