Feed aggregator

Anti-Malware Software Installation on LINUX servers with Oracle DB

Tom Kyte - 1 hour 5 min ago
Hello! Our Billing system works with Oracle DB version 11.2.0.4 (next year we are planning to upgrade all DBs to 19c with RAC). The Oracle DBs run on Linux servers. We got a requirement from our customer to install anti-malware software on all Linux servers (DB and Application). Please advise how it will affect the Oracle software and the DBs. What are Oracle's recommendations for such an approach? Thank you in advance!
Categories: DBA Blogs

Weblogic Logs - alerting mechanisms

Tom Kyte - 1 hour 5 min ago
Hello, I hope you can guide me. I need a tool that notifies me of significant WebLogic log messages through some other channel (for example WhatsApp). My goal is not to be sitting in front of the console watching the logs.
Categories: DBA Blogs

How can I find out from within a stored function whether it was called from a query?

Tom Kyte - 1 hour 5 min ago
Hi. In a stored function I would like to detect whether it was called from an SQL query. If not, my function will perform DML. Otherwise it will save the function parameters into a buffer - a package variable - and the DML execution will be deferred until after the query. Could you please let me know what would be a good, fast, reliable way of determining whether my function was called from a query? Please note that I would not like to run an autonomous transaction from within my query, because the decision whether to go ahead with the deferred DML cannot be made until after the query has completed. I would very much like to find the answer to the original puzzle rather than a workaround. Kind Regards, Alex
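
One probe that is sometimes used for this kind of detection (a sketch only, not necessarily the recommended answer; the scratch table probe_t and the function name are made up for illustration): attempt a harmless DML and trap ORA-14551, which is only raised when DML is attempted inside a query.
<code>
create table probe_t (dummy number);

create or replace function called_from_query return number
as
  e_dml_in_query exception;
  pragma exception_init(e_dml_in_query, -14551); -- "cannot perform a DML operation inside a query"
begin
  update probe_t set dummy = dummy where 1 = 0;  -- harmless probe, touches no rows
  return 0;                                      -- DML allowed: not called from a query
exception
  when e_dml_in_query then
    return 1;                                    -- DML rejected: we are inside a query
end;
/
</code>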
Categories: DBA Blogs

How to find child or parent from data with only ID and Level

Tom Kyte - 1 hour 5 min ago
Hi Tom, I have this data:
<code>
select  1 LineNumber, 1 mylevel, 10 id from dual union all
select  2, 2, 11 from dual union all
select  3, 3, 13 from dual union all
select  4, 3, 14 from dual union all
select  5, 4, 15 from dual union all
select  6, 2, 20 from dual union all
select  7, 2, 30 from dual union all
select  8, 3, 31 from dual union all
select  9, 4, 33 from dual union all
select 10, 3, 32 from dual union all
select 11, 3, 34 from dual union all
select 12, 4, 35 from dual union all
select 13, 5, 36 from dual
</code>
I'm looking for a way to get:
  • the child ids, recursively, of id = 30
  • the parent id of id = 30
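
A possible starting point (a sketch only, untested here, and assuming the sample rows above are loaded into a table t(linenumber, mylevel, id)): the descendants of a row are the rows that follow it up to, but not including, the next row at the same or a shallower level, and its parent is the closest preceding row one level shallower.
<code>
-- all descendants of id = 30
select  c.id
from    t c,
        (select linenumber, mylevel from t where id = 30) a
where   c.linenumber > a.linenumber
and     c.linenumber < coalesce(
            (select min(n.linenumber)
             from   t n
             where  n.linenumber > a.linenumber
             and    n.mylevel   <= a.mylevel),
            (select max(linenumber) + 1 from t)
        );

-- parent of id = 30
select  p.id
from    t p,
        (select linenumber, mylevel from t where id = 30) a
where   p.linenumber = (select max(x.linenumber)
                        from   t x
                        where  x.linenumber < a.linenumber
                        and    x.mylevel    = a.mylevel - 1);
</code>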
Categories: DBA Blogs

How to dynamically transpose data into with changing column headers with or without pivot/pivot xml?

Tom Kyte - 1 hour 5 min ago
This question comes up often, with no clear solution. I have two scenarios for dynamic pivoting, and the solution could use pivot xml (tried, but extracting from it is a task), dynamic SQL execution, or arrays if we could use them.

Scenario 1: In the first case I need to pivot the data where the data source is the same table. However, the pivot needs to be dynamic as the column headers keep changing according to the app_id column. So if app_id=1 the column headers would be A,B,C,D; if app_id=2 the columns would be C,D,E,F, and so on. Also, each set of values has an id. So for id 120 and app_id=1, columns A,B,C,D would display the values, and so on. The current sample data has only 2 app_ids, but there could be many more, so app_ids and labels keep changing and thus I need to write a dynamic query. The table is DATA_1:
<code>
ID   label  value  app_id
---  -----  -----  ------
120  A      Alpha  1
120  B      Beta   1
120  C      Class  1
120  D      Delta  1
120  C      Alpha  2
120  D      Beta   2
120  E      Class  2
120  F      Delta  2
</code>
And the expected output would be something like this. P.S. The data is dummy, and app_ids and ids keep getting inserted and removed with changing column headers, so a dynamic solution is needed.
<code>
SELECT * FROM data WHERE ID = 120 AND app_id = 1;

app_id  A      B     C      D      ID
------  -----  ----  -----  -----  ---
1       Alpha  Beta  Class  Delta  120

SELECT * FROM data WHERE ID = 120 AND app_id = 2;

app_id  C      D     E      F      ID
------  -----  ----  -----  -----  ---
2       Alpha  Beta  Class  Delta  120
</code>
Scenario 2: Here we had the data in one table. Now we have a separate table containing the labels and the current table with the values. So from the table in scenario one we will focus on the id, app_id and value columns only: Select app_id, id, value from data_1; The labels will come from another table, DATA_Labels, in the column Header_Name:
<code>
APP_ID  SEQ  HEADER_NAME
1       1    A
1       2    B
1       3    C
1       4    D
1       5    E
1       6    F
1       7    G
1       8    H
1       9    I
1       10   J
2       1    P
2       2    Q
2       3    R
2       4    S
2       5    T
2       6    U
2       7    V
2       8    W
2       9    X
2       10   Y
</code>
So, for the labels we would use the header_names, and Data_1 and Data_Labels could be joined on app_id. In case the values exceed the column headers, we can assume that the headers would reach a maximum of, say, 20. And in case a header name is not available, it could use some default header name like COL11, COL12 if the values stretch up to there. I did a lot of research but most solutions are too complex and confusing. Any leads would be appreciated.
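
For scenario 1, one commonly used pattern (a rough sketch only, not a complete solution; the column names and bind values are taken from the sample data above) is to build the pivot expressions dynamically with LISTAGG and then open the generated conditional-aggregation query through a ref cursor. For scenario 2 the same idea applies, with the label list built from DATA_Labels (ordered by SEQ) instead of from the data itself.
<code>
declare
  l_cols varchar2(4000);
  l_sql  varchar2(4000);
  l_cur  sys_refcursor;
begin
  -- one MAX(CASE ...) expression per distinct label of the chosen app_id
  select listagg('max(case when label = ''' || label || ''' then value end) as "'
                 || label || '"', ', ') within group (order by label)
  into   l_cols
  from   (select distinct label from data_1 where app_id = 1);

  l_sql := 'select app_id, id, ' || l_cols ||
           ' from data_1 where app_id = :a and id = :i group by app_id, id';

  open l_cur for l_sql using 1, 120;
  -- fetch from l_cur in the caller, or describe it with dbms_sql
  -- if the column list has to be discovered at run time
end;
/
</code>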
Categories: DBA Blogs

export DB dumps using DBMS_DATAPUMP api to the Object storage on Oracle Cloud Infrastructure

Tom Kyte - 1 hour 5 min ago
Hi, I have a requirement where I need to export DB dumps using the DBMS_DATAPUMP API to Object Storage on Oracle Cloud Infrastructure. I found the statement below in the official documentation to export dumps to object storage using the expdp utility:
<code>
expdp admin/password@ADWC1_high \
  filesize=5GB \
  dumpfile=default_credential:https://objectstorage.us-ashburn-1.oraclecloud.com/n/namespace-string/b/bucketname/o/exp%U.dmp \
  parallel=16 \
  encryption_pwd_prompt=yes \
  logfile=export.log \
  directory=data_pump_dir
</code>
But I didn't get any reference for the DBMS_DATAPUMP API. Could you please help me understand how to perform the same task using the DBMS_DATAPUMP API?
Categories: DBA Blogs

How can I get min_date and max_date, grouping by some field

Tom Kyte - 1 hour 5 min ago
Hi. I'm trying to build a query to get date intervals. Let me explain it in a simple way. We have a table with these fields:
<code>
Soldier_ID | Soldier_Name | Location    | Ranking    | Start_Date | End_Date
1001         Jones          Texas         Lieutenant   2000/03/20   2002/08/15
1001         Jones          Afghanistan   Lieutenant   2002/08/16   2003/03/18
1001         Jones          Michigan      Lieutenant   2003/03/19   2004/06/01
1001         Jones          NY            Lieutenant   2004/06/02   2004/10/01
1001         Jones          NY            Captain      2004/10/02   2005/04/20
1001         Jones          Berlin        Captain      2005/04/21   2007/02/20
1001         Jones          Michigan      Major        2007/02/21   2008/10/22
1001         Jones          Ohio          Major        2008/10/23   2010/01/26
1001         Jones          Ohio          Captain      2010/01/27   2013/11/26
1001         Jones          Texas         Captain      2013/11/26   2014/05/11
1001         Jones          Texas         Major        2014/05/12   2016/04/22
1001         Jones          Texas         General      2016/04/23   2020/10/10
1001         Jones          Washington    General      2020/10/11   2020/11/30
</code>
I need to get the time intervals that the soldier spent in each ranking, so the end result should be something like this:
<code>
Soldier_ID | Soldier_Name | Ranking    | Start_Date | End_Date
1001         Jones          Lieutenant   2000/03/20   2004/10/01
1001         Jones          Captain      2004/10/02   2007/02/20
1001         Jones          Major        2007/02/21   2010/01/26
1001         Jones          Captain      2010/01/27   2014/05/11
1001         Jones          Major        2014/05/12   2016/04/22
1001         Jones          General      2016/04/23   2020/11/30
</code>
As you can see, the soldier is promoted/demoted over time. Any suggestion on how to do this?
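
One possible approach (a sketch only, untested here, assuming the rows above sit in a table called soldier_hist) is the classic "start of group" trick: flag each row whose ranking differs from the previous row, sum the flags to build a group number, then aggregate per group.
<code>
select  soldier_id, soldier_name, ranking,
        min(start_date) as start_date,
        max(end_date)   as end_date
from (
        select  x.*,
                sum(case when ranking = prev_ranking then 0 else 1 end)
                    over (partition by soldier_id order by start_date) as grp
        from (
                select  h.*,
                        lag(ranking) over (partition by soldier_id order by start_date) as prev_ranking
                from    soldier_hist h
        ) x
)
group by soldier_id, soldier_name, ranking, grp
order by soldier_id, min(start_date);
</code>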
Categories: DBA Blogs

In-row LOBs

Jonathan Lewis - 6 hours 44 min ago

If you’re like me there are probably lots of little things that you know but find terribly easy to forget. Here’s one of my bêtes noires – starting with a little quiz:

rem
rem     Script:         lob_in_row.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Oct 2020
rem     Purpose:
rem
rem     Last tested
rem             19.3.0.0
rem             12.2.0.1
rem             11.2.0.4
rem

create table t1 (
        v1      varchar2(20),
        b1      clob,
        s1      clob
)
lob(b1) store as basicfile  b_lob,
lob(s1) store as securefile s_lob
;

insert into t1 values(
        rpad('x',20,'x'),
        rpad('x',20,'x'),
        rpad('x',20,'x')
)
/

commit;

execute dbms_stats.gather_table_stats(user,'t1');

select  column_name, avg_col_len
from    user_tab_columns
where   table_name = 'T1'
/

select  avg_row_len
from    user_tables
where   table_name = 'T1'
/

column rel_file_no new_value m_file_no
column block_no    new_value m_block_no

select
        dbms_rowid.rowid_relative_fno(rowid)    rel_file_no,
        dbms_rowid.rowid_block_number(rowid)    block_no
from
        t1
;

alter system flush buffer_cache;
alter system dump datafile &m_file_no block &m_block_no;

I’ve created a table with a varchar2() column, a basicfile CLOB, and a securefile CLOB, and inserted the same value into all three. By default the CLOB columns will be defined as ‘enable storage in row’ and since the inserted value is very short it will be stored “in row” for both these CLOBs. The question is:

Which column takes the most space in the data block?

To answer this question we need only gather stats and query user_tab_columns and/or user_tables – except the results may be a little surprising, so my script also uses the dbms_rowid package to find the file and block number where the row has been inserted, flushes the buffer cache to make sure that we don’t get confused by older versions of the block (you probably shouldn’t run this script on a big busy system) then dumps the block into the session trace file.

You’ll see why it’s necessary to look at the block dumps when I show you the results from gathering stats. The three sets of results below come from 11.2.0.4, 12.2.0.1, and 19.3.0.0 in that order. A key feature the databases have in common is that they were all created with a multi-byte character set:


11.2.0.4
========
COLUMN_NAME          AVG_COL_LEN
-------------------- -----------
V1                            21
B1                           123
S1                           117 

AVG_ROW_LEN
-----------
        261

12.2.0.1
========
COLUMN_NAME          AVG_COL_LEN
-------------------- -----------
V1                            21
B1                           171
S1                           165

AVG_ROW_LEN
-----------
        357

19.3.0.0
========
COLUMN_NAME          AVG_COL_LEN
-------------------- -----------
V1                            21
B1                            77
S1                            71

AVG_ROW_LEN
-----------
        169

There are three things that the three versions agree on.

  • First the row length is the sum of the column lengths.
  • Secondly the average length of the varchar2() column is 21 – remember that the avg_col_len includes the length byte(s) when we collect stats with the dbms_stats package.
  • Finally that the difference between the basicfile and securefile LOBs is 6.

But are the actual values reported for the LOB lengths in any way truthful, or are we seeing reporting errors (that vary with version)?

That’s why we need to look at the block dump, and this is what we find as the interesting bit of the dump – which is basically the same in all three cases:


block_row_dump:
tab 0, row 0, @0x1eec
tl: 172 fb: --H-FL-- lb: 0x1  cc: 3

col  0: [20]  78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78

col  1: [76]
 00 54 00 01 02 0c 80 00 00 02 00 00 00 01 00 02 75 f6 ff fb 00 38 09 00 00
 00 00 00 00 28 00 00 00 00 00 01 00 78 00 78 00 78 00 78 00 78 00 78 00 78
 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00
 78

col  2: [70]
 00 54 00 01 02 0c 80 80 00 02 00 00 00 01 00 02 75 f6 ff fc 00 32 48 90 00
 2c 00 00 28 01 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78
 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78

end_of_block_dump

When Oracle 19c reported the avg_col_len (and maybe the same would be true of 18c) it was correctly reporting the space taken up in the row by the entries for the two LOBs (and their overheads) that  I had inserted.  The earlier versions of Oracle are doing something a little odd.

A key thing you’ll notice, of course, is that the varchar2() column is storing my input string as one byte per character, but because I used a multibyte characterset for my database characterset in all three cases Oracle has stored the LOB equivalents of the string using a two-byte fixed-width characterset that has effectively doubled the storage (and then has to include the “Lob Locator” overhead – hence the 20 character string turning into 70+ bytes).

This highlights two important issues. First, that trying to work out how to “tune” your table storage based on the avg_col_len / avg_row_len is a lot harder to do if you’ve got LOB columns in the table, and your method of estimating storage will have to change to suit the version of Oracle. Secondly, and much more importantly, if you’re thinking of changing a varchar2() column (such as “description” or “comments”) from a character data type to a CLOB to allow for a small number of cases where people want to supply longer bits of text, then if you’re using a multibyte character set there are two awful consequences:

  • the storage requirement for the column will more than double
  • the LOB will go out of line when the in-row storage requirement exceeds 4,000 bytes – which means when the original string content exceeds something between 1,938 and 1,985 characters depending on your version of Oracle and whether you are using basicfile or securefile LOBs.  (And a “long” input string will exceed a single block in the LOB segment when it goes over roughly 4,000 characters.)

So if you want to handle “long comments” in multibyte charactersets you might want to consider making your code more complex so that up to 4,000 bytes you store the data as a varchar2(), and only use a CLOB column when the data goes over that limit.
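
If you do go down that route, the shape of the workaround is roughly the following (a sketch only, with made-up object names, and with the decision about which column to populate left to the application or to an INSTEAD OF trigger):

create table comments_t (
        id             number primary key,
        short_comment  varchar2(4000),
        long_comment   clob
);

create or replace view comments_v as
select
        id,
        coalesce(to_clob(short_comment), long_comment)  comment_text
from
        comments_t
;

-- the writing code populates short_comment when lengthb(:text) <= 4000
-- and long_comment otherwise, so the CLOB (and its two-byte expansion)
-- is only used for the rare long values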

Footnote

If you’re wondering  what all the overheads are for the in-row LOB you’ll find that the 11g block dump gives you some extra details (though the extra details disappeared from my 12c and 19c block dumps).  Here’s the content that appeared immediately after the col1 and col2 data respectively in the 11g block dump.

LOB
Locator:
  Length:        84(76)
  Version:        1
  Byte Length:    2
  LobID: 00.00.00.01.00.02.75.f6.ff.fb
  Flags[ 0x02 0x0c 0x80 0x00 ]:
    Type: CLOB
    Storage: BasicFile
    Enable Storage in Row
    Characterset Format: IMPLICIT
    Partitioned Table: No
    Options: VaringWidthReadWrite
  Inode:
    Size:     56
    Flag:     0x09 [ Valid DataInRow ]
    Future:   0x00 (should be '0x00')
    Blocks:   0
    Bytes:    40
    Version:  00000.0000000001
    Inline data[40]
Dump of memory from 0x00007FB4244BED8D to 0x00007FB4244BEDB5
7FB4244BED80                            00780001              [..x.]
7FB4244BED90 00780078 00780078 00780078 00780078  [x.x.x.x.x.x.x.x.]
        Repeat 1 times
7FB4244BEDB0 00780078 54004678                    [x.x.xF.T]



LOB
Locator:
  Length:        84(70)
  Version:        1
  Byte Length:    2
  LobID: 00.00.00.01.00.02.75.f6.ff.fc
  Flags[ 0x02 0x0c 0x80 0x80 ]:
    Type: CLOB
    Storage: SecureFile
    Characterset Format: IMPLICIT
    Partitioned Table: No
    Options: VaringWidthReadWrite
  SecureFile Header:
    Length:   50
    Old Flag: 0x48 [ DataInRow SecureFile ]
    Flag 0:   0x90 [ INODE Valid ]
    Layers:
      Lengths Array: INODE:44
      INODE:
        00 00 28 01 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78
        00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78 00 78
        00 78 00 78

Footnote 2

Oracle allows for an empty_clob() to populate a CLOB column with a zero length CLOB, and an empty_clob() is not the same as a null.

Many years ago, when I wrote “Practical Oracle 8i” I pointed out that if you were using a LOB column it would be better to use empty_[bc]lob() to represent an absence of data to avoid the need for checking the column in two different ways for “no data”.

There are two details to remember about this, though. First, empty_clob() takes 36 bytes of storage for a basicfile and 30 bytes for a securefile (reported at 37/31 in 19c, 135/131 in 12cR2 and 103/97 in 11gR2). You may decide that’s an expensive way of saying “nothing to see here”.

Secondly, while PL/SQL will allow you to compare a PL/SQL clob variable with empty_clob() you cannot use a simple “column = empty_clob()” predicate in SQL and will need something like:

where length(s1) = 0
where dbms_lob.getlength(s1) = 0
where dbms_lob.compare(s1, empty_clob()) = 0
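
For example, against the t1 table from the script above, something like the following (a quick sketch) separates NULLs from zero-length CLOBs without comparing the column to empty_clob() directly:

select
        count(case when s1 is null then 1 end)                  null_clobs,
        count(case when dbms_lob.getlength(s1) = 0 then 1 end)  empty_clobs,
        count(case when dbms_lob.getlength(s1) > 0 then 1 end)  populated_clobs
from    t1
;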

As a final closing comment about the oddity of avg_col_len and LOBs – my first test to check the lengths reported for an empty_clob() also inserted NULL into the v1 column. In this special case (all the columns in the row were empty_clob() or null) 11g reported the avg_col_len of s1 and b1 as zero!

Setting up Control Accounts and Restricting GL Manual Journal Posting

Richard Byrom - Sun, 2020-11-29 16:21

One of the options to consider when setting up the nominal segment within your Chart of Accounts is what your Third Party Control Account values should be. Before doing this it's critical to understand what options are available and the impact of selecting them. Hence, I thought I'd put together a brief post explaining these.

The image below depicts where within the value set you would select the appropriate option and is followed by an explanation of the resultant impact.
Third Party Control Account Options

General Ledger journals can be entered directly in that module or be received from a Subledger. Since control accounts should accurately reflect the movements in a related Subledger, it's ideal that we only allow postings to these that are sourced from the Subledger. Hence, minimising the number of manual postings to a control account is recommended and makes Subledger to General Ledger Control Account reconciliation a lot simpler.

Whichever option is chosen below will determine the allowed journal source for a GL nominal account and the level of restriction to be applied:

  1. Customer Control Account - only journal postings sourced from Accounts Receivable can be posted to this nominal.
  2. No - either this is not a control account or you want to allow any type of journal to be posted to this account.
  3. Restrict GL Manual Journals - journals with a source of Manual which are those entered directly into General Ledger will not be allowed. This provides more flexibility in that any Subledger can post to this account.
  4. Supplier Control Account - only journal postings sourced from Accounts Payable can be posted to this nominal.
  5. Third Party Control Account - only journal postings sourced from Accounts Receivable or Accounts Payable can be posted to this nominal.

The PostgreSQL shared/global catalog

Yann Neuhaus - Sun, 2020-11-29 13:02

A PostgreSQL instance (or cluster) can contain many databases, three of them (template0, template1 and postgres) are there by default. Over the last years we trained many people on PostgreSQL Essentials and there have been mainly two points that needed more clarification when it comes to catalogs and the postgres default database:

  1. Does the postgres default database define the catalog and somehow is the master database?
  2. What exactly is in the global catalog?

In this post we'll look into both points; I hope to make it clearer what the shared/global catalog contains, and to show that the postgres default database is not a master database and does not define the postgres catalog.

For the first point (is the default postgres database a master database and does it define the catalog?) the answer can quite easily be given. The default postgres database is there for only one reason: because most client utilities assume it is there and by default connect to that database. But this does not mean that the default postgres database is special in any way; you can go ahead and drop it:

postgres=# \l
                                  List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges   
-----------+----------+----------+-------------+-------------+-----------------------
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
(3 rows)

postgres=# \c template1
You are now connected to database "template1" as user "postgres".
template1=# drop database postgres;
DROP DATABASE
template1=# \l
                                  List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges   
-----------+----------+----------+-------------+-------------+-----------------------
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
(2 rows)

template1=# 

We even have customers which do that by default. The default postgres database is nothing special and initially it is exactly the same as template1. You can easily re-create it if you want:

template1=# create database postgres;
CREATE DATABASE
template1=# \l
                                  List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges   
-----------+----------+----------+-------------+-------------+-----------------------
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
(3 rows)

This answers the first question: The default postgres database is not a master database and it does not define the PostgreSQL catalog. Again, check here if you want to have more details about the three default databases.

The second question can be answered easily as well: What exactly is in the global/shared catalog? Most of the PostgreSQL catalog tables are per database, such as pg_tables:

postgres=# \d pg_tables
              View "pg_catalog.pg_tables"
   Column    |  Type   | Collation | Nullable | Default 
-------------+---------+-----------+----------+---------
 schemaname  | name    |           |          | 
 tablename   | name    |           |          | 
 tableowner  | name    |           |          | 
 tablespace  | name    |           |          | 
 hasindexes  | boolean |           |          | 
 hasrules    | boolean |           |          | 
 hastriggers | boolean |           |          | 
 rowsecurity | boolean |           |          | 

All these catalog tables and views are in a system schema called “pg_catalog”. This schema is not listed by default when you use the “\dn” shortcut in psql:

postgres=# \dn
  List of schemas
  Name  |  Owner   
--------+----------
 public | postgres
(1 row)

You need to add "S" (for system) to list the system schemas:

postgres=# \dnS
        List of schemas
        Name        |  Owner   
--------------------+----------
 information_schema | postgres
 pg_catalog         | postgres
 pg_toast           | postgres
 public             | postgres
(4 rows)

Some catalog tables/views are global to the cluster/instance and are not per database. The obvious ones are users/roles and tablespaces. None of them are per database as users/roles can have access to various databases and various databases can store relations in the same tablespace. The question now is: How can I know if a catalog table/view is global or per database? Even global catalog tables/views are listed in the local catalog schema:

postgres=# \d pg_catalog.pg_roles
                         View "pg_catalog.pg_roles"
     Column     |           Type           | Collation | Nullable | Default 
----------------+--------------------------+-----------+----------+---------
 rolname        | name                     |           |          | 
 rolsuper       | boolean                  |           |          | 
 rolinherit     | boolean                  |           |          | 
 rolcreaterole  | boolean                  |           |          | 
 rolcreatedb    | boolean                  |           |          | 
 rolcanlogin    | boolean                  |           |          | 
 rolreplication | boolean                  |           |          | 
 rolconnlimit   | integer                  |           |          | 
 rolpassword    | text                     |           |          | 
 rolvaliduntil  | timestamp with time zone |           |          | 
 rolbypassrls   | boolean                  |           |          | 
 rolconfig      | text[]                   | C         |          | 
 oid            | oid                      |           |          | 

By only looking at the catalog schema we cannot answer that question. What we can do, however, is look at the data directory ($PGDATA). The databases are in “base” and the global/shared catalog is in “global”:

postgres@centos8pg:/home/postgres/ [pgdev] cd $PGDATA
postgres@centos8pg:/u02/pgdata/DEV/ [pgdev] ls -l | egrep "base|global"
drwx------. 6 postgres postgres    58 Nov 21 09:50 base
drwx------. 2 postgres postgres  4096 Nov 21 09:48 global

When we look into the “global” directory we’ll see a number of OIDs (object identifiers); this is how PostgreSQL references the relations internally:

postgres@centos8pg:/u02/pgdata/DEV/ [pgdev] ls -l global/
total 564
-rw-------. 1 postgres postgres  8192 Nov 21 03:52 1213
-rw-------. 1 postgres postgres 24576 Nov 20 22:52 1213_fsm
-rw-------. 1 postgres postgres  8192 Nov 21 03:53 1213_vm
-rw-------. 1 postgres postgres  8192 Nov 20 22:52 1214
-rw-------. 1 postgres postgres 24576 Nov 20 22:52 1214_fsm
-rw-------. 1 postgres postgres  8192 Nov 20 22:52 1214_vm
-rw-------. 1 postgres postgres 16384 Nov 20 22:52 1232
-rw-------. 1 postgres postgres 16384 Nov 20 22:52 1233
-rw-------. 1 postgres postgres  8192 Nov 20 22:57 1260
-rw-------. 1 postgres postgres 24576 Nov 20 22:52 1260_fsm
-rw-------. 1 postgres postgres  8192 Nov 20 22:52 1260_vm
...

Each of these OIDs is one relation of the global/shared catalog. As we are not interested in the visibility maps and free space maps let’s exclude them, and only list the unique OIDs:

postgres@centos8pg:/u02/pgdata/DEV/ [pgdev] ls -l global/ | awk -F " " '{print $9}' | egrep "^[0-9]" | egrep -v "fsm|vm"
1213
1214
1232
1233
1260
1261
1262
2396
2397
2671
2672
2676
2677
2694
2695
2697
2698
2846
2847
2964
2965
2966
2967
3592
3593
4060
4061
4175
4176
4177
4178
4181
4182
4183
4184
4185
4186
6000
6001
6002
6100
6114
6115

These are the relations in the global/shared catalog. For translating these OIDs into human readable names there is oid2name. Without any additional parameters oid2name will give you the name of the databases listed in the “base” directory:

postgres@centos8pg:/u02/pgdata/DEV/ [pgdev] oid2name 
All databases:
    Oid  Database Name  Tablespace
----------------------------------
  24616       postgres  pg_default
  12905      template0  pg_default
      1      template1  pg_default

We can also pass the OIDs of the shared/global catalog to oid2name and the result will answer the second question: What, exactly, is in the global/shared catalog?

postgres@centos8pg:/u02/pgdata/DEV/ [pgdev] for i in `ls -l global/ | awk -F " " '{print $9}' | egrep "^[0-9]" | egrep -v "fsm|vm"`; do oid2name -x -S -q -o $i; done | grep -v "index"
      1213  pg_tablespace  1213  pg_catalog   pg_global
      1214  pg_shdepend  1214  pg_catalog   pg_global
      1260   pg_authid  1260  pg_catalog   pg_global
      1261  pg_auth_members  1261  pg_catalog   pg_global
      1262  pg_database  1262  pg_catalog   pg_global
      2396  pg_shdescription  2396  pg_catalog   pg_global
      2846  pg_toast_2396  2846  pg_toast   pg_global
      2964  pg_db_role_setting  2964  pg_catalog   pg_global
      2966  pg_toast_2964  2966  pg_toast   pg_global
      3592  pg_shseclabel  3592  pg_catalog   pg_global
      4060  pg_toast_3592  4060  pg_toast   pg_global
      4175  pg_toast_1260  4175  pg_toast   pg_global
      4177  pg_toast_1262  4177  pg_toast   pg_global
      4181  pg_toast_6000  4181  pg_toast   pg_global
      4183  pg_toast_6100  4183  pg_toast   pg_global
      4185  pg_toast_1213  4185  pg_toast   pg_global
      6000  pg_replication_origin  6000  pg_catalog   pg_global
      6100  pg_subscription  6100  pg_catalog   pg_global

Here is the answer (excluding the indexes). If we exclude the toast tables as well, you’ll notice that not many catalog tables/views are in the global/shared catalog:

postgres@centos8pg:/u02/pgdata/DEV/ [pgdev] for i in `ls -l global/ | awk -F " " '{print $9}' | egrep "^[0-9]" | egrep -v "fsm|vm"`; do oid2name -x -S -q -o $i; done | egrep -v "index|toast"
      1213  pg_tablespace  1213  pg_catalog   pg_global
      1214  pg_shdepend  1214  pg_catalog   pg_global
      1260   pg_authid  1260  pg_catalog   pg_global
      1261  pg_auth_members  1261  pg_catalog   pg_global
      1262  pg_database  1262  pg_catalog   pg_global
      2396  pg_shdescription  2396  pg_catalog   pg_global
      2964  pg_db_role_setting  2964  pg_catalog   pg_global
      3592  pg_shseclabel  3592  pg_catalog   pg_global
      6000  pg_replication_origin  6000  pg_catalog   pg_global
      6100  pg_subscription  6100  pg_catalog   pg_global
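
As a cross-check, (almost) the same list can be produced from SQL in any database of the cluster, because every relation in pg_class carries a flag saying whether it is shared. A quick sketch, filtering on ordinary tables to leave out the indexes and toast tables:

select relname
from   pg_class
where  relisshared
and    relkind = 'r'
order  by relname;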

That’s it, hope it helps.

The article "The PostgreSQL shared/global catalog" first appeared on Blog dbi services.

Documentum – D2 doesn’t load repositories with “Unexpected error occured”

Yann Neuhaus - Sun, 2020-11-29 10:58

I had a case today where all Documentum components were up and running, including D2, but when accessing its login page the repositories wouldn't appear and a message “An unexpected error occurred. Please refresh your browser” would pop up in the lower-right corner and disappear quickly. Refreshing the browser or opening a private window wouldn't change anything. In such cases, of course, the first thing to do is to make sure the docbroker and repositories are responding, but if they are, then what could be the problem? The root cause can be several things, I assume, since it's a rather generic behaviour, but I have seen it a few times already and it might not be obvious at first glance, so sharing some thoughts about it might prove useful for someone.

Here is the login screen of D2 having the issue:

In my case, the repositories were apparently available on the Content Server and responding (connection through iapi/idql working). The next step would probably be to check the D2 logs with DEBUG enabled to make sure to capture as much as possible. This is what you would see in the logs whenever accessing the D2 login URL:

2020-11-29 11:12:36,434 UTC [DEBUG] ([ACTIVE] ExecuteThread: '37' for queue: 'weblogic.kernel.Default (self-tuning)') - c.e.x3.portal.server.utils.X3PortalJspUtils   : D2 full build version: 16.5.1050 build 096
2020-11-29 11:12:36,435 UTC [DEBUG] ([ACTIVE] ExecuteThread: '37' for queue: 'weblogic.kernel.Default (self-tuning)') - c.e.x3.portal.server.utils.X3PortalJspUtils   : patch version: 16.5.1050
2020-11-29 11:12:36,886 UTC [DEBUG] ([ACTIVE] ExecuteThread: '66' for queue: 'weblogic.kernel.Default (self-tuning)') - c.e.x.s.s.labels.X3ResourceBundleFactory      : getAllBundle for resources.i18n en
2020-11-29 11:12:36,924 UTC [DEBUG] ([ACTIVE] ExecuteThread: '99' for queue: 'weblogic.kernel.Default (self-tuning)') - c.e.x.p.s.s.settings.RpcSettingsServiceImpl   : Fetching Server properties
2020-11-29 11:12:36,940 UTC [DEBUG] ([ACTIVE] ExecuteThread: '21' for queue: 'weblogic.kernel.Default (self-tuning)') - c.e.x.p.s.s.settings.RpcSettingsServiceImpl   : Fetching Server shiro.ini
2020-11-29 11:12:36,942 UTC [DEBUG] ([ACTIVE] ExecuteThread: '55' for queue: 'weblogic.kernel.Default (self-tuning)') - c.e.x.p.s.s.settings.RpcSettingsServiceImpl   : Fetching Server adminMessage Settings
2020-11-29 11:12:36,978 UTC [DEBUG] ([ACTIVE] ExecuteThread: '84' for queue: 'weblogic.kernel.Default (self-tuning)') - c.e.x.s.s.labels.X3ResourceBundleFactory      : getAllBundle for resources.i18n en_US
2020-11-29 11:12:37,709 UTC [DEBUG] ([ACTIVE] ExecuteThread: '26' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.common.dctm.objects.DfDocbaseMapEx    : Load docbases from docbrocker 0.623s
2020-11-29 11:12:37,711 UTC [INFO ] ([ACTIVE] ExecuteThread: '26' for queue: 'weblogic.kernel.Default (self-tuning)') - c.e.d2fs.dctm.web.services.D2fsRepositories   : Loaded repositories from docbroker: GR_Repo,Repo1
2020-11-29 11:12:37,712 UTC [INFO ] ([ACTIVE] ExecuteThread: '26' for queue: 'weblogic.kernel.Default (self-tuning)') - c.e.d2fs.dctm.web.services.D2fsRepositories   : loginRepositoryFilter=GR_Repo
2020-11-29 11:12:37,713 UTC [INFO ] ([ACTIVE] ExecuteThread: '26' for queue: 'weblogic.kernel.Default (self-tuning)') - c.e.d2fs.dctm.web.services.D2fsRepositories   : Filtering out repository GR_Repo
2020-11-29 11:12:37,713 UTC [DEBUG] ([ACTIVE] ExecuteThread: '26' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2.api.config.D2OptionsCache          : D2Info element not for in cache
2020-11-29 11:12:37,713 UTC [ERROR] ([ACTIVE] ExecuteThread: '26' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2.api.config.D2OptionsCache          : Trying to fetch D2Info before it's been set
2020-11-29 11:12:37,815 UTC [DEBUG] ([ACTIVE] ExecuteThread: '51' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2.api.D2Session                      : D2Session::initTBOEx after tbos from map 0.000s
2020-11-29 11:12:37,815 UTC [DEBUG] ([ACTIVE] ExecuteThread: '51' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2.api.D2Session                      : D2Session::initTBOEx after tbos C6-dbor bundle 0.001s
2020-11-29 11:12:38,808 UTC [INFO ] ([ACTIVE] ExecuteThread: '27' for queue: 'weblogic.kernel.Default (self-tuning)') - c.emc.x3.portal.server.X3HttpSessionListener  : Created http session 3tGeYTFa9ChEQJP-V7GdMyQreCk3t7_BFfS3EixfHtTbO6qFtOg3!781893690!1606648358808
2020-11-29 11:12:38,809 UTC [DEBUG] ([ACTIVE] ExecuteThread: '27' for queue: 'weblogic.kernel.Default (self-tuning)') - c.e.x3.portal.server.utils.X3PortalJspUtils   : XSRF_TOKEN not found in session
2020-11-29 11:12:38,811 UTC [DEBUG] ([ACTIVE] ExecuteThread: '27' for queue: 'weblogic.kernel.Default (self-tuning)') - c.e.x3.portal.server.utils.X3PortalJspUtils   : D2 full build version: 16.5.1050 build 096
2020-11-29 11:12:38,811 UTC [DEBUG] ([ACTIVE] ExecuteThread: '27' for queue: 'weblogic.kernel.Default (self-tuning)') - c.e.x3.portal.server.utils.X3PortalJspUtils   : patch version: 16.5.1050

 

At first glance, the log content doesn't look so strange; there are no obvious warnings or errors clearly showing the issue. As you can see, the list of repositories is present and it's filtered properly, so the drop-down should display something, but it doesn't. The only message that might give you a hint is the one error and its associated debug message just before, about the D2OptionsCache: the D2Info elements aren't in the cache while D2 is trying to use them. In this case, the only way to clearly see what the issue actually is would be to restart the D2 Application Server to force the LoadOnStartup to be re-executed. Maybe this is only true if LoadOnStartup is enabled; I didn't test without it, but it might be worth checking whether D2 is able to refresh the cache at runtime in that case. After a restart of the Application Server, it becomes clear what the problem is:

2020-11-29 11:18:28,421 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - c.emc.d2fs.services.ServiceBeanPostProcessor  : Initialized Bean : d2fs
2020-11-29 11:18:28,426 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - c.emc.d2fs.services.ServiceBeanPostProcessor  : Initialized Bean : subscriptionsService
2020-11-29 11:18:28,427 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - c.emc.d2fs.services.ServiceBeanPostProcessor  : Service Bean is set to Remote
2020-11-29 11:18:28,431 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - c.emc.d2fs.services.ServiceBeanPostProcessor  : Initialized Bean : exceptionResolver
2020-11-29 11:18:28,433 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - c.emc.d2fs.services.ServiceBeanPostProcessor  : Initialized Bean : soapProvider
2020-11-29 11:18:28,503 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - c.emc.d2fs.dctm.servlets.init.LoadOnStartup   : DFC version : 16.4.0200.0080
2020-11-29 11:18:28,543 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2fs.dctm.servlets.D2HttpServlet      : LoadOnStartup - START =====================================
2020-11-29 11:18:28,544 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2fs.dctm.servlets.D2HttpServlet      : LoadOnStartup - HTTP Headers
Remote : null (null)
Locale : null
Request Protocol : null
Request Method : null
Context Path : /D2
Request URI : null
Request encoding : null
Request Parameters :
Request Headers :
2020-11-29 11:18:30,799 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2fs.dctm.servlets.D2HttpServlet      : LoadOnStartup - Plugins (0.001s)
2020-11-29 11:18:30,803 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2fs.dctm.servlets.D2HttpServlet      : LoadOnStartup - Start plugin before : D2-Widget v16.5.1050 build 096
2020-11-29 11:18:30,804 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2fs.dctm.servlets.D2HttpServlet      : LoadOnStartup - End plugin before : D2-Widget v16.5.1050 build 096 0.000s
2020-11-29 11:18:30,806 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2fs.dctm.servlets.D2HttpServlet      : LoadOnStartup - Standard Servlet :
2020-11-29 11:18:30,808 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - c.emc.d2fs.dctm.servlets.init.LoadOnStartup   : Cache BOCS URL disabled.
2020-11-29 11:18:40,865 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - c.emc.d2fs.dctm.servlets.init.LoadOnStartup   : Free memory=3.1386707 GB, Total memory=4.0 GB
2020-11-29 11:18:50,217 UTC [ERROR] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2fs.dctm.servlets.D2HttpServlet      : LoadOnStartup - DfException:: THREAD: [ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)'; MSG: [DM_STORAGE_E_NOT_ACCESSIBLE]error:  "Storage area filestore_01 is not currently accessible.  Reason:  errno: 2, message: No such file or directory."; ERRORCODE: 100; NEXT: null
2020-11-29 11:18:50,220 UTC [ERROR] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2fs.dctm.servlets.D2HttpServlet      : {}
com.documentum.fc.common.DfException: [DM_STORAGE_E_NOT_ACCESSIBLE]error:  "Storage area filestore_01 is not currently accessible.  Reason:  errno: 2, message: No such file or directory."
        at com.documentum.fc.client.impl.docbase.DocbaseExceptionMapper.newException(DocbaseExceptionMapper.java:57)
        at com.documentum.fc.client.impl.connection.docbase.MessageEntry.getException(MessageEntry.java:39)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseMessageManager.getException(DocbaseMessageManager.java:137)
        at com.documentum.fc.client.impl.connection.docbase.netwise.NetwiseDocbaseRpcClient.checkForMessages(NetwiseDocbaseRpcClient.java:329)
        at com.documentum.fc.client.impl.connection.docbase.netwise.NetwiseDocbaseRpcClient.applyForInt(NetwiseDocbaseRpcClient.java:600)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection$6.evaluate(DocbaseConnection.java:1382)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection.evaluateRpc(DocbaseConnection.java:1180)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection.applyForInt(DocbaseConnection.java:1375)
        at com.documentum.fc.client.impl.docbase.DocbaseApi.makePuller(DocbaseApi.java:630)
        at com.documentum.fc.client.impl.connection.docbase.RawPuller.<init>(RawPuller.java:22)
        at com.documentum.fc.client.impl.session.Session.makePuller(Session.java:3796)
        at com.documentum.fc.client.impl.session.SessionHandle.makePuller(SessionHandle.java:2468)
        at com.documentum.fc.client.content.impl.BlockPuller.<init>(BlockPuller.java:27)
        at com.documentum.fc.client.content.impl.PusherPullerContentAccessor.buildStreamFromContext(PusherPullerContentAccessor.java:40)
        at com.documentum.fc.client.content.impl.PusherPullerContentAccessor.getStream(PusherPullerContentAccessor.java:28)
        at com.documentum.fc.client.content.impl.ContentAccessorFactory.getStream(ContentAccessorFactory.java:37)
        at com.documentum.fc.client.content.impl.Store.getStream(Store.java:64)
        at com.documentum.fc.client.content.impl.FileStore___PROXY.getStream(FileStore___PROXY.java)
        at com.documentum.fc.client.content.impl.Content.getStream(Content.java:185)
        at com.documentum.fc.client.content.impl.Content___PROXY.getStream(Content___PROXY.java)
        at com.documentum.fc.client.content.impl.ContentManager.getStream(ContentManager.java:84)
        at com.documentum.fc.client.content.impl.ContentManager.namelessGetFile(ContentManager.java:252)
        at com.documentum.fc.client.content.impl.ContentManager.getFile(ContentManager.java:198)
        at com.documentum.fc.client.content.impl.ContentManager.getFile(ContentManager.java:173)
        at com.documentum.fc.client.DfSysObject.getFileEx2(DfSysObject.java:1978)
        at com.documentum.fc.client.DfSysObject.getFileEx(DfSysObject.java:1970)
        at com.documentum.fc.client.DfSysObject.getFile(DfSysObject.java:1965)
        at com.emc.d2.api.config.modules.property.D2PropertyConfig___PROXY.getFile(D2PropertyConfig___PROXY.java)
        at com.emc.common.java.xml.XmlCacheValue.<init>(XmlCacheValue.java:63)
        at com.emc.common.java.xml.XmlCacheImpl.getXmlDocument(XmlCacheImpl.java:154)
        at com.emc.common.java.xml.XmlCacheImpl.getXmlDocument(XmlCacheImpl.java:182)
        at com.emc.d2fs.dctm.servlets.init.LoadOnStartup.loadXmlCache(LoadOnStartup.java:501)
        at com.emc.d2fs.dctm.servlets.init.LoadOnStartup.refreshCache(LoadOnStartup.java:424)
        at com.emc.d2fs.dctm.servlets.init.LoadOnStartup.processRequest(LoadOnStartup.java:208)
        at com.emc.d2fs.dctm.servlets.D2HttpServlet.execute(D2HttpServlet.java:244)
        at com.emc.d2fs.dctm.servlets.D2HttpServlet.doGetAndPost(D2HttpServlet.java:510)
        at com.emc.d2fs.dctm.servlets.D2HttpServlet.doGet(D2HttpServlet.java:113)
        at com.emc.d2fs.dctm.servlets.init.LoadOnStartup.init(LoadOnStartup.java:136)
        at javax.servlet.GenericServlet.init(GenericServlet.java:244)
		...
        at weblogic.work.ExecuteThread.execute(ExecuteThread.java:420)
        at weblogic.work.ExecuteThread.run(ExecuteThread.java:360)
2020-11-29 11:18:50,230 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - c.e.d.dctm.web.services.D2fsSessionManager    : Using non-sso shiro SSO filter with non-sso.enableDFCPrincipalMode=false
2020-11-29 11:18:50,231 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - c.e.d.dctm.web.services.D2fsSessionManager    : Not using DFC Principal Support
2020-11-29 11:18:50,232 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2fs.dctm.servlets.D2HttpServlet      : LoadOnStartup - Free memory=2.5813167 GB. Total memory=4.0 GB.
2020-11-29 11:18:50,232 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2fs.dctm.servlets.D2HttpServlet      : LoadOnStartup - END (21.726s) =====================================

2020-11-29 11:18:50,235 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - c.e.x3.portal.server.servlet.init.LogMemory   : D2SecurityConfiguration : Start
2020-11-29 11:18:50,235 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - c.e.x3.portal.server.servlet.init.LogMemory   : ServletContext: D2
2020-11-29 11:18:50,269 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - c.e.x3.portal.server.servlet.init.LogMemory   : D2SecurityConfiguration : End
2020-11-29 11:18:50,270 UTC [INFO ] ([ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)') - c.e.x3.portal.server.servlet.init.LogMemory   : Free memory=2.5780156 GB, Total memory=4.0 GB

 

So, as you can see above, the issue is actually linked to the Data of the repositories not being available, and it is only reported during the LoadOnStartup execution; after that it doesn't show up anymore. Here, it was the NAS that was unreachable at that time, and therefore D2 was impacted and nobody could log in. From my point of view, it's a pity that D2 behaves this way… Even if the Data/Documents aren't reachable, in a perfect world this shouldn't prevent you from logging into the system and using it, except for actions involving the content of the documents of course. Browsing the repository, checking properties and some other things should work without issue, but they don't, because of how Documentum is designed and how it works.

Because the LoadOnStartup actions are only executed at startup (if it is enabled), it means that once the Data of the repositories are back, you will need to restart D2 again, otherwise the issue will remain. Therefore, if you have this issue, and even if the Data are currently available, it might be worth checking whether they were available when D2 started. In addition to that, a restart of D2 never really hurts…

If you encountered this behavior of D2 with another root cause, feel free to share!

 

The article "Documentum – D2 doesn’t load repositories with “Unexpected error occured”" first appeared on Blog dbi services.

Using LDAP resolved connection descriptor in WebLogic JDBC data sources.

Yann Neuhaus - Sun, 2020-11-29 03:02

I got the question whether it is possible to have a WebLogic JDBC Data Source resolved by an LDAP entry. The answer is yes; since WebLogic 12.2.1.3 a JDBC connection pool URL declaration can point to an LDAP entry.

This can be done by simply editing an existing JDBC data source.

jdbc:oracle:thin:@ldap://vm01.dbi-workshop.com:1389/cn=orcl,cn=OracleContext,dc=DatabaseConnection,dc=com

WebLogic_JDBC_Connection_pool

Of course the LDAP server needs to have been configured to store the TNS entries. I explained how to do this in one of my previous blogs: how-to-declare-tns-entries-in-oracle-unified-directory-oud
Or it can be done using WLST scripts.
First, a properties file:

DS_NAME=MyLDAPDatasource
JNDIName=jdbc/MyLDAPDatasource
TEST_TABLE_QUERY=SQL SELECT 1 FROM DUAL
JDBC_DRIVER=oracle.jdbc.OracleDriver
TARGET=Server1
JDBC_URL=jdbc:oracle:thin:@ldap://vm01.dbi-workshop.com:1389/cn=orcl,cn=OracleContext,dc=DatabaseConnection,dc=com
DB_USER=USER01
DB_USER_PASSWORD=Welcome1
POOL_MIN_CAPACITY=10
POOL_MAX_CAPACITY=100
POOL_INITIAL_CAPACITY=1
POOL_STATEMENT_CACHE=10           
XA_TRANSACTION_TIMEOUT=7200
XA_RETRY_INTERVAL_SECONDS=60
XA_RETRY_DURATION_SECONDS=300
JDBC_DEBUG_LEVEL=10

and then the Python (WLST) script:

#read the domain properties file
try:
  print "Load properties file"
  properties =  os.environ["WEBLOGIC_DOMAIN_DEF_DIR"] + "/" + os.environ["WEBLOGIC_DOMAIN"] + "/domain.properties"
  print properties
  loadProperties(properties)
except :
  print "unable to load domain.properties file"
#  exit(exitcode=1)

try:
  jdbcProperties=os.path.realpath(os.path.dirname(sys.argv[0])) + "/JDBC_Datasource.properties"
  print jdbcProperties
  loadProperties(jdbcProperties)

except :
  print "Unable to load JDBC_Camunda.properties"
  exit(exitcode=1)

#AdminUser=raw_input('Please Enter WebLogic Domain Admin user Name: ')
#AdminPassword= "".join(java.lang.System.console().readPassword("%s", ['Please enter WebLogic Domain Admin user password:']))


try:
    #Connect to AdminServer
    connect(userConfigFile=CONFIG_FILE,userKeyFile=KEY_FILE,url=ADMIN_URL)
    #connect(url=ADMIN_URL)
    #connect(AdminUser,AdminPassword,ADMIN_URL)
    #connect()
    #connect('weblogic','Welcome1')
except:
    print "Unable to connect"
    exit(exitcode=1)
	
try: 
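    # first edit session: create the JDBC system resource and set the JNDI name,
    # driver, LDAP URL, credentials, test query and target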
    edit()
    startEdit()

    cd('/')
    cmo.createJDBCSystemResource(DS_NAME)
    cd('/JDBCSystemResources/'+DS_NAME+'/JDBCResource/'+DS_NAME)
    cmo.setName(DS_NAME)
    cd('/JDBCSystemResources/'+DS_NAME+'/JDBCResource/'+DS_NAME+'/JDBCDataSourceParams/'+DS_NAME)

    print "Setting JNDI Names"
    set('JNDINames',jarray.array([String(JNDIName)], String))
    cd('/JDBCSystemResources/'+DS_NAME+'/JDBCResource/'+DS_NAME)
    cmo.setDatasourceType('GENERIC')
    cd('/JDBCSystemResources/'+DS_NAME+'/JDBCResource/'+DS_NAME+'/JDBCDriverParams/'+DS_NAME)

    print "Setting JDBC URL"
    cmo.setUrl(JDBC_URL)

    print "Setting Driver Name"
    cmo.setDriverName(JDBC_DRIVER)
    
    print "Setting Password"
    set('Password', DB_USER_PASSWORD)
    cd('/JDBCSystemResources/'+DS_NAME+'/JDBCResource/'+DS_NAME+'/JDBCConnectionPoolParams/'+DS_NAME)
    cmo.setTestTableName(TEST_TABLE_QUERY)
    cd('/JDBCSystemResources/'+DS_NAME+'/JDBCResource/'+DS_NAME+'/JDBCDriverParams/'+DS_NAME+'/Properties/'+DS_NAME)
    cmo.createProperty('user')
    cd('/JDBCSystemResources/'+DS_NAME+'/JDBCResource/'+DS_NAME+'/JDBCDriverParams/'+DS_NAME+'/Properties/'+DS_NAME+'/Properties/user')
    cmo.setValue(DB_USER)

    cd('/JDBCSystemResources/'+DS_NAME)
    set('Targets',jarray.array([ObjectName('com.bea:Name='+TARGET+',Type=Server')], ObjectName))
   
    print "Saving and activating changes"
    save()    
    activate()

except Exception, e:
    dumpStack()
    print "ERROR 2... check error messages for cause."
    print e
    dumpStack()
    stopEdit(defaultAnswer='y')
    exit(exitcode=1)
	
try: 
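    # second edit session: re-apply the user property and configure pool sizing,
    # statement cache and the transaction protocol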
    edit()
    startEdit()

    cd('/JDBCSystemResources/'+DS_NAME+'/JDBCResource/'+DS_NAME+'/JDBCDriverParams/'+DS_NAME+'/Properties/'+DS_NAME+'/Properties/user')
    cmo.unSet('SysPropValue')
    cmo.unSet('EncryptedValue')
    cmo.setValue(DB_USER)
    cd('/JDBCSystemResources/'+DS_NAME+'/JDBCResource/'+DS_NAME+'/JDBCConnectionPoolParams/'+DS_NAME)
    cmo.setInitialCapacity(long(POOL_INITIAL_CAPACITY))
    cmo.setMinCapacity(long(POOL_MIN_CAPACITY))
    cmo.setStatementCacheSize(long(POOL_STATEMENT_CACHE))
    cmo.setMaxCapacity(long(POOL_MAX_CAPACITY))
    cmo.setStatementCacheType('LRU')
    cd('/JDBCSystemResources/'+DS_NAME+'/JDBCResource/'+DS_NAME+'/JDBCDataSourceParams/'+DS_NAME)
    cmo.setGlobalTransactionsProtocol('OnePhaseCommit')
       
    save()
    activate()

except Exception, e:
    print "ERROR... check error messages for cause."
    print e
    stopEdit(defaultAnswer='y')
    exit(exitcode=1)
	
exit(exitcode=0)

This script and properties file can be used to create the JDBC connection on one WebLogic Server defined as TARGET in the properties file.

The article "Using LDAP resolved connection descriptor in WebLogic JDBC data sources" first appeared on Blog dbi services.

How to declare TNS entries in Oracle Unified Directory (OUD)

Yann Neuhaus - Sun, 2020-11-29 02:53

Using an LDAP server to store the TNS connection strings can be a single point of declaration for all client tools. Note that it can be a single point of failure too; thus, a High Availability LDAP configuration is recommended for production use. In my case, I was interested in using an LDAP server as the TNS connections repository for the WebLogic Domains Data-Source connections. I used an Oracle Unified Directory (OUD).

The first step is to enable the Oracle Database Net Services in OUD. I chose to create a new Naming Context to isolate the TNS declarations from the users and groups.

Connect to the OUD
Connect to the OUD instance

Create a new Naming Context in the configuration tab. I chose “dc=databaseconnection,dc=com” as the new naming context
Creating a new Naming Context

Enable this new Naming Context to store Oracle Databases net Services
Enabling the New Naming Context for Database Net Services

Move to the OUD data browser and select the OracleContext entry in the dc=databaseconnection,dc=com Naming Context created above.
OUD data browser
This OracleContext entry has been created automatically and, at the same time, some policies were created to allow queries in this OracleContext by anonymous users.

Create a TNS entry to point to the DB. This is done by creating a new entry and selecting the orclNetService object class in the first wizard, then:

  • give it a name
  • fill in the attributes
  • select the attribute used in the DN
  • review the summary
  • check the ORCL entry once created

Test that the TNS entry can be resolved using an ldapsearch request:

[oracle@vm02 ~]$ /u00/app/oracle/product/12.1.0/dbhome_1/bin/ldapsearch -h vm01 -p 1389 -b dc=DatabaseConnection,dc=com cn=orcl
cn=orcl,cn=OracleContext,dc=databaseconnection,dc=com
orclNetDescString=(DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)   (HOST = vm02.dbi-workshop.com)(PORT =1521))) (CONNECT_DATA = (SID = ORCL)))
orclVersion=12.2.0.4
cn=orcl
objectClass=top
objectClass=orclNetService
orclNetDescName=ORCL Demonstration DB

Of course, once the new Naming Context has been created and the Oracle Net Services enabled in it, the same TNS entry can be created using an LDIF file.

[oracle@vm02 ~]$ more orcl.ldif
dn: cn=orcl,cn=OracleContext,dc=databaseconnection,dc=com
objectClass: top
objectClass: orclNetService
orclNetDescString: (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)   (HOST = vm02.dbi-workshop.com)(PORT =1521))) (CONNECT_DATA = (SID = ORCL)))
orclVersion: 12.2.0.4
cn: orcl
orclNetDescName: ORCL Demonstration DB
[oracle@vm02 ~]$ ldapadd -h vm01 -p 1389 -D "cn=Directory Manager" -w ****** -f orcl.ldif
adding new entry cn=orcl,cn=OracleContext,dc=databaseconnection,dc=com

[oracle@vm02 ~]$

Now that we have our LDAP server configured to store TNS connections, the following blog will explain how to configure a WebLogic JDBC Datasource to resolve the database connection using an LDAP definition.

The article "How to declare TNS entries in Oracle Unified Directory (OUD)" first appeared on Blog dbi services.

Documentum – IndexAgent can’t start in normal mode

Yann Neuhaus - Sun, 2020-11-29 02:05

Everybody familiar with Documentum knows that just starting the JBoss/WildFly hosting an IndexAgent isn't really enough to have indexing working: the IndexAgent must be started from the UI (or via DA, or via the job, or via iapi, or automatically via the Repository startup, or …). Starting the IA in “normal mode” is usually something that takes a few seconds. I have faced, a few times, an IA that apparently didn't want to start: whenever the request was submitted, it would just try but never succeed. In this blog, I will try to explain why this happens and what can be done to restore it.

When an IndexAgent starts, it does a few things: it sets up the filters/exclusions, it checks all the configured parameters and finally it communicates with the Repository to do some cleanup. The step most probably causing this “issue” is the last one. What happens is that whenever the IndexAgent is running, it consumes documents for indexing. During this process, it marks some of the items in the dmi_queue_item table as taken into account. However, if the xPlore Server is stopped during the processing of these items, the processing might not be fully completed and therefore there are still in-progress tasks that were cancelled. To avoid non-indexed documents, the very first task of the IndexAgent, even before it is marked as started in normal mode, is therefore to reinitialize the status of these items by putting them back into the queue to be processed. The IndexAgent will never be marked as running if this doesn't complete, and this is what happens whenever you face this issue of an IndexAgent stuck in the start process.
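
A quick way to see whether there are such in-flight items waiting to be reset is to look at the fulltext queue itself, for example with a DQL query along these lines (a sketch, to be run via iapi/idql):

select task_state, count(*)
from   dmi_queue_item
where  name = 'dm_fulltext_index_user'
group by task_state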

To see the details of the start process of an IndexAgent, you can just look into its log file whenever you submit the request. This is an example of a “working” startup:

2020-11-13 14:29:29,765 INFO FtIndexAgent [http--0.0.0.0-9202-3]DM_INDEX_AGENT_START
2020-11-13 14:29:29,808 INFO Context [http--0.0.0.0-9202-3]Filter cabinets_to_exclude value: Temp, System, Resources,
2020-11-13 14:29:29,808 INFO Context [http--0.0.0.0-9202-3]Filter types_to_exclude value: dmi_expr_code, dmc_jar, dm_method, dm_activity, dmc_module, dmc_aspect_type, dm_registered, dm_validation_descriptor, dm_location, dmc_java_library, dm_public_key_certificate, dm_client_registration, dm_procedure, dmc_dar, dm_process, dmc_tcf_activity_template, dm_ftwatermark, dmc_wfsd_type_info, dm_menu_system, dm_plugin, dm_script, dmc_preset_package, dm_acs_config, dm_business_pro, dm_client_rights, dm_cont_transfer_config, dm_cryptographic_key, dm_docbase_config, dm_esign_template, dm_format_preferences, dm_ftengine_config, dm_ftfilter_config, dm_ftindex_agent_config, dm_jms_config, dm_job, dm_mount_point, dm_outputdevice, dm_server_config, dm_xml_application, dm_xml_config, dm_ftquery_subscription, dm_smart_list,
2020-11-13 14:29:29,808 INFO Context [http--0.0.0.0-9202-3]Filter folders_to_exclude value: /Temp/Jobs, /System/Sysadmin/Reports, /System/Sysadmin/Jobs,
2020-11-13 14:29:29,811 INFO AgentInfo [http--0.0.0.0-9202-3]Start
Documentum Index Agent 1.5.0170.0173
Java Version                    1.7.0_72
DFC Version                     7.2.0170.0165
DMCL Version                    7.2.0170.0165
Docbase (Repo01)                7.2.0160.0297  Linux64.Oracle

Start Configuration Information
 Instance
  indexagent_instance_name(AgentInstanceName)=xplore_server01_9200_IndexAgent
  docbase_name(DocbaseName)=Repo01
  docbase_user(DocbaseUser)=
  docbase_domain(DocbaseDomain)=
  runaway_item_timeout(RunawayItemTimeout)=600000
  runaway_thread_timeout(RunawayThreadTimeout)=600000
  parameter_list(InstanceOptionalParams)
 Status
  frequency(StatusFrequency)=5000
  history_size(StatusHistorySize)=20
 Connectors
  class_name(ClassName)=com.documentum.server.impl.fulltext.indexagent.connector.DocbaseNormalModeConnector
  parameter_list(Options)
   parameter=save_queue_items, value=false
   parameter=queue_user, value=dm_fulltext_index_user
   parameter=wait_time, value=60000
   parameter=batch_size, value=1000
  class_name(ClassName)=com.documentum.server.impl.fulltext.indexagent.connector.FileConnector
  parameter_list(Options)
   parameter=wait_time, value=2000
   parameter=batch_size, value=100
   parameter=file_name, value=ids.txt
 Exporter
  queue_size(PrepQSize)=250
  queue_low_percent(PrepQLowPercentage)=90
  wait_time(PrepWaitTime)=100
  thread_count(PrepWorkers)=2
  shutdown_timeout(PrepShutdownTimeout)=60000
  runaway_timeout(RunawayItemTimeout)=600000
  all_filestores_local(areAll_filestores_local)=false
  local_content_area(LocalContentArea)=/data/primary/Indexagent_Repo01/export
  local_filestore_map(LocalFileStoreMap)
  local_content_remote_mount(LocalContentRemoteMount)=null
  content_clean_interval(ContentCleanInterval)=2000000
  keep_dftxml(KeepDftxml)=false
  parameter_list(PrepOptionalParameters)=
   parameter=contentSizeLimit, value=367001600
 Indexer
  queue_size(IndexQSize)=500
  queue_low_percent(IndexQLowPercentage)=90
  queue_size(CallbackQSize)=200
  queue_low_percent(CallbackQLowPercentage)=90
  wait_time(IndexWaitTime)=100
  thread_count(IndexWorkers)=1
  shutdown_timeout(IndexShutdownTimeout)=60000
  runaway_timeout(IndexRunawayTimeout)60000
  partition_config
   default_partition collection_name(DefaultCollection)=null
  partitions(PartitionMap)
 Indexer Plugin Config
  class_name(IndexerClassName)=com.documentum.server.impl.fulltext.indexagent.plugins.enterprisesearch.DSearchFTPlugin
  parameter_list(IndexerParams)
   parameter=dsearch_qrserver_host, value=lb_xplore_server.domain.com
   parameter=query_plugin_mapping_file, value=/app/dctm/server/fulltext/dsearch/dm_AttributeMapping.xml
   parameter=max_tries, value=2
   parameter=max_pending_requests, value=10000
   parameter=load_balancer_enabled, value=true
   parameter=dsearch_qrserver_protocol, value=HTTPS
   parameter=dsearch_qrygen_mode, value=both
   parameter=security_mode, value=BROWSE
   parameter=max_requests_in_batch, value=10
   parameter=dsearch_qrserver_port, value=9302
   parameter=dsearch_config_port, value=9302
   parameter=dsearch_config_host, value=xplore_server01.domain.com
   parameter=max_batch_wait_msec, value=1000
   parameter=dsearch_qrserver_target, value=/dsearch/IndexServerServlet
   parameter=dsearch_domain, value=Repo01
   parameter=group_attributes_exclude_list, value=i_all_users_names
End Configuration Information

2020-11-13 14:29:29,828 INFO ObjectFilter [http--0.0.0.0-9202-3][DM_INDEX_AGENT_CUSTOM_FILTER_INFO] running DQL query: select primary_class from dmc_module where any a_interfaces = 'com.documentum.fc.indexagent.IDfCustomIndexFilter'
2020-11-13 14:29:29,833 INFO ObjectFilter [http--0.0.0.0-9202-3][DM_INDEX_AGENT_CUSTOM_FILTER_INFO] instantiated filter: com.documentum.services.message.impl.type.MailMessageChildFilter
2020-11-13 14:29:29,834 INFO ObjectFilter [http--0.0.0.0-9202-3][DM_INDEX_AGENT_CUSTOM_FILTER_INFO] instantiated filter: com.documentum.services.message.impl.type.MailMessageChildFilter
2020-11-13 14:29:29,834 INFO ObjectFilter [http--0.0.0.0-9202-3][DM_INDEX_AGENT_CUSTOM_FILTER_INFO] instantiated filter: com.documentum.server.impl.fulltext.indexagent.filter.defaultCabinetFilterAction
2020-11-13 14:29:29,834 INFO ObjectFilter [http--0.0.0.0-9202-3][DM_INDEX_AGENT_CUSTOM_FILTER_INFO] instantiated filter: com.documentum.server.impl.fulltext.indexagent.filter.defaultFolderFilterAction
2020-11-13 14:29:29,834 INFO ObjectFilter [http--0.0.0.0-9202-3][DM_INDEX_AGENT_CUSTOM_FILTER_INFO] instantiated filter: com.documentum.server.impl.fulltext.indexagent.filter.defaultTypeFilterAction
2020-11-13 14:29:29,869 INFO defaultFilters [http--0.0.0.0-9202-3]Populated cabinet cache for filter CabinetsToExclude with count 3
2020-11-13 14:29:30,462 INFO defaultFilters [http--0.0.0.0-9202-3]Populated folder id cache for filter FoldersToExclude with count 140
2020-11-13 14:29:30,488 INFO DocbaseNormalModeConnector [http--0.0.0.0-9202-3][DM_INDEX_AGENT_QUERY_BEGIN] update dmi_queue_item objects set task_state = ' ', set sign_off_user = ' ', set dequeued_by = ' ', set message = ' ' where name = 'dm_fulltext_index_user' and task_state = 'acquired' and sign_off_user = 'xplore_server01_9200_IndexAgent'
2020-11-13 14:29:30,488 INFO DocbaseNormalModeConnector [http--0.0.0.0-9202-3][DM_INDEX_AGENT_QUERY_UPDATE_COUNT] 0
2020-11-13 14:29:30,489 INFO ESSIndexer [http--0.0.0.0-9202-3][DM_INDEX_AGENT_PLUGIN] DSS Server host: xplore_server01.domain.com
2020-11-13 14:29:30,489 INFO ESSIndexer [http--0.0.0.0-9202-3][DM_INDEX_AGENT_PLUGIN] DSS Server protocol: HTTPS
2020-11-13 14:29:30,489 INFO ESSIndexer [http--0.0.0.0-9202-3][DM_INDEX_AGENT_PLUGIN] DSS Server port: 9302
2020-11-13 14:29:30,489 INFO ESSIndexer [http--0.0.0.0-9202-3][DM_INDEX_AGENT_PLUGIN] DSS Server domain: Repo01
2020-11-13 14:29:30,502 INFO ESSIndexer [http--0.0.0.0-9202-3][DM_INDEX_AGENT_PLUGIN] Index Server Status: normal

 

When this issue occurs, the lines from the DM_INDEX_AGENT_QUERY_BEGIN update onwards will not appear. As you can see, the DQL query executed is recorded in the log, together with the number of items updated. The "issue" is that if too many items match the WHERE clause (acquired items), this query can take hours to complete, if it completes at all, so it looks as if the start isn't working. Because of how DQL works, this kind of query on thousands of objects or more is very DB intensive and introduces a big performance hit.
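
To gauge whether you are in this situation before restarting anything, a quick count of the acquired items is enough (a small DQL sketch reusing the predicates of the query above; adjust the sign_off_user to your own IndexAgent instance name):

select count(*) from dmi_queue_item where name = 'dm_fulltext_index_user' and task_state = 'acquired' and sign_off_user = 'xplore_server01_9200_IndexAgent'

If that count is in the hundreds of thousands or millions, the startup update shown above is what is hanging.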

How do you end up with hundreds of thousands or even millions of acquired items, you may wonder? Well, each time it happened to me, it was related to huge batches or jobs updating millions of items, or to big migrations/imports of objects. As you know, the events registered in the dmi_registry table trigger the creation of new entries in the dmi_queue_item table. Therefore, whenever you are importing a lot of documents, for example, it is highly recommended to carefully manage the dmi_queue_item table, because it is used a lot inside Documentum for various purposes and can cause huge performance issues. This is especially true when Lifecycles are in the picture, because processes (like ApplyD2Config) will then generate a lot of dm_save events per document and therefore duplicates in the table. I won't go into all the details in this blog but, in short, you can choose to remove the events from the dmi_registry during the import and put them back afterwards (manually indexing the imported documents at the end), or do manual cleanups of the dmi_queue_item table during the process. Unfortunately, if you aren't aware that a huge migration is taking place, the situation can quickly become complicated, with millions and millions of items. The last time I saw something similar, it was an import started "in secret" before the weekend, filling the dmi_queue_item table. The IndexAgent was started and therefore processed them, but it wasn't fast enough. On the Monday morning, we had the pleasant surprise of seeing around 6 million acquired items and 9 million more awaiting…
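
To get an idea of how much of that backlog is just duplicates, a small DQL sketch (using only columns already present in this table) can group the queue by document id:

select item_id, count(*) from dmi_queue_item where name = 'dm_fulltext_index_user' group by item_id having count(*) > 1

Every row returned is a document queued more than once; the sum of the counts minus the number of rows is pure duplicate work.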

I think (to be confirmed) this behavior changed in more recent versions, but this environment was using xPlore 1.5 and there, the IndexAgent may pull new batches of documents for processing even if a lot are already in progress. The xPlore Servers (a Federation) weren't idle at all, they had actually already processed millions of items, but there were just too many to handle and, unfortunately, the IA ended up in a dead end where updating the dmi_queue_item table would take far too long for the processing to ever become effective again. I didn't try to restart the IndexAgent because I knew it would never complete, but I thought this might make an interesting blog post. There is probably a KB on the OpenText site describing this, since it is rather well known.

As you might expect, triggering a DQL query that is supposed to update 6 million rows in a table containing at the very least three times that is not going to complete in any reasonable time. So what can be done then to restore the system performance and allow the IndexAgent to restart properly? DQL isn't well suited to processing huge batches, so your best bet is to go to the Database directly to avoid the Documentum overhead. Instead of executing one single SQL command to update the 6 million items, you should also split it into smaller batches, for example by adding a WHERE clause on the date. That helps tremendously, and it's not something the IndexAgent can do by itself because it has no idea when things started to go south… So which kind of command should be executed? In this case, I wouldn't recommend doing what the IndexAgent does. If you simply reset the status from acquired to awaiting, sure, the IndexAgent will be able to start, but it will still have 6+9 million items awaiting processing; you will still have bad performance and a pretty high probability that the number of acquired items will rise again… Therefore, the only reasonable choice is to export all distinct item_id values from the dmi_queue_item table and then clean/remove all the full-text items. With some luck, you might have 5 or 10 duplicates per document, so instead of indexing 15 million documents it would just be 1 or 2 million (distinct).

An example SQL command to clean up all the items within a one-hour timeframe would be, for Oracle (I would suggest making sure the IA isn't running while messing with the table):

DELETE dmi_queue_item_s
WHERE name='dm_fulltext_index_user'
  AND delete_flag=0
  AND date_sent>=to_date('2020-06-28 22:00:00','YYYY-MM-DD HH24:MI:SS')
  AND date_sent<to_date('2020-06-28 23:00:00','YYYY-MM-DD HH24:MI:SS');
commit;

 

This cleanup can be done online without issue; just make sure you take an export of all distinct item_id values to re-index afterwards, otherwise you will have to execute the FT Integrity utility to find the documents missing from the index. With parallel execution over several DB sessions, the cleanup can actually be done rather quickly, and then it's just background processing for the index, via the ids.txt for example.
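
A hedged sketch of that export for Oracle (the connection string, schema owner and output path below are placeholders, and the exact way to feed the resulting file to the IndexAgent's FileConnector, whose file_name=ids.txt appears in the configuration dump above, depends on your installation):

[oracle@db_server ~]$ sqlplus repo_owner@ORCL
SQL> set heading off feedback off pagesize 0 trimspool on
SQL> spool /tmp/ids.txt
SQL> select distinct item_id from dmi_queue_item_s where name = 'dm_fulltext_index_user';
SQL> spool off

The resulting file contains one object id per line, which is the format the file-based indexing mode expects.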

 

The article Documentum – IndexAgent can’t start in normal mode first appeared on Blog dbi services.

Oracle Fusion Financials R13 - Reports and Analyses

Richard Byrom - Sat, 2020-11-28 17:42

Perhaps the most useful report listing I always have at my disposal is the one Oracle delivers for Financials. It lists all the reports currently available for the following modules:

  • Accounting Hub
  • Advanced Collections
  • Assets
  • Budgetary Control
  • Cash Management
  • Expenses
  • Financials for EMEA
  • Financials for Asia/Pacific
  • Financials for the Americas
  • General Ledger
  • Intercompany Accounting
  • Payables
  • Payments
  • Receivables
  • Revenue Management
  • Tax

Furthermore, it classifies the reports according to type, sub-type and run type, and provides the catalogue location, a description and report output samples.

Download the Oracle Financials Cloud Release 13 Reports and Analyses Listing, which covers reports available from Release 12 up to R13.18A.

Preview of Oracle Fusion Financials R13 - Reports and Analyses Spreadsheet

Documentum – Custom facets not showing up after full reindex?

Yann Neuhaus - Fri, 2020-11-27 16:19

At the beginning of the year, while performing a migration from a Documentum 7.3 environment on VMs to Documentum 16.4 on Kubernetes, a customer had an issue where their custom facets weren’t showing up in D2 after a full reindex. Since xPlore was upgraded as well (from xPlore 1.5 to 16.4, from VM to K8s) as part of the migration, a full reindex was executed at the end so that all documents would be indexed. In this case, several million documents were indexed and it took a few days. Unfortunately, at the end of the full reindex, the customer saw that the facets weren’t working…

Why is that exactly? Well, when configuring custom facets, you need to add a sub-path configuration for the facet computing, and that is a schema change inside the index. Each and every schema change requires, at the very least, an online rebuild of the index so that the schema change is propagated to each and every node of the index. Unless you do this online rebuild, the xPlore index schema will NOT be refreshed and the indexing of documents will therefore keep using the old schema. In case you are wondering what the “online rebuild” I’m talking about is, it’s the action behind the “Rebuild Index” button that you can find in the Dsearch Admin UI under “Home >> Data Management >> <DOMAIN_NAME> (usually the Repository name) >> <COLLECTION_NAME> (e.g.: default or Node1_CPS1 or Node4_CPS2 …)“.

This action does not index any new content; it creates a new index based on the refreshed schema and then copies all the nodes from the current index to the new one. At the end, it replaces the current index with the new one, and this can be done online, without downtime. This button was initially present for both the Data collections (where your documents are) and the ApplicationInfo collections (ACLs, Groups). However, in recent versions of xPlore (at least since 16.4), the feature has been removed for the ApplicationInfo collections.

 

So, what is the minimum required to configure custom facets? The answer is that it depends… :). Here are some examples:

  • If xPlore has never been started, the index doesn’t exist yet, so configuring the facets inside the indexserverconfig.xml file takes effect immediately at the first startup. In this case, an online rebuild isn’t even needed. However, it might not always be easy to modify the indexserverconfig.xml file before xPlore even starts; it depends on how you are deploying the components…
  • If xPlore has been started at least once but indexing hasn’t started yet (0 content inside the Data collections), then you can just log in to the Dsearch Admin UI and perform the online rebuild on the empty collections. This will be almost instantaneous, so you will most probably not even see it happen.
    • If this is a new environment, then make sure the IndexAgent is started in normal mode afterwards so that it processes incoming indexing requests, and that’s it
    • If this is an existing environment, then you will need to execute a full reindex operation using your preferred method (IndexAgent full reindex action, through some select queries, through the ids.txt)
  • If xPlore has been started at least once and the indexing has been completed, then you will need to perform the online rebuild as well. However, this time it will probably take quite some time because, as I mentioned earlier, it needs to copy all the indexed nodes to a new index. This process is normally faster than a full reindex because it only involves xPlore-internal communications, because it only duplicates the existing index (and applies the schema change) and because there is no exchange with the Content Server. Once the online rebuild has been performed, the facets should be available.

 

Even if an online rebuild is faster than a full reindex, depending on the size of the index it might still take hours to days to complete. It is therefore quite important to plan this properly in advance in case of a migration or upgrade, so that you can start with an online rebuild on an empty index (therefore done instantaneously) and then perform the needed full reindex afterwards, instead of the other way around. This might save you several days of pain with your users and considerably reduce the load on the Dsearch/CPS.

This behavior wasn’t really well documented before. I had some exchanges with OpenText on this topic and they created KB15765485 based on these exchanges and on what is described in this blog. I’m not sure it is really better now, but at least there is a little bit more information.

 

The article Documentum – Custom facets not showing up after full reindex? first appeared on Blog dbi services.

Kubernetes with microK8s: First steps to expose a service externally

Dietrich Schroff - Fri, 2020-11-27 15:02

At home I wanted to have my own Kubernetes cluster. I own 2 Raspberry Pis running Ubuntu, so I decided to install microK8s:

--> https://ubuntu.com/blog/what-can-you-do-with-microk8s

The installation is very well explained here:

https://ubuntu.com/tutorials/install-a-local-kubernetes-with-microk8s#1-overview

 

BUT: I found no tutorial on how to run a container and expose its port in a way that it is reachable from other PCs, just like on localhost.

So here we go:

kubectl create deployment web --image=nginx
kubectl expose deployment web --type=NodePort --port=80

After that just do:

# kubectl get all
NAME                      READY   STATUS    RESTARTS   AGE
pod/web-96d5df5c8-5xvfc   1/1     Running   0          112s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.152.183.1    <none>        443/TCP        2d5h
service/web          NodePort    10.152.183.66   <none>        80:32665/TCP   105s

NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/web   1/1     1            1           112s

NAME                            DESIRED   CURRENT   READY   AGE
replicaset.apps/web-96d5df5c8   1         1         1       112s

On your Kubernetes node you can reach the service at 10.152.183.66:80.

To get the nginx page from another PC, just use:

<yourkuberneteshost>:32665
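
The same check can be done from the command line (a small sketch; replace the placeholder with your node's hostname or IP, the port 32665 is the one assigned above):

# read the assigned NodePort directly instead of parsing the table above
kubectl get service web -o jsonpath='{.spec.ports[0].nodePort}'
32665

# fetch the page from any machine that can reach the node
curl -s http://<yourkuberneteshost>:32665 | head -n 4
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>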

For me: (screenshot of the result in the original post)


PostgreSQL 14: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on the fly

Yann Neuhaus - Fri, 2020-11-27 09:52

It is a common misunderstanding that VACUUM FULL saves you from running out of disk space when you already have space pressure. Running a VACUUM FULL temporarily requires at least double the space, as the table (and its indexes) gets completely re-written. PostgreSQL 14 will probably come with a solution for that, as this patch introduces the possibility of moving relations from one tablespace to another when CLUSTER, VACUUM FULL or REINDEX is executed.

As this is about moving relations from one tablespace to another we obviously need at least two tablespaces to play with:

postgres=# \! mkdir /var/tmp/tbs1
postgres=# \! mkdir /var/tmp/tbs2
postgres=# create tablespace tbs1 location '/var/tmp/tbs1';
CREATE TABLESPACE
postgres=# create tablespace tbs2 location '/var/tmp/tbs2';
CREATE TABLESPACE
postgres=# \db
          List of tablespaces
    Name    |  Owner   |   Location    
------------+----------+---------------
 pg_default | postgres | 
 pg_global  | postgres | 
 tbs1       | postgres | /var/tmp/tbs1
 tbs2       | postgres | /var/tmp/tbs2
(4 rows)

Let's assume we have a table in the first tablespace and we face space pressure on that file system:

postgres=# create table t1 ( a int, b date ) tablespace tbs1;
CREATE TABLE
postgres=# insert into t1 select x, now() from generate_series(1,1000000) x;
INSERT 0 1000000

Without that patch there is not much you can do, except for this (which blocks for the duration of the operation):

postgres=# alter table t1 set tablespace tbs2;
ALTER TABLE
postgres=# \d t1
                 Table "public.t1"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           |          | 
 b      | date    |           |          | 
Tablespace: "tbs2"

This will move the files of that table to the new tablespace (but not the indexes). If you really want to get the space back on disk with VACUUM FULL, you can now do that and relocate the table in the same operation:

postgres=# vacuum (tablespace tbs1, full true)  t1;
VACUUM
postgres=# \d t1
                 Table "public.t1"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           |          | 
 b      | date    |           |          | 
Tablespace: "tbs1"

The very same is possible with reindex:

postgres=# create index i1 on t1 (a);
CREATE INDEX
postgres=# reindex (tablespace tbs2) index i1;
REINDEX
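
Again, pg_indexes confirms that the index has moved (a quick check that is not in the original post):

postgres=# select indexname, tablespace from pg_indexes where tablename = 't1';
 indexname | tablespace 
-----------+------------
 i1        | tbs2
(1 row)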

… and cluster:

postgres=# cluster (tablespace tbs1, index_tablespace tbs1) t1 using i1;
CLUSTER
postgres=# \d t1
                 Table "public.t1"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           |          | 
 b      | date    |           |          | 
Indexes:
    "i1" btree (a) CLUSTER, tablespace "tbs1"
Tablespace: "tbs1"

postgres=# 

Nice.

The article PostgreSQL 14: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on the fly first appeared on Blog dbi services.
