Feed aggregator

AWS Lambda : Serverless Compute Service

Online Apps DBA - 14 hours 21 min ago

The next buzz in cloud computing is event-driven computing, and AWS Lambda brings exactly that. Lambda takes the pay-per-use, as-a-service concept much further than EC2 did. It offers pay-per-millisecond computing, a service that is always available without delays, scalable power, no systems maintenance costs, and responses to events coming both from the rest of the AWS services and events generated by […]

The post AWS Lambda : Serverless Compute Service appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

XML Tags Extracting Data where multiple of same tag exists

Tom Kyte - 14 hours 26 min ago
Here is a snippet from some XML that is held in CLOB Column on a database <code><applicationVerificationResponse> .... some data.... <errorsOrWarnings> <errorMessage> <errorMessage> <errorMessage>BV3: Validation Failure INVALID - Sortcode</errorMessage> </errorMessage> <errorMessage> <errorMessage>ErrorID(923024142):System Error Contact Support</errorMessage> </errorMessage> </errorMessage> </errorsOrWarnings> </applicationVerificationResponse></code> What I am trying to do is extract the text values from the lowest node level errorMessage tags, I can't do anything about the XML format as it comes in from another system. I have tried this code <code>SELECT XMLQuery('/applicationVerificationResponse/errorsOrWarnings/errorMessage/errorMessage/errorMessage/text()' PASSING XMLTYPE(xml_data) RETURNING CONTENT) Message FROM my_table WHERE <col1> = <value> AND <col2> = <value>;</code> I get this answer BV3: Validation Failure INVALID - SortcodeErrorID(923024142):System Error Contact Support Whereas I want to get back two rows of data like this BV3: Validation Failure INVALID - Sortcode ErrorID(923024142):System Error Contact Support I've found lots of articles on the Internet but none seem to give me the answer I'm looking for.
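
A row-per-node result like the one described is usually produced with XMLTABLE rather than XMLQuery; here is a minimal sketch against the same table (the bind variables stand in for the <col1>/<col2> filters in the question):

SELECT x.message
FROM   my_table t,
       XMLTABLE('/applicationVerificationResponse/errorsOrWarnings/errorMessage/errorMessage/errorMessage'
                PASSING XMLTYPE(t.xml_data)
                COLUMNS message VARCHAR2(4000) PATH 'text()') x
WHERE  t.col1 = :value1
AND    t.col2 = :value2;
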
Categories: DBA Blogs

Kubernetes: Building Kuard for Raspberry Pi (microk8s / ARM64)

Dietrich Schroff - Sun, 2021-02-28 12:26

 

In one of my last posts (click here) I used KUARD (Kubernetes Up And Running demo) to check the livenessProbes of Kubernetes.

In that post I pulled the image from gcr.io/kuar-demo/kuard-arm64:3.

But what about building this image myself?

First step: get the sources:

git clone https://github.com/kubernetes-up-and-running/kuard.git

Second step: run docker build:

cd kuard/
docker build . -t kuard:localbuild

But this fails with:

Step 13/14 : COPY --from=build /go/bin/kuard /kuard
COPY failed: stat /var/lib/docker/overlay2/60ba596c03e23fdfbca2216f495504fa2533a2f2e8cadd81a764a200c271de86/merged/go/bin/kuard: no such file or directory

What is going wrong here?

Inside the Dockerfile(s) there is ARCH=amd64

Just correct that with "sed -i 's/amd/arm/g' Dockerfile*"

After that the image is built without any problem:

Sending build context to Docker daemon  3.379MB
Step 1/14 : FROM golang:1.12-alpine AS build
 ---> 9d993b748f32
Step 2/14 : RUN apk update && apk upgrade && apk add --no-cache git nodejs bash npm
 ---> Using cache
 ---> 54400a0a06c5
Step 3/14 : RUN go get -u github.com/jteeuwen/go-bindata/...
 ---> Using cache
 ---> afe4c54a86c3
Step 4/14 : WORKDIR /go/src/github.com/kubernetes-up-and-running/kuard
 ---> Using cache
 ---> a51084750556
Step 5/14 : COPY . .
 ---> 568ef8c90354
Step 6/14 : ENV VERBOSE=0
 ---> Running in 0b7100c53ab0
Removing intermediate container 0b7100c53ab0
 ---> f22683c1c167
Step 7/14 : ENV PKG=github.com/kubernetes-up-and-running/kuard
 ---> Running in 8a0f880ea2ca
Removing intermediate container 8a0f880ea2ca
 ---> 49374a5b3802
Step 8/14 : ENV ARCH=arm64
 ---> Running in c6a08b2057d0
Removing intermediate container c6a08b2057d0
 ---> dd871e379a96
Step 9/14 : ENV VERSION=test
 ---> Running in 07e7c373ece7
Removing intermediate container 07e7c373ece7
 ---> 9dabd61d9cd0
Step 10/14 : RUN build/build.sh
 ---> Running in 66471550192c
Verbose: 0

> webpack-cli@3.2.1 postinstall /go/src/github.com/kubernetes-up-and-running/kuard/client/node_modules/webpack-cli
> lightercollective


     *** Thank you for using webpack-cli! ***

Please consider donating to our open collective
     to help us maintain this package.

  https://opencollective.com/webpack/donate

                    ***

added 819 packages from 505 contributors and audited 887 packages in 86.018s
found 683 vulnerabilities (428 low, 4 moderate, 251 high)
  run `npm audit fix` to fix them, or `npm audit` for details

> client@1.0.0 build /go/src/github.com/kubernetes-up-and-running/kuard/client
> webpack --mode=production

Browserslist: caniuse-lite is outdated. Please run next command `npm update caniuse-lite browserslist`
Hash: 52ca742bfd1307531486
Version: webpack 4.28.4
Time: 39644ms
Built at: 02/05/2021 6:48:35 PM
    Asset     Size  Chunks                    Chunk Names
bundle.js  333 KiB       0  [emitted]  [big]  main
Entrypoint main [big] = bundle.js
 [26] (webpack)/buildin/global.js 472 bytes {0} [built]
[228] (webpack)/buildin/module.js 497 bytes {0} [built]
[236] (webpack)/buildin/amd-options.js 80 bytes {0} [built]
[252] ./src/index.jsx + 12 modules 57.6 KiB {0} [built]
      | ./src/index.jsx 285 bytes [built]
      | ./src/app.jsx 7.79 KiB [built]
      | ./src/env.jsx 5.42 KiB [built]
      | ./src/mem.jsx 5.81 KiB [built]
      | ./src/probe.jsx 7.64 KiB [built]
      | ./src/dns.jsx 5.1 KiB [built]
      | ./src/keygen.jsx 7.69 KiB [built]
      | ./src/request.jsx 3.01 KiB [built]
      | ./src/highlightlink.jsx 1.37 KiB [built]
      | ./src/disconnected.jsx 3.6 KiB [built]
      | ./src/memq.jsx 6.33 KiB [built]
      | ./src/fetcherror.js 122 bytes [built]
      | ./src/markdown.jsx 3.46 KiB [built]
    + 249 hidden modules
go: finding github.com/prometheus/client_golang v0.9.2
go: finding github.com/spf13/pflag v1.0.3
go: finding github.com/miekg/dns v1.1.6
go: finding github.com/pkg/errors v0.8.1
go: finding github.com/elazarl/go-bindata-assetfs v1.0.0
go: finding github.com/BurntSushi/toml v0.3.1
go: finding github.com/felixge/httpsnoop v1.0.0
go: finding github.com/julienschmidt/httprouter v1.2.0
go: finding github.com/dustin/go-humanize v1.0.0
go: finding golang.org/x/crypto v0.0.0-20190313024323-a1f597ede03a
go: finding github.com/spf13/viper v1.3.2
go: finding github.com/prometheus/common v0.0.0-20181126121408-4724e9255275
go: finding github.com/prometheus/procfs v0.0.0-20181204211112-1dc9a6cbc91a
go: finding github.com/matttproud/golang_protobuf_extensions v1.0.1
go: finding github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973
go: finding github.com/golang/protobuf v1.2.0
go: finding github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910
go: finding golang.org/x/sync v0.0.0-20181108010431-42b317875d0f
go: finding golang.org/x/net v0.0.0-20181201002055-351d144fa1fc
go: finding golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a
go: finding github.com/hashicorp/hcl v1.0.0
go: finding github.com/spf13/afero v1.1.2
go: finding github.com/coreos/go-semver v0.2.0
go: finding golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9
go: finding github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8
go: finding github.com/fsnotify/fsnotify v1.4.7
go: finding github.com/spf13/jwalterweatherman v1.0.0
go: finding github.com/coreos/etcd v3.3.10+incompatible
go: finding gopkg.in/yaml.v2 v2.2.2
go: finding golang.org/x/text v0.3.0
go: finding github.com/pelletier/go-toml v1.2.0
go: finding github.com/magiconair/properties v1.8.0
go: finding github.com/mitchellh/mapstructure v1.1.2
go: finding github.com/stretchr/testify v1.2.2
go: finding github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6
go: finding golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a
go: finding github.com/coreos/go-etcd v2.0.0+incompatible
go: finding github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77
go: finding github.com/spf13/cast v1.3.0
go: finding github.com/davecgh/go-spew v1.1.1
go: finding gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405
go: finding github.com/pmezard/go-difflib v1.0.0
go: downloading github.com/julienschmidt/httprouter v1.2.0
go: downloading github.com/pkg/errors v0.8.1
go: downloading github.com/miekg/dns v1.1.6
go: downloading github.com/spf13/viper v1.3.2
go: downloading github.com/felixge/httpsnoop v1.0.0
go: downloading github.com/spf13/pflag v1.0.3
go: downloading github.com/prometheus/client_golang v0.9.2
go: extracting github.com/pkg/errors v0.8.1
go: extracting github.com/julienschmidt/httprouter v1.2.0
go: extracting github.com/felixge/httpsnoop v1.0.0
go: extracting github.com/spf13/viper v1.3.2
go: downloading github.com/elazarl/go-bindata-assetfs v1.0.0
go: extracting github.com/elazarl/go-bindata-assetfs v1.0.0
go: extracting github.com/spf13/pflag v1.0.3
go: downloading gopkg.in/yaml.v2 v2.2.2
go: downloading github.com/dustin/go-humanize v1.0.0
go: extracting github.com/miekg/dns v1.1.6
go: downloading github.com/fsnotify/fsnotify v1.4.7
go: downloading github.com/hashicorp/hcl v1.0.0
go: extracting github.com/dustin/go-humanize v1.0.0
go: downloading github.com/magiconair/properties v1.8.0
go: downloading github.com/spf13/afero v1.1.2
go: extracting github.com/fsnotify/fsnotify v1.4.7
go: downloading golang.org/x/net v0.0.0-20181201002055-351d144fa1fc
go: downloading github.com/spf13/jwalterweatherman v1.0.0
go: downloading github.com/spf13/cast v1.3.0
go: extracting github.com/spf13/jwalterweatherman v1.0.0
go: extracting gopkg.in/yaml.v2 v2.2.2
go: extracting github.com/spf13/afero v1.1.2
go: extracting github.com/magiconair/properties v1.8.0
go: extracting github.com/prometheus/client_golang v0.9.2
go: downloading github.com/mitchellh/mapstructure v1.1.2
go: extracting github.com/spf13/cast v1.3.0
go: downloading golang.org/x/text v0.3.0
go: downloading golang.org/x/crypto v0.0.0-20190313024323-a1f597ede03a
go: extracting github.com/mitchellh/mapstructure v1.1.2
go: extracting github.com/hashicorp/hcl v1.0.0
go: downloading github.com/pelletier/go-toml v1.2.0
go: downloading golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a
go: downloading github.com/prometheus/procfs v0.0.0-20181204211112-1dc9a6cbc91a
go: downloading github.com/prometheus/common v0.0.0-20181126121408-4724e9255275
go: extracting github.com/pelletier/go-toml v1.2.0
go: downloading github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910
go: extracting github.com/prometheus/procfs v0.0.0-20181204211112-1dc9a6cbc91a
go: extracting github.com/prometheus/common v0.0.0-20181126121408-4724e9255275
go: downloading github.com/golang/protobuf v1.2.0
go: downloading github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973
go: extracting github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910
go: extracting github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973
go: downloading github.com/matttproud/golang_protobuf_extensions v1.0.1
go: extracting github.com/matttproud/golang_protobuf_extensions v1.0.1
go: extracting github.com/golang/protobuf v1.2.0
go: extracting golang.org/x/crypto v0.0.0-20190313024323-a1f597ede03a
go: extracting golang.org/x/net v0.0.0-20181201002055-351d144fa1fc
go: extracting golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a
go: extracting golang.org/x/text v0.3.0
Removing intermediate container 66471550192c
 ---> 236f3050bc93
Step 11/14 : FROM alpine
 ---> 1fca6fe4a1ec
Step 12/14 : USER nobody:nobody
 ---> Using cache
 ---> cabde1f6b77c
Step 13/14 : COPY --from=build /go/bin/kuard /kuard
 ---> 39e8b0af8cef
Step 14/14 : CMD [ "/kuard" ]
 ---> Running in ca867aeb43ba
Removing intermediate container ca867aeb43ba
 ---> e1cb3fd58eb4
Successfully built e1cb3fd58eb4
Successfully tagged kuard:localbuild

[AZ-400] Microsoft Azure DevOps Training: Step-By-Step Activity Guides/Hands-On Lab Exercises

Online Apps DBA - Sat, 2021-02-27 07:41

Are you looking for information on the Hands-On Labs one should perform to become a Microsoft [AZ-400] Certified Azure DevOps Engineer? If YES, check out K21Academy’s blog post at https://k21academy.com/az40005 that talks about all such Hands-On Labs in detail. Begin your journey towards becoming a Microsoft [AZ-400] Certified Azure DevOps Engineer and earning a lot […]

The post [AZ-400] Microsoft Azure DevOps Training: Step-By-Step Activity Guides/Hands-On Lab Exercises appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Google Associate Cloud Engineer: Step-by-Step Hands-On

Online Apps DBA - Sat, 2021-02-27 06:13

The Google Cloud Associate Cloud Engineer exam is aimed at those who are interested in the fundamental skills of deploying, monitoring, and maintaining projects on Google Cloud and who want to start their career in it. It can be used as a path to professional-level certifications. If you are planning to take the Google Associate Cloud Engineer Certification […]

The post Google Associate Cloud Engineer: Step-by-Step Hands-On appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Rabbit MQ with Docker for Microservices

Andrejus Baranovski - Sat, 2021-02-27 06:02
Rabbit MQ message broker helps to implement event-driven architecture for microservices. Instead of tightly coupling multiple services, we can publish and subscribe to events. In this video, I explain how to dockerize Rabbit MQ and provide a simple but complete example of communication through Rabbit MQ.
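
As a minimal sketch (not necessarily the exact setup from the video), the official management image can be started like this; the container name and published ports are arbitrary choices:

docker run -d --name rabbitmq \
  -p 5672:5672 -p 15672:15672 \
  rabbitmq:3-management

Port 5672 serves AMQP traffic for the services, and 15672 exposes the management UI.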

 

How to use CASE in Select statement

Tom Kyte - Sat, 2021-02-27 01:26
Tom, The query below works fine but it?s not what I want to accomplish. select distinct mp.policy_number, mp.insured_name, mp.policy_eff_date, mp.total_premium_amt, mp.inspection_fee, mp.policy_fee, mp.surplus_lines_fee, mp.stamping_fee, mp.branch, ml.risk_state, fee_5, decode( ml.risk_state, 'NE', 'FIRE_TAX', 'AK', 'FILING_FEE', 'KY', 'KYMUNICIPAL_TAX', 'IL', 'FIRE_MARSHALL_TAX', 'MI', 'REGULATORY_FEE', 'OR', 'FIRE_MARSHALL_TAX', 'VA', 'MAINTENANCE_TAX', 'WV', 'FC_SURCHARGE', 'NJ', 'FIRE_TAX', 'SD', 'FIRE_TAX', 'MT', 'FIRE_TAX', 'FL',CASE WHEN mp.policy_eff_date between '20060101' and '20061231' then 'CITIZEN_TAX' ELSE 'CATASTROPHE_FUND' END), fee_6, decode( ml.risk_state, 'FL', 'EMERGENCY_FUND_FEE', 'OR', 'SL_SERVICE_CHARGE') from mga_policy mp, mga_location ml where mp.surplus_lines_tax is not null and mp.seq_id = ml.seq_id and mp.policy_number = ml.policy_number Ideally I want to create a separate field for each fee/tax depending on decode condition. For example: If risk_state = 'NE' I want the value from fee_5 field be fee_5 as FIRE_TAX If risk_state = 'AK' I want the value from fee_5 field be fee_5 as FILING_FEE And the same for fee_6 etc. How can I accomplish this? Thank you for your help. Regards, Larisa
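
A hedged sketch of the column-per-fee shape described above, using the table and column names from the query (only a few states shown; which table fee_5 and fee_6 belong to is an assumption):

select mp.policy_number,
       mp.insured_name,
       case when ml.risk_state = 'NE' then mp.fee_5 end as fire_tax,
       case when ml.risk_state = 'AK' then mp.fee_5 end as filing_fee,
       case when ml.risk_state = 'FL' then mp.fee_6 end as emergency_fund_fee,
       case when ml.risk_state = 'OR' then mp.fee_6 end as sl_service_charge
from   mga_policy mp
join   mga_location ml
       on  mp.seq_id        = ml.seq_id
       and mp.policy_number = ml.policy_number
where  mp.surplus_lines_tax is not null;
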
Categories: DBA Blogs

Be careful with prepared transactions in PostgreSQL

Yann Neuhaus - Fri, 2021-02-26 09:24

PostgreSQL gives you the possibility of two-phase commit. You might need that if you want an atomic distributed commit. If you check the PostgreSQL documentation there is a clear warning about using this kind of transaction: “Unless you’re writing a transaction manager, you probably shouldn’t be using PREPARE TRANSACTION”. If you really need to use them, you need to be very careful that prepared transactions are committed or rolled back as soon as possible. In other words, you need a mechanism that monitors the prepared transactions in your database and takes appropriate action if they are kept open too long. If this happens you will run into various issues and it is not immediately obvious where they come from.
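
A minimal sketch of such a check is a query against pg_prepared_xacts; the one-hour threshold is an arbitrary assumption and should be adapted to your workload:

postgres=# select gid, prepared, owner, database
postgres-#   from pg_prepared_xacts
postgres-#  where prepared < now() - interval '1 hour';

Anything returned there is a candidate for COMMIT PREPARED or ROLLBACK PREPARED by its gid.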

To start with, lets create a simple prepared transaction:

postgres=# begin;
BEGIN
postgres=*# create table t1 (a int);
CREATE TABLE
postgres=*# insert into t1 values (1);
INSERT 0 1
postgres=*# prepare transaction 'abc';
PREPARE TRANSACTION

From this point on, the transaction is no longer associated with the session. You can verify that easily if you try to commit or roll back the transaction:

postgres=# commit;
WARNING:  there is no transaction in progress
COMMIT

This also means that the “t1” table that was created before we prepared the transaction is not visible to us:

postgres=# select * from t1;
ERROR:  relation "t1" does not exist
LINE 1: select * from t1;
                      ^

Although we are not in any visible transaction anymore, there are locks in the background because of our prepared transaction:

postgres=# select * from pg_locks where database = (select oid from pg_database where datname = 'postgres') and mode like '%Exclusive%';
 locktype | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction | pid |        mode         | granted | fastpath | waitstart 
----------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+-----+---------------------+---------+----------+-----------
 relation |    12969 |    24582 |      |       |            |               |         |       |          | -1/562             |     | RowExclusiveLock    | t       | f        | 
 relation |    12969 |    24582 |      |       |            |               |         |       |          | -1/562             |     | AccessExclusiveLock | t       | f        | 
(2 rows)

There is one AccessExclusiveLock, which is the lock on the “t1” table. The other lock, “RowExclusiveLock”, is the lock that protects the row we inserted above. How can we know that? Well, currently this is only a guess, as the “t1” table is not visible:

postgres=# select relname from pg_class where oid = 24582;
 relname 
---------
(0 rows)

Once we commit the prepared transaction, we can verify that it really was about “t1”:

postgres=# commit prepared 'abc';
COMMIT PREPARED
postgres=# select relname from pg_class where oid = 24582;
 relname 
---------
 t1
(1 row)

postgres=# select * from t1;
 a 
---
 1
(1 row)

We can also confirm that by taking another look at the locks:

postgres=# select * from pg_locks where database = (select oid from pg_database where datname = 'postgres') and mode like '%Exclusive%';
 locktype | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction | pid | mode | granted | fastpath | waitstart 
----------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+-----+------+---------+----------+-----------
(0 rows)

These locks are gone as well. So, not a big deal: as soon as the prepared transaction is committed, all is fine. This is the good case, and if it goes like that you will probably not hit any issue.

Let’s create another prepared transaction:

postgres=# begin;
BEGIN
postgres=*# insert into t1 values(2);
INSERT 0 1
postgres=*# prepare transaction 'abc';
PREPARE TRANSACTION

First point to remember: Once you create a prepared transaction it is fully stored on disk:

postgres=# \! ls -la $PGDATA/pg_twophase/*
-rw------- 1 postgres postgres 212 Feb 26 11:24 /u02/pgdata/DEV/pg_twophase/00000233

Once it is committed the file is gone:

postgres=# commit prepared 'abc';
COMMIT PREPARED
postgres=# \! ls -la $PGDATA/pg_twophase/
total 8
drwx------  2 postgres postgres 4096 Feb 26 11:26 .
drwx------ 20 postgres postgres 4096 Feb 26 10:49 ..

Why is that? The answer is that a prepared transaction can be committed or rolled back even if the server crashes. But this also means that prepared transactions persist across restarts of the instance:

postgres=# begin;
BEGIN
postgres=*# insert into t1 values(3);
INSERT 0 1
postgres=*# prepare transaction 'abc';
PREPARE TRANSACTION
postgres=# \! pg_ctl restart 
waiting for server to shut down.... done
server stopped
waiting for server to start....2021-02-26 11:28:51.226 CET - 1 - 10576 -  - @ LOG:  redirecting log output to logging collector process
2021-02-26 11:28:51.226 CET - 2 - 10576 -  - @ HINT:  Future log output will appear in directory "pg_log".
 done
server started
postgres=# \! ls -la  $PGDATA/pg_twophase/
total 12
drwx------  2 postgres postgres 4096 Feb 26 11:28 .
drwx------ 20 postgres postgres 4096 Feb 26 11:28 ..
-rw-------  1 postgres postgres  212 Feb 26 11:28 00000234

Is that an issue? Imagine someone prepared a transaction and forgot to commit or roll it back. A few days later, someone wants to modify the application and tries to add a column to the “t1” table:

postgres=# alter table t1 add column b text;

This will be blocked for no obvious reason. Looking at the locks once more:

 locktype | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction |  pid  |        mode         | granted | fastpath |           waitstart           
----------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+-------+---------------------+---------+----------+-------------------------------
 relation |    12969 |    24582 |      |       |            |               |         |       |          | 3/4                | 10591 | AccessExclusiveLock | f       | f        | 2021-02-26 11:30:30.303512+01
 relation |    12969 |    24582 |      |       |            |               |         |       |          | -1/564             |       | RowExclusiveLock    | t       | f        | 
(2 rows)

We can see that pid 10591 is trying to get the lock but cannot get it (granted=’f’). The other entry has no pid, and this is the prepared transaction. The pid will always be empty for prepared transactions, so if you already know this, it might point you to the correct solution. If you don’t, you are almost stuck: there is no session you can terminate, and nothing is reported about it in pg_stat_activity:

postgres=# select datid,datname,pid,wait_event_type,wait_event,state,backend_type from pg_stat_activity ;
 datid | datname  |  pid  | wait_event_type |     wait_event      | state  |         backend_type         
-------+----------+-------+-----------------+---------------------+--------+------------------------------
       |          | 10582 | Activity        | AutoVacuumMain      |        | autovacuum launcher
       |          | 10584 | Activity        | LogicalLauncherMain |        | logical replication launcher
 12969 | postgres | 10591 | Lock            | relation            | active | client backend
 12969 | postgres | 10593 |                 |                     | active | client backend
       |          | 10580 | Activity        | BgWriterHibernate   |        | background writer
       |          | 10579 | Activity        | CheckpointerMain    |        | checkpointer
       |          | 10581 | Activity        | WalWriterMain       |        | walwriter
(7 rows)

You will not see any blocking sessions (blocked_by=0):

postgres=# select pid
postgres-#      , usename
postgres-#      , pg_blocking_pids(pid) as blocked_by
postgres-#      , query as blocked_query
postgres-#   from pg_stat_activity
postgres-#   where cardinality(pg_blocking_pids(pid)) > 0;
  pid  | usename  | blocked_by |           blocked_query           
-------+----------+------------+-----------------------------------
 10591 | postgres | {0}        | alter table t1 add column b text;

Even if you restart the instance, the issue will persist. The only solution is to either commit or roll back the prepared transactions:

postgres=# select * from pg_prepared_xacts;
 transaction | gid |           prepared            |  owner   | database 
-------------+-----+-------------------------------+----------+----------
         564 | abc | 2021-02-26 11:28:37.362649+01 | postgres | postgres
(1 row)
postgres=# rollback prepared 'abc';
ROLLBACK PREPARED
postgres=# 

As soon as this completes, the other session will be able to finish its work:

postgres=# alter table t1 add column b text;
ALTER TABLE

Remember: when things look really weird, it might be because you have ongoing prepared transactions.

The post Be careful with prepared transactions in PostgreSQL appeared first on Blog dbi services.

How to insert a character in between 2 digits in a block of 4 digits

Tom Kyte - Fri, 2021-02-26 07:06
Hi Tom, I would like to add a character after 2 digits in a block of 4 digit in PL/SQL. I have to update those records in the table(Input) with (Output) Eg: Input 1234abc5678 Output 12:34abc56:78. Could you please help. Thankyou
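
One hedged way to do this with REGEXP_REPLACE: capture each 4-digit block as two pairs and put a colon between them (my_table/my_col are placeholder names, not from the question):

select regexp_replace('1234abc5678', '([0-9]{2})([0-9]{2})', '\1:\2') as output
from   dual;
-- returns 12:34abc56:78

update my_table
   set my_col = regexp_replace(my_col, '([0-9]{2})([0-9]{2})', '\1:\2');
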
Categories: DBA Blogs

Regexp_replace or replace function to replace every occurance of matching pattern

Tom Kyte - Fri, 2021-02-26 07:06
Hi Tom, Thanks in advance. I am trying to replace naked decimal with '0.' Here is the example. String: '.45ML-.91ML-25MCG/.5ML-.9ML(.3ML)-25.5ML or .45' Every occurrence of naked decimal point should be replaced with '0.' resulting in '0.45ML-0.91ML-25MCG/0.5ML-0.9ML(0.3ML)-25.5ML or 0.45'. Please note 25.5 in the string is not naked decimal and remains as is. I tried to achieve using replace function but am not totally confident of the solution. <code>with str_rec as ( SELECT '.45ML-.91ML-25MCG/.5ML-.9ML(.3ML)-25.5ML or .45' str from dual ) select case when substr(str, 1, 1) = '.' then regexp_replace(replace( replace( replace( replace( replace( str, ' .', ' 0.'), '/.', '/0.'), '\.', '\0.'), '(.', '(0.'), '-.', '-0.'), '[.]', '0.', 1, 1) else replace( replace( replace( replace( replace( str, ' .', ' 0.'), '/.', '/0.'), '\.', '\0.'), '(.', '(0.'), '-.', '-0.') end, regexp_count(str, '[.]', 1, 'i') from str_rec;</code> Can we achieve this using regexp_replace or is there a better way to do this. Thanks
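
A hedged single-expression alternative: treat a '.' as naked when it is at the start of the string or not preceded by a digit, capture that preceding character, and put it back in front of '0.' (Oracle reads '\10.' as backreference \1 followed by a literal '0', since only \1 through \9 are backreferences):

with str_rec as (
  select '.45ML-.91ML-25MCG/.5ML-.9ML(.3ML)-25.5ML or .45' str from dual
)
select regexp_replace(str, '(^|[^0-9])\.', '\10.') as fixed
from   str_rec;
-- 0.45ML-0.91ML-25MCG/0.5ML-0.9ML(0.3ML)-25.5ML or 0.45
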
Categories: DBA Blogs

Designing a Data Model with XML

Tom Kyte - Fri, 2021-02-26 07:06
Hello, Ask TOM Team. I am creating a data model and thinking about create some XMLType columns in Oracle Database 19c because we receive the data in that format and we want to take advantage of this Oracle feature. I have some questions: 1. Developers say that if I create a XMLType column to store the XML data it will have no performance while the data increases. Because of that, they tell me that I should place some columns of the XML in relational model in order to gain performance when querying through the app because XML query is "slow". I say that we can query the XMLType column using XMLQuery and indexing the relevant columns. For example: If I have XML: <code><maintag> <Version>1.0</Version> <column1>value</column1> <column2>value</column2> <column3>value</column3> <column4>value</column4> </maintag></code> They want: <code>create table A column1 varchar(x), column2 varchar(x), XML XMLType</code> What do you think? 2. Are identity columns still useful as PK in tables with XMLType columns? 3. What else should I take into account using XMLType columns? Thanks in advanced. Regards,
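
For question 1, a minimal sketch of the pure-XMLType variant, reusing the element names from the sample document; whether this is fast enough still depends on indexing and data volume:

create table a (
  id       number generated always as identity primary key,
  xml_doc  xmltype
);

select a.id, x.column1, x.column2
from   a,
       xmltable('/maintag'
                passing a.xml_doc
                columns column1 varchar2(50) path 'column1',
                        column2 varchar2(50) path 'column2') x;
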
Categories: DBA Blogs

Need SQL for standardizing the addresses

Tom Kyte - Fri, 2021-02-26 07:06
Hi Tom and Team, I really appreciate your help for all of us. We are in the process of standardizing the addresses of our customers. I have a main t1omer table which contains the customers data. I have a mapping table that contains mapping for final value to multiple values. I need to get the final_value that is correspond to the multiple_values. if the length of the address is more than 35 characters after standardizing, then from previous space(' ') from right side to end of the string, it is created as a address2 field could you please help with sql or pl/sql? using Oracle 12c database. customer Table: <code>cust_id address 10 9 Help Street, Level 4 11 22 Victoria Street 12 1495 Franklin Str 13 30 Hasivim St.,Petah-Tikva 14 2 Jakaranda St 15 61, Science Park Rd 16 61, Social park road 17 Av. Hermanos Escobar 5756 18 Ave. Hermanos Escobar 5756 19 8000 W FLORISSANT AVE 20 8600 MEMORIAL PKWY SW 21 8200 FLORISSANTMEMORIALWAYABOVE SW 22 8600 MEMORIALFLORISSANT PKWY SW create table t1 ( cust_id number, address varchar2(100) ); Insert into t1 values(10,'9 Help Street, Level 4'); Insert into t1 values(11,'22 Victoria Street'); Insert into t1 values(12,'1495 Franklin Str'); Insert into t1 values(13,'61, Science Park Rd'); Insert into t1 values(14,'61, Social park road'); Insert into t1 values(15,'Av. Hermanos Escobar 5756'); Insert into t1 values(16,'Ave. Hermanos Escobar 5756'); Insert into t1 values(17,'8000 W FLORISSANT AVE'); Insert into t1 values(18,'8600 MEMORIAL PKWY SW'); Insert into t1 values(19,'8200 FLORISSANTMEMORIALWAYABOVE SW'); Insert into t1 values(20,'8600 MEMORIALFLORISSANT PKWY SW');</code> -------------- Mapping Table: <code>id final_value multiple_values 1 St Street 2 St St. 3 St Str 4 St St 5 Rd Rd. 6 Rd road 7 Av Av. 8 Av Ave. 9 Av Avenue 10 Av Aven. 11 West W 12 South West SW create table t2 ( id number, final_vaue varchar2(50), multiple_values varchar2(50) ); insert into t2 values(1,'St','Street'); insert into t2 values(2,'St','St.'); insert into t2 values(3,'St','Str'); insert into t2 values(4,'St','St'); insert into t2 values(5,'Rd','Rd.'); insert into t2 values(6,'Rd','road'); insert into t2 values(7,'Av','Av.'); insert into t2 values(8,'Av','Ave.'); insert into t2 values(9,'Av','Avenue'); insert into t2 values(10,'Av','Aven.'); insert into t2 values(11,'West','W'); insert into t2 values(12,'South West','SW.');</code> ------------ Expected Output: <code>cust_id address 10 9 Help St, Level 4 11 22 Victoria St 12 1495 Franklin St 13 30 Hasivim St ,Petah-Tikva 14 2 Jakaranda St 15 61, Science Park Rd 16 61, Social park Rd 17 Av Hermanos Escobar 5756 18 Av Hermanos Escobar 5756 19 8000 West FLORISSANT Ave 20 8600 MEMORIAL PKWY South West</code> if length of the address is more than 35 characters then the expected output is: <code>cust_id address address2 21 8200 FLORISSANTMEMORIALWAYABOVE South West 22 8600 MEMORIALFLORISSANT PKWY South West</code> Thaks for all your help
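
Not a complete answer, but a hedged sketch of the token-by-token mapping part, using the t1/t2 tables exactly as posted (the final_vaue column name is kept as in the DDL); punctuation attached to words (e.g. "Street,") and the 35-character split into address2 are not handled here:

select t1.cust_id,
       listagg(coalesce(t2.final_vaue, tok.word), ' ')
         within group (order by tok.pos) as address_std
from   t1
cross  apply (
         select level as pos,
                regexp_substr(t1.address, '[^ ]+', 1, level) as word
         from   dual
         connect by level <= regexp_count(t1.address, '[^ ]+')
       ) tok
left   join t2
       on upper(tok.word) = upper(t2.multiple_values)
group  by t1.cust_id;
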
Categories: DBA Blogs

Google Cloud Storage Service

Online Apps DBA - Fri, 2021-02-26 05:19

Cloud Storage is a service for storing objects in Google Cloud. An object is an immutable piece of data consisting of a file of any format. These objects are stored in containers called buckets. All buckets are associated with a project, and you can group your projects under an organization. Google Cloud Storage allows world-wide […]

The post Google Cloud Storage Service appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Automate CNOs and VCOs for SQL Server AAG

Yann Neuhaus - Fri, 2021-02-26 04:20

During the installation of a new SQL Server environment in a project, we wanted to automate the whole deployment and configuration process when installing a new SQL Server Always On Availability Group (AAG).
This installation requires prestaging cluster computer objects in Active Directory Domain Services, called Cluster Name Objects (CNOs) and Virtual Computer Objects (VCOs).
For more information on the prestage process, please read this Microsoft article.

In this blog, we will see how to automate the procedure through PowerShell scripts. The ActiveDirectory module is required.

CNO Creation

First, you need an account with the appropriate permissions to create objects in a specific OU of the domain.
With this account, you can create the CNO object as follows:

# To configure following your needs
$Ou1='CNO-VCO';
$Ou2='MSSQL';
$DC1='dbi';
$DC2='test';
$ClusterName='CLST-PRD1';
$ClusterNameFQDN="$($ClusterName).$($DC1).$($DC2)";

# Test if the CNO exists
If (-not (Test-path "AD:CN=$($ClusterName),OU=$($Ou1),OU=$($Ou2),DC=$($DC1),DC=$($DC2)")){
	# Create CNO for Windows Cluster
	New-ADComputer -Name "$ClusterName" `
          -SamAccountName "$ClusterName" `
            -Path "OU=$($Ou1),OU=$($Ou2),DC=$($DC1),DC=$($DC2)" `
              -Description "Failover cluster virtual network name account" `
                 -Enabled $false -DNSHostName $ClusterNameFQDN;

	# Wait for AD synchronization
	Start-Sleep -Seconds 20;
};

Once the CNO is created, we have to configure the correct permissions. We have to give the account we will use for the creation of the Windows Server Failover Cluster (WSFC) the correct Access Control Lists (ACLs) so that it is able to claim the object during the WSFC installation process.

# Group Account use for the installation
$GroupAccount='MSSQL-Admins';

# Retrieve existing ACL on the CNO
$acl = Get-Acl "AD:$((Get-ADComputer -Identity $ClusterName).DistinguishedName)";

# Create a new access rule which will give the installation account Full Control on the object
$identity = ( Get-ADGroup -Identity $GroupAccount).SID;
$adRights = [System.DirectoryServices.ActiveDirectoryRights] "GenericAll";
$type = [System.Security.AccessControl.AccessControlType] "Allow";
$inheritanceType = [System.DirectoryServices.ActiveDirectorySecurityInheritance] "All";
$ACE = New-Object System.DirectoryServices.ActiveDirectoryAccessRule $identity,$adRights,$type,$inheritanceType;

# Add the new access rule to the existing ACL, then set the ACL on the CNO to save the changes
$acl.AddAccessRule($ace); 
Set-acl -aclobject $acl "AD:$((Get-ADComputer -Identity $ClusterName).DistinguishedName)";

Here, our CNO is created disabled, with the correct permissions we require for the installation.
We now need to create its DNS entry, and give the CNO read/write permissions on it.

# Specify the IP address the cluster will use
$IPAddress='192.168.0.2';

# Computer Name of the AD / DNS server name
$ADServer = 'DC01';

Add-DnsServerResourceRecordA -ComputerName $ADServer -Name $ClusterName -ZoneName "$($DC1).$($DC2)" -IPv4Address $IPAddress;

# Retrieve ACL for DNS record
$acl = Get-Acl "AD:$((Get-DnsServerResourceRecord -ComputerName $ADServer -ZoneName "$($DC1).$($DC2)" -Name $ClusterName).DistinguishedName)";

# Retrieve SID identity for CNO to update in ACL
$identity = (Get-ADComputer -Identity $ClusterName).SID;

# Construct ACE for Generic Read
$adRights = [System.DirectoryServices.ActiveDirectoryRights] "GenericRead";
$type = [System.Security.AccessControl.AccessControlType] "Allow";
$inheritanceType = [System.DirectoryServices.ActiveDirectorySecurityInheritance] "All";
$ACE = New-Object System.DirectoryServices.ActiveDirectoryAccessRule $identity,$adRights,$type,$inheritanceType;
$acl.AddAccessRule($ace);

# Construct ACE for Generic Write
$adRights = [System.DirectoryServices.ActiveDirectoryRights] "GenericWrite";
$type = [System.Security.AccessControl.AccessControlType] "Allow";
$inheritanceType = [System.DirectoryServices.ActiveDirectorySecurityInheritance] "All";
$ACE = New-Object System.DirectoryServices.ActiveDirectoryAccessRule $identity,$adRights,$type,$inheritanceType;
$acl.AddAccessRule($ace);

#Update ACL for DNS Record of the CNO
Set-acl -aclobject $acl "AD:$((Get-DnsServerResourceRecord  -ComputerName $ADServer  -ZoneName "$($DC1).$($DC2)" -Name $ClusterName).DistinguishedName)";

At this point, the installation process will be able to claim the CNO while creating the new cluster.
The prestaging of the CNO object is complete.
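
As a quick optional sanity check (a sketch, not part of the original procedure), you can confirm that the object exists, is still disabled, and carries the new access rule:

# Hypothetical verification of the prestaged CNO
Get-ADComputer -Identity $ClusterName | Select-Object Name, Enabled, DNSHostName;

# List the identities that now hold GenericAll on the CNO
(Get-Acl "AD:$((Get-ADComputer -Identity $ClusterName).DistinguishedName)").Access |
    Where-Object { $_.ActiveDirectoryRights -match 'GenericAll' } |
    Select-Object IdentityReference, ActiveDirectoryRights, AccessControlType;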

VCO Creation

The creation of the VCO, used by the AAG for its Listener, is quite similar.
As there is no additional complexity compared to the creation of the CNO, here is the whole code:

# To configure following your needs
$Ou1='CNO-VCO';
$Ou2='MSSQL';
$DC1='dbi';
$DC2='test';
$ListenerName='LSTN-PRD1';
$ListenerNameFQDN="$($ListenerName).$($DC1).$($DC2)";
$IPAddress='192.168.0.3';
$ADServer = 'DC01';

If (-not (Test-path "AD:CN=$($ListenerName),OU=$($Ou1),OU=$($Ou2),DC=$($DC1),DC=$($DC2)")){
	# Create VCO for AAG
	New-ADComputer -Name "$ListenerName" -SamAccountName "$ListenerName" -Path "OU=$($Ou1),OU=$($Ou2),DC=$($DC1),DC=$($DC2)" -Description "AlwaysOn Availability Group Listener Account" -Enabled $false -DNSHostName $ListenerNameFQDN;

	# Wait for AD synchronization
	Start-Sleep -Seconds 20;
};

# Retrieve existing ACL on the VCO
$acl = Get-Acl "AD:$((Get-ADComputer -Identity $ListenerName).DistinguishedName)";

# Create a new access rule which will give the CNO account Full Control on the object
$identity = (Get-ADComputer -Identity $ClusterName).SID;
$adRights = [System.DirectoryServices.ActiveDirectoryRights] "GenericAll";
$type = [System.Security.AccessControl.AccessControlType] "Allow";
$inheritanceType = [System.DirectoryServices.ActiveDirectorySecurityInheritance] "All";
$ACE = New-Object System.DirectoryServices.ActiveDirectoryAccessRule $identity,$adRights,$type,$inheritanceType;

# Add the ACE to the ACL, then set the ACL to save the changes
$acl.AddAccessRule($ace);
Set-acl -aclobject $acl "AD:$((Get-ADComputer -Identity $ListenerName).DistinguishedName)";

# Create a new DNS entry for the Listener
Add-DnsServerResourceRecordA -ComputerName $ADServer -Name $ListenerName -ZoneName "$($DC1).$($DC2)" -IPv4Address $IPAddress;

# We have to give the CNO access to the DNS record
$acl = Get-Acl "AD:$((Get-DnsServerResourceRecord -ComputerName $ADServer -ZoneName "$($DC1).$($DC2)" -Name $ListenerName).DistinguishedName)";

# Retrieve SID identity for CNO to update in ACL
$identity = (Get-ADComputer -Identity $ClusterName).SID;

# Construct ACE for Generic Read
$adRights = [System.DirectoryServices.ActiveDirectoryRights] "GenericRead";
$type = [System.Security.AccessControl.AccessControlType] "Allow";
$inheritanceType = [System.DirectoryServices.ActiveDirectorySecurityInheritance] "All";
$ACE = New-Object System.DirectoryServices.ActiveDirectoryAccessRule $identity,$adRights,$type,$inheritanceType;
$acl.AddAccessRule($ace);

# Construct ACE for Generic Write
$adRights = [System.DirectoryServices.ActiveDirectoryRights] "GenericWrite";
$type = [System.Security.AccessControl.AccessControlType] "Allow";
$inheritanceType = [System.DirectoryServices.ActiveDirectorySecurityInheritance] "All";
$ACE = New-Object System.DirectoryServices.ActiveDirectoryAccessRule $identity,$adRights,$type,$inheritanceType;
$acl.AddAccessRule($ace);

#Update ACL for DNS Record of the CNO
Set-acl -aclobject $acl "AD:$((Get-DnsServerResourceRecord  -ComputerName $ADServer -ZoneName "$($DC1).$($DC2)" -Name $ListenerName).DistinguishedName)";

In this blog, we saw how to automate the creation and configuration of CNOs and VCOs in AD/DNS.
This is useful when you have several clusters to install and several Listeners to configure, and you want to avoid mistakes while saving time.

The post Automate CNOs and VCOs for SQL Server AAG appeared first on Blog dbi services.

export

Tom Kyte - Thu, 2021-02-25 12:46
Hi tom, Thanks a lot for taking time to answer questions. In Oracle database 9.2.0.4 for windows 32-bit. I could run exp without any problems. But when I run export with query="where ..." I got a lot of: "exp-00091 exporting questionable statistics" The question is: why this happens? and how to fix it? Thanks a lot.
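
One commonly used workaround (a sketch, with placeholder object names) is to tell exp not to export optimizer statistics at all, for example via a parameter file to avoid shell quoting issues with the QUERY clause. Contents of emp.par:

tables=emp
query="where deptno=10"
statistics=none
file=emp.dmp
log=emp.log

Then run:

exp scott/tiger parfile=emp.par
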
Categories: DBA Blogs

Want to have 1000+ values in IN operator

Tom Kyte - Thu, 2021-02-25 12:46
Hi there, I am facing an issue wherein i need to have more than 1000 values in IN Operator of query. Please suggest some way around for the same .here is the code snippet for you. Note: RMID IN can have 1000+values .Also had tried using OR RMID IN ()-for values greater than 999 in number .Not sure of performance .Kindly help <code>select count ( case when attandingtype = 1 then '1' end ) as existing_hp_count, count ( case when attandingtype = 2 then '1' end ) as new_hp_count, count ( case when attandingtype = 3 then '1' end ) as policy_count from worktracker where trunc (calldate) between add_months ( trunc (sysdate), -3 ) and trunc (sysdate) and rmid in ('1001', '1212');</code> Thanks!
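
One commonly suggested workaround (a sketch; sys.odcivarchar2list is a built-in collection type) is to pass the values as a collection and join via TABLE(), which is not subject to the 1000-literal limit of an IN list:

select count(case when attandingtype = 1 then '1' end) as existing_hp_count,
       count(case when attandingtype = 2 then '1' end) as new_hp_count,
       count(case when attandingtype = 3 then '1' end) as policy_count
from   worktracker
where  trunc(calldate) between add_months(trunc(sysdate), -3) and trunc(sysdate)
and    rmid in (select column_value
                from   table(sys.odcivarchar2list('1001', '1212' /* , ... */)));

In practice the collection would usually be bound from the calling application rather than written out as literals.
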
Categories: DBA Blogs

Procedure to create sporting fixture failing to respect condition query against the same table

Tom Kyte - Thu, 2021-02-25 12:46
Hi TOM, I'm trying to generate a sporting fixture from a list of all possible matchups (ALL_POSSIBLE_FIXTURES - which contains each team playing at home against its opponent, and the reverse of that - so AAA (home) v BBB (away) and BBB (home) v AAA (away) are distinct and valid potential matches), with a few conditions. There are 18 teams, playing each other once over 17 rounds with 9 games in each round. Teams can't appear twice in the same round, nor can they play more than 9 games either home or away over the 17 rounds. And ideally, they can't play home or away twice in a row. The way I have tried to solve this is by creating a procedure to loop through the number of rounds and matches, inserting matches that align with the defined conditions into that table, and then using that table to then define the future appropriate insertions. For example, statement 3 in the LiveSQL contains the following: <code> FOR ROUND_NO IN 1 .. 17 LOOP FOR MATCH_ID IN 1 .. 9 LOOP INSERT INTO FIXTURE(HOME_TEAM, AWAY_TEAM, ROUND) SELECT HOME_TEAM, AWAY_TEAM, ROUND_NO AS ROUND FROM ( SELECT HOME_TEAM, AWAY_TEAM, DBMS_RANDOM.VALUE AS RND FROM ALL_POSSIBLE_FIXTURES WHERE -- this match cannot be in this round (HOME_TEAM||AWAY_TEAM NOT IN (SELECT HOME_TEAM||AWAY_TEAM FROM FIXTURE WHERE ROUND = ROUND_NO)) AND (AWAY_TEAM||HOME_TEAM NOT IN (SELECT AWAY_TEAM||HOME_TEAM FROM FIXTURE WHERE ROUND = ROUND_NO)) AND -- the teams cannot already be playing each other in this configuration (HOME_TEAM||AWAY_TEAM NOT IN (SELECT HOME_TEAM||AWAY_TEAM FROM FIXTURE)) AND (AWAY_TEAM||HOME_TEAM NOT IN (SELECT AWAY_TEAM||HOME_TEAM FROM FIXTURE)) AND -- the teams cannot have had two home games in the last two rounds (***** NB. This doesnt appear to be working *****) (HOME_TEAM NOT IN (SELECT HOME_TEAM FROM FIXTURE WHERE ROUND > ROUND_NO-2 GROUP BY HOME_TEAM HAVING COUNT(*) > 1)) AND (AWAY_TEAM NOT IN (SELECT AWAY_TEAM FROM FIXTURE WHERE ROUND > ROUND_NO-2 GROUP BY AWAY_TEAM HAVING COUNT(*) > 1)) AND -- these teams cannot be scheduled to play more than 11 games home or away (HOME_TEAM NOT IN (SELECT HOME_TEAM FROM FIXTURE GROUP BY HOME_TEAM HAVING COUNT(*) > 9)) AND (AWAY_TEAM NOT IN (SELECT AWAY_TEAM FROM FIXTURE GROUP BY AWAY_TEAM HAVING COUNT(*) > 9)) AND -- these teams cannot already be in this round, either home or away (HOME_TEAM NOT IN (SELECT HOME_TEAM FROM FIXTURE WHERE ROUND = ROUND_NO)) AND (AWAY_TEAM NOT IN (SELECT AWAY_TEAM FROM FIXTURE WHERE ROUND = ROUND_NO)) AND (HOME_TEAM NOT IN (SELECT AWAY_TEAM FROM FIXTURE WHERE ROUND = ROUND_NO)) AND (AWAY_TEAM NOT IN (SELECT HOME_TEAM FROM FIXTURE WHERE ROUND = ROUND_NO)) ORDER BY 3 ) WHERE ROWNUM = 1; COMMIT; END LOOP; END LOOP; </code> But while it works fairly well, I am having trouble getting it to respect the home/away sequence condition - <code> (HOME_TEAM NOT IN (SELECT HOME_TEAM FROM FIXTURE WHERE ROUND > ROUND_NO-2 GROUP BY HOME_TEAM HAVING COUNT(*) > 1)) </code> As can be seen from the sample output (which is random and will change when run again) where <b>team AAA is the home team in three successive rounds in rounds 2-4</b>. I would have thought that at least by round 4, this home team would appear as having a count > 1 in the last two rounds, and therefore not be a valid selection for the home team in a round 4 match. But as you can see, that's not the case. Can someone explain to me what is going on here? Thanks a lot, Andrew EDIT: On request - Further information about ALL_POSSIBLE_FIXTURES table
Categories: DBA Blogs

Oracle Database Appliance: what have you missed since X3/X4/X5?

Yann Neuhaus - Thu, 2021-02-25 09:48
Introduction

ODA started to become popular with the X3-2 and X4-2 in 2013/2014. These 2 ODAs were very similar. The X5-2 from 2015 was different, with 3.5-inch disks instead of 2.5-inch and additional SSDs for small databases (FLASH diskgroup). All these 3 ODAs were running 11gR2 and 12cR1 databases and were managed by the oakcli binary. If you’re still using these old machines, you should know that there are a lot of differences compared to modern ODAs. Here is an overview of what has changed on these appliances.

Single-node ODAs

Starting from X6, ODAs are also available in “lite” versions, meaning single-node ODAs. The benefits are real: way cheaper than 2-node ODAs (now called High Availability ODAs), no need for RAC complexity, easy plug-in (power supply and network, and that’s it), cheaper Disaster Recovery, faster deployment, etc. Most of the ODAs sold today are single-node ODAs, as Real Application Clusters is becoming less and less popular. Today, ODA’s family is composed of 2 lite versions, X8-2S and X8-2M, and one HA version, X8-2HA.

Support for Standard Edition

Up to X5, ODAs only supported Enterprise Edition, meaning that the base price was most likely a 6-digit figure in $/€/CHF if you packed the server with 1 EE PROC license. With Standard Edition, the base price is “only” one third of that (X8-2S with 1 SE2 PROC license).

Full SSD storage

I/Os have always been a bottleneck for databases. X6 and later ODAs are mainly full SSD servers. “Lite” ODAs only run on NVMe SSDs (the fastest storage solution for now), and HA ODAs are available in both configurations: SSD (High Performance) or a mix of SSD and HDD (High Capacity), the latter being quite rare. Even the smallest ODA X8-2S with only 2 NVMe SSDs will be faster than any disk-based ODA.

Higher TeraByte density and flexible disk configuration

For sure, comparing a 5-year-old ODA to X8 is not fair, but ODA X3 and X4 used to pack 18TB in 4U, whereas ODA X8-2M packs up to 75TB in 2U. Some customers didn’t choose ODA 5 years ago because of the limited capacity; that is no longer a concern today.

Another point is that storage configuration is more flexible. With ODA X8-2M you are able to add disks in pairs, and with ODA X8-2HA you can add 5-disk packs. There is no longer a need to double the capacity in one step as we did on X3/X4/X5 (and you could only do it once).

Furthermore, you can now choose an accurate disk split between DATA and RECO (+/-1%) compared to the DATA/RECO options on X3-X4-X5: 40/60 or 80/20.

Web GUI

A real appliance needs a real GUI. X6 introduced the ODA Web GUI, a basic GUI for basic ODA functions (mainly dbhome and database creation and deletion), and this GUI has become more and more capable over the past years. Although some actions are still missing, the GUI is now quite powerful and also user-friendly. And you can still use the command line (odacli) if you prefer.

Smart management

ODA now has a repository, and everything is ordered and referenced in that repository: each database, dbhome, network and job is identified with a unique id. And all tasks are background jobs with a verbose status.

Next-gen virtualization support

With the old HA ODAs you had to choose between bare-metal mode and virtualized mode, the latter being for running additional virtual machines for purposes other than databases. But the databases were then also running in a single dedicated VM. Virtualized mode relied on OVM technology, soon deprecated and now replaced with OLVM. OLVM brings both the advantages of a virtualized ODA (running additional VMs) and of a bare-metal ODA (running databases on bare metal). And it relies on KVM instead of Xen, which is better because it’s part of the Linux operating system.

Data Guard support

It’s quite a new feature, but it’s already a must-have. The command line interface (odacli) is now able to create and manage a Data Guard configuration, and even perform the duplicate and the switchover/failover. It’s so convenient that it’s a key benefit of the ODA compared to other platforms. Please have a look at this blogpost for a test case. If you’re used to configuring Data Guard manually, you will probably appreciate this feature a lot.
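
As an illustration only (command names taken from the 19.x odacli documentation; exact options vary by release and the id is a placeholder), the workflow is driven by commands such as:

odacli configure-dataguard
odacli list-dataguardstatus
odacli switchover-dataguard -i <dataguard_config_id>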

Performance

ODA has always been a great challenger compared to other platforms. On modern ODAs, NVMe SSDs associated with high-speed cores (as long as you limit the number of cores in use on the ODA to match your license – see how to do this in an earlier post) make the ODA a strong performer, even compared to EXADATA. Don’t miss that point: your databases will probably run better on ODA than on anything else.

Conclusion

If you’re using Oracle databases, you should probably put ODA back on your short list. It’s not the perfect solution, and some configurations cannot be addressed by ODA, but it brings many more advantages than drawbacks. And now there is a complete range of models for each need. If your next infrastructure is not in the Cloud, it will probably be built on ODAs.

The post Oracle Database Appliance: what have you missed since X3/X4/X5? appeared first on Blog dbi services.

SQL Server: Control the size of your Transaction Log file with Resumable Index Rebuild

Yann Neuhaus - Thu, 2021-02-25 08:52
Introduction

In this blog post, I will demonstrate how the Resumable capability of the online index rebuild operation can help you keep the transaction log file size under control.

An index rebuild operation is done in a single transaction that can require significant log space. When doing a rebuild on a large index, the transaction log file can grow until you run out of disk space.
On failure, the transaction needs to roll back. You end up with a large transaction log file, no free space on your transaction log file volume, and an index that is not rebuilt.

Since SQL Server 2017, with Enterprise Edition, we can use the Resumable option of the online index rebuild operation to try to keep the transaction log file size under control.

Demo

For the demo, I’ll use the AdventureWorks database with Adam Machanic’s bigAdventures tables.

Index rebuild Log usage

My transaction log file size is 1 GB and it’s empty.

USE [AdventureWorks2019]
go
select total_log_size_in_bytes/1024/1024 AS TotalLogSizeMB
	, (total_log_size_in_bytes - used_log_space_in_bytes)/1024/1024 AS FreeSpaceMB
    , used_log_space_in_bytes/1024./1024  as UsedLogSpaceMB,
    used_log_space_in_percent
from sys.dm_db_log_space_usage;

I now rebuild the index on bigTransactionHistory.

ALTER INDEX IX_ProductId_TransactionDate ON bigTransactionHistory REBUILD
	WITH (ONLINE=ON);


I had a few autogrowth events bringing my file to 3583 MB. The log space required to rebuild this index is about 3500 MB.

Now, let’s say I want to limit my transaction log file to 2 GB.

Index rebuild script

First, I build a table that contains the list of indexes I have to rebuild during my maintenance window. For demo purposes, it’s a very simple one:

select *
from IndexToMaintain;
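
The table itself is not shown here (it was displayed as output); a hedged guess at its shape, based on the columns the rebuild script below references:

CREATE TABLE dbo.IndexToMaintain (
    id            int IDENTITY(1,1) PRIMARY KEY,
    DatabaseName  sysname NOT NULL,
    TableName     sysname NOT NULL,
    IndexName     sysname NOT NULL,
    RebuildStatus bit NOT NULL DEFAULT 0
);

INSERT INTO dbo.IndexToMaintain (DatabaseName, TableName, IndexName)
VALUES ('AdventureWorks2019', 'bigTransactionHistory', 'IX_ProductId_TransactionDate');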

The idea is to go through all the indexes to rebuild and start a rebuild with the option RESUMABLE=ON.
When a rebuild is done, the value of the RebuildStatus column is updated to 1.

Here is the code:

WHILE (select Count(*) from IndexToMaintain where RebuildStatus = 0) > 0
BEGIN
	DECLARE @rebuild varchar(1000)
		, @DatabaseName varchar(1000)
		, @TableName varchar(1000)
		, @IndexName varchar(1000)
		, @id int

	select @DatabaseName = DatabaseName
		, @TableName = TableName
		, @IndexName = IndexName
		, @id = id
	from IndexToMaintain 
	where RebuildStatus = 0;

	SET @rebuild = CONCAT('ALTER INDEX ', @IndexName, ' ON ',@DatabaseName, '.dbo.', @TableName, ' REBUILD WITH (ONLINE=ON, RESUMABLE=ON);')
	
	exec(@rebuild)

	UPDATE IndexToMaintain SET RebuildStatus = 1 where id = @id;
END

The commands executed will look like this.

ALTER INDEX IX_ProductId_TransactionDate ON bigTransactionHistory REBUILD
	WITH (ONLINE=ON, RESUMABLE=ON);

The Job is scheduled to be run at a “high” frequency (depending on the file size) during the defined maintenance window. For example, it could be every 5 minutes between 1am and 3am.

We don’t need to use ALTER INDEX with RESUME to resume an index rebuild; we can just execute the original ALTER INDEX command again, as it appears in the DMV. It’s very useful and simplifies this kind of script.
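
For reference, the explicit resume syntax also works; the MAX_DURATION option is optional and shown here only as an assumption:

ALTER INDEX IX_ProductId_TransactionDate ON bigTransactionHistory
    RESUME WITH (MAX_DURATION = 30 MINUTES);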

Alert on Log Space usage

To contain the transaction log file size, I create an Agent Alert that will be triggered when the file is 50% used. In response to this alert, it will execute another job with 2 steps.

The first one checks the DMV index_resumable_operations for any running resumable index operation and pauses it.

IF EXISTS (
	select *
	from AdventureWorks2019.sys.index_resumable_operations
	where state_desc = 'RUNNING'
)
BEGIN
	DECLARE @sqlcmd varchar(1000)	
	select @sqlcmd=CONCAT('ALTER INDEX ', iro.name, ' ON ', OBJECT_NAME(o.object_id), ' PAUSE;')
	from AdventureWorks2019.sys.index_resumable_operations AS iro
		join sys.objects AS o
			on iro.object_id = o.object_id
	where iro.state_desc = 'RUNNING';

	EXEC(@sqlcmd)
END

The second step will then perform a Log backup to free up the transaction log space inside the file.

DECLARE @backupFile varchar(1000) 
SET @backupFile = 'C:\Backup\AdventureWorks2019_'+replace(convert(varchar(20),GetDate(),120), ':', '_')+'.trn' 
BACKUP LOG AdventureWorks2019 TO DISK = @backupFile

The command to be executed by this Job:

ALTER INDEX IX_ProductId_TransactionDate ON bigTransactionHistory PAUSE;
Running the Rebuild

I set the RebuildStatus value for my index to 0 and enable the job (scheduled to run every minute). It starts to run at 13:04.
As we can see in the job history, the index rebuild job ran twice (around 23s each time) with a failed status. This means that during the rebuild it was stopped by the other job doing a PAUSE followed by a log backup.
The third time it ran, it could finish rebuilding the index, set the RebuildStatus to 1, and quit successfully. The job triggered by the alert ran twice, and two transaction log backups were performed. While doing the rebuild, we managed to keep the transaction log file at a 2GB size, compared to the 3.5GB it would use without the Resumable feature.

Conclusion

This demo was just an example of how the resumable option of index rebuild can be used to contain the transaction log file size during index maintenance.
Obviously, this solution is not usable as-is in production. You will find the code on my GitHub if you want to play with it.
I hope you found this blog interesting. Feel free to give me feedback in the comments below.

 

The post SQL Server: Control the size of your Transaction Log file with Resumable Index Rebuild appeared first on Blog dbi services.

AWS Serverless Application Model: Complete Solution For Serverless Apps

Online Apps DBA - Thu, 2021-02-25 05:31

It won’t surprise anyone to say that the cloud train left the station a while ago and is gaining more and more speed every day. Because this trend doesn’t seem to stop at all, you need to be able to act fast on your changing environment to meet your customers’ demand with […]

The post AWS Serverless Application Model: Complete Solution For Serverless Apps appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs
