Feed aggregator

Inject variable value

Tom Kyte - Fri, 2021-06-11 17:06
Hi guys. I have a piece of code like this: <code> declare my_var number := 4; v_stmt varchar2(4000); begin select rule into v_stmt from rules where rule_id = 1; -- v_stmt -> 'begin :a := my_function(my_var, 1); end;' execute immediate v_stmt; end; / </code> Is there a way to inject the my_var value when executing v_stmt?
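A hedged sketch, not from the original question: if the stored rule text uses bind placeholders instead of a hard-coded identifier, EXECUTE IMMEDIATE can pass the value through its USING clause. The rewritten rule text, function name and variable names below are illustrative assumptions.

<code>
declare
  my_var number := 4;
  result number;
  v_stmt varchar2(4000);
begin
  -- assumed rewritten rule text: 'begin :a := my_function(:b, 1); end;'
  select rule into v_stmt from rules where rule_id = 1;
  -- binds are positional: :a receives the function result, :b receives my_var
  execute immediate v_stmt using out result, in my_var;
end;
/
</code>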
Categories: DBA Blogs

SQL Loader header row attribute cannot be applied to detail rows in the event of multiple header/detail transactions

Tom Kyte - Fri, 2021-06-11 17:06
How can Oracle SQL*Loader load a header-row value into its related detail rows when the file contains multiple header/detail groups? The issue is that a header-row attribute cannot be applied to its related detail rows when there are multiple header/detail transactions. It does work for a single header: the header value is evaluated once, for the very first header, and is then applied to all detail rows, which is not what I need. I need each header-row attribute value applied to its own child detail rows. Here is the example. Each <b>H - Header</b> record attribute value (<b>1001</b>, <b>1002</b> and <b>1003</b>) needs to be stamped onto each respective detail record while loading via SQL*Loader. H ABC 1001 D XYZ 89.90 D XYZ 89.91 D XYZ 89.92 H ABC 1002 D XYZ 89.90 D XYZ 89.91 D XYZ 89.92 H ABC 1003 D XYZ 89.90 D XYZ 89.91 D XYZ 89.92 The expected result in the database table after SQL*Loader completes, which does not happen today, is as follows. Any suggestions! H ABC 1001 D XYZ 89.90 1001 D XYZ 89.91 1001 D XYZ 89.92 1001 H ABC 1002 D XYZ 89.90 1002 D XYZ 89.91 1002 D XYZ 89.92 1002 H ABC 1003 D XYZ 89.90 1003 D XYZ 89.91 1003 D XYZ 89.92 1003 Thank you.
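One possible workaround, sketched below and not from the original question: SQL*Loader evaluates each record independently, so a common approach is to load the file into a staging table that preserves file order (for example with a SEQUENCE(COUNT) column in the control file) and then carry each header value down to its details with LAST_VALUE ... IGNORE NULLS. Table and column names here are illustrative assumptions.

<code>
-- staging table; seq preserves the file order (e.g. loaded via SEQUENCE(COUNT))
create table stg (seq number, rec_type varchar2(1), name varchar2(3), val varchar2(20));

-- stamp each header value onto the detail rows that follow it
select rec_type, name, val, header_val
  from (select s.*,
               last_value(case when rec_type = 'H' then val end ignore nulls)
                 over (order by seq) as header_val
          from stg s)
 where rec_type = 'D';
</code>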
Categories: DBA Blogs

Impact of proper column precision for analytical queries

The Oracle Instructor - Fri, 2021-06-11 04:33

Does it matter if your data warehouse tables have columns with much higher precision than needed? Probably: Yes.

But how do you know whether the scale of your columns is larger than the values stored in them actually require? In Exasol, we have introduced the function MIN_SCALE to find out. I’m working on an Exasol 7 New Features course at the moment, and this article is kind of a sneak preview.

If there’s an impact, it will of course show only with huge numbers of rows. It would be nice to have a table generator to give us large test tables. Another Exasol 7 feature helps with that: the new clause VALUES BETWEEN.


CREATE OR REPLACE TABLE scaletest.t1 AS
SELECT CAST(1.123456789 AS DECIMAL(18,17)) AS testcol
FROM VALUES BETWEEN 1 AND 1e9;

This generates a table with 1000 million rows and takes only 30 seconds of runtime on my modest VirtualBox VM. Obviously, the scale of the column is too large for the values stored there. But if it weren’t that obvious, here’s how I could find out:

SELECT MAX(a) FROM (SELECT MIN_SCALE(testcol) As a FROM scaletest.t1);

This comes back with the output 9 after 20 seconds of runtime, telling me that the scale actually required by the stored values is at most 9. I’ll create a second table for comparison with only the required scale:


CREATE OR REPLACE TABLE scaletest.t2 AS
SELECT CAST(1.123456789 AS DECIMAL(10,9)) AS testcol
FROM VALUES BETWEEN 1 AND 1e9;

So does it really matter? Is there a runtime difference for analytical queries?

SELECT COUNT(*),MAX(testcol) FROM t1; -- 16 secs runtime
SELECT COUNT(*),MAX(testcol) FROM t2; -- 7 secs runtime

My little experiment shows that the query running on the column with the appropriate scale is twice as fast as the one running on the column with the oversized scale!

In other words, it is beneficial to adjust the column scale to what the stored values actually need, with statements like this:

ALTER TABLE t1 MODIFY (testcol DECIMAL(10,9));

After that change, the runtime goes down to 7 seconds as well for the first statement.

I was curious whether that effect also shows on other databases, so I prepared a similar test case for an Oracle database: same tables, but only 100 million rows. It just takes too long to export tables with 1000 million rows to Oracle using VMs on my notebook. And don’t even think about generating 1000-million-row tables on Oracle with the CONNECT BY LEVEL method; that will just take forever – or, more likely, break with an out-of-memory error.
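For reference, a hedged sketch of the generator I use instead (my own workaround, not a documented method): cross-joining two modest CONNECT BY row sources keeps each hierarchy small while still producing 100 million rows.

CREATE TABLE t1 AS
SELECT CAST(1.123456789 AS NUMBER(18,17)) AS testcol
FROM (SELECT level AS l1 FROM dual CONNECT BY level <= 10000),
     (SELECT level AS l2 FROM dual CONNECT BY level <= 10000);
-- 10,000 x 10,000 = 100 million rows; neither CONNECT BY tree gets large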

The effect also shows with the 100-million-row tables on Oracle: 5 seconds of runtime with the oversized scale and about 3 seconds with the appropriately scaled column.

Conclusion: yes, it is indeed sensible to define table columns according to the actual requirements of the values stored in them, and it makes a difference, performance-wise.

Categories: DBA Blogs

Cannot alter Oracle sequence start with

Tom Kyte - Thu, 2021-06-10 22:46
I have tried altering the START WITH value of an Oracle sequence, but I get the [<b>ORA-02283: cannot alter starting sequence number</b>] error. I tried to find out why Oracle does not allow this, but I could not find a good answer. So my question is: why does Oracle not let you change a sequence's START WITH value? (PS: I am hoping there is a really solid technical reason behind this.) Thanks in advance!
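A hedged sketch of common workarounds (my additions, not from the question; the sequence name and numbers are made up): newer releases can reposition a sequence directly, and older ones can jump it with a temporary increment.

<code>
-- 18c and later (RESTART is documented from 18c onward, as far as I know)
alter sequence my_seq restart start with 1000;

-- older releases: jump the current value, then restore the increment
alter sequence my_seq increment by 995;   -- 995 = desired - current - 1, assumed
select my_seq.nextval from dual;          -- consume one value to apply the jump
alter sequence my_seq increment by 1;
</code>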
Categories: DBA Blogs

INSERT data into a TIMESTAMP column for years earlier than 4713 BC, i.e. a timestamp like '01/01/8999 BC'

Tom Kyte - Thu, 2021-06-10 22:46
I want to insert data into a TIMESTAMP column with a year earlier than 4713 BC, i.e. a timestamp like '01/01/8999 BC', so the year is -8999. When I try to insert it, I get an error like 'year must be between -4713 and +9999'. But some of my data has years earlier than 4713 BC, like -8999 and -78888, so the TIMESTAMP range does not cover it. How, then, can I insert the year -8999 into a TIMESTAMP column?
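A hedged note, not part of the original question: Oracle's DATE and TIMESTAMP types have a hard range of January 1, 4712 BC through December 31, 9999 AD, so a year such as -8999 cannot be stored in a TIMESTAMP at all; a common workaround is to keep such values in a NUMBER (or VARCHAR2) column instead. Table and column names below are illustrative.

<code>
create table events (descr varchar2(30), astro_year number);

insert into events values ('within TIMESTAMP range', -4712);   -- storable as a date too
insert into events values ('outside TIMESTAMP range', -8999);  -- NUMBER has no such limit

-- to_timestamp('01/01/8999 BC', 'MM/DD/YYYY BC') would raise
-- ORA-01841: (full) year must be between -4713 and +9999, and not be 0
</code>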
Categories: DBA Blogs

Specified tablespace for IOT

Tom Kyte - Thu, 2021-06-10 22:46
I have two tablespaces named USERS and INDX, respectively. The default tablespace for the current user is USERS. I created an IOT table named tb_zxp. Since there is no need to specify a tablespace for the table data of an IOT, I'd like to put the whole index of tb_zxp in tablespace INDX. <code>create table tb_zxp (customer_id integer , store_id integer, trans_date date, amt number, goods_name varchar2(20), rate number(8,1), quantity integer, constraint pk_zxp primary key (customer_id,store_id,trans_date)) organization index including amt overflow tablespace indx; insert into tb_zxp values (11,21,date '2021-04-10',500,'Cocacola',2,250); insert into tb_zxp values (11,25,date '2021-04-11',760,'Tower',3.8,200); insert into tb_zxp values (24,9,date '2021-05-11',5200,'Washing machine',5200,1); commit;</code> However, this query shows the index is still placed in the default tablespace USERS: <code>select tablespace_name from user_extents where segment_name in (select 'TB_ZXP' c from dual union select index_name from user_indexes where table_name='TB_ZXP'); TABLESPACE_NAME ------------------------------------------------------------------------------------------ USERS</code> Then I removed the INCLUDING ... OVERFLOW clause from the table creation statement and tried again: <code>create table tb_zxp (customer_id integer , store_id integer, trans_date date, amt number, goods_name varchar2(20), rate number(8,1), quantity integer, constraint pk_zxp primary key (customer_id,store_id,trans_date)) organization index tablespace indx; insert into tb_zxp values (11,21,date '2021-04-10',500,'Cocacola',2,250); commit;</code> This time, the index lands in tablespace INDX as expected: <code>select tablespace_name from user_extents where segment_name in (select 'TB_ZXP' c from dual union select index_name from user_indexes where table_name='TB_ZXP'); TABLESPACE_NAME ------------------------------------------------------------------------------------------ INDX</code> Could any guru kindly explain why removing the INCLUDING ... OVERFLOW clause gives the desired result?
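A hedged reading of what happens, with a sketch: in the first statement the TABLESPACE clause comes after OVERFLOW, so it names the tablespace for the overflow segment only, and the index (primary key) segment falls back to the default tablespace. Naming a tablespace in both places should give each segment an explicit home; the choice of USERS for the overflow below is just an example.

<code>
create table tb_zxp (
  customer_id integer, store_id integer, trans_date date,
  amt number, goods_name varchar2(20), rate number(8,1), quantity integer,
  constraint pk_zxp primary key (customer_id, store_id, trans_date))
organization index
tablespace indx              -- index (primary) segment
including amt
overflow tablespace users;   -- overflow segment
</code>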
Categories: DBA Blogs

Using analytical functions for time period grouping

Tom Kyte - Thu, 2021-06-10 22:46
Hi Tom, I have a table like below: <code>GRP,SUBGRP,START_Y,END_Y 122,... 123,A,2010,2011 123,A,2011,2012 123,A,2012,2013 123,A,2013,2014 123,B,2014,2015 123,B,2015,2016 123,B,2016,2017 123,A,2017,2018 123,A,2018,2019 123,A,2019,2020 124,...</code> I would like to find the start and end of all intervals in this table, like so: <code>GRP,SUBGRP,MIN,MAX 122,... 123,A,2010,2014 123,B,2014,2017 123,A,2017,2020 124,...</code> A simple GROUP BY shows the results over the complete time period, but not over the different intervals: <code>GRP,SUBGRP,MIN,MAX 122,... 123,A,2010,2020 123,B,2014,2017 124,...</code> I think it should be possible with analytic functions, but I don't get it.
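A hedged sketch of the classic "gaps and islands" technique, assuming the table is called t: the difference between a row number per GRP and a row number per GRP/SUBGRP is constant within each consecutive run, so it can serve as a grouping key.

<code>
select grp, subgrp, min(start_y) as min_y, max(end_y) as max_y
  from (select t.*,
               row_number() over (partition by grp order by start_y)
             - row_number() over (partition by grp, subgrp order by start_y) as island
          from t)
 group by grp, subgrp, island
 order by grp, min_y;
</code>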
Categories: DBA Blogs

Cloud Vanity: A Weekly Carnival of AWS, Azure, GCP, and More - Edition 5

Pakistan's First Oracle Blog - Thu, 2021-06-10 21:09

Welcome to the next edition of the weekly Cloud Vanity. As usual, this edition casts light on multiple cloud providers and what's happening in their spheres. From the mega players to the small fish in the ocean, it covers it all. Enjoy!

AWS:

Reducing risk is the fundamental reason organizations invest in cybersecurity. The threat landscape grows and evolves, creating the need for a proactive, continual approach to building and protecting your security posture. Even with expanding budgets, the number of organizations reporting serious cyber incidents and data breaches is rising.

Streaming data presents a unique set of design and architectural challenges for developers. By definition, streaming data is not bounded, having no clear beginning or end. It can be generated by millions of separate producers, such as Internet of Things (IoT) devices or mobile applications. Additionally, streaming data applications must frequently process and analyze this data with minimal latency.

This post presents a solution using AWS Systems Manager State Manager that automates the process of keeping RDS instances in a start or stop state.

Over the last few years, Machine Learning (ML) has proven its worth in helping organizations increase efficiency and foster innovation. 

GCP:

In recent years, the grocery industry has had to shift to facilitate a wider variety of checkout journeys for customers. This has meant ensuring a richer transaction mix, including mobile shopping, online shopping, in-store checkout, cashierless checkout or any combination thereof like buy online, pickup in store (BOPIS).  

At Google I/O this year, we introduced Vertex AI to bring together all our ML offerings into a single environment that lets you build and manage the lifecycle of ML projects. 

Dataflow pipelines and Pub/Sub are the perfect services for this. All we need to do is write our components on top of the Apache Beam SDK, and they’ll have the benefit of distributed, resilient and scalable compute.

In a recent Gartner survey of public cloud users, 81% of respondents said they are working with two or more providers. And as well you should! It’s completely reasonable to use the capabilities from multiple cloud providers to achieve your desired business outcomes. 

Azure:

Generators at datacenters, most often powered by petroleum-based diesel, play a key role in delivering reliable backup power. Each of these generators is used for no more than a few hours a year or less at our datacenter sites, most often for routine maintenance or for backup power during a grid outage. 

5 reasons to attend the Azure Hybrid and Multicloud Digital Event

For over three years, I have had the privilege of leading the SAP solutions on Azure business at Microsoft and of partnering with outstanding leaders at SAP and with many of our global partners to ensure that our joint customers run one of their most critical business assets safely and reliably in the cloud. 

There are many factors that can affect critical environment (CE) infrastructure availability—the reliability of the infrastructure building blocks, the controls during the datacenter construction stage, effective health monitoring and event detection schemes, a robust maintenance program, and operational excellence to ensure that every action is taken with careful consideration of related risk implications.

Others:

Anyone who has even a passing interest in cryptocurrency has probably heard the word ‘blockchain’ bandied about. And no doubt many of those who know the term also know that blockchain technology is behind Bitcoin and many other cryptocurrencies.

Alibaba Cloud Log Service (SLS) cooperates with RDS to launch the RDS SQL audit function, which delivers RDS SQL audit logs to SLS in real time. SLS provides real-time query, visual analysis, alarm, and other functionalities.

How AI Automation is Making a First-of-its-Kind, Crewless Transoceanic Ship Possible

Enterprise organizations have faced a compendium of challenges, but today it seems like the focus is on three things: speed, speed, and more speed. It is all about time to value and application velocity—getting applications delivered and then staying agile to evolve the application as needs arise.

Like many DevOps principles, shift-left once had a specific meaning that has become more generalized over time. Shift-left is commonly associated with application testing: automating application tests and integrating them into earlier phases of the application lifecycle, where issues can be identified and remediated earlier (and often more quickly and cheaply).

Categories: DBA Blogs

How to Identify the MINVALUE AND MAXVALUE SCN in the flashback versions query

Tom Kyte - Thu, 2021-06-10 04:26
Hi TOM, I am facing an issue with a flashback versions query. I have described it below. <code> Query 1: select versions_xid XID, versions_startscn START_SCN, versions_endscn END_SCN from employees VERSIONS BETWEEN TIMESTAMP TO_TIMESTAMP('2021-01-01 10:00:00', 'YYYY-MM-DD HH24:MI:SS') AND TO_TIMESTAMP('2021-06-01 10:00:00', 'YYYY-MM-DD HH24:MI:SS') where employee_id='xyz' </code> The above query returns 2 records: <code> XID START_SCN END_SCN 0B0017008F7B0300 39280796004 39282671828 [INSERT] 2D001B0016420000 39282671828 (null) [UPDATE] </code> But when I pass the versions_startscn value from the first query's result into the filter condition of a second query, I get 0 records returned instead of 1: <code>Query 2: select versions_xid XID, versions_startscn START_SCN, versions_endscn END_SCN from employees VERSIONS BETWEEN SCN MINVALUE AND MAXVALUE where versions_endscn = '39282671828' </code> The above query returns 0 records. Is there a way to identify the MINVALUE and MAXVALUE used in the second query? In what cases does the MINVALUE get set?
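Two hedged observations, not from the original post: the versions exposed by MINVALUE and MAXVALUE are bounded by the undo data still available, so versions older than the (tuned) undo retention silently drop out of the window; and versions_endscn is a NUMBER, so it is cleaner to compare it without quotes. A sketch, reusing the SCN from the question:

<code>
select versions_xid, versions_startscn, versions_endscn
  from employees
       versions between scn minvalue and maxvalue
 where employee_id = 'xyz'
   and versions_endscn = 39282671828;   -- numeric comparison, no quotes

-- roughly how many seconds back undo can currently serve such queries (assumed check)
select max(tuned_undoretention) from v$undostat;
</code>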
Categories: DBA Blogs

v$session query get garbled code

Tom Kyte - Thu, 2021-06-10 04:26
Hi, I'm using Oracle 10.2.0.1 for studying, and I queried v$session using PL/SQL Developer from a Windows PC client, but the results contain garbled text, as follows: <code>SQL> select osuser from v$session; OSUSER -------- SYSTEM ???? SYSTEM abc ??????????</code> Then I ran the same command on the DB server, but I got the same results. Here is the character set: <code>SQL> select userenv('language') from dual; USERENV('LANGUAGE') ---------------------------------------------------- SIMPLIFIED CHINESE_CHINA.ZHS16GBK SQL> select * from v$nls_parameters; PARAMETER VALUE ---------------------------------------------------------------- ---------------------------------------------------------------- NLS_LANGUAGE SIMPLIFIED CHINESE NLS_TERRITORY CHINA NLS_CURRENCY ? NLS_ISO_CURRENCY CHINA NLS_NUMERIC_CHARACTERS ., NLS_CALENDAR GREGORIAN NLS_DATE_FORMAT DD-MON-RR NLS_DATE_LANGUAGE SIMPLIFIED CHINESE NLS_CHARACTERSET ZHS16GBK NLS_SORT BINARY NLS_TIME_FORMAT HH.MI.SSXFF AM NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR NLS_DUAL_CURRENCY ? NLS_NCHAR_CHARACTERSET UTF8 NLS_COMP BINARY NLS_LENGTH_SEMANTICS BYTE NLS_NCHAR_CONV_EXCP FALSE 19 rows selected</code> I also set the environment variable in my Windows OS: NLS_LANG=SIMPLIFIED CHINESE_CHINA.ZHS16GBK. What's more, I tested my DB as follows: <code>SQL> select '??' from dual; '??' ---- ??</code> Now, could you please help me understand why I get garbled values such as ??? when querying osuser from v$session? Thanks a lot.
Categories: DBA Blogs

MultiThreaded Extproc Agent - max_sessions, max_task_threads and max_dispatchers parameters

Tom Kyte - Thu, 2021-06-10 04:26
Hello Team, Thanks for all the good work AskTOM is doing. Can you please help us better understand the max_sessions, max_task_threads and max_dispatchers configuration parameters of the multithreaded extproc agent, and share tips on how to determine optimum values for these parameters? We have multiple clients on multiple DB servers. Each server has different hardware capacity and a different load. Our understanding is that the final optimum values will depend on the hardware configuration and the number of external procedure calls. However, we are trying to arrive at an initial parameter configuration that can be fine-tuned further based on the actual situation. Thanks, AB
Categories: DBA Blogs

SQL Server: How to compare Server and Database Collation

Yann Neuhaus - Wed, 2021-06-09 09:43

Sometimes during a SQL Server audit, I check the collation of the server and of the databases, but always separately.
The goal of this blog is to give you a script that retrieves both and compares them.

The first step is to get the server collation and the code page.
The code page helps to see whether the character set is the same.
For example, for us, code page 1252 is Latin/Western European and 1258 is Vietnamese.
The collation is also good to have, to see the case sensitivity, accent sensitivity, kanatype sensitivity and width sensitivity.

SELECT CONVERT (varchar, SERVERPROPERTY('collation')) AS 'server_collation',collationproperty(CONVERT (varchar, SERVERPROPERTY('collation')), 'codepage') AS 'server_codepage'

The second step is to have the collation from all databases with the view sys.databases:

SELECT db.name AS database_name, db.collation_name AS database_collation, collationproperty(db.collation_name, 'codepage') AS database_codepage FROM sys.databases AS db

The last step is to merge these two queries into one to get all information about collation and code page:

SELECT CONVERT (varchar, SERVERPROPERTY('collation')) AS 'server_collation',collationproperty(CONVERT (varchar, SERVERPROPERTY('collation')), 'codepage') AS server_codepage ,
db.name AS database_name, db.collation_name AS database_collation, collationproperty(db.collation_name, 'codepage') AS database_codepage FROM sys.databases AS db

To finish the script, you can add a WHERE clause on the difference between the two code pages, to list only the databases that could be a problem:

SELECT CONVERT (varchar, SERVERPROPERTY('collation')) AS 'server_collation',collationproperty(CONVERT (varchar, SERVERPROPERTY('collation')), 'codepage') AS server_codepage ,
db.name AS database_name, db.collation_name AS database_collation, collationproperty(db.collation_name, 'codepage') AS database_codepage FROM sys.databases AS db
where collationproperty(CONVERT (varchar, SERVERPROPERTY('collation')), 'codepage') != collationproperty(db.collation_name, 'codepage')

I hope this script helps you to see the collation and code page for the server and all databases.

The article SQL Server: How to compare Server and Database Collation appeared first on Blog dbi services.

De-Ubuntu Ubuntu

Yann Neuhaus - Wed, 2021-06-09 08:44

Canonical is consistently promoting Snap integration in the Ubuntu world with all its market power. However, many do not like this development. But what options do you still have as an Ubuntu user if you don’t necessarily want to change distributions? I’ll show you how to run Ubuntu 20.04 with Flatpak instead of Snap and how to get rid of the data-collecting packages.

Many people switch to Linux because the way other operating systems handle privacy leaves them uneasy. Is it acceptable if the Linux distribution is possibly heading in a similar direction? You should ask yourself this question and find the answer that is right for you. Personally, I have no problem using Ubuntu with the following optimizations.

It starts directly after installation: do not send reports if you do not want to:

And switch off location tracking if it is not wanted:

We delete the following packages in order to prevent the collection and transmission of diagnostic and telemetry data:

Ubuntu report (Report hardware and other collected metrics)
Popularity contest (sends anonymous statistics about app usage behavior)
Apport (collects data in the event of program crashes)
Apport-gtk (Apport GTK + Frontend)
Whoopsie (this program submits crash reports back to Ubuntu servers)

user@ubuntu:~$ sudo apt purge ubuntu-report popularity-contest apport apport-gtk whoopsie
[sudo] password for user: 
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
  apport-symptoms gdb gdbserver libbabeltrace1 libc6-dbg libcc1-0 libdw1 python3-systemd
Use 'sudo apt autoremove' to remove them.
The following packages will be REMOVED:
  apport* apport-gtk* popularity-contest* ubuntu-report* ubuntu-standard* whoopsie*
0 upgraded, 0 newly installed, 6 to remove and 185 not upgraded.
After this operation, 9’098 kB disk space will be freed.
Do you want to continue? [Y/n] Y
(Reading database ... 161318 files and directories currently installed.)
Removing apport-gtk (2.20.11-0ubuntu27.16) ...
Removing apport (2.20.11-0ubuntu27.16) ...
Removing ubuntu-standard (1.450.2) ...
Removing popularity-contest (1.69ubuntu1) ...
Removing ubuntu-report (1.6.1) ...
Removing whoopsie (0.2.69ubuntu0.2) ...
Processing triggers for mime-support (3.64ubuntu1) ...
Processing triggers for hicolor-icon-theme (0.17-2) ...
Processing triggers for gnome-menus (3.36.0-1ubuntu1) ...
Processing triggers for man-db (2.9.1-1) ...
Processing triggers for desktop-file-utils (0.24-1ubuntu3) ...
(Reading database ... 161186 files and directories currently installed.)
Purging configuration files for whoopsie (0.2.69ubuntu0.2) ...
Purging configuration files for popularity-contest (1.69ubuntu1) ...
Purging configuration files for apport (2.20.11-0ubuntu27.16) ...
Purging configuration files for ubuntu-report (1.6.1) ...
dpkg: warning: while removing ubuntu-report, directory '/etc/systemd/user/default.target.wants' not empty so not removed
Processing triggers for systemd (245.4-4ubuntu3.4) ...
user@ubuntu:~$

If you also want to ensure that no connections can be established here, you can, for example, block access to these domains at the DNS level via pi-hole, AdGuard Home or router:

metrics.ubuntu.com
popcon.ubuntu.com
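A minimal sketch, as an alternative if you have no DNS filter: pin the domains to an unroutable address in /etc/hosts (the 0.0.0.0 sink address is an assumption; any blackhole address works):

# /etc/hosts – blackhole Ubuntu telemetry endpoints
0.0.0.0 metrics.ubuntu.com
0.0.0.0 popcon.ubuntu.com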

Privacy settings:

We open the GNOME Settings and navigate to Privacy and then to Diagnostics. There we set the sending of error reports to Canonical to “Manual”, or better, to “Never”.

Under Connectivity, deactivate the “Connectivity Checking” function. Here, too, technical diagnostic data is collected, which we want to prevent.

Package container (Snap and Flatpak)

In favor of increased privacy, we part with Canonical’s Snap container solution and optionally integrate the open-source Flatpak container solution into the system. Those who do not integrate Flatpak can obtain their software only from the package sources provided by Ubuntu.

Uninstall the following packages:

Snapd
Ubuntu core launcher
Squashfs-tools

user@ubuntu:~$ sudo apt purge snapd ubuntu-core-launcher squashfs-tools
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package 'ubuntu-core-launcher' is not installed, so not removed
The following packages were automatically installed and are no longer required:
  apport-symptoms gdb gdbserver libbabeltrace1 libc6-dbg libcc1-0 libdw1 python3-systemd
Use 'sudo apt autoremove' to remove them.
The following packages will be REMOVED:
  snapd* squashfs-tools*
0 upgraded, 0 newly installed, 2 to remove and 184 not upgraded.
After this operation, 127 MB disk space will be freed.
Do you want to continue? [Y/n] Y
(Reading database ... 161170 files and directories currently installed.)
Removing snapd (2.48+20.04) ...
Removing squashfs-tools (1:4.4-1) ...
Processing triggers for mime-support (3.64ubuntu1) ...
Processing triggers for gnome-menus (3.36.0-1ubuntu1) ...
Processing triggers for man-db (2.9.1-1) ...
Processing triggers for dbus (1.12.16-2ubuntu2.1) ...
Processing triggers for desktop-file-utils (0.24-1ubuntu3) ...
(Reading database ... 161083 files and directories currently installed.)
Purging configuration files for snapd (2.48+20.04) ...
Stopping snap-core18-1988.mount
Stopping unit snap-core18-1988.mount
Waiting until unit snap-core18-1988.mount is stopped [attempt 1]
snap-core18-1988.mount is stopped.
Removing snap core18 and revision 1988
Removing snap-core18-1988.mount
Stopping snap-gnome\x2d3\x2d34\x2d1804-66.mount
Stopping unit snap-gnome\x2d3\x2d34\x2d1804-66.mount
Waiting until unit snap-gnome\x2d3\x2d34\x2d1804-66.mount is stopped [attempt 1]
snap-gnome\x2d3\x2d34\x2d1804-66.mount is stopped.
Removing snap gnome-3-34-1804 and revision 66
Removing snap-gnome\x2d3\x2d34\x2d1804-66.mount
Stopping snap-gtk\x2dcommon\x2dthemes-1514.mount
Stopping unit snap-gtk\x2dcommon\x2dthemes-1514.mount
Waiting until unit snap-gtk\x2dcommon\x2dthemes-1514.mount is stopped [attempt 1]
snap-gtk\x2dcommon\x2dthemes-1514.mount is stopped.
Removing snap gtk-common-themes and revision 1514
Removing snap-gtk\x2dcommon\x2dthemes-1514.mount
Stopping snap-snap\x2dstore-518.mount
Stopping unit snap-snap\x2dstore-518.mount
Waiting until unit snap-snap\x2dstore-518.mount is stopped [attempt 1]
snap-snap\x2dstore-518.mount is stopped.
Removing snap snap-store and revision 518
Removing snap-snap\x2dstore-518.mount
Stopping snap-snapd-11036.mount
Stopping unit snap-snapd-11036.mount
Waiting until unit snap-snapd-11036.mount is stopped [attempt 1]
snap-snapd-11036.mount is stopped.
Removing snap snapd and revision 11036
Removing snap-snapd-11036.mount
Final directory cleanup
Discarding preserved snap namespaces
Removing extra snap-confine apparmor rules
Removing snapd cache
Removing snapd state
user@ubuntu:~$

Remove the last remaining orphaned packages and then restart the system.

user@ubuntu:~$ sudo apt autoremove && sudo reboot
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be REMOVED:
  apport-symptoms gdb gdbserver libbabeltrace1 libc6-dbg libcc1-0 libdw1 python3-systemd
0 upgraded, 0 newly installed, 8 to remove and 184 not upgraded.
After this operation, 88.9 MB disk space will be freed.
Do you want to continue? [Y/n] 

Installation of Flatpak:

user@ubuntu:~$ sudo apt install flatpak
[sudo] password for user: 
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  libappstream-glib8 libostree-1-1
The following NEW packages will be installed:
  flatpak libappstream-glib8 libostree-1-1
0 upgraded, 3 newly installed, 0 to remove and 184 not upgraded.
Need to get 1’483 kB of archives.
After this operation, 6’988 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://ch.archive.ubuntu.com/ubuntu focal/main amd64 libappstream-glib8 amd64 0.7.16-1ubuntu1 [135 kB]
Get:2 http://ch.archive.ubuntu.com/ubuntu focal/universe amd64 libostree-1-1 amd64 2020.3-1 [280 kB]
Get:3 http://ch.archive.ubuntu.com/ubuntu focal-updates/universe amd64 flatpak amd64 1.6.5-0ubuntu0.3 [1’068 kB]
Fetched 1’483 kB in 0s (3’299 kB/s)
Selecting previously unselected package libappstream-glib8:amd64.
(Reading database ... 160579 files and directories currently installed.)
Preparing to unpack .../libappstream-glib8_0.7.16-1ubuntu1_amd64.deb ...
Unpacking libappstream-glib8:amd64 (0.7.16-1ubuntu1) ...
Selecting previously unselected package libostree-1-1:amd64.
Preparing to unpack .../libostree-1-1_2020.3-1_amd64.deb ...
Unpacking libostree-1-1:amd64 (2020.3-1) ...
Selecting previously unselected package flatpak.
Preparing to unpack .../flatpak_1.6.5-0ubuntu0.3_amd64.deb ...
Unpacking flatpak (1.6.5-0ubuntu0.3) ...
Setting up libappstream-glib8:amd64 (0.7.16-1ubuntu1) ...
Setting up libostree-1-1:amd64 (2020.3-1) ...
Setting up flatpak (1.6.5-0ubuntu0.3) ...
Processing triggers for man-db (2.9.1-1) ...
Processing triggers for dbus (1.12.16-2ubuntu2.1) ...
Processing triggers for libc-bin (2.31-0ubuntu9.2) ...
user@ubuntu:~$

Adding the Flatpak repository:

user@ubuntu:~$ sudo flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
user@ubuntu:~$

Installing GNOME Software without the recommended packages (Snap):

user@ubuntu:~$ sudo apt install --no-install-recommends gnome-software
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  gnome-software-common
Suggested packages:
  gnome-software-plugin-flatpak
Recommended packages:
  gnome-software-plugin-snap
The following NEW packages will be installed:
  gnome-software gnome-software-common
0 upgraded, 2 newly installed, 0 to remove and 184 not upgraded.
Need to get 6’452 kB of archives.
After this operation, 12.9 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://ch.archive.ubuntu.com/ubuntu focal-updates/main amd64 gnome-software-common all 3.36.1-0ubuntu0.20.04.0 [5’559 kB]
Get:2 http://ch.archive.ubuntu.com/ubuntu focal-updates/main amd64 gnome-software amd64 3.36.1-0ubuntu0.20.04.0 [892 kB]
Fetched 6’452 kB in 1s (5’578 kB/s)
Selecting previously unselected package gnome-software-common.
(Reading database ... 160703 files and directories currently installed.)
Preparing to unpack .../gnome-software-common_3.36.1-0ubuntu0.20.04.0_all.deb ...
Unpacking gnome-software-common (3.36.1-0ubuntu0.20.04.0) ...
Selecting previously unselected package gnome-software.
Preparing to unpack .../gnome-software_3.36.1-0ubuntu0.20.04.0_amd64.deb ...
Unpacking gnome-software (3.36.1-0ubuntu0.20.04.0) ...
Setting up gnome-software-common (3.36.1-0ubuntu0.20.04.0) ...
Setting up gnome-software (3.36.1-0ubuntu0.20.04.0) ...
Processing triggers for mime-support (3.64ubuntu1) ...
Processing triggers for hicolor-icon-theme (0.17-2) ...
Processing triggers for gnome-menus (3.36.0-1ubuntu1) ...
Processing triggers for libglib2.0-0:amd64 (2.64.6-1~ubuntu20.04.1) ...
Processing triggers for man-db (2.9.1-1) ...
Processing triggers for desktop-file-utils (0.24-1ubuntu3) ...
user@ubuntu:~$

Install the GNOME Software Flatpak plugin:

user@ubuntu:~$ sudo apt install gnome-software-plugin-flatpak
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  libflatpak0
The following NEW packages will be installed:
  gnome-software-plugin-flatpak libflatpak0
0 upgraded, 2 newly installed, 0 to remove and 184 not upgraded.
Need to get 374 kB of archives.
After this operation, 1’239 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://ch.archive.ubuntu.com/ubuntu focal-updates/universe amd64 libflatpak0 amd64 1.6.5-0ubuntu0.3 [277 kB]
Get:2 http://ch.archive.ubuntu.com/ubuntu focal-updates/universe amd64 gnome-software-plugin-flatpak amd64 3.36.1-0ubuntu0.20.04.0 [97.1 kB]
Fetched 374 kB in 0s (1’664 kB/s)
Selecting previously unselected package libflatpak0:amd64.
(Reading database ... 160796 files and directories currently installed.)
Preparing to unpack .../libflatpak0_1.6.5-0ubuntu0.3_amd64.deb ...
Unpacking libflatpak0:amd64 (1.6.5-0ubuntu0.3) ...
Selecting previously unselected package gnome-software-plugin-flatpak.
Preparing to unpack .../gnome-software-plugin-flatpak_3.36.1-0ubuntu0.20.04.0_amd64.deb ...
Unpacking gnome-software-plugin-flatpak (3.36.1-0ubuntu0.20.04.0) ...
Setting up libflatpak0:amd64 (1.6.5-0ubuntu0.3) ...
Setting up gnome-software-plugin-flatpak (3.36.1-0ubuntu0.20.04.0) ...
Processing triggers for libc-bin (2.31-0ubuntu9.2) ...
user@ubuntu:~$

With this operation, GNOME Software is installed with the Flatpak repository connected to it.

And now packages can be installed from Flatpak using GNOME Software:

Conclusion

Ubuntu users should check from which source they get their software, unless they want to be forced in one direction. Chromium on Ubuntu is such an example: even if you install chromium with sudo apt install on Ubuntu, you do not get Chromium as a *.deb package, but as a snap package.

user@ubuntu:~$ sudo apt search chromium
chromium-codecs-ffmpeg-extra/focal-updates,now 1:85.0.4183.83-0ubuntu0.20.04.2 amd64 [installed,automatic]
  Transitional package - chromium-codecs-ffmpeg-extra -> chromium-ffmpeg snap
user@ubuntu:~$

I don’t like this. I want to be able to choose the source of my software myself. Ubuntu could fix this point quickly, but currently it seems Canonical is pushing Snap ahead with all its might. Canonical justifies this step, among other things, with the argument that providing third-party software via this container solution is far less effort, since all the necessary dependencies for all supported Ubuntu versions (e.g. 16.04, 18.04, 20.04) can be packaged centrally and the container rolled out everywhere. Technically, I can fully follow this justification, but the criticism of the coercion, combined with the closed server infrastructure, remains.

The article De-Ubuntu Ubuntu appeared first on Blog dbi services.

Create Delete RAC DB Using dbca silent

Michael Dinh - Tue, 2021-06-08 17:18

Tested version.

[oracle@ol7-19-lax1 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
31305087;OCW RELEASE UPDATE 19.8.0.0.0 (31305087)
31281355;Database Release Update : 19.8.0.0.200714 (31281355)

OPatch succeeded.
[oracle@ol7-19-lax1 ~]$

Create database using dbca silent.

--- Set environment variables to be used by dbca.
export ORACLE_UNQNAME=owl
export PDB_NAME=mice
export NODE1=ol7-19-lax1
export NODE2=ol7-19-lax2
export SYS_PASSWORD=Oracle_4U
export PDB_PASSWORD=Oracle_4U

Note: -gdbName global_database_name (-gdbname oradb.example.com)

dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbname ${ORACLE_UNQNAME} -responseFile NO_VALUE \
  -characterSet AL32UTF8 \
  -sysPassword ${SYS_PASSWORD} \
  -systemPassword ${SYS_PASSWORD} \
  -createAsContainerDatabase true \
  -numberOfPDBs 1 \
  -pdbName ${PDB_NAME} \
  -pdbAdminPassword ${PDB_PASSWORD} \
  -databaseType MULTIPURPOSE \
  -automaticMemoryManagement false \
  -totalMemory 1024 \
  -redoLogFileSize 50 \
  -emConfiguration NONE \
  -ignorePreReqs \
  -nodelist ${NODE1},${NODE2} \
  -storageType ASM \
  -diskGroupName +DATA \
  -recoveryGroupName +RECO \
  -useOMF true \
  -asmsnmpPassword ${SYS_PASSWORD}

[oracle@ol7-19-lax1 ~]$ dbca -silent -createDatabase \
>   -templateName General_Purpose.dbc \
>   -gdbname ${ORACLE_UNQNAME} -responseFile NO_VALUE \
>   -characterSet AL32UTF8 \
>   -sysPassword ${SYS_PASSWORD} \
>   -systemPassword ${SYS_PASSWORD} \
>   -createAsContainerDatabase true \
>   -numberOfPDBs 1 \
>   -pdbName ${PDB_NAME} \
>   -pdbAdminPassword ${PDB_PASSWORD} \
>   -databaseType MULTIPURPOSE \
>   -automaticMemoryManagement false \
>   -totalMemory 1024 \
>   -redoLogFileSize 50 \
>   -emConfiguration NONE \
>   -ignorePreReqs \
>   -nodelist ${NODE1},${NODE2} \
>   -storageType ASM \
>   -diskGroupName +DATA \
>   -recoveryGroupName +RECO \
>   -useOMF true \
>   -asmsnmpPassword ${SYS_PASSWORD}
Prepare for db operation
7% complete
Copying database files
27% complete
Creating and starting Oracle instance
28% complete
31% complete
35% complete
37% complete
40% complete
Creating cluster database views
41% complete
53% complete
Completing Database Creation
57% complete
59% complete
60% complete
Creating Pluggable Databases
64% complete
80% complete
Executing Post Configuration Actions
100% complete
Database creation complete. For details check the logfiles at:
 /u01/app/oracle/cfgtoollogs/dbca/owl.
Database Information:
Global Database Name:owl
System Identifier(SID) Prefix:owl
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/owl/owl.log" for further details.
[oracle@ol7-19-lax1 ~]$

Log files are located at ORACLE_BASE/cfgtoollogs/dbca/${ORACLE_UNQNAME}

[oracle@ol7-19-lax1 ~]$ cd $ORACLE_BASE/cfgtoollogs/dbca/${ORACLE_UNQNAME}
[oracle@ol7-19-lax1 owl]$ pwd
/u01/app/oracle/cfgtoollogs/dbca/owl

[oracle@ol7-19-lax1 owl]$ ls -l
total 23576
-rw-r-----. 1 oracle oinstall    10422 Jun  8 21:20 catclust0.log
-rw-------. 1 oracle oinstall   201621 Jun  8 21:20 catclust_catcon_31776.lst
-rw-r-----. 1 oracle oinstall     2450 Jun  8 21:14 cloneDBCreation.log
-rw-r-----. 1 oracle oinstall      380 Jun  8 20:57 CloneRmanRestore.log
-rw-r-----. 1 oracle oinstall    44272 Jun  8 21:20 CreateClustDBViews.log
-rw-r-----. 1 oracle oinstall     1711 Jun  8 21:49 DBDetails.log
-rw-r-----. 1 oracle oinstall     9948 Jun  8 21:19 execemx0.log
-rw-------. 1 oracle oinstall   200759 Jun  8 21:19 execemx_catcon_31544.lst
-rw-r-----. 1 oracle oinstall      910 Jun  8 21:20 lockAccount.log
-rw-r-----. 1 oracle oinstall     9560 Jun  8 21:18 ordlib0.log
-rw-------. 1 oracle oinstall   200561 Jun  8 21:18 ordlib_catcon_31269.lst
-rw-r-----. 1 oracle oinstall      796 Jun  8 21:51 owl0.log
-rw-r-----. 1 oracle oinstall      952 Jun  8 21:34 owl.log
-rw-r-----. 1 oracle oinstall        0 Jun  8 21:33 PDBCreation.log
-rw-r-----. 1 oracle oinstall       28 Jun  8 21:34 plugDatabase1R.log
-rw-r-----. 1 oracle oinstall     4105 Jun  8 21:18 plugDatabase.log
-rw-r-----. 1 oracle oinstall    46082 Jun  8 21:33 postDBCreation.log
-rw-r-----. 1 oracle oinstall       24 Jun  8 21:34 postPDBCreation.log
-rw-r-----. 1 oracle oinstall    88296 Jun  8 21:19 postScripts.log
-rw-r-----. 1 oracle oinstall        0 Jun  8 21:50 rmanUtil
-rw-r-----. 1 oracle oinstall     1479 Jun  8 21:49 ShutdownInst.log
-rw-r-----. 1 oracle oinstall 18726912 Jun  8 20:57 tempControl.ctl
-rw-r-----. 1 oracle oinstall  2670547 Jun  8 21:34 trace.log_2021-06-08_08-55-55PM
-rw-r-----. 1 oracle oinstall  1649779 Jun  8 21:51 trace.log_2021-06-08_09-49-34PM
-rw-r-----. 1 oracle oinstall    15420 Jun  8 21:32 utlrp0.log
-rw-------. 1 oracle oinstall   200463 Jun  8 21:32 utlrp_catcon_3004.lst
[oracle@ol7-19-lax1 owl]$

Here is the created database.

[oracle@ol7-19-lax1 owl]$ srvctl config database -d owl
Database unique name: owl
Database name: owl
Oracle home: /u01/app/oracle/product/19.0.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/OWL/PARAMETERFILE/spfile.311.1074720717
Password file: +DATA/OWL/PASSWORD/pwdowl.298.1074718605
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: RECO,DATA
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group: oper
Database instances: owl1,owl2
Configured nodes: ol7-19-lax1,ol7-19-lax2
CSS critical: no
CPU count: 0
Memory target: 0
Maximum memory: 0
Default network number for database services:
Database is administrator managed

[oracle@ol7-19-lax1 owl]$ srvctl status database -d owl
Instance owl1 is running on node ol7-19-lax1
Instance owl2 is running on node ol7-19-lax2

--- Newly created database was not added to oratab.
[oracle@ol7-19-lax1 owl]$ cat /etc/oratab
#Backup file is  /u01/app/oracle/product/19.0.0/dbhome_1/srvm/admin/oratab.bak.ol7-19-lax1.oracle line added by Agent
+ASM1:/u01/app/19.0.0/grid:N
hawk1:/u01/app/oracle/product/19.0.0/dbhome_1:N
[oracle@ol7-19-lax1 owl]$ 

Delete database using dbca silent.

dbca -silent -deleteDatabase -sourceDB ${ORACLE_UNQNAME} -sysDBAUserName sys -sysDBAPassword ${SYS_PASSWORD}

[oracle@ol7-19-lax1 ~]$ dbca -silent -deleteDatabase -sourceDB ${ORACLE_UNQNAME} -sysDBAUserName sys -sysDBAPassword ${SYS_PASSWORD}

[WARNING] [DBT-19202] The Database Configuration Assistant will delete the Oracle instances and datafiles for your database. All information in the database will be destroyed.
Prepare for db operation
32% complete
Connecting to database
39% complete
42% complete
45% complete
48% complete
52% complete
55% complete
58% complete
65% complete
Updating network configuration files
68% complete
Deleting instances and datafiles
77% complete
87% complete
97% complete
100% complete
Database deletion completed.
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/owl/owl0.log" for further details.
[oracle@ol7-19-lax1 ~]$

Users Privileges

Tom Kyte - Tue, 2021-06-08 15:46
Hello, I am facing a problem, and it goes like this: I have a schema named CML where we want to put common objects (tables, views, procedures, packages, etc.) used by the team. I would like to know what grants I should give my user (eliasr) to create tables, views, procedures and packages in the CML schema. I also want to change existing packages and procedures.
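A hedged sketch of grants that would cover this (not an answer from the thread; note the ANY privileges are system-wide, so they reach every schema, not just CML, and many sites prefer the proxy alternative):

<code>
-- broad object privileges for eliasr (reach all schemas, use with care)
grant create any table, alter any table,
      create any view,
      create any procedure, alter any procedure
   to eliasr;

-- alternative: let eliasr proxy in as CML without knowing its password
alter user cml grant connect through eliasr;
-- then connect as eliasr[cml] with eliasr's own password
</code>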
Categories: DBA Blogs

PDB lock down profiles.

Tom Kyte - Tue, 2021-06-08 15:46
Team, is it not possible to have multiple values in a lockdown profile? Kindly advise. <code>c##sys@ORA12CR2> drop lockdown profile p1; Lockdown Profile dropped. c##sys@ORA12CR2> create lockdown profile p1; Lockdown Profile created. c##sys@ORA12CR2> alter lockdown profile p1 disable statement = ('ALTER SESSION') clause=('SET') 2 option=('CURSOR_SHARING') 3 value=('FORCE','SIMILAR'); alter lockdown profile p1 disable statement = ('ALTER SESSION') clause=('SET') * ERROR at line 1: ORA-65206: invalid value specified ORA-00922: missing or invalid option c##sys@ORA12CR2> alter lockdown profile p1 disable statement = ('ALTER SESSION') clause=('SET') 2 option=('CURSOR_SHARING') 3 value=('FORCE'); Lockdown Profile altered. c##sys@ORA12CR2></code>
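A hedged workaround sketch, not a confirmed answer: the VALUE clause appears to take a single value per statement in this release, but lockdown-profile rules accumulate, so one ALTER per value should produce the combined effect:

<code>
alter lockdown profile p1 disable statement = ('ALTER SESSION')
  clause = ('SET') option = ('CURSOR_SHARING') value = ('FORCE');
alter lockdown profile p1 disable statement = ('ALTER SESSION')
  clause = ('SET') option = ('CURSOR_SHARING') value = ('SIMILAR');
</code>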
Categories: DBA Blogs

Oracle apex - changing page authorization scheme on custom form

Tom Kyte - Tue, 2021-06-08 15:46
Hi, is it possible to change a page's authorization scheme (or other page properties) in Oracle APEX from a custom-built form? I need it for my application's admin panel. I know which table it is, apex_200200.wwv_flow_list_items, but in the cloud it is forbidden to edit WWV-prefixed tables of the apex_200200 user. I tried to find an API package to do it but can't find one. In the cloud, the ADMIN user does not have the SELECT privilege on that table, so this query does not work: select * from apex_200200.wwv_list_items [Error] Execution (1: 27): ORA-01031: insufficient privileges I tried to use this package but I'm getting no results: begin WWV_FLOW_API.update_page(p_id=>124,p_flow_id=>122,p_required_role=>'7.59043067032354E15'); end; Is there any way to do this through some other package in the database?
Categories: DBA Blogs

“User created” PDB max before licensing multi tenant

Tom Kyte - Tue, 2021-06-08 15:46
Hey guys, Regarding https://blogs.oracle.com/database/oracle-database-19c-up-to-3-pdbs-per-cdb-without-licensing-multitenant-v2 and the documentation it refers to: I'm wondering if you can clarify how the 3-PDB limit (before additional licensing) works with application containers. I've tried setting max_pdbs=3 and then creating an application container; when I create the PDBs within that application container, creating the 3rd PDB fails with an error (max PDBs exceeded). The documentation isn't clear (in my opinion at least) on what a "user-created PDB" is in this type of setup, so I'm not sure whether the error is a bug or whether it's enforcing things appropriately. Thanks! <code> alter system set max_pdbs=3 scope=both sid='*'; ALTER SESSION SET container = CDB$ROOT; CREATE PLUGGABLE DATABASE App_Con AS APPLICATION CONTAINER ADMIN USER app_admin IDENTIFIED BY <pass>; ALTER PLUGGABLE DATABASE App_Con OPEN instances=all; ALTER SESSION SET container = App_Con; CREATE PLUGGABLE DATABASE PDB1 ADMIN USER pdb_admin IDENTIFIED BY <pass>; ALTER PLUGGABLE DATABASE PDB1 OPEN instances=all; CREATE PLUGGABLE DATABASE PDB2 ADMIN USER pdb_admin IDENTIFIED BY <pass>; ALTER PLUGGABLE DATABASE PDB2 OPEN instances=all; --below fails CREATE PLUGGABLE DATABASE PDB3 ADMIN USER pdb_admin IDENTIFIED BY <pass>; ALTER PLUGGABLE DATABASE PDB3 OPEN instances=all; </code>
Categories: DBA Blogs

Ampersand in input string

Tom Kyte - Tue, 2021-06-08 15:46
I have an input string I can't change. Unfortunately this input string contains an ampersand (&). I was hoping to be able to create a function which could translate a URL into more readable text. The input string will change, but an example could be the following: <i>'pw://pw.vd.dk:Vejdirektoratet/Documents/Bigger&space;projects/7x/Workflow.JPG'</i> and in this case the wanted output string would be: <i>'Vejdirektoratet\Documents\Bigger projects\7x\Workflow.JPG'</i> My hope is to create a function URL2PATH which could be called as: <code>select URL2PATH('pw://pw.vd.dk:Vejdirektoratet/Documents/Bigger&space;projects/7x/Workflow.JPG') from dual;</code> with the wanted result mentioned above. If that is not possible when the input string contains an ampersand, I would secondarily like a workaround to call the function with some additional add-ons, but I haven't been able to figure out a solution for that either. I have been looking at REPLACE, TO_CHAR, TRANSLATE etc., but so far without any success. One of my challenges is that this function has to be called from an outside program, so the solution of using 'set define off' ahead of the call doesn't seem to be an option.
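A hedged sketch of such a function (my own interpretation of the mapping, with the 'pw://pw.vd.dk:' prefix rule as an assumption): the ampersand is only special to SQL*Plus substitution, not to PL/SQL itself, so the function body can treat it as an ordinary character; chr(38) is used below only to keep this script itself safe under SET DEFINE ON.

<code>
create or replace function url2path(p_url in varchar2) return varchar2 is
  l_path varchar2(4000);
begin
  -- keep everything after the second ':' (i.e. after 'pw://pw.vd.dk:') - assumed rule
  l_path := substr(p_url, instr(p_url, ':', 1, 2) + 1);
  l_path := replace(l_path, chr(38) || 'space;', ' ');  -- '&space;' -> ' '
  l_path := replace(l_path, '/', '\');                  -- URL slashes -> Windows path
  return l_path;
end;
/
</code>

Called with the example URL from the question, this returns 'Vejdirektoratet\Documents\Bigger projects\7x\Workflow.JPG'.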
Categories: DBA Blogs

SQL Server: Analysis of SCOM Warning on the Health Check

Yann Neuhaus - Tue, 2021-06-08 08:25

Sometimes it’s hard to figure out how to analyze a warning in the SCOM interface and correct the problem.

At a customer site, we installed new SQL Server 2019 instances and changed the SCOM version.

All new servers showed a warning on their health state.

The first step in the SCOM interface is to go to the SQL Server Roles:

At this step, the most important thing is to open the Health Explorer, not to double-click or open Properties. If you do not open the Health Explorer, you will not see the problem.

In the Health Explorer, “Entity Health” and “Configuration” do not explain much, but “Securables Configuration Status” indicates what the problem is.

In the State Change Events tab, we can see the real problem: “Securables access status is bad”.


In my case, it’s just some permissions to adjust in the master and msdb.

To see the resolution, you can read my other article “SQL Server: Replace the old SCOM login by the new one”.

I hope this can help you to analyze and resolve your issue with the SCOM configuration if needed.

The article SQL Server: Analysis of SCOM Warning on the Health Check appeared first on Blog dbi services.
