Feed aggregator
I show how to retrieve structured JSON output from a table image using Qwen2-VL-7B. This vision LLM performs OCR and data-mapping tasks out of the box, and it can return structured JSON output without the use of intermediate frameworks.
It took me a while to pick database development over database administration when it comes to editions.
My question is simple: if I've decided to create an editioning view over a table with N triggers, what should I do about them?
Some people say all triggers must be moved to the editioning view. Others say that triggers oriented to maintaining auditing columns (created by, date of creation, updated, etc.) or populating surrogate key columns (pre-insert) must remain on the table.
Kind regards
Mauricio
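For context, a minimal sketch of the pattern being discussed (table, view and trigger names are hypothetical), with the auditing trigger defined on the editioning view rather than on the base table:
<code>-- base table with an auditing column
CREATE TABLE emp_t (
  emp_id     NUMBER PRIMARY KEY,
  emp_name   VARCHAR2(50),
  created_by VARCHAR2(30)
);

-- editioning view exposing the columns the application uses
CREATE OR REPLACE EDITIONING VIEW emp AS
  SELECT emp_id, emp_name, created_by FROM emp_t;

-- ordinary row-level trigger created on the editioning view
CREATE OR REPLACE TRIGGER emp_bi
  BEFORE INSERT ON emp
  FOR EACH ROW
BEGIN
  :NEW.created_by := USER;
END;
/</code>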
Dear Colleagues,
I have come across a problem when ranking regression slopes over a database link.
I have a table containing dates and file volumes for downloads from several sources. I want to build a Top N of the sources whose volumes are growing the most. For this, I calculate slopes for all the sources using the regr_slope() function and then rank the slopes using row_number() over (order by slope desc nulls last) as rn_top. Then I want to select rows where rn_top <= :n.
The results of the query obtained over a database link differ from the results obtained locally: the locally obtained results are correct, while the remote ones demonstrate strange behavior. The regression is correct and the ranking is correct, but select * from (the ranking query) is NOT correct even without any "where" condition: the slopes are correct, but their ranking is not.
This effect does not occur for constants, nor for max/min/avg functions; only for the regr_* functions. The effect is stable for two versions of the remote server, Oracle Database 19c Enterprise Edition and Oracle Database 11g Express Edition, while the database link originates from Oracle Database 19c Enterprise Edition.
To reproduce:
<code>CREATE TABLE "TEST_003" ("TASK_SOURCE_ID" NUMBER(6,0), "TASK_DATE_DOWNLOADED" DATE, "TASK_FILE_SIZE" VARCHAR2(128)) ; /* historically task_file_size is varchar in original table */
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114656',to_date('26.08.24','DD.MM.RR'),'8266');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114657',to_date('26.08.24','DD.MM.RR'),'6925');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114658',to_date('26.08.24','DD.MM.RR'),'8783');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114659',to_date('26.08.24','DD.MM.RR'),'6590');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114660',to_date('26.08.24','DD.MM.RR'),'7204');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('122994',to_date('26.08.24','DD.MM.RR'),'59904');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('120116',to_date('27.08.24','DD.MM.RR'),'35125');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114656',to_date('27.08.24','DD.MM.RR'),'8226');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114658',to_date('27.08.24','DD.MM.RR'),'8784');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114659',to_date('27.08.24','DD.MM.RR'),'6591');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114660',to_date('27.08.24','DD.MM.RR'),'7206');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114656',to_date('28.08.24','DD.MM.RR'),'8230');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNL...
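For clarity, the ranking query described above would look roughly like this (a sketch reconstructed from the description; :n is the requested Top N, and the date-to-number conversion and epoch are illustrative choices):
<code>SELECT *
FROM  (SELECT task_source_id,
              slope,
              ROW_NUMBER() OVER (ORDER BY slope DESC NULLS LAST) AS rn_top
       FROM  (SELECT task_source_id,
                     REGR_SLOPE(TO_NUMBER(task_file_size),
                                task_date_downloaded - DATE '2024-01-01') AS slope
              FROM   test_003   -- or test_003@remote_db over the database link
              GROUP  BY task_source_id))
WHERE rn_top <= :n;</code>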
Hello Tom,
I have a table with around 266,573,283 rows. The table has a BLOB column that stores XML data. I need to parse the XML in that column and get the value of one node. This node may or may not exist in every row. I have tried using Python (the cx_Oracle package). The table is partitioned on the date (month and year) the rows were stored. I'm not able to process the XML. Is there a way or an approach to process the XML data?
Note: This table will not grow anymore because the application has been shut down.
Year   Rows
2018   3,543,136
2019   3,369,956
2020   7,576,397
2021   82,413,536
2022   123,216,864
2023   46,453,394
Thanks & Regards,
Mani R
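One option is to let the database parse the XML with XMLTABLE instead of pulling the BLOBs into Python; a hedged sketch, assuming hypothetical table/column names and an AL32UTF8-encoded payload:
<code>SELECT /*+ PARALLEL(t 8) */
       t.id,
       x.node_value
FROM   big_xml_table t,
       XMLTABLE('/root'
                PASSING XMLTYPE(t.xml_blob, NLS_CHARSET_ID('AL32UTF8'))
                COLUMNS node_value VARCHAR2(100) PATH 'some/optional/node/text()') x
WHERE  x.node_value IS NOT NULL;   -- rows where the node is missing come back as NULL</code>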
While executing a big PL/SQL block, some of the numeric parameters get lost/replaced with an earlier value.
There is no error message; the PL/SQL block executes, but some of the function calls receive the wrong parameter values.
See the LiveSQL link: it defines a function which compares its input parameters and calls it some 11 thousand times, and at the 10,932nd call it gets mismatched values.
The problem seemingly occurs above a certain program size, but it seemingly doesn't reach the Diana-node limit: there is no PLS-00123 ("program too large"), the program executes, but it stores the wrong data in the database.
Is there a size limit I am bumping into, or could it possibly be an Oracle bug (one that reproduces on multiple instances and different Oracle versions)?
Hi,
How can I connect from SQL Developer using Oracle wallet-based security?
From SQL*Plus we can connect using a command like /@sid, but how can I connect from SQL Developer (or Toad) using wallet-based security?
Thanks in advance.
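For reference, the /@sid style of connection relies on a secure external password store; a sketch of the client-side setup (paths, alias and credentials are placeholders):
<code># sqlnet.ora on the client
WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = /path/to/wallet)))
SQLNET.WALLET_OVERRIDE = TRUE

# add a credential for a TNS alias to the wallet
mkstore -wrl /path/to/wallet -createCredential my_alias app_user app_password

# after which "sqlplus /@my_alias" authenticates from the wallet</code>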
Hi,
thank you for taking my question,
I wanted to know the logic behind not being able to reference :new and :old variables in dynamic SQL
execute immediate 'insert into tablea (columna, columnb) values (:old.columna,:new.columnb)';
this will fail
just trying to understand the logic behind it.
thank you for your response
PS: I'm a constant browser of your site, loving it!
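For context, the usual workaround is to pass the trigger pseudorecord values as ordinary bind variables; a minimal sketch (table and column names are hypothetical):
<code>CREATE OR REPLACE TRIGGER tableb_aiu
  AFTER INSERT OR UPDATE ON tableb
  FOR EACH ROW
BEGIN
  -- :old/:new cannot appear inside the dynamic SQL text itself,
  -- but their values can be supplied through the USING clause
  EXECUTE IMMEDIATE
    'insert into tablea (columna, columnb) values (:1, :2)'
    USING :OLD.columna, :NEW.columnb;
END;
/</code>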
I was reading the following discussion about using Object Types
https://asktom.oracle.com/ords/f?p=100:11:607033327783906:::::
https://docs.oracle.com/cd/B10501_01/appdev.920/a96624/10_objs.htm
But I couldn't understand how object types are used in packages (mainly in bulk processing). Is it possible to use objects in bulk processing? If yes, can you please help with an example?
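A minimal sketch of what that can look like, with hypothetical names: because the object and collection types are SQL-level types, you can BULK COLLECT into the collection and then treat it as a row source again with TABLE().
<code>CREATE TYPE emp_obj AS OBJECT (empno NUMBER, ename VARCHAR2(30));
/
CREATE TYPE emp_obj_tab AS TABLE OF emp_obj;
/
DECLARE
  l_emps emp_obj_tab;
BEGIN
  -- bulk collect object constructor calls into the collection
  SELECT emp_obj(empno, ename)
    BULK COLLECT INTO l_emps
    FROM emp;

  -- the collection of objects can be queried like a table
  INSERT INTO emp_archive (empno, ename)
  SELECT t.empno, t.ename
  FROM   TABLE(l_emps) t;
END;
/</code>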
Looking for a recommendation related to large table inserts in an Autonomous (ATP) database.
We have a sizable history table in our database, roughly 45 GB, which stores all transactions at the end of each day. Recently, we have been experiencing performance challenges when inserting data into this table.
We tried two approaches to improve the performance.
(1) /*+ APPEND */ hint in direct SQL.
(2) Bulk Collect and FORALL
Both approaches work fine for smaller data sets. But when the data is really big, around 2 million records (on certain days of the week), the inserts take more than 5 minutes to run. Since our calling application has a 5-minute limit on SQL executions, our entire package is failing.
Can you please advise if we are missing anything in our approach, or if there is anything else we can try for better performance?
Also, should we consider partitioning the table for faster DML?
If partitioning is something that we can consider, which field should we pick for partitioning? (Since the data in the table is quite random, we thought of doing a hash partition on line_id.) Could you provide any suggestions? Furthermore, does the choice of field for hash partitioning carry any significance?
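For illustration only, a hedged sketch of the combination being considered (hypothetical table and column names): a hash-partitioned history table loaded with a parallel, direct-path insert.
<code>CREATE TABLE txn_history (
  line_id   NUMBER,
  txn_date  DATE,
  amount    NUMBER
)
PARTITION BY HASH (line_id) PARTITIONS 16;

ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(txn_history, 8) */ INTO txn_history
SELECT /*+ PARALLEL(s, 8) */ line_id, txn_date, amount
FROM   txn_staging s;

COMMIT;</code>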
Greetings,
My question is in regards to Oracle's recommended process of sizing an Oracle database. When I have participated in sizing an Oracle server, we only estimated the table sizes using the field sizes multiplied by the total number of records. Then, we added 15% of the total table sizes for index space. Next, we ran a few simulated runs with test data. The focus was mostly to make sure that the server had enough disk space and total CPU didn't go over 50% for peak load. That was it. The steps I mentioned were provided by either Oracle support or our Oracle technical contact. Is there now an official Oracle whitepaper that states how an Oracle server should be sized?
I was recently asked to help size a MongoDB server. In this case, I do have access to a whitepaper-like MongoDB server sizing guide. It recommends that we focus on making sure that the entire "working set" fits in memory. The "working set" is the amount of space used up by both the data and its indexes needed for processing "frequently accessed data". The "frequently accessed data" is the data that is needed by normal processing. I still don't know how to accurately determine that size, since the guide doesn't go into much detail. Does this sound like a good way to size an Oracle database server, too?
Thanks,
John
P.S. Here is Mongo's whitepaper-like approach: https://www.linkedin.com/pulse/mongodb-sizing-guide-sepp-renfer/
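As an aside, the "row size x row count" style of estimate described above can also be done inside the database with DBMS_SPACE.CREATE_TABLE_COST; a sketch with made-up numbers:
<code>SET SERVEROUTPUT ON
DECLARE
  l_used_bytes  NUMBER;
  l_alloc_bytes NUMBER;
BEGIN
  DBMS_SPACE.CREATE_TABLE_COST(
    tablespace_name => 'USERS',
    avg_row_size    => 250,        -- estimated bytes per row
    row_count       => 10000000,   -- expected number of rows
    pct_free        => 10,
    used_bytes      => l_used_bytes,
    alloc_bytes     => l_alloc_bytes);
  DBMS_OUTPUT.PUT_LINE('Estimated allocation: '
                       || ROUND(l_alloc_bytes / 1024 / 1024) || ' MB');
END;
/</code>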
Hello Tom,
I am trying to inline a function when using a MERGE statement, and I am not sure if it is working, as no error is thrown.
Here is my code:
<code>CREATE OR REPLACE PACKAGE my_pkg
IS
FUNCTION my_fnc ( p1 VARCHAR2
, p2 VARCHAR2) RETURN VARCHAR2;
END my_pkg;
/
CREATE OR REPLACE PACKAGE BODY my_pkg
IS
FUNCTION my_fnc ( p1 VARCHAR2
, p2 VARCHAR2) RETURN VARCHAR2
IS
BEGIN
RETURN p1 || '-' || p2;
END my_fnc;
END my_pkg;
/
CREATE OR REPLACE PROCEDURE my_prc
IS
TYPE t_type IS TABLE OF my_tab2%ROWTYPE;
v_nt t_type;
v_colx my_tab2.colx%TYPE;
BEGIN
-- version 1
PRAGMA INLINE (my_fnc, 'YES');
MERGE INTO my_tab1 a
USING (SELECT col1
, col2
, my_pkg.my_fnc(col3, col4) colx
, col5
, col6
FROM my_tab2
WHERE 1 = 1) b -- the condition doesn't matter
ON (a.col1 = b.col1 AND a.col2 = b.col2 AND a.colx = b.colx)
WHEN MATCHED THEN
UPDATE SET a.col5 = b.col5
, a.col6 = b.col6
WHEN NOT MATCHED THEN
INSERT ( col1
, col2
, colx
, col5
, col6)
VALUES ( b.col1
, b.col2
, b.colx
, b.col5
, b.col6);
COMMIT;
--
-- version 2
SELECT col1
, col2
, my_pkg.my_fnc(col3, col4) colx
, col5
, col6
BULK COLLECT INTO v_nt
FROM my_tab2;
FORALL i IN v_nt.FIRST .. v_nt.LAST
PRAGMA INLINE (my_fnc, 'YES');
v_colx := my_pkg.my_fnc(col3, col4);
MERGE INTO my_tab1 a
USING (SELECT v_nt(i).col1 col1
, v_nt(i).col2 col2
, v_colx
, v_nt(i).col5 col5
, v_nt(i).col6 col6
FROM dual) b -- the condition doesn't matter
ON (a.col1 = b.col1 AND a.col2 = b.col2 AND a.colx = b.colx)
WHEN MATCHED THEN
UPDATE SET a.col5 = b.col5
, a.col6 = b.col6
WHEN NOT MATCHED THEN
INSERT ( col1
, col2
, colx
, col5
, col6)
VALUES ( b.col1
, b.col2
, b.colx
, b.col5
, b.col6);
END my_prc;</code>
Now, my questions are: can either version be inlined?
Version 1 could not be inlined because the pragma does not precede any of the statement types mentioned in the Oracle documentation (https://docs.oracle.com/en/database/oracle/oracle-database/19/lnpls/plsql-optimization-and-tuning.html#GUID-2E78813E-CF29-409D-9F8B-AA24B294BFA2).
Version 1 also could not be inlined because the invoked and invoking subprograms are not in the same program unit
(it is not very clear what "same program unit" means).
Version 2 could be inlined because the pragma precedes an assignment statement?
Version 2 could not be inlined because the invoked and invoking subprograms are not in the same program unit.
Thank you,
Ionut Preda.
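For comparison, a hedged sketch of the documented pattern (hypothetical names): the pragma immediately precedes an assignment, the function lives in the same PL/SQL unit, and the optimization level permits inlining.
<code>ALTER SESSION SET plsql_optimize_level = 3;  -- at level 2 the pragma requests inlining; level 3 also inlines automatically

CREATE OR REPLACE PROCEDURE demo_inline
IS
  FUNCTION my_fnc (p1 VARCHAR2, p2 VARCHAR2) RETURN VARCHAR2
  IS
  BEGIN
    RETURN p1 || '-' || p2;
  END my_fnc;

  v_colx VARCHAR2(100);
BEGIN
  PRAGMA INLINE (my_fnc, 'YES');
  v_colx := my_fnc('a', 'b');   -- candidate statement for inlining
  DBMS_OUTPUT.PUT_LINE(v_colx);
END demo_inline;
/</code>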
The expdp (Data Pump Export) utility is used to export data and metadata from an Oracle database. It provides several command-line options to customize the export process. Below are some common expdp commands and their explanations:
Basic Syntax:
expdp [username/password] DIRECTORY=directory_name DUMPFILE=dumpfile_name LOGFILE=logfile_name [options]
Common expdp Commands:
- Export the Full Database:
expdp system/password FULL=Y DIRECTORY=dpump_dir DUMPFILE=full_db.dmp LOGFILE=full_db.log
- FULL=Y: Exports the entire database.
- Export a Specific Schema:
expdp system/password SCHEMAS=schema_name DIRECTORY=dpump_dir DUMPFILE=schema.dmp LOGFILE=schema.log
- SCHEMAS=schema_name: Exports a specific schema.
- Export Specific Tables:
expdp system/password TABLES=table1,table2 DIRECTORY=dpump_dir DUMPFILE=tables.dmp LOGFILE=tables.log
- TABLES=table1,table2: Exports specific tables.
- Export a Specific Table with Data and Metadata:
expdp system/password TABLES=table_name DIRECTORY=dpump_dir DUMPFILE=table.dmp LOGFILE=table.log
- TABLES=table_name: Exports a specific table.
- Export with Compression:
expdp system/password FULL=Y DIRECTORY=dpump_dir DUMPFILE=full_db.dmp LOGFILE=full_db.log COMPRESSION=ALL
- COMPRESSION=ALL: Compresses all data during export.
- Export with Data Filtering (e.g., Export Data from a Specific Date):
expdp system/password TABLES=table_name DIRECTORY=dpump_dir DUMPFILE=table.dmp LOGFILE=table.log QUERY=table_name:"WHERE created_date > TO_DATE('2024-01-01', 'YYYY-MM-DD')"
- QUERY=table_name:"WHERE condition": Filters rows based on a condition.
- Export Metadata Only:
expdp system/password FULL=Y DIRECTORY=dpump_dir DUMPFILE=metadata.dmp LOGFILE=metadata.log CONTENT=METADATA_ONLY
- CONTENT=METADATA_ONLY: Exports only metadata (no data).
- Export Data Only:
expdp system/password FULL=Y DIRECTORY=dpump_dir DUMPFILE=data_only.dmp LOGFILE=data_only.log CONTENT=DATA_ONLY
- CONTENT=DATA_ONLY: Exports only data (no metadata).
- Export with Parallel Processing:
expdp system/password FULL=Y DIRECTORY=dpump_dir DUMPFILE=full_db.dmp LOGFILE=full_db.log PARALLEL=4
- PARALLEL=4: Uses 4 parallel threads for faster export.
- Export with a Named Job:
expdp system/password FULL=Y DIRECTORY=dpump_dir DUMPFILE=full_db.dmp LOGFILE=full_db.log JOB_NAME=export_full_db
- JOB_NAME=export_full_db: Assigns a name to the export job.
Additional Parameters:
- CONTENT: Specifies whether to export metadata only (METADATA_ONLY), data only (DATA_ONLY), or both (ALL).
- EXCLUDE: Excludes specific objects or object types from the export. Example: EXCLUDE=TABLE:"='table_name'".
- INCLUDE: Includes specific objects or object types in the export. Example: INCLUDE=TABLE:"IN ('table1', 'table2')".
- REMAP_SCHEMA: Remaps schema names. Example: REMAP_SCHEMA=old_schema:new_schema.
- REMAP_TABLESPACE: Remaps tablespace names. Example: REMAP_TABLESPACE=old_tablespace:new_tablespace.
Directory Object: Before running expdp, ensure that the DIRECTORY object exists in the database and points to a valid filesystem directory where the dump files will be written.
CREATE OR REPLACE DIRECTORY dpump_dir AS '/path/to/directory';
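A non-privileged user also needs read and write access on that directory object before exporting through it; for example (user name and file names are placeholders):
GRANT READ, WRITE ON DIRECTORY dpump_dir TO hr;
expdp hr/password SCHEMAS=hr DIRECTORY=dpump_dir DUMPFILE=hr.dmp LOGFILE=hr.log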
Example Execution: To execute an expdp command, open a command prompt or terminal and run the appropriate expdp command based on your requirements. Ensure you have the necessary privileges and that the Oracle environment variables (ORACLE_HOME and PATH) are set correctly.
Conclusion: The expdp utility offers powerful options for exporting data and metadata from Oracle databases. By using the appropriate parameters and options, you can tailor the export process to meet specific needs and optimize performance.
Hi,
I am helping developers track their failed calls to Oracle procedures in a Java app, and thought unified auditing could do that. So I created a unified audit policy auditing EXECUTE on the procedure; however, the audit only records the successful executions. When the call fails with ORA-06550 and PLS-00306, it is not recorded in the audit trail. Here is example code to reproduce the problem:
<code>CREATE OR REPLACE PROCEDURE output_parameter(p_input VARCHAR2) AS
BEGIN
DBMS_OUTPUT.PUT_LINE('Input Parameter: ' || p_input);
END;
/
CREATE AUDIT POLICY Jiulu_dropit3 ACTIONS EXECUTE ON jiulusun.output_parameter;
AUDIT POLICY Jiulu_dropit3;</code>
logout
login
-- successful execution
<code>SQL> exec output_parameter('a');
Input Parameter: a</code>
PL/SQL procedure successfully completed.
-- failed execution
<code>SQL> exec output_parameter('a','b');
BEGIN output_parameter('a','b'); END;
*
ERROR at line 1:
ORA-06550: line 1, column 7:
PLS-00306: wrong number or types of arguments in call to 'OUTPUT_PARAMETER'
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored
SELECT event_timestamp, os_username, userhost,dbusername, action_name, return_code,object_schema, object_name, sql_text
FROM unified_audit_trail
WHERE object_name='OUTPUT_PARAMETER'
@pr
==============================
EVENT_TIMESTAMP : 12-AUG-24 10.56.17.340044 AM
OS_USERNAME : jiulusun
USERHOST : my machine
DBUSERNAME : JIULUSUN
ACTION_NAME : EXECUTE
RETURN_CODE : 0
OBJECT_SCHEMA : JIULUSUN
OBJECT_NAME : OUTPUT_PARAMETER
SQL_TEXT : BEGIN output_parameter('a'); END;
PL/SQL procedure successfully completed.</code>
So the failed execution is not in the audit trail. Is this a bug, or do I misunderstand how unified auditing works?
This video shows how to locally install Rope Pearl Live on Windows for live swapping using a webcam and for live streaming the swapped videos using a virtual camera.
Code:
Clone the repository to your folder by downloading the zip file: https://github.com/argenspin/Rope-Live.git
cd Rope-Live
conda create -n Rope python=3.10.13
conda activate Rope
conda install cuda-runtime=11.8.0 cudnn=8.9.2.26 gputil=1.4.0 -c pytorch -c nvidia -c conda-forge
python -m pip install -r requirements.txt
Download the required models: to get access to all the features of Rope, you need to download the models from here. You need all of the files. Place the downloaded model files in the Rope-Live/models folder.
Set up OBS Virtual Camera: start OBS, click "Start Virtual Camera" (bottom right), then "Stop Virtual Camera", then close OBS.
Start the application by running the Rope.bat file.
Chris & Connor,
We are working on a new project and were wondering what options are currently available / provided by Oracle for Change Data Capture. Besides the DIY materialized views (MVIEW logs mostly), triggers, etc., does Oracle 11.2.0.4 and above provide an out-of-the-box solution?
I have heard that Oracle CDC is being deprecated; is there some option available that will track all the data changes made to a table?
Thanks
BC
MT, MI
This video is a step-by-step tutorial on using Online DPO to fine-tune a model locally on a custom dataset. Online DPO is a new alignment method from DeepMind to boost the performance of LLMs.
Code:
conda create -n dpo python=3.11 -y && conda activate dpo
pip install torch
pip install datasets dataclasses
pip install git+https://github.com/huggingface/transformers
pip install git+https://github.com/huggingface/accelerate
git clone https://github.com/huggingface/trl.git && cd trl
git checkout d57e4b726561e5ae58fdc335f34029052944a4a3
pip install -e .
conda install jupyter -y
pip uninstall charset_normalizer -y
pip install charset_normalizer
jupyter notebook

from datasets import Dataset
from trl import OnlineDPOConfig, OnlineDPOTrainer
from transformers import (
    AutoModelForCausalLM,
    AutoModelForSequenceClassification,
    AutoTokenizer,
)

NUM_DUMMY_SAMPLES = 100

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM-135M-Instruct")
tokenizer.add_special_tokens({"pad_token": "[PAD]"})
# The model to optimise
model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM-135M-Instruct")
# The reference model to calculate the KL divergence against
ref_model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM-135M-Instruct")
# The model to score completions with. In practice, you will need a reward model.
reward_model = AutoModelForSequenceClassification.from_pretrained("HuggingFaceTB/SmolLM-135M-Instruct", num_labels=1)

train_dataset = Dataset.from_dict(
    {"prompt": ["Q: Hi how are you? A:"] * NUM_DUMMY_SAMPLES})
eval_dataset = Dataset.from_dict(
    {"prompt": ["Q: What do you like to eat A:"] * NUM_DUMMY_SAMPLES})

args = OnlineDPOConfig(output_dir="online-dpo-model")
trainer = OnlineDPOTrainer(
    model=model,
    ref_model=ref_model,
    reward_model=reward_model,
    args=args,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
trainer.train()
I have a table called DAY_INSP and a column INS_DATE which allows dates in the future. I want to prevent that by only allowing today's date or older. Do I need to add a trigger or constraint to do this? Any help would be appreciated
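A CHECK constraint cannot reference SYSDATE, so this is usually done with a trigger; a minimal sketch against the column described above (trigger name and error number are arbitrary):
<code>CREATE OR REPLACE TRIGGER day_insp_no_future
  BEFORE INSERT OR UPDATE OF ins_date ON day_insp
  FOR EACH ROW
BEGIN
  -- allow any time today or earlier, reject tomorrow onward
  IF TRUNC(:NEW.ins_date) > TRUNC(SYSDATE) THEN
    RAISE_APPLICATION_ERROR(-20001, 'INS_DATE cannot be in the future');
  END IF;
END;
/</code>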
I've posted a demonstration of DataGuard Switchover, using RAC and Single Instance as the Primary/Standby pair.
Fundamentally, there is no difference if either or both of the databases are RAC or Single Instance.
A Switchover is a graceful, no-data-loss operation: the Primary sends the "End-Of-Redo Marker" to the Standby, at which point the reversal of roles happens. Therefore, you can execute a Switchover between the two databases (servers / data centres / sites) multiple times without loss of data.
A Failover, on the other hand, involves data loss and the erstwhile Primary does not revert to a Standby role but must be recreated / refreshed as a Standby from the new Primary.
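For reference, a hedged sketch of the 12c-and-later SQL*Plus syntax for such a role change (the standby's DB_UNIQUE_NAME is a placeholder), run on the Primary:
<code>ALTER DATABASE SWITCHOVER TO standby_db VERIFY;  -- pre-checks only, no role change
ALTER DATABASE SWITCHOVER TO standby_db;         -- sends End-Of-Redo and reverses the roles</code>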