Overview of Ask Tom Live Holiday Session
Conor McDonald hosts a relaxed end-of-year Ask Tom Live, focusing on unexplored Oracle Database questions and recent developments such as SQL Developer for VS Code enhancements and Oracle 26 AI updates. With user participation encouraged via chat, the session addresses practical Oracle challenges and tool improvements.
Key Topics Covered
1. Why Is TRUNCATE Slow in Data Warehouses?
- Although TRUNCATE is inherently fast, it can be delayed by flushing dirty blocks from memory to disk before execution to prevent data corruption.
- Extensive DML operations on large tables result in numerous dirty blocks, increasing the flushing time and causing TRUNCATE delays.
- Solutions to improve TRUNCATE speed:
- Run queries on the table to trigger delayed block cleanout.
- Execute an ALTER SYSTEM CHECKPOINT to flush dirty blocks preemptively.
- Prefer direct mode or parallel DML (e.g., INSERT APPEND) to minimize dirty block creation.
- A slow TRUNCATE may indicate inefficient application logic around data handling.
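The checkpoint-and-reload pattern from the bullets above can be sketched as follows; the table and source names are hypothetical, and ALTER SYSTEM CHECKPOINT requires the ALTER SYSTEM privilege:

```sql
-- Flush dirty blocks up front so the TRUNCATE itself is not
-- delayed by DB Writer activity (hypothetical staging table).
ALTER SYSTEM CHECKPOINT;

TRUNCATE TABLE stage_sales;

-- Reload via direct-path insert so the new data largely bypasses
-- the buffer cache and leaves few dirty blocks for next time.
INSERT /*+ APPEND */ INTO stage_sales
SELECT * FROM ext_sales_feed;
COMMIT;
```

Note that a direct-path insert must be committed before the same session can query the table again.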
2. UUID Default Value Issue in Oracle 26 AI
- Oracle introduced a new UUID function to generate truly random UUIDs, supporting future versions like version 7 (time-ordered UUIDs) aligned with OS implementations.
- Bug: Using UUID() directly as a default value in table columns is currently unsupported due to SQL engine misinterpretation.
- Workaround: Create the table without the default and then alter the column to add DEFAULT UUID().
- An official fix is planned in an upcoming release update.
- This functionality ties closely with advancements discussed in Oracle AI Database 26AI: Revolutionizing AI-Driven Applications and Analytics.
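The two-step workaround can be sketched like this; the table and column names are illustrative, and this assumes a 26 AI database where the CREATE TABLE path still hits the bug:

```sql
-- Step 1: create the table without the default
-- (DEFAULT UUID() inside CREATE TABLE currently fails).
CREATE TABLE t (
  id  RAW(16) NOT NULL PRIMARY KEY,
  val NUMBER
);

-- Step 2: ALTER TABLE correctly recognises UUID() as a
-- native kernel function, so the default can be added here.
ALTER TABLE t MODIFY (id DEFAULT UUID());

-- The default now fires on insert.
INSERT INTO t (val) VALUES (1);
SELECT id, val FROM t;
```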
3. Online Table Reorganization Without Enterprise Edition
- Standard Edition lacks the online ALTER TABLE MOVE and DBMS_REDEFINITION features.
- Manual workaround:
- Create a materialized view log and create a materialized view that represents the reorganized table.
- Periodically refresh the view to sync data during online use.
- Brief outage required to finalize: refresh view, convert it to a table, drop old table and rename.
- An alternative approach is ALTER TABLE ... ENABLE ROW MOVEMENT followed by ALTER TABLE ... SHRINK SPACE, which operates online but is slower and generates more redo.
- Enterprise Edition features significantly simplify and speed up these operations.
- For in-depth understanding of related table alteration commands, refer to Understanding the ALTER Command in SQL: A Comprehensive Guide.
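The manual materialized-view workaround above might be sketched as follows, with hypothetical object names; the exact refresh options and the final switch-over steps will vary by application:

```sql
-- Track changes on the source table while it stays online.
CREATE MATERIALIZED VIEW LOG ON big_table WITH PRIMARY KEY;

-- Build a reorganized copy in fresh, compacted extents.
CREATE MATERIALIZED VIEW big_table_reorg
  REFRESH FAST ON DEMAND
  AS SELECT * FROM big_table;

-- Periodically sync while users keep working.
EXEC DBMS_MVIEW.REFRESH('big_table_reorg');

-- Brief outage: one last refresh, then swap the objects.
EXEC DBMS_MVIEW.REFRESH('big_table_reorg');
DROP MATERIALIZED VIEW big_table_reorg PRESERVE TABLE;
DROP TABLE big_table;
RENAME big_table_reorg TO big_table;
```

Dependent objects (indexes, grants, triggers) on the old table need to be recreated on the renamed copy.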
4. Additional Insights and Q&A
- PR and PT Scripts Explained: Dynamic SQL scripts for formatting query output vertically, using SQL*Plus buffer manipulation.
- Streaming Gear: Use of DSLR cameras with Elgato Cam Link for high-quality webinar video and a quality microphone emphasizing audio importance.
- Memory Target and Huge Pages: Oracle 26 AI now supports huge pages with unified memory_target, but for large memory systems (>4GB), separate SGA_TARGET and PGA_AGGREGATE_TARGET remain recommended.
- Large Log Buffers: On high-core systems like Exadata, large log buffers (e.g., 1GB) are normal; ensure redo logs are sized accordingly to avoid performance issues.
- Temporary Tablespace Groups: Mostly legacy solutions to cross-instance contention; local temporary tablespaces on RAC reduce the need for them.
- Materialized View Privileges Error: An "insufficient privileges" error may actually indicate a non-existent MV or a missing CREATE TABLE privilege for non-atomic refresh.
- AI and Future of DBA Roles: AI skills are increasingly vital; AI likely will not replace DBAs outright but will advantage those with AI competencies.
- For broader context on AI integration and DBA roles, see How Infosys Drives Enterprise AI Transformation and Innovation and Unlocking the Power of Gemini 2.5's Deep Research for Business Intelligence.
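For the materialized-view privileges point above, a sketch of the non-atomic refresh scenario (the user and MV names are hypothetical):

```sql
-- A non-atomic refresh truncates and re-inserts under the covers,
-- which can require CREATE TABLE granted directly, not via a role.
GRANT CREATE TABLE TO app_user;

-- Complete ('C') refresh without atomic semantics.
BEGIN
  DBMS_MVIEW.REFRESH('app_user.mv_sales',
                     method         => 'C',
                     atomic_refresh => FALSE);
END;
/
```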
Conclusion and Community Engagement
Conor thanks the community for participation over the year, encourages supporting user groups, and invites ongoing engagement in upcoming conferences and sessions. He stresses broad skill sets including AI as essential for future IT professionals and wishes everyone a safe and joyous holiday season.
Good evening everybody. It's Christmas time. Uh I apologize for anyone that joined the call early and had to be
subjected to the annual Mariah Carey uh insanity, but she's laughing all the way to the bank. And uh yeah, I figured we'd
go some Christmas songs as is tradition at this time of the year. Pretty much one of the very last uh sessions of the
year in terms of Ask Tom Live. Oh, I should say sing out on the chat line if you can't hear me or see me. Oh, there's
a message on the chat already. Oh, good. Nothing alarming. Uh, you should all be able to chat with each other without
too many dramas. Uh, it's pretty casual night tonight. Uh, we'll do a couple of things that we didn't get to last month,
but uh, as is tradition, uh, I always do for my last office hours of the year. Uh, I just basically, you know, cover
anything that hasn't been covered so far, do the normal user group and conference updates, and then it's just
basically whatever people want to chat about. So, um, I put out a thing on LinkedIn earlier today saying, you know,
if you got any ideas you want to chat about and if a few people DM me. So, I got a few things there with there's no
slides, just the topic and I and I'll sort of talk about it and um, but yeah, anything you want to put in there. Uh,
we'll see how we go. So, it might be a 15-minute session, it might be a 1 hour session. We'll see how we go. It's all
just pretty relaxed. Okay, let me share my screen and we'll get into it. Okay, so hopefully you can still see me and
hopefully you can see my screen. Let me do a bit of adjusting with my windows so I can see me as well. And keep an eye on
the chat line as well. It's all happening. Where are we? Okie dokie. Now, um, as I normally say, um, I know
it's a webinar, so we have Q&A and the chat line. Um, I prefer it just all in the chat. Um, I'll try to keep an eye on
things, but if I see Q&A, I'll I generally just look at the chat. So, um, so go with the chat. That's the the aim
to go. Sorry. No, my dog got out. That's fine. So, um, let's get into it. Without any further ado, as always, getting in
touch is easy. Just scan the QR code. Even with my ugly mug in front of it, uh, it'll still work. It just takes you
to the link tree/conor, which is all the different social media ways of getting in touch with me. Uh, the Conor McDonald
is my Twitter account. My Twitter DMs are open. My Blue Sky DMs are open. My LinkedIn, if you want to message me, uh,
just connect with me first. Uh, someone found out I was at the New Zealand conference last week and they wanted to
message me and they sent me an abusive message saying I had to sign up to LinkedIn Premium in order to message me.
Um, apparently that's the way that's the only way you can message someone if you're not connected with them. So, uh,
if you want to message me on LinkedIn, easy. Just connect with me. I'll generally accept any connection and then
you can message me. Easy peasy. If you're new to Ask Tom Live, um, where have you been all this year? But if you
are, uh, you'll see all the slide content pushed away to the left. That's cuz we turn this into a recording and we put me
at the top right. So we don't want to cover up any valuable content. As always each month I talk about some
bits and pieces as to what I've done, what's been coming up, etc. Mainly just to keep promoting the importance of user
groups and meetups and building our community. But first, some Christmas presents for you. My little plushy hat.
There's the plushy in the background. Uh with a little Santa hat on. uh SQL developer for VS Code version 25.3.2
is out. The key thing there is underneath your typical connection expansion now you'll see a thing
called AI enrichment. Now you may have heard of this AI thing. Yeah, maybe just you know ever so slightly. What AI
enrichment is is when you're asking various models to deal with your database in terms of understanding what
the tables are, what the columns are etc. One of the things that we will go look
at is what can we provide to the AI engine, the LLM, to assist in that process, because you've probably seen all
those demos where we say, "Oh, look, it magnificently worked out that the countries table was called countries."
Whereas obviously in the real world, you know, in typical applications, the country table is called CX274
and stuff like that. So AI enrichment lets you assign effectively commentary uh to the tables and the columns uh in
order to actually have that information provided uh to various tooling that involves AI so they can actually make a
more informed decision as to what the tables mean, what the columns mean etc. Uh ultimately these are stored in local
tables inside uh your your connection. Um that's what's done by the VS code. Uh ultimately we intend to use that because
in this way people don't have to be on 26AI which has the native um annotations and everything built in.
Also, literally yesterday, what's it, the 18th? Yeah. Literally yesterday 25.4 for SQLcl has just come out. So go get your
brand new fresh downloads. Uh some bug fixes and some enhancements there. Um nice one there is some asynchronous
facilities now in the MCP server. So you can basically have your MCP server wants to go run something and says, "Oh,
that's going to take a while." That can now be put in the background. ORDS, uh, its companion, came out as well yesterday.
New version of ORDS, um, some bug fixes and some, um, some nice touches there. So new SQLcl news, etc. And in case you
hadn't seen it, 23 AI is now called 26 AI and you can download 26 AI free which is pretty cool. Now, in terms of bits
and pieces, there there was an awkward moment for me uh in the last few weeks. I logged on to my Oracle support and I
noticed that it had been modernized in terms of the user experience. Looked like it's all been, you know, spruced up
and looks nice and modern. Looks like a typical sort of Apex app. It's not Apex, but I thought, "Oh, that's nice." And
so, I pinged on LinkedIn saying I liked it. I logged in. I searched for my one thing and found it and there it was. I
thought, "Oh, good." Yeah, we've done nothing to the back end. We've just refreshed the front end.
And then the avalanche occurred. [laughter] People in every form of social media
just absolutely went to town on the new myoracle support. And I'm not being critical of them for doing that. Um
yeah, there's a basically number of significant issues that we discovered as we're going. Um and yeah, I you know, I
I don't work for the um support team, but um yeah, I think everyone Oracle understands we might have dropped the
ball a bit on that one. And so we apologize. Uh just to re reinforce, you know, we endeavor to listen to
everyone's feedback. We are on it. Uh, if you go into the new MOS and search for news 21, uh, we have a document, a living
document of what are known issues, um and the things we're actually trying to resolve about them. So, uh things like
people's old bookmarks, their old MOS note IDs, etc. Things that, you know, I fully concede we should have known, we
should have picked up before we went live. Um the reality is yeah we're now doing our best to get it back to an even
keel to make it a a nice new experience because the the UI I quite like in the sense that you know finally things like
code is now presented in a monospace font etc. Um but obviously yeah functionalitywise uh we didn't deliver
and so rest assured we're on it and we hope to make it a nice experience for you. Couple things. Um I've just got
back from the Australian conferences. So I was at the Perth conference. Um lot of fun. Big numbers there. Probably the
biggest numbers we've had for a long time, which is cool. Um yeah, so great to have some selfies with some people.
That was in uh Perth and Melbourne. Really good fun event. And then I popped over to Wellington, New Zealand once
again. Uh, a lot of ACEs there and we had a speaker dinner with the ACEs. Shout out to the ACEs there that were very
involved in coordinating and running that event. Um Francisco, my mate there talking about some high availability.
And at the end of the conference, I hiked up Mount Victoria. I think it's Mount Victoria. Anyway, a very high
point in Wellington. I was took a bit of a hike, but that's Wellington, New Zealand. If you ever get a chance, um,
go check it out. Absolutely beautiful part of the world. So, what's coming up for me besides a little Christmas break?
Uh, in February, I'll be back in India for their multicloud event that'll be in Hyderabad and Bangalore. Um, it's timed
in February because also if you're from India, we've got the cloud world. Oh, sorry. Oh, I can't say cloud world.
We've got the Oracle AI World uh global tour. I think it already starts. There's a couple already in January, maybe in
Saudi Arabia, a couple other places. Uh but in India, in Mumbai in February and so we've got the user group events uh
coinciding with that. So you if you're a user group member, you can come to the user group events. Otherwise, you can
head along to AI World in Mumbai. Okay. Um, that's right. So, Muhammad's
on the MOS bandwagon. Uh, yes, I agree. Rest assured, we have a large set of things we need to work
on. Um, and we're doing our best. Okay. So, so support your user groups. That's all
I really had to say because obviously it's Christmas break. There's not a lot of stuff going on. Um, now I might have,
what did I forget? There was something I forgot. Oh, yes. Just a small little thing. Finally, 26 AI is coming out on
prem. So, if you go to that bitly link, that'll take you to the blog post. So, the plan is in January, we'll have the
Linux on-prem release for 26 AI. Uh, that version will be 23.26.1. Um, if you're unfamiliar with the naming
structure that we've adopted, we didn't just jump from 23 to 26 just for giggles. Um the methodology was behind
it is if you're on 19 C at the moment which I'm sure most of you are uh you're what are we up to 19.29
it's it's got to a point where it's you know it becomes almost impossible to work out what what is that is that the
is 19.29 the one that came out in July or October or or what year did it come out and so with the going forward um
version 23 AI, which is now 26 AI, will be 23.26.1. That'll be the first release update in
calendar year 26. The next one will be 23.26.2, then .3, then .4. The year after that, it'll be 23.27.1.
So now, at least when you look at your version number of your database, you'll know just by looking at the version
number what release update you're up to. I'm up to the third one in 2027, etc. So, um, yeah, it wasn't just marketing.
There was method to the madness. Um, and also that brings the database into align with a lot of the products and
facilities on our cloud which have had that numbering system as in calendar year and then effectively the iteration
of the release updates. Uh, so a number of products have had that for a while and we're bringing the database into
alignment. But yes, finally the ultimate Christmas present. It'll be available on prem. So what do we have to cover
tonight? Um, there were a few things from last month we didn't get to, which were: truncate is very slow; cannot
use UUID as a default in the new version of the Oracle database; and being able to do online reorganizations in standard
edition. And then that's that's the end of my slides and demos and then it's just what I called free ask me anything.
Um people ping me today and we people want to know how my PR and PT scripts work. Uh someone asked me what gear I
use for my webinars and stuff. So we'll talk very very briefly about that. Um the AI one was just a broad like where
do you think AI is going? Is it going to be the end of my job? etc. And then someone hijacked it. They sent me two
questions. One about memory target and log buffer. And finally, one about temporary tablespace groups. So those
ones are me just waxing lyrical about my thoughts on those things. And um but anything else you want to throw in in in
terms of the chat line, just lob it in there. And um this session is more about you. So you can you know whatever you
throw in questions you have in the chat, um, they'll generally take precedence. But let's jump straight onto our first thing.
Truncate is slow. Sorry, I just noticed that we've got 42 people on the call. The answer to life, the universe,
and everything, a lot less numbers than normal, but that's okay. It's a week before Christmas. Truncate is very slow.
Here's the question that came in. I thought truncate just reset the high watermark and is always instantaneous,
but in our data warehouse, it is very slow. What is happening? It is true that truncate is fast but
before a truncate can happen something else has to happen and that possibly might be slow and um it's difficult to
demonstrate in the environment I'm in which is basically small little machines and small little databases uh but we'll
do our best and then we'll explain it in a little bit more detail. So let's do a demo.
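The demo can be sketched roughly as follows; row counts and timings are machine-dependent, and DBA_OBJECTS sizes vary by database:

```sql
-- Roughly 100 copies of DBA_OBJECTS.
CREATE TABLE t AS
  SELECT o.*
  FROM   dba_objects o,
         (SELECT level FROM dual CONNECT BY level <= 100);

-- Dirty a lot of blocks: update roughly every 40th row.
UPDATE t SET object_name = LOWER(object_name)
WHERE  MOD(object_id, 40) = 0;
COMMIT;

SET TIMING ON
TRUNCATE TABLE t;   -- waits on "local write wait" while DB Writer
                    -- flushes the dirty blocks before truncating
```

Without the UPDATE step, the same TRUNCATE is effectively instantaneous, which is the contrast the demo is after.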
Okay, hopefully everyone can see that. Um, and um, I've chosen a new font for my Windows terminal. It's a new, um,
monospace Consolas font, so hopefully it looks a bit nicer. So what I've done is I've created a table
here, in a tablespace called LARGE_TS (that's irrelevant), but it's just 100 copies of DBA_OBJECTS. So in my
database there's about 80,000 rows in that, so that's about 8,000,000 rows, and I truncate it, and as promised truncate is
instantaneous. So it's you know seems to make a mockery of the question. Let's drop it and let's do the demo again with
a slight adjustment to the process. Once again, I create my table with 100 objects. Come on, poor little machine.
And then what I'm doing is I'm updating every 40th row. And what's going to happen is I've updated, what? That's
20,000... is it 200,000 rows that have been updated? Now, let me truncate again. Now, as I said, it's difficult on a
small machine. You can see the first truncate took about 0.05 seconds, and this one took 0.5 seconds. It was 10 times
slower. Now you can make an argument that that's pretty much instantaneous still and that that is the reason it's
like this is uh to do with the size of the amount of memory I can allocate to my database. So let's see what's going
on. What I'm going to do is I'm going to turn a trace file on and repeat that same demo. So I need to find a name of
my trace file and I store that away. I drop my table. I'm going to recreate it again. Once again 100 copies of DBA
objects. Then I'm going to do that same update to put, you know, to make some changes. And then I'm going to turn a
trace on just before I do my truncate. So let's turn trace on. Do my truncate once again about half a second. Turn my
trace off. Now let's have a look and see why where do we lose that half a second even though it's a tiny amount. So this
is a uh simply querying my trace file. So if you're unfamiliar, you can query the trace files you created directly
from the database. You look in v$diag_trace_file_contents, and the trace file name is the
one I stored away earlier. All this business at the top here is just some smarts to not show the entire trace
file. The moment I see the word truncate from that point I'm going to list everything out. So it just skips all the
stuff at the top of the trace file. And I run that, and if I scroll back we can see there's our truncate, and here's all
the wait information. Lots and lots of little waits, each of a tiny amount of time, but they're all waiting on a thing
called local write wait. Now, what is local write wait? Let's go back to our slides and do a diagram. So, let's
explore why a truncate might be slow. Let's say I've got some data on disk. I've just created that big table with
100 copies of DBA_OBJECTS. And so, inside, for example, one of these is file 3, block 7, and there's
some data in a particular column. Doesn't really matter what it is, but let's say that data is AAA.
I ran an update on that data. Now, an update drags information from disk into memory and it changes the AAA to XXX.
We've made a change. This block is dirty in memory. When I issue a truncate statement, the
first thing the database has to do is flush all these dirty blocks out of memory back down to disk. And each time
it does one of those blocks, it's doing some I/O. It's DB writer, the background process,
that's doing that. And the wait you see is called local write wait. The one we saw in our demo. This is DB writer
flushing out these uh blocks to disk. So you think, why would you need to? I mean, why do I need to flush these
blocks? This I'm about to throw these away anyway because I'm about to do a truncate. Why do I need to do this?
Well, it's about contingency for what might happen after the truncate. Let's say I ran this truncate, which means
effectively I've now flushed out all this information that it's useless to me. Now, because a truncate is
instantaneous, I didn't go and update all these blocks even though I, you know, conceptually erased the AAA here.
I haven't actually erased anything. All I've simply said is all these blocks sitting in disk are now freely available
for reuse. Anyone can reuse them because they're no longer required by this table. Let's say someone comes and creates a
brand new table. Yep, I'm going to, you know, do a query from something else. It's
literally brand new data. Those blocks can be reused by this. We can reuse those blocks. So it may put some brand
new data from a different table, different rows, different columns, literally totally brand new data into
those same file and blocks. So now it's BBB up here. I've still got this dirty block
in memory. And at some point in future, DB writer is going to come along and say, it's time to free up some memory in
the buffer cache so I can bring new reads in. I should flush out the dirty blocks. And of course, what would
happen? we would actually overwrite someone else's data by flushing out those blocks. That's why we have to
flush them out as part of a truncate to make sure that they don't get flushed out later and therefore corrupt the
database. So that's why the truncate must wait for those dirty blocks to be flushed out. Now why did it take only 0.5
of a second on my machine? Well, my buffer cache is about 2 GB and I've only got, you know, not a lot of memory here.
But if you're running a big heavy-duty, you know, Exadata machine, it's not unusual for SGAs to be in literally
hundreds of gigabytes or even up to, you know, almost a terabyte of RAM. You know, modern hardware is just
ridiculous. So, if you've done a lot of heavy-duty work, a lot of DML on a massive table, it's quite conceivable you've got
literally millions upon millions of dirty blocks sitting in the buffer cache. When you do a truncate, they all
have to be flushed out, and so that creates huge amounts of wait. So truncate is fast, but it's the
preliminary steps that are needed before truncate obviously are bundled up in the same thing that can slow it down. So how
do we avoid this? Um I I noted here that you could either query the data or do a checkpoint. And there's a third option I
should have mentioned, but I'll come to I'll do those first two first. If you query the data, when you query dirty
blocks in the database, the database will flush them out. It's called a delayed block cleanout. There's a whole
list of scenarios as to when that happens and when it doesn't happen, etc. But by and large, simply running a query
after you've done a lot of your DML will flush those dirty blocks out. Therefore, a subsequent truncate will be very
fast. The other option is just do an alter system checkpoint, which is you asking the database to simply flush out
any dirty blocks. same thing. You're paying that price such that your truncates are fast later. The third
thing to be aware of is it's important to know that I view a slow truncate as perhaps an alarm bell as to the way
you're coding your business logic because typically if you're doing a truncate on a table, it's either
transient data or it's data that's undergoing some sort of, you know, ETL/ELT kind of process. Typical
thing of, you know, truncate some data, bring in tomorrow's data, etc. Very rare to see a truncate in a typical OLTP
application. It's very much about typical data warehouses or transient data things you bring in from an
external source, you manipulate it and then you throw it away. That's otherwise, you know, you generally don't
want to truncate stuff that is valuable. If that's the case, if you're doing truncates on objects which are huge, I
would recommend look at your application logic to see why am I doing conventional DML on such large objects. The only
reason you get these dirty blocks in there is if you're not doing direct mode operations. You should be doing these
operations typically in parallel. For example, parallel DML, or insert append, or create table as select, etc. All those
things that are done as direct mode operations bypass the buffer cache. They go direct
against the data and bring the data into your local PGA. You're not going to have dirty blocks. So if you have slow
truncates, yes, the quick, you know, quick fix is querying the data or doing, you know, a checkpoint, but maybe have a
look at your actual application and see if it's possible to actually avoid that DML by converting
it to either parallel DML or direct mode DML where possible. And that's got it. Yep, exactly right.
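The direct-path alternatives he mentions might look like this; the table names are hypothetical:

```sql
-- Conventional INSERT would dirty the buffer cache. These
-- direct-path variants largely bypass it instead.
ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(big_stage, 4) */ INTO big_stage
SELECT * FROM source_data;
COMMIT;   -- a direct-path insert must commit before further reads

-- Or rebuild wholesale with CREATE TABLE AS SELECT.
CREATE TABLE big_stage_new PARALLEL 4 AS
SELECT * FROM source_data;
```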
So, and I have to admit it's one of those things where I do often wonder uh whether it would be a smart, you know,
whether we could actually, when we go to flush a block, go, "Oh, we're flushing a block, but it no longer
belongs to the same data object number. Therefore, we can simply ignore it." But I imagine that would probably create
more complications than it was, uh, worth. Okay. Number two: UUID default.
I saw the cool new UUID function in 23AI, but I can't use it as a default, which seems to defeat the entire purpose
of it. This seems strange. And the best way of describing that is, yeah, whoops. Uh, let's see what the problem is and
see how you can easily work around it. But uh yeah, it is a bug and we'll fix that as soon as we can.
So this one is running against 26 AI, so it's the slightly different color. Um, to explain why we invented a new
UUID function in the first place in 26 AI, we need to go back and look at our existing implementation.
When we said we have a new UUID function, the first thing people said to us is, well, hold on, you've got a SYS_GUID
function. It's been there since Oracle 8i. Why not use that? And let's face it, I do select SYS_GUID (notice no
FROM dual in 26 AI) and look, I get a random raw string. You think, yep, there's my UUID. Why would I need to
bother? Well, let's do a few of them, and you can start to see the problems. The SYS_GUID function is, yes, a
guarantee that you'll get unique sets of raw bytes every single time, but the way we did it was simply looking at some
bits and pieces of the hardware. So you can see it looks like these are all the same, but if you dig down slowly, let's
have a look. Where is it? It's somewhere in here somewhere. Where's my alt function?
There we go, I missed it. There's the differences. Everything else is the same. The initial
SYS_GUID function that we built was very much like guaranteeing, yep, they'll always be unique. And I can't remember
the exact specifics, but we use things like the MAC address and the host ID and a few other bits and pieces and then a
little bit of a random salting. So this is the problem on Linux in particular: your SYS_GUIDs are unique, but they're
pretty predictable. They're not a genuine, totally random UUID. Interestingly enough, if you, um, do it on
Windows, you do get a much better, uh, random distribution, but there were obviously problems. So we invented a new
function called UUID, which you can run on 26 AI and above, and you can do select UUID(4), or you can do select UUID()
with nothing in there at all. Now what does the four mean? Well, there's seven kinds of UUIDs, and version four is just
random. And so what'll happen is now is I should have done multiple iterations of these. What will happen now is each
one of these is literally, bizarrely different from every previous invocation. It is a genuine random
UUID. We call out to the UUID function provided by the OS. Because of that, it would be nice if we
could implement all the other UUIDs. In particular, version seven of UUID has become very popular in the community at
the moment, because what version 7 UUIDs do is the leading prefix of the unique part of the raw string is ordered by
time. What that means is the fundamental problem with UUIDs where the for lack of a better term excessive randomness means
you have a lot of B tree index splits if you use it for a primary key. The version seven UUIDs don't have that
because they're prefixed by time and so they work similar to an ascending sequence but still with that nice
randomness. So version seven UUIDs are very, very popular. Uh, we haven't implemented version 7 of the UUID
because we didn't want to implement it ourselves in the Oracle kernel itself. We always wanted to defer
to what the implementations provided by the operating system were. And so when the operating systems start providing it,
our intention is to provide UUID version 7 as well. I'm not sure about the others, but UUID 7 hopefully will be coming soon,
because that's a very cool but still random facility to have. On to the problem
of concern. If these new UUIDs are super cool, which they are, it makes sense to say,
well, I'm going to have an ID. It's going to be raw, and obviously it's going to be like an identity column.
It's going to be DEFAULT UUID(). It's a primary key. Perfect. Doesn't let you do it. Uh, this is a bug. It should let you
do it. It's just unfortunately a misinterpretation by the SQL engine at the moment. It thinks you're calling a
PL/SQL function, which it's not. Um, it just doesn't understand that UUID is a native part of the database kernel. So,
it seems we've sort of hit a bit of a roadblock here. However, there's an easy fix. Create the table without the
default to start with and then alter it and add it as a default. Interestingly, the alter table command correctly
understands that this is actually just part of the database engine. So, as I said, it's a bug, but there's an easy
workaround, and we'll get this fixed in a release update. Just to show that it works: uh, if I insert into the non-
primary key column values of one and then query it, you see I get my automatically defined UUID. So UUIDs can
be used as a default. Currently, it's just a two-step process. Uh but we will fix that soon. Oh, there's a lot more
people on the call now. Like back to normal numbers. You people have no lives, dear me.
Okay, third thing that we had and then we're jumping on to the ask me anything: online reorgs. It tells me to start my demo
because obviously it must take a while to actually, uh, run. Okay, let's start it running and then
we'll come back here. Here's the question that came in. I want to reclaim some space to speed up our
full table scans, but alter table move online is not available in standard edition. What are my options without
taking an outage? Well, that's the downside of standard edition. It doesn't have all the enterprise edition goodies
and probably the most valuable enterprise edition goody I can think of, at least after partitioning, is the
ability to do so many operations online. Move data files around, move tables, you know, convert tables to partitioned,
etc. so many things ever since 12.2 to it's almost impossible to do things not online in the Oracle database now and
that's unfortunately one of the problems that you don't get uh in standard edition. So I built some options to
consider which gets you very very close to the online facilities in enterprise edition. It just takes more work on your
behalf or it takes more resource if you have that choice. But you know as I said enterprise edition is very cool and
that's why we charge more for it. So let's have a look. So um I've got standard edition running
on a VM on this machine. This isn't the normal database I use, so it will be a little bit slow because it's a downsized machine, but it's 19c Standard Edition. So I'm going to create a table here called T. It's simply 10 copies of DBA_OBJECTS, so it's going to be about 800,000 rows, I think. Put a primary key on it. And it's 120
megabytes in size. So not big, but let's say that's the table that we've got. And what we've done is we've deleted half
the rows over time. So this table occupies 120 megabytes, but I know that throughout this table now I've got a lot
of free space. I'd like to get that back. As you can see, I don't get it back just by doing a delete. My table is
still 120 megabytes in size. If I was on Enterprise Edition, this is what I do. I just do alter table move
online. The job's done. And it says unfortunately this feature is not enabled on standard edition, which it's
not. And so you can't do it. You may be thinking, ah, okay, I have to fall back to something more manual: I'll use DBMS_REDEFINITION, which is a nice easy way. You simply use DBMS_REDEFINITION to redefine your table to a table with the same structure, and that's a nice easy way of doing an online reorg. And guess what?
That also is not part of standard edition. Even though it's just a PL/SQL package call, it is bundled up with the
online facilities. And so you don't get that in standard edition either. Uh for those of you on um Enterprise Edition,
just a quick shout out. If you ever want to redefine a table to itself, in the old days you had to create an interim table with the same structure and then do the usual START_REDEF_TABLE, SYNC_INTERIM_TABLE, and FINISH_REDEF_TABLE calls. We built a simple wrapper around this, so you can just call REDEF_TABLE now, which is a nice easy way if you want to move the table to a new tablespace or put the indexes somewhere else. There are other parameters: if I want to compress the table, if I want to change a LOB to a SecureFile, etc. There's a whole stack of parameters you can pass this procedure, but just for the sake of taking a table, doing an online redefinition to the same structure, and just changing the way it's stored, this cool little REDEF_TABLE came, I think, in 18c. So how can I do it back in Standard Edition? Well, what we can do is cheat to some degree. The
way DBMS_REDEFINITION works is it creates, behind the scenes, a materialized view on your table, populates that, and then when you cut over we do a final refresh and flip the names around. It does all that, and it handles things like indexes, constraints, and all that kind of stuff. So we can do it ourselves, going back to first principles. I'll put a materialized view log on my table called T, and then I'll create a materialized view called TEMP$T which is just a simple copy of the table. It's refresh-on-demand. So if we look at it, my T table has 369,220
rows and my materialized view has the same. So these two tables now are pretty much the same object, but obviously
everyone's currently still talking to table T because I don't want to have an outage.
So my table called T is 120 megabytes in size. My materialized view is only 60 megabytes in size because, remember, I deleted half the rows out of my table T. So TEMP$T is the reorganized version of my original table T. This is what I want my result to be. But unfortunately it's a materialized view, not a table, and it's a materialized view with a different name. Our job now is to see if we can flick those over.
Obviously activity is still going on. If this is an online operation, activity is still happening on T, and I'm simulating that by deleting 17 rows from T. So T now has this many rows, and my TEMP$T has more rows because it doesn't reflect that last delete. I'll need to do a refresh. Now I can refresh continuously as I'm going.
But at some point I want to actually finish this reorganization and cut over. So here's where I would commence my outage. I can't get away with no outage altogether; I have to take a small outage while I'm doing the manual processes. So I do a final refresh of my materialized view. That brings the two tables now in sync: everything that's in T is in TEMP$T, and everything that's been removed is also gone. They're now identical again. Then what I do is drop the materialized view, but you can add a clause saying I want to preserve the table. So TEMP$T now just becomes a standalone table; it's no longer a materialized view. I can drop the materialized view log on T; I don't need that anymore. And once I've done that, I can rename T out of the way and rename TEMP$T to T.
So this is what I have to do in the outage. And all this stuff is virtually instantaneous. So the final refresh
before it, you could basically try to keep that as small as possible. But you're still looking here at seconds and
minutes as opposed to hours to actually unload and reload a table. Now, when you start doing things manually, as I said, the rest is left as a reader exercise. One of the benefits of having Enterprise Edition and DBMS_REDEFINITION, which is pretty much doing this kind of thing, is that it's going to handle all the subordinate objects: the triggers, the comments, the grants, the indexes, the constraints, etc. All those things you now have to do manually and make sure you tick off all the appropriate boxes. I've left that as a reader exercise in the interest of time, but you get the idea of how you could do it. That is a way of effectively mimicking DBMS_REDEFINITION in Standard Edition. So if we look now at a second example here, I've got a table called T1 which is also
120 megabytes in size. If I'm prepared to wait longer, I can do an online reorg with less hassle, less of that manual materialized view stuff, which maybe isn't such a big drama because it's
still online. Here's my T1 table: 369,000 rows, so 120 megabytes, and I've already done the delete of half the rows; that's why I started my demo early. I can
enable row movement and do a shrink space. Now, this is a small table, so this should complete relatively quickly. Shrink space is different to creating a new copy of the table. What shrink space does is literally hop out to the high-water mark of the table and start walking backwards, for lack of a better term, from the top down to the bottom of the table, moving data along in order to shuffle it toward the front. So it'll find a block at the end and say: is there a free block somewhere near the front? Grab it and move it over there. It's a block-by-block operation. I didn't time it here, but hopefully you can see it's going to run significantly longer than the bulk operation of creating a copy of the remaining rows, like a materialized view or even a CTAS. So, this is online and it's less effort for you, but be aware that it will take significantly longer and it'll burn a lot more redo. There are options you can control there if that's important to you, but just be aware that it's a simpler but more expensive option. But they're pretty much the two options you could consider when it comes to doing an online table move without an Enterprise Edition license.
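Putting the two Standard Edition approaches together, a rough sketch looks like this (table names follow the demo; treat it as a guide rather than a script to run blind, since subordinate objects still need manual care):

```sql
-- Option 1: mimic DBMS_REDEFINITION with a materialized view.
CREATE MATERIALIZED VIEW LOG ON t;

CREATE MATERIALIZED VIEW temp$t
  REFRESH FAST ON DEMAND
  AS SELECT * FROM t;

-- ... users keep working on T; refresh periodically to stay close ...
EXEC DBMS_MVIEW.REFRESH('TEMP$T')

-- Brief outage: final sync, keep the MV's table, swap the names.
EXEC DBMS_MVIEW.REFRESH('TEMP$T')
DROP MATERIALIZED VIEW temp$t PRESERVE TABLE;
DROP MATERIALIZED VIEW LOG ON t;
RENAME t TO t_old;
RENAME temp$t TO t;
-- (Indexes, grants, triggers, constraints etc. must be recreated by hand.)

-- Option 2: fully online, no swap, but slower block-by-block movement.
ALTER TABLE t1 ENABLE ROW MOVEMENT;
ALTER TABLE t1 SHRINK SPACE;
```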
It is 9:35 and now we're into the free-format stuff. Seeing as the chat is remarkably quiet, I'll do the ones on my bullet points. But if you have anything you want to ask, feel free to chirp in and we'll jump to the chat as we finish these off. These are just some quick ones that have come in which generally haven't warranted a thorough treatment. First one: someone asked, how do your PR and PT scripts work? So to give you an example of that, let's
start with a fresh thing. If I do select... I can't type with the microphone here. Let me start with this one first. This is one I used to have: PT, then select star from department. If I could reach around my microphone... do I even have a department table? No, that's no good. Let's move the microphone; it's killing me. Let's build our department table. So this is the PT script that I've had for a long
time. Tom Kyte first wrote this. The idea is you've got a query, you're worried about the width of the columns, and so you want to print each row down the screen as opposed to across the screen. The PT script's been around for decades. It was written by Tom and I manipulated it a little bit. To actually see what it does, it's
relatively simple. The way the PT script works is it treats your query as dynamic SQL. Your query comes in, and then, if we go through it: it changes your date format so it prints out things like hours, minutes, and seconds, just in case. It turns on cursor_sharing=force, because typically when people are doing ad hoc queries they're going to use literals in their query, so we basically force them to bind variables. We parse the query, we get a listing of the columns, and then for each column we effectively define a result for it. I think type 113 might be a timestamp with time zone column. Either way, we're simply defining all of them to be VARCHAR2s. We execute it and then we simply loop through. Because we're using DBMS_SQL, we get access to each row and each column value, and we just print them out using DBMS_OUTPUT down the page instead of across. And that's the end of it. Now the PT
script was cool, but of course you have to know in advance that, oh yes, I want that query down the page. Tanel Poder took that and did a very cool thing with it. He said, well, what if you're simply typing a query like that and you go, oh hold on, I want to get the results down the page? It's a pain to go back, especially if it's a multi-line query, and actually retype it with the PT in front. So he manipulated the script so you could just do @pr on the next line, and it does the same thing. So the question is, how does that work? Well, let's check it out. In fact, what I'm going to do is
put out this one. I might be able to actually run that one. Let's try that.
I think I made a PR1 version which doesn't... yeah. So PR1, this is effectively what it does: it says I'm going to take your file and prefix all these. No, let's scrap that. Let's actually look at PR1. That's probably going to be better. So this is Tanel's script. It's the same logic as the PT script but uses some
nice little trickery using the SQL*Plus buffer. So when you run @pr, the first thing it runs is the dot, which says: okay, terminate the current SQL statement. The SQL statement is in the buffer; I've done select star from department, and I'm saying I'm finished with that. Then it says store that in a temporary file. So I've simply defined a file here and stored that SQL out in a file. Then what I'm going to do is prefix all these commands, and because SQL*Plus starts at line one, I put these two things at the front, and all these other things get added to the end of the SQL buffer. So I put this first, and then the next one will simply go in front of it; notice it's in reverse order here. I'm saying that is now line zero, and when I say that's line zero, this becomes line one and everything gets pushed down. So I'm doing that at the start. Then I have my original SQL, which might be lines 1 through 12, and then I simply say: from line 9999 onwards, append the rest, and then repeat the Tom Kyte kind of code, parsing it in PL/SQL. So this is a way of intercepting the current SQL buffer: store the original SQL, put this at the front of it, put this all at the end of it, and then run it. When I've run my query, go and get the original SQL I saved and bring it back into the buffer. And that way when I do select star from
department then @pr, it runs it. But notice this is still in my buffer just like it was before, right? In fact, I should have done list, shouldn't I? I do list. Yep, that's back in my buffer. So it intercepted the buffer, put a whole lot of wrapper around it, ran the PL/SQL block, and then got rid of the PL/SQL block and restored the buffer. So it looks like nothing was changed. That's how PR and PT work. Very cool little utilities. Gear, that's an
interesting one. Someone asked, "What gear do I use?" What I'm talking to you on now is a Canon DSLR, only because I'm into photography. Some of the modern digital SLRs have what they call webcam functionality: the latest Canons, the latest Sonys, you simply plug a USB connector into your computer and it just thinks it's a webcam. This one's a little bit older, so it doesn't have that. So I had to buy a thing called an Elgato Cam Link, which takes the HDMI output from your camera into a little box, and out it comes as USB. It convinces your laptop that your camera is now a webcam even though it's a fully-fledged digital camera. So it's just in video mode, recording in
4K. And I've also got a splitter box. So, I got one that sends it into um my Zoom recording so you can see me on
Zoom. I've also got one that sends it into another machine which is just recording just the video. So, at the end
of my Zoom recording, I've got the recording of the slides and I've also got a separate recording of my video and
I can just splice them together with editing. So, that's my gear. The microphone is a Rode VideoMic Pro. I bought this because it will also sit on top of the camera. So if I'm doing, you know, walking and talking, I can use it on top of the camera, or I can just sit it here on a tripod. It's a little Joby tripod and
it works like that. If you're interested in making videos, one thing I would recommend is: don't worry about the video quality too much. What really matters is the audio quality. People will forgive, you know, mediocre video, but they will not forgive terrible audio, because it's just so annoying on the
ears. So be aware of that. Okay, Eric asks where the PR script is. If you go to my GitHub repository, so in that link tree/con, you'll see a link to my GitHub repo. Somewhere in there you'll see a thing called miscellaneous scripts, and underneath miscellaneous scripts are all the scripts I run in my office hours sessions, PR and PT etc. So they're all on my GitHub repo. Okay, memory target... share the link? Ah,
look, obviously someone hasn't been scanning my QR codes. Dear me. Let me put it in the chat. Okay, there we go. That's the link to everything about me, and somewhere in there... it's either that one, I can't remember... what did I just do then? It's either one of those two for my GitHub repo. Man, lesson learned: do not put your microphone in front of the keyboard. It's debilitating.
Okay, 9:44. Some little quick ones to talk about. Okay: memory target. Quick little backstory. Back in the day, back in Oracle 7 and Oracle 8, when you set the shared pool, your buffer cache, your keep cache, your recycle cache, your log buffer, your Java pool size, there were 37 memory parameters to set, and it was just getting silly, especially as we kept adding more and more memory areas: large pool, streams pool, etc. So in 8i, I think, maybe 9i, somewhere around there, we said there's two: SGA target and PGA aggregate target. How much shared memory do you want, and how much private memory available for sessions do you want? Nice and simple, two parameters, job done. And people thought, well, can we do better than two? Maybe we can get to one. So in 12.2, I think, we introduced the memory_target parameter. We said don't worry about the fact that
half the memory is shared and half is private; just give us one value, memory_target, and we'll decide how much is shared and how much is private. Now, that came with one particularly nasty restriction, and that is: if you used the memory_target parameter, you didn't get access to huge pages, which are critical on any significantly sized database, because if you have little tiny pages, the cost of memory page management becomes insane. These things called page table lookups become very, very slow. So huge pages are important for any serious database, and therefore we would say, look, memory_target is maybe not for that, because you couldn't use huge pages. As
a result, our recommendation was: if you have a database less than 4 GB of memory in size, so maybe it's an embedded system you're never going to touch, memory_target is perfect for that. Even though you might have different requirements for shared versus private memory over time, it's such a small amount of memory anyway; the database will be fine, and the page tables aren't going to be huge because it's a tiny amount of RAM. The moment you went above 4 gigs, we said SGA and PGA target, two parameters, keep them separate. The later versions of the Database Configuration Assistant, in some of the release updates for 19 and certainly in 21 and in 26, if you tried to set memory_target for a machine bigger than 4 gigs, would actually pop up a warning and say please don't do this. Now, in 26 AI (this is the reason this question came in), memory_target is now
compatible with huge pages; you can have huge pages with memory_target. So the question is, what should you do? There isn't an official position, but what I did recently is create two VMs for 26: one was a small VM, one was a large VM, and I just accepted all the defaults. Interestingly enough, when the machine was smaller than 4 gig in size, Database Configuration Assistant chose memory_target. When the machine was bigger than 4 GB of RAM, it forced me to choose SGA and PGA. So even though we don't have an official document on it yet, and even though we have huge page support, I think you can safely assume our official position will be: if you have a significantly sized system, aka anything bigger than 4 gigs of RAM, then you should still be using the two parameters, SGA_TARGET and PGA_AGGREGATE_TARGET. I know that's definitely the case for anyone that's running Exadata on 26 AI. So I think we can safely assume that's going to be the advice. So be aware: it's going to be the same two parameters, even though we now support huge pages with memory_target in 26 AI. Thanks Lane for putting in that link to
PR.SQL. But don't grab just the one script; there's lots of goodies there. Hopefully you can get lots of value out of all those things. Log buffer, next one. Once again, a little quick DM came in saying that someone had just migrated, luckily, from a non-Exadata system to an Exadata system, a big hulking great rack. They're very, very happy, very, very excited. But they said their log buffer is a gigabyte in size, and that just seems insane because, you know, most of us are familiar with log buffers being 64 meg, 32 meg, etc. How the hell does it get to 1 GB in size? The log buffer, if you look in the documentation, says it's by
default, chosen by, I think, the number of CPUs divided by 16 or something like that; I can't remember exactly. Be aware that formula is a bit of a generalization. It depends on the platform, but in particular if you have these big hulking boxes, Exadatas and even non-Exadata boxes, where they've got, you know, 128 cores and hyperthreading enabled, etc. In particular, look at some of the AIX boxes, which will expose a single core as maybe eight virtual cores to the database. It's not uncommon to see a machine saying, "Oh yes, I've got 500 cores." And so the database goes, you've got 500 cores? Oh yeah. And the CPU core count determines a lot of other defaults as well: things like the transactions count, session count, etc. All those things get multiplied upwards, so your transactions parameter becomes tens of thousands, your sessions parameter becomes tens of thousands. And all those things get used in determining, for example, how many public and private log buffer strands are allocated inside your log buffer area. And so it's not uncommon to see the log buffer get very, very large. So it's nothing to worry about; it's just chosen that way. But one thing I will say is, if you have
a large log buffer, be aware the database assumes that you will also have extremely large redo log files. The database assumes the ratio between log buffer and redo log files is large, you know, 10-to-1, 20-to-1, etc. So if you see your log buffer get huge, don't be going with those 200-megabyte redo log files. You want them in the gigabyte range as well, like large; make them much, much larger than the log buffer. Otherwise, you start seeing strange behaviors where your logs start to switch just randomly. So be aware of that. If you have a massive log buffer, that's fine, it's meant to be large, but that also means make your redo log files huge. If you want a more detailed explanation of that, it was either Tanel or Jonathan, maybe Jonathan; search for "Jonathan Lewis archive log switches" or "Tanel Poder archive log switches". I can't remember; I read something about it years ago, but something like that. If you want a more in-depth discussion, there's some information out there on the intertubes. Temp tablespace groups. For those unfamiliar, we introduced a technology some time ago called
temporary tablespace groups. You can create five, six, seven, eight, nine temporary tablespaces, TEMP1 to TEMP9, and then you can say, ah, they all belong to a group called, you know, group number one, and then group number one becomes your temporary tablespace. So people can be allocated any one of the tablespaces inside that group.
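For the unfamiliar, the setup just described looks roughly like this (names, file paths, and sizes are made up for illustration):

```sql
-- Each temporary tablespace is assigned to the same group at creation.
CREATE TEMPORARY TABLESPACE temp1
  TEMPFILE 'temp1.dbf' SIZE 1G
  TABLESPACE GROUP temp_grp;

CREATE TEMPORARY TABLESPACE temp2
  TEMPFILE 'temp2.dbf' SIZE 1G
  TABLESPACE GROUP temp_grp;

-- The group name can then be used anywhere a temporary tablespace can.
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp_grp;
ALTER USER scott TEMPORARY TABLESPACE temp_grp;
```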
The original motivation for this was very much along the lines of: when you have RAC systems, what would happen is one instance would end up stealing extents in the temporary tablespace from other instances, and therefore you'd get cross-instance chatter when you're doing very large, in particular, say, parallel operations which use a lot of temporary tablespace. So temporary tablespace groups were a mechanism to avoid that, because each instance would grab different tablespaces within the group, and therefore that kind of cross-instance chatter would be eliminated. I was talking to some people internally because someone asked, you know, why do we have them? That's why we have them. And we sort of came to the conclusion that I'm not entirely sure they're needed anymore; those problems have been subsequently solved via other means. You may be familiar with RAC now: we can have local temporary tablespaces. You can actually attach a temporary tablespace on local disk, not shared storage, to a local instance. I haven't really seen temporary tablespace groups that much out in the wild. They were there to solve a particular problem about cross-instance chatter, especially for large parallel query slave counts, but I think if you're using local temporary tablespaces now on your RAC instances, you probably don't need temporary tablespace groups. If you've never heard of them, don't worry about them. This one I tweeted
this today, actually, because I just couldn't resist, but it actually came in as a question for tonight, which is: why do we get insufficient privileges on materialized views? And I can show you one very simple reason why. Let's just go in here; let's move the microphone briefly. There we go. Insufficient privileges. This is an interesting one. If you try to refresh a non-existent materialized view, this is the error you get. I think this is a bug, but this is how it's always been. I did a trace on that. What happens is it goes and looks for your materialized view name in an internal dictionary table called SYS.SNAP$, which is a throwback to when materialized views used to be called snapshots. It looks in there because it wants to see if your materialized view references a database link, so it does a query against that. And that query also looks up your permissions. So if it gets a no-data-found, it says, ah, it's because you didn't have enough permissions to refresh this view, which of course is assuming that you actually spelled the name correctly. It's one of those things where one query satisfies both existence and permissions, and when it gets no-data-found it goes, oh well, it must have been permissions. So just be aware of that. The reason I mentioned it is that even with the correct name you can still get this error, because later on in the processing of a materialized view it might need the create table privilege. If your materialized view does a non-atomic refresh, a complete refresh is effectively a truncate and a rebuild from scratch. Be aware that sometimes you get insufficient privileges because you don't have the create table privilege, which of course makes this all the more confusing, which is all a bit sad. And what have we got? Five minutes left.
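A minimal sketch of the quirk described above (the misspelled materialized view name is the whole point; the behaviour is as reported in the session):

```sql
-- TEMP$TT does not exist, so you might expect "materialized view not found"...
BEGIN
  DBMS_MVIEW.REFRESH('TEMP$TT');
END;
/
-- ...but because the internal dictionary lookup combines the existence
-- and permission checks, the no-data-found surfaces instead as:
--   ORA-01031: insufficient privileges
```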
Okay. And the last one: a few people pinged me about AI and the like. The common theme was really: will AI take my job as a DBA? Will AI take my job as a developer? Is AI going to be good for the database, bad for the database, etc.? I probably have to defer most of that because we haven't got much time. There's a sort of cliche going around that says AI won't take your job, but people with AI skills will take the jobs of people without AI skills. I think that's a bit flippant, to be honest. I
certainly think AI is here to stay; I don't think there's any escaping it. It's got probably as much hype as I remember when the internet started up, in the early 2000s, when everyone was calling it the World Wide Web and everyone said it was going to be the game changer. I see AI in that similar vein, in the sense that it's become just so everywhere now that I don't think we're getting away from it. I don't think as an IT professional you'll lose your job if you don't have AI skills. Having said that, I think it's in everyone's interest to learn AI skills even if you never end up putting them to play in your current job, because, like all things, the market looks to the current hype cycle
when it's employing. So whether you choose to get a new job or whether you're forced to get a new job, whatever the case is, having some sort of AI knowledge on your resume, even if you never plan to use it, is going to be helpful for you in a future career. In that respect, I view AI the same as any skill in the IT workplace, in the sense that the industry rewards broad skills nowadays. The days of "oh, I'm an expert in this tiny narrow area" I think are gone. I think people now look for a broad level of competence across a wide range of things, as opposed to looking for experts. That's why I try to keep my hand in, in terms of not just the database but development and tools, etc., because I think it broadens you as a professional and also gives you a deeper level of empathy for people who do other jobs. I hope the days of DBAs dunking on developers and developers dunking on DBAs are coming to a close, because with DevOps, developers now understand a little about the challenges of operational work, and DBAs, hopefully, with having to look after things like APEX, are more familiar with the challenges developers face. I think that helps our community as a whole. So, as I said, normally I would open up
for all your questions, but we're out of time, which is not unusual. On that note, it's very rare that I finish two minutes early, but we are finishing two minutes early. Normally at this time on the last Ask Tom Live of the year, for the last few years, I've always popped a champagne to celebrate the year. I got grief: some close friends in the community said, "We know you don't drink champagne. You only ever drink spirits," and, like, I've got my gin collection behind me and whatever. So I thought I'd break with the champagne tradition and do my friends right: I thought we would toast the year with a little drop of whiskey. But in reality, this is really just my way of saying thanks for being a part of our Ask Tom community, the office hours community, and the general Oracle community. I hope you've had some value out of the sessions this year. Obviously, we'll be back next year. We always love the fact that you participate; you give up so much of your time for free. And hopefully, if you're at an event somewhere around the world that I'm at next year, please come say hello. I'd love to do more face-to-face talking as opposed to this. But if we only meet up over Zoom once a month, that's still good enough for me. So, if you celebrate Christmas, please celebrate it very well. If you don't, please have a festive, happy, and very safe holiday season. Take some time off, spend some time with your loved ones, and I'll see you again next year. Have a great festive season, everyone.
TRUNCATE may be slow because Oracle must first flush dirty blocks (modified but not saved blocks) from memory to disk to ensure data integrity. Large tables with heavy DML operations accumulate many dirty blocks, causing delays. To improve performance, run queries that prompt block cleanout, execute an ALTER SYSTEM CHECKPOINT to flush dirty blocks before truncating, or use direct-path or parallel DML methods like INSERT APPEND to reduce dirty block creation.
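The advice above can be sketched as follows (table names are placeholders for whatever staging tables your warehouse uses):

```sql
-- Force dirty blocks to disk first, so the subsequent TRUNCATE is fast.
ALTER SYSTEM CHECKPOINT;
TRUNCATE TABLE stage_data;

-- Reload with a direct-path insert, which writes above the high-water mark
-- and largely bypasses the buffer cache, leaving few dirty blocks behind.
INSERT /*+ APPEND */ INTO stage_data
SELECT * FROM stage_source;
COMMIT;
```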
Oracle 26 AI introduced a new UUID function generating truly random UUIDs, but currently using UUID() directly as a default column value leads to SQL engine misinterpretation and errors. The recommended workaround is to create the table without a default on that column first, then alter the table afterward to add DEFAULT UUID(). An official fix is planned in a future release.
Since online ALTER TABLE MOVE and DBMS_REDEFINITION are Enterprise Edition features, Standard Edition users can manually simulate a reorganization by creating a materialized view log and a materialized view representing the reorganized table, refreshing it periodically while the application keeps running. A brief outage is needed to do a final refresh, drop the materialized view while preserving its underlying table, and rename the tables; subordinate objects (indexes, grants, triggers, constraints) must be recreated manually. Alternatively, ALTER TABLE ... ENABLE ROW MOVEMENT followed by ALTER TABLE ... SHRINK SPACE works online but is slower and more redo-intensive.
PR and PT are dynamic SQL scripts that format query results vertically by manipulating the SQL*Plus buffer and output lines. They present query output in a more readable vertical layout, improving clarity especially for wide or complex result sets. The scripts combine SQL*Plus features with dynamic SQL to achieve flexible display formatting.
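The general technique can be illustrated with a small PL/SQL sketch using DBMS_SQL. This is not the actual PR/PT script, just a hedged example of printing one row of an arbitrary query vertically; the query text is illustrative:

```sql
set serveroutput on
declare
  l_cur  integer := dbms_sql.open_cursor;
  l_cols dbms_sql.desc_tab;
  l_cnt  integer;
  l_val  varchar2(4000);
begin
  -- any query; here an illustrative single-row fetch
  dbms_sql.parse(l_cur, 'select * from emp where rownum = 1',
                 dbms_sql.native);
  dbms_sql.describe_columns(l_cur, l_cnt, l_cols);
  for i in 1 .. l_cnt loop
    dbms_sql.define_column(l_cur, i, l_val, 4000);  -- fetch all as text
  end loop;
  if dbms_sql.execute_and_fetch(l_cur) > 0 then
    for i in 1 .. l_cnt loop
      dbms_sql.column_value(l_cur, i, l_val);
      -- one "COLUMN_NAME : value" line per column = vertical layout
      dbms_output.put_line(rpad(l_cols(i).col_name, 32) || ': ' || l_val);
    end loop;
  end if;
  dbms_sql.close_cursor(l_cur);
end;
/
```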
Oracle 26 AI supports using huge pages together with unified memory management (MEMORY_TARGET), improving memory efficiency. However, for systems with larger memory configurations (greater than roughly 4GB), it is still recommended to set the separate parameters SGA_TARGET and PGA_AGGREGATE_TARGET to control memory allocation and performance explicitly, rather than relying solely on MEMORY_TARGET with huge pages.
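On a host well above that threshold, explicit sizing along those lines might look like this (values purely illustrative; an instance restart is required):

```sql
-- Size SGA and PGA explicitly instead of using unified MEMORY_TARGET
alter system set sga_target = 16G scope=spfile;
alter system set pga_aggregate_target = 4G scope=spfile;
alter system set memory_target = 0 scope=spfile;  -- disable unified AMM
-- restart the instance for the new memory model to take effect
```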
AI skills are increasingly vital for DBAs to remain competitive and excel in future roles. While AI technologies will augment database management and automation, they are unlikely to replace DBAs entirely. Instead, DBAs who develop competencies in AI and related technologies will have an advantage and can leverage AI to improve their effectiveness and efficiency.
Heads up!
This summary and transcript were automatically generated using AI with the Free YouTube Transcript Summary Tool by LunaNotes.