Monday Monitor Tips: Knowing Your RPO

A customer was asking recently about the RPO for their estate, and I showed them a few things from the Estate tab in Redgate Monitor. This post covers a few highlights.

This is part of a series of posts on Redgate Monitor. Click to see the other posts.

Knowing Your RPO

It’s not often a business user has asked me about the Recovery Point Objective, though I’ve seen plenty of DBAs ask business people what they want as an RPO. If you don’t know what an RPO is, read this, and then think of it as the amount of data I could potentially lose.

Not that I will, but it’s possible.

In Redgate Monitor, the Estate tab shows you the RPO for all databases, as a graph. I’m showing this one from the demo site at monitor.red-gate.com. As you can see, I have most of my databases with an RPO of 1 hour or less. The vast majority are under 12 hours.

[Screenshot: Estate tab RPO graph from monitor.red-gate.com]

Note: This graph doesn’t include databases that have never been backed up.

This is based on backups, not on clustering or AGs or anything else, but it does let me determine whether I think my estate is healthy. I wish I could alert on or easily monitor this value, but I can at least see if there are problems.
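The idea behind the graph can be sketched simply: a database's worst-case data loss right now is the time elapsed since its last completed backup. This is a minimal illustration of that calculation, not Redgate Monitor's implementation, and the database names and times are made up:

```python
# A minimal sketch of the RPO idea: worst-case data loss for a database
# is the time since its last completed backup.
from datetime import datetime, timedelta

def current_rpo(last_backup_finish: datetime, now: datetime) -> timedelta:
    """Worst-case data loss if the server failed at `now`."""
    return now - last_backup_finish

# Hypothetical last-backup times for a few databases.
last_backups = {
    "Sales": datetime(2024, 9, 10, 11, 30),
    "SSC":   datetime(2024, 9, 7, 2, 0),   # a 3-day outlier
}
now = datetime(2024, 9, 10, 12, 0)

# Sort worst RPO first, like the Estate tab's table.
for db, finished in sorted(last_backups.items(),
                           key=lambda kv: current_rpo(kv[1], now),
                           reverse=True):
    print(f"{db}: RPO {current_rpo(finished, now)}")
```

In practice you would feed this from backup history (e.g. msdb) rather than a hard-coded dictionary.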

If I scroll down, I can sort the table of all databases by the worst RPO. I see this, which shows the SSC server at the top.

[Screenshot: table of databases sorted by worst RPO, with the SSC server at the top]

I’m not worried, as this is an AG setup and the database is backed up regularly. Obviously we had a hiccup, as 3 days is crazy, but I’m confident someone fixed it, since I see a lower RPO from the backups that have completed since.

If I filter by these servers, things look better.

[Screenshot: RPO table filtered to these servers]

At an estate level, there may be systems with a large RPO. Data warehouses or reporting systems often aren’t backed up, as they can be reloaded in an emergency.

This is a good way for you to keep a high level view of your estate. In general, you will get a feel for what the estate looks like and you can drill into individual systems or databases if you’re wondering. I wish I could filter by RPO, but maybe that’s coming one day.

Summary

This section of the Estate tab gives you a high-level view of backups and how you’ve configured them. Ultimately your backup schedule is less important than the RPO it produces, and this lets you keep an eye on what the RPO is for everything.

Redgate Monitor is a world class monitoring solution for your database estate. Download a trial today and see how it can help you manage your estate more efficiently.

Posted in Blog | Comments Off on Monday Monitor Tips: Knowing Your RPO

The Role of Databases in the Era of AI

I’m hosting a webinar tomorrow with this same title: The Role of Databases in the Era of AI. Click the link to register and you’ll get some other perspectives from Microsoft and Rie Merritt.

However, I think this is an interesting topic and decided to try to synthesize some thoughts into an editorial today, partly to prep for tomorrow and partly because I’m fascinated by AI and how this technology will be used in the future.

The title says the role of databases, not data professionals. You might worry an AI is going to take your job as a DBA or developer, or you might think there is no way an AI can do your job. I tend to think the latter, but only if you are above average in your role and you add value by understanding your employer’s business. In those cases, the AI will help you (as a co-pilot, not a pilot) and allow you to get more work done or work done faster. You choose. If you churn out average, or below-average work, or cut/paste from Stack Overflow or SQL Server Central or anywhere on the Internet, then yes, you should worry.

Databases store lots of information, and extracting it is hard. I see no shortage of poor data models, no shortage of overloaded data in fields, de-normalized structures, repeated information, and more. Humans jump through lots of hoops to build reports or screens or other interfaces to present to humans looking for answers. We may join data in Excel with values in a database, or vice versa. I’m sure many of you have plenty of stories about how you get data to move between some data store and a text format. I’m sure you also have no shortage of frustrations from your efforts.

AIs will get good at this. At the Small Data 2024 conference, I saw many people working at using AI without a semantic layer, which I think is possible, but will likely fail. We store data in too many crazy ways, and companies will need to make it easy for customers to create a semantic layer that describes what data is stored in each place. They’ll also get the AIs to help not only with this but with creating a way to simulate Master Data Management without requiring every application to use Redgate Software, Inc. as a name. We need to ensure Redgate, Red-gate, Redgate Software, and RG stored in different fields can all be joined as if they were the same value. Which they are.

Fuzzy matching is the domain where AIs can shine, as the models can do this quicker than humans, without getting annoyed and with fewer mistakes. AIs can adapt with our feedback as we find ways to train the models better and overload the AI prompts with semantics that help translate the (extremely) poor data models in our databases, data lakes, spreadsheets, and even PDF documents. Companies that require a semantic layer can ease the process of building one with AI assistance so that customers can quickly start to query their wide array of data sources.
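To make the idea concrete, here is a minimal sketch of matching name variants to one canonical value using only string similarity from the Python standard library. The variant list, suffixes, and threshold are hypothetical choices, not anything from Redgate; note that a pure initialism like "RG" scores too low on string similarity alone and typically needs a lookup table, which is exactly the kind of gap an AI-assisted semantic layer could fill:

```python
# A minimal fuzzy-matching sketch using only the standard library.
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    """Lower-case and strip punctuation and common suffixes before comparing."""
    cleaned = name.lower().replace("-", "").replace(",", "").replace(".", "")
    for suffix in (" software", " inc", " ltd"):
        cleaned = cleaned.replace(suffix, "")
    return cleaned.strip()

def best_match(value: str, canonical: list[str], threshold: float = 0.6):
    """Return the canonical name most similar to value, or None if nothing clears the threshold."""
    scored = [(SequenceMatcher(None, normalize(value), normalize(c)).ratio(), c)
              for c in canonical]
    score, name = max(scored)
    return name if score >= threshold else None

canonical = ["Redgate Software, Inc."]
for variant in ["Redgate", "Red-gate", "Redgate Software", "RG"]:
    print(variant, "->", best_match(variant, canonical))
```

A real pipeline would use a proper similarity library and a curated alias table; this just shows why the matching is tractable for machines but tedious for humans.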

The best use I’ve seen for AIs is as an easy-to-use, context-aware, powerful search engine. When we learn how to tune these for specific sets of data, such as all the datastores and spreadsheets in a company, we’ll start to see some amazing gains in information analysis. I don’t know that humans will analyze any better than they do today, but the process of getting the information to analyze will be easier. I think AIs will also help in the analysis phase, but that’s going to require more co-work between humans and AIs to improve the quality of analysis.

There are other things, but I see databases as incredible stores of information that AIs will make easy to access. I’m also positive AIs will be used to more easily update information in databases and assist in easily moving data from one format to another or one location to another.

Tune into the webinar tomorrow and see what Microsoft thinks and ask any questions you have.

Steve Jones

Listen to the podcast at Libsyn, Spotify, or iTunes.

Note, podcasts are only available for a limited time online.

Posted in Editorial | Comments Off on The Role of Databases in the Era of AI

Webinar tomorrow: The Role of Databases in the Era of AI

I’m hosting a webinar tomorrow with Rie Merritt from Microsoft. We’ll be talking about some of the sessions that Microsoft has planned for the PASS Data Community Summit as well as a discussion of how AI is changing our world.

Register and I’ll see you tomorrow.

Posted in Blog | Comments Off on Webinar tomorrow: The Role of Databases in the Era of AI

Serverless Gets Faster

When the Azure SQL Database serverless option was introduced, I was a bit disappointed that I couldn’t get the database to pause any sooner than 1 hour. That meant I needed to ensure clients didn’t access the system for an hour, but also that I burned an hour of compute after the last access.

Recently I saw an announcement that this time frame has come down to 15 minutes. While this might seem like a very simple change from a technical standpoint (just alter a timer option), I’m sure there was more work needed. I’m also sure there was a lot of debate on the sales/marketing side to decide if this would lose a lot of revenue.

I’m sure this costs Azure some compute revenue in the short term, but it might also create opportunities from customers who consider using this in new situations since it can shut down quickly. I certainly think this makes the use of an Azure SQL database for QA/staging type work more attractive. This might also get more people to take a look at serverless and realize the auto-scale benefits are pretty cool.
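A quick back-of-the-envelope sketch shows why the shorter delay matters: with serverless billing you pay for compute until the pause kicks in, so every idle period now burns 15 minutes of vCore time instead of 60. The workload shape and the per-vCore rate below are made-up illustration values, not Azure prices:

```python
# Rough sketch of daily idle spend under a given auto-pause delay.
def idle_cost(idle_periods_per_day, pause_delay_min, vcores, rate_per_vcore_hour):
    """Compute spend on idle time before auto-pause triggers."""
    idle_hours = idle_periods_per_day * pause_delay_min / 60
    return idle_hours * vcores * rate_per_vcore_hour

old = idle_cost(4, 60, 2, 0.50)   # four idle spells a day, 1-hour delay
new = idle_cost(4, 15, 2, 0.50)   # same workload, 15-minute delay
print(f"daily idle spend: ${old:.2f} -> ${new:.2f}")
```

For a QA/staging database that sits idle between test runs, that kind of 4x reduction in wasted compute is exactly what makes the option attractive.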

My request would be to drop this down to 5 minutes and increase the range of auto-scale as well. Maybe allow me to go from 2-16 vCores if needed, with corresponding memory jumps. I don’t know that I need per-minute granularity, but I would like things to shut down fairly quickly if we stop a workload and aren’t using the system.

I’d also like a better retry on startup than trapping an error on the client and re-sending my request to connect. It’s just embarrassing that we still have that happening for a cloud PaaS service.
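Today that retry has to live in client code. A hedged sketch of the pattern, assuming a driver connect function is passed in (the `connect` callable is a stand-in for something like a pyodbc call, and `ConnectionError` stands in for whatever exception your driver raises while the paused database resumes):

```python
# Retry a connection while a paused serverless database wakes up.
import time

def connect_with_retry(connect, retries=5, delay_seconds=5):
    """Call connect() repeatedly, backing off between failed attempts."""
    last_error = None
    for attempt in range(retries):
        try:
            return connect()
        except ConnectionError as e:   # use your driver's exception in practice
            last_error = e
            time.sleep(delay_seconds * (attempt + 1))  # simple linear backoff
    raise last_error
```

It works, but it is boilerplate every client has to carry; having the service hold the connection open while the database resumes would be far nicer.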

Steve Jones

Posted in Editorial | Comments Off on Serverless Gets Faster