The Need for 256GB

I have seen a few people call for raising the RAM limit in the Standard Edition of SQL Server. In 2016, Aaron Bertrand made the case, and for SQL Server 2019, Glenn Berry asked that the limit be raised to 256GB. In the last newsletter of the year, Brent Ozar asked Santa for a 256GB limit.

I wonder how many of you would really take advantage of that. In the Azure SQL Database pricing table, to get beyond 128GB of RAM, you need to go to 32 cores. For Azure VMs, you need to purchase even more cores. AWS EC2 VMs require 32 cores to get to 256GB.

How many of you use this many cores for your SQL Server Standard Edition instances? I’m sure some of you do, but how many of your instances really require this many cores and this much RAM without Enterprise Edition? If you do run EE, is it because you need more resources or because you need some other EE feature?

Certainly, Microsoft likely considers the use of lots of resources to be a feature; they want more licensing revenue when you run a large workload. I’m not sure that’s fair: after all, the bits are exactly the same, and an artificial limit prevents them from taking advantage of more of the underlying resources.

As a side note, this model has spread to other areas. My Tesla offers me more acceleration if I pay US$2,000. The hardware already supports it; it’s just a software unlock for a price. That feels strange.

Across all the instances you have, how many of them need more RAM? Perhaps a better question is whether your organization would allocate more RAM given the cost involved. I still see too many organizations underspending on hardware when more would make a difference for customers. Of course, many of you might also get better performance by learning to write better code that solves query problems efficiently.
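If you’re curious whether a given instance is actually pressed against the cap, the memory DMVs are a reasonable place to start. Here is a minimal sketch, assuming SQL Server 2012 or later (when the sys.dm_os_sys_info memory columns took their current form); the cap in question is the 128GB buffer pool limit for Standard Edition in 2016 and later:

-- Rough check: how is committed memory tracking against the
-- configured max server memory and the machine's physical RAM?
SELECT
    SERVERPROPERTY('Edition') AS edition,
    (SELECT CAST(value_in_use AS bigint)
       FROM sys.configurations
      WHERE name = N'max server memory (MB)') AS max_server_memory_mb,
    physical_memory_kb / 1024 AS physical_memory_mb,
    committed_kb / 1024 AS sql_committed_mb,
    committed_target_kb / 1024 AS sql_committed_target_mb
FROM sys.dm_os_sys_info;

-- Page life expectancy: a persistently low value under load is one
-- sign the workload wants more buffer pool than the edition allows.
SELECT [object_name], counter_name, cntr_value AS ple_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = N'Page life expectancy'
  AND [object_name] LIKE N'%Buffer Manager%';

Neither number proves anything on its own, but alongside your wait stats they give a sense of whether a 256GB cap would change anything for you.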

Steve Jones

Listen to the podcast at Libsyn, Stitcher, Spotify, or iTunes.

5 Responses to The Need for 256GB

  1. aaronbertrand says:

    Just because I have more data doesn’t mean I need more cores to read it, but more memory sure helps. Anybody who has a database bigger than 128GB will take advantage of that whether they are CPU-bound or not. And if the limit were somehow tied to the number of cores in my on-prem machine, you know how much harder it is to add cores than memory. I think tight coupling isn’t right for a lot of customers.

    • way0utwest says:

      I would agree with this. There are memory-skewed editions in Azure. I do think the limit ought to be raised, but I was wondering how many people run these systems.

  2. paschott says:

    More RAM – helpful. More IO – extremely helpful. More CPUs to get those? We are _rarely_ CPU-bound. However, moving up to Azure, we’ve often encountered issues needing much higher IO limits in order to match on-prem performance. I know one of our databases is large, but not CPU-intensive. It regularly uses 200+GB RAM in order to perform well, but the CPU easily fits within 8 cores. That doesn’t even start looking at in-memory tables, which would probably be helpful if we had the RAM to use them.

  3. JeffModen says:

    I wouldn’t count on any such changes to Standard Edition. There’s too much money to be had by MS in forcing folks to go with the Enterprise Edition and whatever the equivalent is in Azure. They knew what they were doing when they defined the limits… especially when they added some of the “free” previously “Enterprise Only” features back in 2016 SP1. “Fish on the line!” 😀
