Announcing General Availability of UNISTR function and ANSI SQL || Operator in Azure SQL
We’re excited to announce the General Availability (GA) of two long-standing capabilities that address critical needs for SQL developers and enterprise customers in Azure SQL Database and Azure SQL Managed Instance configured with the Always-up-to-date update policy:

- UNISTR function for Unicode character representation
- ANSI SQL string concatenation operator (||) for standard-compliant string operations

These additions improve ANSI SQL compliance in Azure SQL, streamline migration from other platforms, and boost developer productivity.

UNISTR function: Unicode made easy

The UNISTR function provides support for Unicode string literals by letting you specify the Unicode encoding value of characters in the string, making it especially useful for working with international or special characters. If you want to include characters like emojis, accented letters, or symbols from non-Latin scripts, UNISTR lets you encode them directly using their Unicode values. UNISTR:

- Returns Unicode characters based on the input expression, following the Unicode standard.
- Supports escape sequences: \xxxx for UTF-16 codepoints and \+xxxxxx for full Unicode codepoints.
- Enables easy integration of complex Unicode characters into strings.

You can look up Unicode codepoint values in the Unicode Code Charts.

Comparison with NCHAR

- NCHAR: Converts a single Unicode value to a character.
- UNISTR: Handles multiple Unicode values and escape sequences, offering greater flexibility for constructing strings with diverse characters.

Syntax

UNISTR ( 'character_expression' [ , 'unicode_escape_character' ] )

The data type of character_expression can be char, nchar, varchar, or nvarchar. For char and varchar data types, the collation must be a valid UTF-8 collation. The optional unicode_escape_character is a single character representing a user-defined Unicode escape sequence; if not supplied, the default value is \.

Examples

Example #1: In this example, a Unicode escape value is used within a text literal:

SELECT UNISTR (N'I \2764 Azure SQL.') AS string_with_unicode

Results: I ❤️ Azure SQL.

Example #2: In this example, we construct a non-Latin script string using Unicode escape sequences.

SELECT UNISTR(N'\3053\3093\306B\3061\306F') AS Japanese_Greeting;

Results: こんにちは
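The optional second parameter mentioned in the syntax above is easy to overlook. As a small sketch (the $ escape character and column alias are arbitrary choices, not from the announcement), you can substitute your own escape character when backslashes are inconvenient:

-- Use $ instead of \ as the Unicode escape character
SELECT UNISTR(N'I $2764 Azure SQL.', '$') AS string_with_custom_escape;
-- Expected result: I ❤️ Azure SQL.  (same U+2764 codepoint as Example #1)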
ANSI SQL || Operator: A cleaner way to concatenate

The || operator is now supported for string concatenation, offering a more readable and standard-compliant alternative to existing methods like + and CONCAT(). The || operator concatenates two or more character or binary strings, columns, or a combination of strings and column names into one expression. The || operator does not honor the SET CONCAT_NULL_YIELDS_NULL option and always behaves as if the ANSI SQL behavior is enabled, yielding NULL if any of the inputs is NULL. The operator works with character strings or binary data of any supported SQL Server collation. The || operator also supports the compound assignment form ||=, similar to += (a short example appears after the comparison below). If the operands are of incompatible collations, an error is thrown; the collation behavior is identical to that of the CONCAT function for character string data.

Syntax

expression || expression

Each expression is a character or binary expression. Both expressions must be of the same data type, or one expression must be implicitly convertible to the data type of the other. If one operand is of a binary type, an unsupported operand type error is thrown.

Examples

Example #1: The following query concatenates two strings and returns the result:

SELECT 'Hello ' || ' World!';

Results: Hello World!

Example #2: In this example, multiple character strings are concatenated. If at least one input is a character string, non-character inputs are implicitly converted to character strings.

SELECT
    -- Combine first and last name
    'Josè' || ' Doe' AS full_name,
    -- Construct a detailed order string with ID, timestamp, and unique identifier
    'Order-' || CAST(1001 AS VARCHAR) || '~TS~' || current_timestamp || '~' || NEWID() AS order_details,
    -- Handle NULL safely in concatenation
    'Item-' || NULL AS item_desc;

Results:

full_name: Josè Doe
order_details: Order-1001~TS~Jul 29 2025 8:33PM~0387F607-1664-4EEB-A4E8-F16B396DDD73
item_desc: NULL

Example #3: The || operator is unsupported when used with only non-character types or when combining binary data with other types.

-- Attempting to concatenate two numeric literals: 1 and 2
SELECT 1 || 2;
-- Attempting to concatenate integer, string, and binary literals
SELECT 1 || 'a' || 0x4e;

The queries above fail with the following error messages:

The data types int and int are incompatible in the concat operator.
The data types varchar and varbinary are incompatible in the concat operator.

Comparison with + and CONCAT()

Let’s explore how || differs from + and CONCAT() in Azure SQL using real examples. Sample queries:

-- Binary + String
SELECT CONCAT(0x01, 'a');    -- Returns: a
SELECT 0x01 || 'a';          -- Returns: Error
SELECT 0x01 + 'a';           -- Returns: Error

-- Binary + Binary
SELECT CONCAT(0x01, 0xff);   -- Returns: ÿ
SELECT 0x01 || 0xff;         -- Returns: 0x01FF
SELECT 0x01 + 0xff;          -- Returns: 0x01FF

-- NULL Handling
SELECT CONCAT(NULL, 'a');    -- Returns: 'a'
SELECT NULL || 'a';          -- Returns: NULL
SELECT NULL + 'a';           -- Returns: NULL

-- Numeric + String
SELECT CONCAT(1, 'a');       -- Returns: '1a'
SELECT 1 || 'a';             -- Returns: '1a'
SELECT 1 + 'a';              -- Returns: Error

-- Date + String
SELECT CONCAT(CURRENT_TIMESTAMP, 'a');  -- Returns: Jul 29 2025 9:29PMa
SELECT CURRENT_TIMESTAMP || 'a';        -- Returns: Jul 29 2025 9:29PMa
SELECT CURRENT_TIMESTAMP + 'a';         -- Returns: Error

Key Differences

- ANSI SQL standard: + No; CONCAT() Yes; || Yes
- NULL handling: + returns NULL; CONCAT() ignores NULL (treats it as an empty string); || returns NULL
- Type safety: + not type-safe; CONCAT() type-safe; || type-safe
- Readability: + moderate; CONCAT() verbose; || high
- Portability: + low; CONCAT() moderate; || high
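As promised above, here is a minimal sketch of the compound assignment form ||= (the variable name and values are illustrative only):

DECLARE @greeting nvarchar(100) = N'Hello';
SET @greeting ||= N' World!';   -- equivalent to: SET @greeting = @greeting || N' World!'
SELECT @greeting;               -- expected: Hello World!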
Why this matters

- Easier migration – Migrating from other databases becomes smoother with support for UNISTR and ||, reducing the need for syntax rewrites.
- Global reach – UNISTR simplifies Unicode handling, making it easier to build apps for international markets.
- Cleaner code – The || operator improves readability and aligns with modern SQL practices.

Learn more

- UNISTR (Transact-SQL) – SQL Server | Microsoft Learn
- || (String Concatenation) (Transact-SQL) – SQL Server | Microsoft Learn
- ||= (Compound assignment) (Transact-SQL) – SQL Server | Microsoft Learn

Conclusion

In this blog post, we announced the General Availability (GA) of the UNISTR function and the ANSI SQL string concatenation operator (||) in Azure SQL. The UNISTR function allows you to escape Unicode characters, making it easier to work with international text. The ANSI SQL || operator provides a simple and intuitive way to combine character or binary data. These enhancements reflect our commitment to making Azure SQL the most developer-friendly, standards-compliant, cloud-first database platform. Whether you’re modernizing legacy systems, building multilingual apps, or writing cleaner SQL, these features are designed to make your journey smoother.

We hope you will explore these enhancements, apply them in your projects, and share your feedback with us to help us continue improving. Thank you!

Enhanced Server Audit for Azure SQL Database: Greater Performance, Availability and Reliability
We are excited to announce a significant update to the server audit feature for Azure SQL Database. We have re-architected major portions of SQL Auditing, resulting in increased availability and reliability of server audits. As an added benefit, we have achieved closer feature alignment with SQL Server and Azure SQL Managed Instance. Database auditing remains unchanged. In the remainder of this blog article, we cover:

- Functional changes
- Changes affecting customers
- Sample queries
- Call for action
- Implementation and notifications
- Time-based filtering

Functional Changes

In the current design, when server audit is enabled, it triggers a database-level audit and executes one audit session for each database. With the new architecture, enabling server audit creates one extended event session at the server level that captures audit events for all databases. This optimizes memory and CPU and is consistent with how auditing works in SQL Server and Azure SQL Managed Instance.

Changes Affecting Customers

- Folder structure change for the storage account
- Folder structure change for Read-Only replicas
- Permissions required to view audit logs

One of the primary changes involves the folder structure for audit logs stored in storage account containers. Previously, server audit logs were written to separate folders, one for each database, with the database name serving as the folder name. With the new update, all server audit logs will be consolidated into a single folder, the ‘Master’ folder. This behavior is the same as Azure SQL Managed Instance and SQL Server. For Read-Only database replicas, which previously had their logs stored in a read-only folder, those logs will now also be written into the Master folder. You can retrieve these logs by filtering on the new column ‘is_secondary_replica_true’. Please note that audit logs generated after deployment will adhere to the new folder structure, while existing audit logs will stay in their current folders until their retention periods expire.

Sample Queries

To help you adopt these changes in your workflows, here are some sample queries.

To query audit logs for a specific database called "test":

Current:
SELECT * FROM sys.fn_get_audit_file ('https://testaudit.blob.core.windows.net/sqldbauditlogs/auditpoc/test/SqlDbAuditing_ServerAudit_NoRetention/2023-01-29/07_06_40_590_0.xel', default, default)

New:
SELECT * FROM sys.fn_get_audit_file ('https://testaudit.blob.core.windows.net/sqldbauditlogs/auditpoc/master/SqlDbAuditing_ServerAudit_NoRetention/2023-01-29/07_06_40_590_0.xel', default, default) WHERE database_name = 'test';

To query audit logs for the test database from a read-only replica:

Current:
SELECT * FROM sys.fn_get_audit_file ('https://testaudit.blob.core.windows.net/sqldbauditlogs/auditpoc/test/SqlDbAuditing_ServerAudit_NoRetention/2023-01-29/RO/07_06_40_590_0.xel', default, default)

New:
SELECT * FROM sys.fn_get_audit_file ('https://testaudit.blob.core.windows.net/sqldbauditlogs/auditpoc/master/SqlDbAuditing_ServerAudit_NoRetention/2023-01-29/07_06_40_590_0.xel', default, default) WHERE is_secondary_replica_true = 'true';

Permissions required to view audit logs:

Current: CONTROL DATABASE on the user database
New: Server admin privileges

Implementation and Notifications

We are rolling out this change region-wise.
Subscription owners will receive notifications with the subject “Update your scripts to point to a new folder for server level audit logs” for each region as the update is implemented. It is important to update any scripts that retrieve audit logs based on the database name folder structure for the specific region. Note that this change applies only to server-level auditing; database auditing remains unchanged.

Call for Action

These actions apply only to customers who are using storage account targets. No action is needed for customers using Log Analytics or Event Hubs.

- Folder references: Change the reference for audit logs from the database name folder to the Master folder and use specific filters to retrieve logs for the required database.
- Read-Only database replicas: Update references for audit logs from the Read-Only replica folder to the Master folder and filter using the new parameter as shown in the examples.
- Permissions: Ensure you have the necessary CONTROL SERVER permissions to review the audit logs for each database using fn_get_audit_file.
- Manual queries: This update also applies to manual queries where you use fn_get_audit_file to retrieve audit logs from the storage account.

Time-based Filtering

To enhance your ability to query audit logs using filters, consider using efficient time-based filtering with the fn_get_audit_file_v2 function. This function allows you to retrieve audit log data with improved filtering capabilities. For more details, refer to the official documentation here.
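As a rough sketch of what that time-based filtering can look like (the storage path reuses the sample above; the parameter order and timestamp values are assumptions worth confirming against the documentation):

SELECT *
FROM sys.fn_get_audit_file_v2 (
    'https://testaudit.blob.core.windows.net/sqldbauditlogs/auditpoc/master/',
    default,                   -- initial_file_name
    default,                   -- audit_record_offset
    '2023-01-29T07:00:00Z',    -- start time (UTC)
    '2023-01-29T08:00:00Z'     -- end time (UTC)
)
WHERE database_name = 'test';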
MSSQL Extension for VS Code: Agent Mode Updates, Colored Connections, and Schema Designer updates

The MSSQL Extension for VS Code keeps getting better—bringing thoughtful updates that make SQL development more conversational, more visual, and more local. In version v1.34.0, we’ve focused this release on deepening GitHub Copilot Agent Mode, improving connection clarity through color-coded indicators, making local container workflows more flexible, enhancing the Schema Designer, and fixing bugs across the experience. Here’s a look at what’s new in this release and how it helps simplify your SQL development workflow.

What’s new in MSSQL extension for VS Code v1.34

This release includes four major improvements:

- GitHub Copilot Agent Mode (update) — A more capable conversational experience, with new tools for connecting, managing, and navigating your database using natural language.
- Color-coded Connections — Bring clarity to your workspace by assigning custom colors to your active connections, now shown right in the status bar.
- Improved Local SQL Server Container Experience — Enhanced support for assigning containers to connection groups and managing profiles with even more flexibility.
- Schema Designer Improvements — Foreign key icons are now correctly displayed, and filter behavior has been refined to better reflect table relationships—making it easier to navigate complex schemas visually.

GitHub Copilot Agent Mode updates (Public Preview)

We’ve deepened GitHub Copilot’s Agent Mode experience with expanded capabilities. You can now use chat to generate and execute queries, view data, connect to specific databases, and explore your schema—all from a conversational, schema-aware assistant right inside the editor.

Key highlights

- Change database – Switch to a different database in the current session
- List schemas – Show all schemas in the connected database
- List tables – Show all tables in the connected database
- List views – Show all views in the connected database
- List functions – Show all functions in the connected database
- Show connection details – Get details about the current SQL connection
- List databases – Show all databases on the connected server
- Run query – Execute a SQL query in the active connection

Here’s how easy it is to connect, switch databases, and explore your full schema—from chat.

Color-Coded Connections

Managing multiple databases just got easier. You can now assign custom colors to your connections and instantly recognize them in the VS Code status bar. Whether you’re working across dev, test, and prod environments—or switching between customers and projects—color makes it easy to stay oriented and avoid mistakes.

Key highlights

- Assign a custom color to any saved connection profile
- Connection color appears in the status bar so you always know where you’re connected
- Optional group-based coloring lets teams align on shared conventions
- Thanks to the community! Group-based coloring length is now configurable (shout-out to @bathetrade)

Here’s how it looks in action:

Local SQL Server Container updates (Public Preview)

We’ve made it even easier to use SQL Server containers for local development. You can now assign containers to connection groups and manage container profiles with more flexibility—all from a streamlined wizard in the extension.

Key highlights

- Assign containers to connection groups for easier organization
- More flexible profile management to suit your dev workflow
- Streamlined wizard UI to spin up containers faster

Schema Designer updates (Public Preview)

We’ve improved the Schema Designer to make it more intuitive and visually informative.
Key highlights

- Foreign key icons are now correctly displayed
- Smarter filtering reflects table relationships more clearly

These enhancements make it easier to understand and navigate your database schema as you design it.

Other updates

- Multiple Azure account sign-in – Sign into multiple Azure accounts to browse subscriptions and manage firewall rules
- Connection-sharing API – Other VS Code extensions can now access MSSQL connections for deeper integration
- Accessibility enhancements – Improved support across connection dialog, schema designer, and schema compare
- Always Encrypted fix – Resolved issues when using Azure Key Vault for encrypted columns
- UI consistency polish – Minor tweaks to dialogs and interactions for a smoother user experience

Conclusion

The v1.34 release brings thoughtful updates across the board—from deepening GitHub Copilot Agent Mode with new schema-aware commands, to adding clarity through color-coded connections, to refining the local SQL Server container experience and Schema Designer filters. These updates help make SQL development more intuitive, flexible, and developer-friendly inside Visual Studio Code. If there’s something you’d love to see in a future update, here’s how you can contribute:

- 💬 GitHub discussions – Share your ideas and suggestions to improve the extension
- ✨ New feature requests – Request missing capabilities and help shape future updates
- 🐞 Report bugs – Help us track down and fix issues to make the extension more reliable

Want to see these features in action?

- GitHub Copilot Agent Mode demo
- Full playlist of demos

Thanks for being part of the journey—happy coding! 🚀

MSSQL Extension for VS Code: Local Containers, GitHub Copilot Agent Mode and Connection Groups
The MSSQL Extension for VS Code continues to evolve, bringing powerful new features that make SQL development more local, more organized, and more intelligent. In version v1.33.0, we’re introducing Local SQL Server Containers, GitHub Copilot Agent Mode, and Server Groups—three capabilities designed to simplify and modernize the way developers build applications using SQL Server in Visual Studio Code. Here’s a closer look at what’s included in this release and how these features can enhance your SQL development workflow.

What’s new in MSSQL extension for VS Code v1.33

This release introduces three major capabilities designed to streamline the SQL development experience:

- Local SQL Server Containers (Public Preview) — Easily spin up and manage SQL Server containers directly from within the extension.
- GitHub Copilot Agent Mode (Public Preview) — A conversational, schema-aware assistant that helps you explore, understand, and optimize your database through natural language interactions.
- Server Groups — Organize your saved connections into folders for improved clarity and control.

Local SQL Server Containers (Public Preview)

You can now create and manage SQL Server containers locally, without writing a single Docker command. The new Local SQL Container experience allows you to launch a fully configured SQL Server container from the MSSQL extension’s UI. By default, the SQL container wizard uses SQL Server 2025 (Public Preview), the latest version of SQL Server, which includes native support for vector data types, JSON functions, and other AI-ready features. This makes it ideal for building modern, intelligent applications locally. This new capability is perfect for prototyping, developing, or testing workloads using the same engine available in production environments.

Key highlights

- Auto-connect: A connection profile is automatically created and ready for use
- Lifecycle controls: Start, stop, restart, or delete containers from the connection panel
- Docker environment checks: Get notified if Docker isn’t running or installed
- Port conflict detection: If port 1433 (the default SQL Server port) is already in use, the extension will automatically find and assign the next available port for your container
- Custom settings: Define container name, hostname, and port via the UI
- Other versions supported: You can also choose to run a SQL Server 2022 or 2019 container

In the Connections view, select the “Add” (+) button, then choose “Create local SQL Server container” from the options.

GitHub Copilot Agent Mode (Public Preview)

We’re introducing GitHub Copilot Agent Mode in Public Preview, bringing a contextual, action-driven chat experience into the MSSQL extension. This first release enables Copilot to interpret your intent and perform real tasks inside Visual Studio Code—making it easier to interact with your database through natural language prompts.

Key highlights

- List servers – View all available SQL Server connections
- Connect to server or database – Establish a new database connection
- Disconnect from server or database – End the active connection
- Run query – Execute a SQL query directly from the chat window
- Show database schema – Display the schema for the connected database

These tools are surfaced through natural interactions. For example, you might type: "Connect to my LocalDev environment" or "Show me the schema of the Products table for my e-commerce database."
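A prompt like the last one is simply a convenient way to ask a schema question; if you want to check the same information yourself, a rough T-SQL equivalent (the Products table name is just the example used above) is:

-- List the columns, data types, and nullability of the Products table
SELECT COLUMN_NAME, DATA_TYPE, IS_NULLABLE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Products'
ORDER BY ORDINAL_POSITION;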
Server Groups

If you manage multiple projects, clients, or environments, Server Groups bring structure to your connection list by allowing you to group saved connections into folders. This improves navigation and makes it easier to keep development, staging, and production environments organized—all within the MSSQL extension.

Key highlights

- Create and rename folders for saved connections
- Drag and drop connections into different groups
- Edit an existing connection and assign it to a group directly from the connection dialog
- Collapse or expand groups to declutter your view
- Easily switch between environments with confidence

Whether you’re managing five connections or fifty, Server Groups help you stay organized and focused.

Conclusion

The v1.33 release brings us closer to our vision of a modern, developer-first SQL experience inside Visual Studio Code. From launching a local container powered by SQL Server 2025, to organizing your connections with Server Groups, to exploring your schema through GitHub Copilot Agent Mode—this update is all about helping developers build faster, smarter, and more locally. If there’s something you’d love to see in a future update, here’s how you can contribute:

- 💬 GitHub discussions – Share your ideas and suggestions to improve the extension
- ✨ New feature requests – Request missing capabilities and help shape future updates
- 🐞 Report bugs – Help us track down and fix issues to make the extension more reliable

Want to see these features in action?

- Local SQL Server Container demo
- GitHub Copilot Agent Mode demo
- Full playlist of demos

Thanks for being part of the journey—happy coding! 🚀

Immutability: A Powerful Shield Against Ransomware in SQL Environments
In today’s digital landscape, ransomware attacks are among the most disruptive and costly threats organizations face. These attacks often target critical data—encrypting or deleting it to extort payment. One of the most effective strategies to mitigate this risk is immutability.

🧱 What Is Immutability?

Immutability means that once data is written, it cannot be altered or deleted for a defined period. This concept is widely used in software engineering (e.g., immutable objects in programming) but has become a cornerstone in data protection and cybersecurity. In storage and backup systems, immutability ensures that even if an attacker gains access, they cannot tamper with or erase the protected data.

🛡️ How Immutability Protects Against Ransomware

Ransomware typically works by:

1. Gaining access to systems.
2. Encrypting or deleting data.
3. Demanding ransom for restoration.

With immutable backups or storage:

- Encrypted or deleted data can be restored from a clean, untouchable copy.
- Attackers cannot modify or delete immutable data, even with elevated privileges.
- Organizations can recover quickly without paying the ransom.

Immutability with Azure storage

Azure storage supports immutability for blob storage and is compliant with industry requirements in this space.

🗃️ Using Azure immutable storage for SQL Server backups

SQL Server backups can be written to Azure storage using the BACKUP TO URL T-SQL command. However, writing backups to immutable storage is not supported in SQL Server 2022 and prior versions. This is because of the way SQL Server writes backups to the .bak file: it first creates the .bak file filled with zeros and then updates it with the data. By definition of immutability, a file once written cannot be edited or deleted, so the current two-step backup process does not allow writing to Azure immutable storage. You will see the following error when you attempt to write a backup to immutable storage with SQL Server 2022.

With SQL Server 2025 CTP 2.1 Preview, instead of the current two steps to create and write the .bak file, everything is done in one step. This change only applies when writing to Azure Blob Storage, specifically when backing up to block blobs, and does not impact other targets such as disk or S3-compatible object storage.

Immutability in action

Following is an end-to-end scenario of configuring immutability for an Azure storage container and creating a native SQL backup file to it using SQL Server 2025 CTP 2.1 Preview.

Configure immutability for the Azure storage container:

1. Navigate to your storage account and then to the desired container under Data storage.
2. Select the More button on the right for the desired container and select “Access policy”.
3. Click on Add policy.
4. Select Time-based retention from the Policy type.
5. Enter a number for the retention. It is advised to choose a very small number, such as 1-2 days, for testing purposes.
6. Leave “Enable version-level immutability” unchecked.
7. For testing purposes, we highly recommend not locking the immutable policy, as that will prevent you from making any modifications.
8. Leave the default of “Allow protected append writes to” set to None.
9. Select Save.
10. After saving, go back to More > Access policy on the storage container to verify immutability. You should see a container-scoped, time-based immutable policy with the configured retention period.
11. Retrieve the SAS token – from the storage container, select More, and then select “Generate SAS” with the relevant Permissions specified.
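If you prefer the command line to the portal for that last step, a hedged Azure CLI sketch might look like the following (the account and container names match the demo below, dnstorage1 and immcontainer; the permissions and expiry are placeholder choices to adjust):

# Generate a container-scoped SAS token with the Azure CLI (a sketch; adjust permissions and expiry)
az storage container generate-sas \
  --account-name dnstorage1 \
  --name immcontainer \
  --permissions racwl \
  --expiry 2025-12-31T00:00:00Z \
  --output tsv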
From the generated SAS token, copy the token. Then create the backup:

1. Switch to SQL Server Management Studio.
2. Connect to the SQL Server 2025 CTP 2.1 Preview instance.
3. In a New Query window, create the credential as follows:

IF NOT EXISTS (SELECT * FROM sys.credentials WHERE name = 'https://dnstorage1.blob.core.windows.net/immcontainer')
CREATE CREDENTIAL [https://dnstorage1.blob.core.windows.net/immcontainer]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = '<token>';

4. Enable the trace flag to switch to the single-write workflow:

DBCC TRACEON(3012,-1)

5. Perform a database backup as follows:

BACKUP DATABASE WideWorldImporters
TO URL = 'https://dnstorage1.blob.core.windows.net/immcontainer/WideWorldImporters.bak';
GO

The backup should succeed.

Some details to note:

- Formerly one could see a 0-byte blob when the backup process started. In the new workflow, the blob will not be visible under your storage account in the Azure portal until the backup has completed successfully.
- If the backup fails midway, you may not see any blob in the container in the Azure portal, but it may exist in an uncommitted state. If you retry without specifying the FORMAT option, the backup statement may fail, indicating that a blob already exists at the specified path and that you should use the WITH FORMAT option. If you retry using the WITH FORMAT option, the backup workflow will be able to proceed.
- Once a backup has completed successfully, use of the FORMAT option becomes inconsequential for as long as the backup is protected by the immutable policy, since nobody can delete or overwrite the blob.
- The example used a time-based retention policy, but the same result can be achieved using a legal hold for immutability. Azure immutable storage has more details on time-based vs. legal hold types of immutability.

Alternatively, you can also follow the instructions at Code examples to create a Shared Access Signature, create a credential, and perform a full database backup.

Verifying immutability

Once the backup is created, you can verify the immutability of the backup by attempting to modify or delete it.

Modifying the file by attempting to overwrite the same file: I attempted to re-run the same command to overwrite the backup file, and I get the following error:

Adding WITH FORMAT:

Deleting the immutable backup: From the Azure storage container, I tried to delete the backup file I just created:

As you can see, once written, the backup files cannot be modified or deleted until the configured retention period expires.

🔚 Final Thoughts

Immutability is not just a buzzword—it's a critical defense mechanism in the fight against ransomware. For SQL databases, where data integrity and availability are paramount, implementing immutable backups and storage can mean the difference between a minor incident and a catastrophic loss. By embracing immutability, organizations can ensure that their most valuable data remains safe, recoverable, and resilient—no matter what threats come their way.

Azure Data Studio Retirement
We’re announcing the upcoming retirement of Azure Data Studio (ADS) on February 6, 2025, as we focus on delivering a modern, streamlined SQL development experience. ADS will remain supported until February 28, 2026, giving developers ample time to transition. This decision aligns with our commitment to simplifying SQL development by consolidating efforts on Visual Studio Code (VS Code) with the MSSQL extension, a powerful and versatile tool designed for modern developers.

Why Retire Azure Data Studio?

Azure Data Studio has been an essential tool for SQL developers, but evolving developer needs and the rise of more versatile platforms like VS Code have made it the right time to transition. Here’s why:

- Focus on innovation: VS Code, widely adopted across the developer community, provides a robust platform for delivering advanced features like cutting-edge schema management and improved query execution.
- Streamlined tools: Consolidating SQL development on VS Code eliminates duplication, reduces engineering maintenance overhead, and accelerates feature delivery, ensuring developers have access to the latest innovations.

Why Transition to Visual Studio Code?

VS Code is the #1 developer tool, trusted by millions worldwide. It is a modern, versatile platform that meets the evolving demands of SQL and application developers. By transitioning, you gain access to cutting-edge tools, seamless workflows, and an expansive ecosystem designed to enhance productivity and innovation. We’re committed to meeting developers where they are, providing a modern SQL development experience within VS Code. Here’s how:

Modern development environment

VS Code is a lightweight, extensible, and community-supported code editor trusted by millions of developers. It provides:

- Regular updates.
- An active extension marketplace.
- A seamless cross-platform experience for Windows, macOS, and Linux.

Comprehensive SQL features

With the MSSQL extension in VS Code, you can:

- Execute queries faster with filtering, sorting, and export options for JSON, Excel, and CSV.
- Manage schemas visually with Table Designer, Object Explorer, and support for keys, indexes, and constraints.
- Connect to SQL Server, Azure SQL (all offerings), and SQL database in Fabric using an improved Connection Dialog.
- Streamline development with scripting, object modifications, and a unified SQL experience.
- Optimize performance with an enhanced Query Results Pane and execution plans.
- Integrate with DevOps and CI/CD pipelines using SQL Database Projects.

Stay tuned for upcoming features—we’re continuously building new experiences based on feedback from the community. Make sure to follow the MSSQL repository on GitHub to stay updated and contribute to the project!

Streamlined workflow

VS Code supports cloud-native development, real-time collaboration, and thousands of extensions to enhance your workflows.

Transitioning to Visual Studio Code: What You Need to Know

We understand that transitioning tools can raise concerns, but moving from Azure Data Studio (ADS) to Visual Studio Code (VS Code) with the MSSQL extension is designed to be straightforward and hassle-free. Here’s why you can feel confident about this transition:

No loss of functionality

If you use ADS to connect to Azure SQL databases, SQL Server, or SQL database in Fabric, you’ll find that the MSSQL extension supports these scenarios seamlessly. Your database projects, queries, and scripts created in ADS are fully compatible with VS Code and can be opened without additional migration steps.
Familiar features, enhanced experience

VS Code provides advanced tools like improved query execution, modern schema management, and CI/CD integration. Additionally, alternative tools and extensions are available to replace ADS capabilities like SQL Server Agent and Schema Compare.

Cross-platform and extensible

Like ADS, VS Code runs on Windows, macOS, and Linux, ensuring a consistent experience across operating systems. Its extensibility allows you to adapt it to your workflow with thousands of extensions.

If you have further questions or need detailed guidance, visit the ADS Retirement page. The page includes step-by-step instructions, recommended alternatives, and additional resources.

Continued Support

With the Azure Data Studio retirement, we’re committed to supporting you during this transition:

- Documentation: Find detailed guides, tutorials, and FAQs on the ADS Retirement page.
- Community Support: Engage with the active Visual Studio Code community for tips and solutions. You can also explore forums like Stack Overflow.
- GitHub Issues: If you encounter any issues, submit a request or report bugs on the MSSQL extension’s GitHub repository.
- Microsoft Support: For critical issues, reach out to Microsoft Support directly through your account.

Transitioning to VS Code opens the door to a more modern and versatile SQL development experience. We encourage you to explore the new possibilities and start your journey today!

Conclusion

Azure Data Studio has served the SQL community well, but the Azure Data Studio retirement marks an opportunity to embrace the modern capabilities of Visual Studio Code. Transitioning now ensures you’re equipped with cutting-edge tools and a future-ready platform to enhance your SQL development experience. For a detailed guide on ADS retirement, visit aka.ms/ads-retirement. To get started with the MSSQL extension, check out the official documentation. We’re excited to see what you build with VS Code!

Running SQL Server 2025 on Linux Containers in WSL
Hey there! Ready to dive into the awesome world of SQL Server 2025? This latest release is packed with cool new features and enhancements that you won't want to miss. Curious to know more? Check out the SQL Server on Linux release notes for all the exciting details!

In this blog post, I will show you how quick and easy it is to get SQL Server 2025 up and running in a container on Windows Subsystem for Linux (WSL). We’ll create a test database, set up some tables, and even play around with a few REGEX functions using the latest SQL Server Management Studio! Running SQL Server in containers on WSL is super flexible for development and testing. It’s perfect for developers who want to experiment with Linux-native features on a Windows machine without needing a separate Linux system. Let’s get started!

For detailed instructions on setting up WSL, check out the Install WSL guide. You can also deploy SQL Server as a systemd service by following the SQL Server Installation on WSL guide.

First things first, install WSL and a SQL Server 2025-compatible distro. For this demo, I went with Ubuntu 22.04. I’ve already got Docker Desktop installed on my machine to manage the containers. With this setup, you can install any supported SQL Server on the WSL-compatible distro for testing or demo purposes. It’s super handy for any development work too.

wsl --install
wsl -l -o
wsl --install Ubuntu-22.04

Heads up! The above step might need a system restart and will automatically download Ubuntu if it’s not already on your system.

Next, let’s deploy SQL Server 2025 on Ubuntu 22.04. I ran a simple docker run command, and ta-da! I can connect to it using SQL Server Management Studio, as shown below:

docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=DontUseThisSmartPwd:)" \
  -e "MSSQL_PID=Developer" -e "MSSQL_AGENT_ENABLED=true" \
  -p 14333:1433 --name sqlcontainerwsl --hostname sqlcontainerwsl \
  -d mcr.microsoft.com/mssql/server:2025-latest

I tried out the new SQL Server Management Studio and had some fun testing a few REGEX functions:

/* Here are some SQL scripts to create some tables with sample records for the demo.
   These tables contain sample data for testing the regular expression functions. */

-- 1. **employees Table: For REGEXP_LIKE Example**
-- Create employees table with some records
DROP TABLE IF EXISTS employees
CREATE TABLE employees (
    ID INT IDENTITY(101,1),
    [Name] VARCHAR(150),
    Email VARCHAR(320),
    Phone_Number NVARCHAR(20)
);
INSERT INTO employees ([Name], Email, Phone_Number)
VALUES
    ('John Doe', '[email protected]', '123-4567890'),
    ('Alice Smith', 'alice@fabrikam@com', '234-567-81'),
    ('Bob Johnson', 'bob.fabrikam.net','345-678-9012'),
    ('Eve Jones', '[email protected]', '456-789-0123'),
    ('Charlie Brown', '[email protected]', '567-890-1234');
GO

-- 2. **customer_reviews Table: For REGEXP_COUNT Example**
DROP TABLE IF EXISTS customer_reviews
CREATE TABLE customer_reviews (
    review_id INT PRIMARY KEY,
    review_text VARCHAR(1000)
);
INSERT INTO customer_reviews (review_id, review_text)
VALUES
    (1, 'This product is excellent! I really like the build quality and design.'),
    (2, 'Good value for money, but the software could use improvements.'),
    (3, 'Poor battery life, bad camera performance, and poor build quality.'),
    (4, 'Excellent service from the support team, highly recommended!'),
    (5, 'The product is good, but delivery was delayed. Overall, decent experience.');
GO
-- 3. **process_logs Table: For REGEXP_INSTR Example**
DROP TABLE IF EXISTS process_logs
CREATE TABLE process_logs (
    log_id INT PRIMARY KEY,
    log_entry VARCHAR(1000)
);
INSERT INTO process_logs (log_id, log_entry)
VALUES
    (1, 'Start process... Step 1: Initialize. Step 2: Load data. Step 3: Complete.'),
    (2, 'Begin... Step 1: Validate input. Step 2: Process data. Step 3: Success.'),
    (3, 'Step 1: Check configuration. Step 2: Apply settings. Step 3: Restart.'),
    (4, 'Step 1: Authenticate. Step 2: Transfer data. Step 3: Log complete.'),
    (5, 'Step 1: Initiate system. Step 2: Check logs. Step 3: Shutdown.');
GO

-- 4. **transactions Table: For REGEXP_REPLACE Example**
DROP TABLE IF EXISTS transactions
CREATE TABLE transactions (
    transaction_id INT PRIMARY KEY,
    credit_card_number VARCHAR(19)
);
INSERT INTO transactions (transaction_id, credit_card_number)
VALUES
    (1, '1234-5678-9101-1121'),
    (2, '4321-8765-1098-7654'),
    (3, '5678-1234-9876-5432'),
    (4, '9876-4321-6543-2109'),
    (5, '1111-2222-3333-4444');
GO

-- 5. **server_logs Table: For REGEXP_SUBSTR and Data Cleanup Example**
DROP TABLE IF EXISTS server_logs
CREATE TABLE server_logs (
    log_id INT PRIMARY KEY,
    log_entry VARCHAR(2000)
);
INSERT INTO server_logs (log_id, log_entry)
VALUES
    (1, '2023-08-15 ERROR: Connection timeout from 192.168.1.1 user [email protected]'),
    (2, '2023-08-16 INFO: User login successful from 10.0.0.1 user [email protected]'),
    (3, '2023-08-17 ERROR: Disk space low on 172.16.0.5 user [email protected]'),
    (4, '2023-08-18 WARNING: High memory usage on 192.168.2.2 user [email protected]'),
    (5, '2023-08-19 ERROR: CPU overload on 10.1.1.1 user [email protected]');
GO

-- 6. **personal_data Table: For REGEXP_REPLACE (Masking Sensitive Data) Example**
DROP TABLE IF EXISTS personal_data
CREATE TABLE personal_data (
    person_id INT PRIMARY KEY,
    sensitive_info VARCHAR(100)
);
INSERT INTO personal_data (person_id, sensitive_info)
VALUES
    (1, 'John Doe - SSN: 123-45-6789'),
    (2, 'Jane Smith - SSN: 987-65-4321'),
    (3, 'Alice Johnson - Credit Card: 4321-5678-1234-8765'),
    (4, 'Bob Brown - Credit Card: 1111-2222-3333-4444'),
    (5, 'Eve White - SSN: 111-22-3333');
GO

/* These tables contain realistic sample data for testing the regular expression queries.
   You can modify or extend the records as needed for additional complexity. */

/* Let's see the use cases for `REGEXP_LIKE`, `REGEXP_COUNT`, `REGEXP_INSTR`, `REGEXP_REPLACE`,
   and `REGEXP_SUBSTR` in SQL. These examples are designed to handle real-world scenarios with
   multiple conditions, nested regex functions, or advanced string manipulations. */

/* 1. **REGEXP_LIKE to filter based on a Complex Pattern**
   Scenario #1: find all the employees whose email addresses are valid and end with .com */
SELECT [Name], Email
FROM Employees
WHERE REGEXP_LIKE(Email, '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.com$');
GO

/* Scenario #2: Recreate employees table with CHECK constraints for 'Email' and 'Phone_Number' columns */
DROP TABLE IF EXISTS Employees
CREATE TABLE Employees (
    ID INT IDENTITY(101,1),
    [Name] VARCHAR(150),
    Email VARCHAR(320) CHECK (REGEXP_LIKE(Email, '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$')),
    Phone_Number NVARCHAR(20) CHECK (REGEXP_LIKE (Phone_Number, '^(\d{3})-(\d{3})-(\d{4})$'))
);
INSERT INTO employees ([Name], Email, Phone_Number)
VALUES
    ('John Doe', '[email protected]', '123-456-7890'),
    ('Alice Smith', '[email protected]', '234-567-8100'),
    ('Bob Johnson', '[email protected]','345-678-9012'),
    ('Eve Jones', '[email protected]', '456-789-0123'),
    ('Charlie Brown', '[email protected]', '567-890-1234');
GO

-- CHECK Constraints - Ensure that the data fulfills the specified criteria.
-- FAILURE - Try inserting a row with INVALID values:
INSERT INTO Employees ([Name], Email, Phone_Number)
VALUES ('Demo Data', '[email protected]', '123-456-7890');
GO

SELECT * FROM Employees;

/* 2. **REGEXP_COUNT to Analyze Word Frequency in Text**
   Scenario: Counting Specific Words in Large Text Data
   Suppose you have a customer_reviews table, and you want to count the number of occurrences of
   specific words like "excellent", "good", "bad", or "poor" to evaluate customer sentiment. */
SELECT review_id,
       REGEXP_COUNT(review_text, '\b(excellent|good|bad|poor)\b', 1, 'i') AS sentiment_word_count,
       review_text
FROM customer_reviews;
GO

/* 3. **REGEXP_INSTR to Detect Multiple Patterns in Sequence**
   Scenario: Identify the Position of Multiple Patterns in Sequence
   Imagine you have log data where each entry contains a sequence of steps, and you need to find the
   position of a specific pattern like "Step 1", "Step 2", and "Step 3", ensuring they occur in sequence. */
SELECT log_id,
       REGEXP_INSTR(log_entry, 'Step\s1.*Step\s2.*Step\s3', 1, 1, 0, 'i') AS steps_position
FROM process_logs
WHERE REGEXP_LIKE(log_entry, 'Step\s1.*Step\s2.*Step\s3', 'i');
GO

/* 4. **REGEXP_REPLACE for replacing a string based on a pattern match**
   Scenario: Redacting Sensitive Information with Variable Lengths
   You need to redact sensitive data from a table that contains personal information like Social Security
   Numbers (SSNs) and credit card numbers. The challenge is that the data might be in different formats
   (e.g., ###-##-#### for SSNs and ####-####-####-#### for credit cards). */
SELECT sensitive_info,
       REGEXP_REPLACE(sensitive_info, '(\d{3}-\d{2}-\d{4}|\d{4}-\d{4}-\d{4}-\d{4})', '***-**-****') AS redacted_info
FROM personal_data;
GO

/* 5. **REGEXP_SUBSTR to Extract Nested Information**
   Scenario: Extract Specific Parts of a Complex String Format */
SELECT [Name], Email,
       REGEXP_SUBSTR(email, '@(.+)$', 1, 1, 'c', 1) AS Domain
FROM employees;
GO

/* 6. **Combining Multiple REGEXP Functions for Data Transformation**
   Scenario: Log Cleanup and Transformation
   You have raw server logs that contain noisy data. Your goal is to:
   1. Extract the date.
   2. Count how many times the word "ERROR" appears.
   3. Replace any email addresses with [REDACTED]. */
SELECT log_entry,
       REGEXP_SUBSTR(log_entry, '\d{4}-\d{2}-\d{2}', 1, 1) AS log_date,
       REGEXP_COUNT(log_entry, 'ERROR', 1, 'i') AS error_count,
       REGEXP_REPLACE(log_entry, '[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}', '[REDACTED]') AS cleaned_log
FROM server_logs;
GO

-- *TVFs*
/* 7. **REGEXP_MATCHES - Find all the matches in the string and return them in tabular format** */
SELECT * FROM REGEXP_MATCHES ('Learning #AzureSQL #AzureSQLDB', '#([A-Za-z0-9_]+)');

/* 8. **REGEXP_SPLIT_TO_TABLE - Split a string based on a regexp pattern** */
SELECT * FROM REGEXP_SPLIT_TO_TABLE ('192.168.0.1|80|200|Success|192.168.0.2|443|404|Not Found', '\|')
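If you prefer the command line to SSMS for a quick sanity check, something like the following should work from WSL or Windows (this assumes the sqlcmd tools are installed locally, reuses the password and port from the docker run above, and uses -C to trust the container's self-signed certificate):

# Quick connectivity check against the container started earlier
sqlcmd -S localhost,14333 -U sa -P 'DontUseThisSmartPwd:)' -C -Q "SELECT @@VERSION;"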
There are a lot of exciting features in SQL Server 2025! The Linux version includes all engine features from SQL Server 2025 on Windows. Check out the What’s New for SQL Server 2025 Preview on Linux for all the details. Here are a few highlights:

- Set custom password policy for SQL logins in SQL Server on Linux
- Enable and run tempdb on tmpfs for SQL Server 2025 Preview on Linux
- Connect to ODBC data sources with PolyBase on SQL Server on Linux

These features make SQL Server on Linux super versatile and powerful, giving you the same robust experience as on Windows.

Conclusion

SQL Server 2025 on Linux is a game-changer, offering flexibility and power for developers and IT pros alike. Whether you're developing, testing, or deploying in production, this setup has got you covered.

🔗 References

- aka.ms/sqlserver2025
- https://aka.ms/Build/sql2025blog
- https://aka.ms/IntroMirroringSQL
- http://aka.ms/optimized-hp-blog
- https://aka.ms/tempdb-rg-blog
- https://aka.ms/sqlserver2025-ctp-diskann
- SQL Server on Linux FAQ

Introducing "Backups on Secondary" for SQL Server Always On Availability Groups with SQL Server 2025

We’re excited to announce a major enhancement for SQL Server Always On Availability Groups in SQL Server 2025: Backups on Secondary. Until SQL Server 2022, you could only perform COPY_ONLY full backups and transaction log backups on a secondary replica of an Always On Availability Group. This enhancement in SQL Server 2025 allows you to offload all types of backups—full, differential, and transaction log—to a secondary replica, significantly improving performance, resource utilization, and operational flexibility.

What Is "Backups on Secondary"?

Traditionally, backups in an Always On Availability Group were performed on the primary replica, which could lead to resource contention and performance degradation for mission-critical workloads. With this update to the Backups on Secondary feature in SQL Server 2025, you can now configure your environment to perform all backup operations on a designated secondary replica. This includes:

- Full backups
- Differential backups
- Transaction log backups

Key Benefits

🔄 Reduced Load on the Primary Replica

By offloading backup operations to a secondary replica, you free up CPU, memory, and I/O resources on the primary replica, ensuring that your production workloads run more smoothly and efficiently.

🧩 Flexible Backup Strategies

You can now design more flexible and resilient backup strategies by leveraging secondary replicas in different geographic locations or data centers.

🛡️ Enhanced High Availability and Disaster Recovery

In the event of a failover, backup operations can continue seamlessly on another secondary replica, ensuring continuous data protection.

How It Works

The feature is easy to configure using T-SQL or SQL Server Management Studio (SSMS). There are two steps to make backups run on a secondary:

1. Configure backups to run on a secondary replica via the AUTOMATED_BACKUP_PREFERENCE and BACKUP_PRIORITY parameters – refer to Configure backups on secondary replicas of an Always On availability group for prerequisites and detailed steps.
2. Run the T-SQL BACKUP command using one of the tools such as SSMS, SQL Agent jobs, or Maintenance plans.

These are the same steps that are currently in place for SQL Server 2022 and prior versions, where this configuration only allowed COPY_ONLY and transaction log backups. Now, starting with SQL Server 2025, with the same configuration in place, you can perform FULL, DIFFERENTIAL, and T-LOG backups on a secondary replica and truly offload the backup overhead onto the secondary replica. You can also use the system function sys.fn_hadr_backup_is_preferred_replica to programmatically determine whether the current replica is the preferred one for backups.
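To make the two steps concrete, here is a minimal, hedged T-SQL sketch (the availability group name AG1, database name SalesDB, and backup path are placeholders, not from the article):

-- Step 1: prefer secondary replicas for automated backups (run on the primary)
ALTER AVAILABILITY GROUP [AG1] SET (AUTOMATED_BACKUP_PREFERENCE = SECONDARY);

-- Step 2: in the backup job scheduled on every replica, run the backup only where this replica is preferred
IF sys.fn_hadr_backup_is_preferred_replica(N'SalesDB') = 1
BEGIN
    BACKUP DATABASE [SalesDB]
    TO DISK = N'\\backupshare\SalesDB_full.bak'
    WITH COMPRESSION;
END;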
Final Thoughts

The Backups on Secondary feature is a game-changer for organizations looking to optimize their SQL Server environments for performance, availability, and cost. Whether you're managing a large-scale enterprise deployment or a hybrid cloud setup, this feature gives you the flexibility and control you need to build a more efficient and resilient data platform. This feature is available in SQL Server 2025 CTP 2.0. You can download it and try it out today.

The Microsoft.Build.Sql project SDK is now generally available

Your database should be part of a holistic development process, where iterative development tools are coupled with automation for validation and deployment. As previously announced, the Microsoft.Build.Sql project SDK provides a cross-platform framework for your database-as-code such that the database objects are ready to be checked into source control and deployed via pipelines like any other modern application component. Today Microsoft.Build.Sql enters general availability as another step in the evolution of SQL database development.

Standardized SQL database as code

SQL projects are a .NET-based project type for SQL objects, compiling a folder of SQL scripts into a database artifact (.dacpac) for manual or continuous deployments. As a developer working with SQL projects, you’re creating the T-SQL scripts that define the objects in the database. While the development framework around SQL projects presents a clear build and deploy process for development, there’s no wrong way to incorporate SQL projects into your development cycle. The SQL objects in the project can be manually written or generated via automation, including through the graphical schema compare interfaces or the SqlPackage extract command.

Whether you’re developing with SQL Server, Azure SQL, or SQL in Fabric, database development standardizes on a shared project format and the ecosystem of tooling around SQL projects. The same SQL projects tools, like the SqlPackage CLI, can be used to either deploy objects to a database or update those object scripts from a database. Free development tools for SQL projects, like the SQL database projects extension for VS Code and SQL Server Data Tools in Visual Studio, bring the whole development team together.

The database model validation of a SQL project build provides early verification of the SQL syntax used in the project, before code is checked in or deployed. Code analysis for antipatterns that impact database design and performance can be enabled as part of the project build and extended. This code analysis capability adds in-depth feedback to your team’s continuous integration or pre-commit checks as part of SQL projects. Objects in a SQL project are database objects you can have confidence in before they’re deployed across your environments.

Evolving from original SQL projects

SQL projects converted to the Microsoft.Build.Sql SDK benefit from support for .NET 8, enabling cross-platform development and automation environments. While the original SQL project file format explicitly lists each SQL file, SDK-style projects are significantly simplified by including any .sql file in the SQL project's folder structure. Database references enable SQL projects to be constructed for applications where a single project isn’t an effective representation, whether the database includes cross-database references or multiple development cycles contribute to the same database. Incorporate additional objects into a SQL project with database references through project references, .dacpac artifact references, and, new to Microsoft.Build.Sql, package references. Package references for database objects improve the agility and manageability of your database's release cycle through improved visibility into versioning and simplified management of the referenced artifacts.

Converting existing projects

The Microsoft.Build.Sql project SDK is a superset of the functionality of the original SQL projects, enabling you to convert your current projects on a timeline that works best for you.
The original SQL projects in SQL Server Data Tools (SSDT) continue to be supported through the Visual Studio lifecycle, providing years of support for your existing original projects. Converting an existing SQL project to a Microsoft.Build.Sql project is currently a manual process: you add a single line to the project file and remove several groups of lines. The resulting Microsoft.Build.Sql project file is generally easier to understand and iteratively develop, with significantly fewer merge conflicts than the original SQL projects. A command line tool, DacpacVerify, is now available to validate that your project conversion has completed without degrading the output .dacpac file. By creating a .dacpac before and after you upgrade the project file, you can use DacpacVerify to confirm that the database model, database options, pre/post-deployment scripts, and SQLCMD variables match.

The road ahead

With SQL Server 2025 on the horizon, support for the SQL Server 2025 target platform will be introduced in a future Microsoft.Build.Sql release along with additional improvements to the SDK references. Many Microsoft.Build.Sql releases will coincide with releases to the DacFx .NET library and the SqlPackage CLI, with preview releases ahead of general availability releases several times a year. Feature requests and bug reports for the DacFx ecosystem, including Microsoft.Build.Sql, are managed through the GitHub repository. With the v1 GA of Microsoft.Build.Sql, we’re also looking ahead to continued iteration in the development tooling. In Visual Studio, the preview of SDK-style SSDT continues with new features introduced in each Visual Studio release. Plans for Visual Studio include project upgrade assistance in addition to the overall replacement of the existing SQL Server Data Tools. In the SQL projects extension for VS Code, we’re both ensuring that SQL projects capabilities from Azure Data Studio are introduced and increasing the robustness of the VS Code project build experience.

The Microsoft.Build.Sql project SDK empowers database development to integrate with the development cycle, whether you're focused on reporting, web development, AI, or anything else. Use Microsoft.Build.Sql projects to branch, build, commit, and ship your database – get started today from an existing database or with a new project. Get to know SQL projects from the documentation and DevOps samples.
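To tie the branch-build-ship loop together, here is a rough command-line sketch (the project name, output path, server, and connection string are placeholders; the exact SqlPackage parameters are worth confirming against its documentation):

# Build the SQL project into a .dacpac using the Microsoft.Build.Sql SDK
dotnet build MyDatabase.sqlproj -c Release

# Publish the resulting .dacpac to a target database with the SqlPackage CLI
sqlpackage /Action:Publish \
  /SourceFile:bin/Release/MyDatabase.dacpac \
  /TargetConnectionString:"Server=myserver.database.windows.net;Database=MyDatabase;Authentication=Active Directory Default;"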
ZSTD compression in SQL Server 2025

Introducing ZSTD Compression in SQL Server 2025: A Leap Forward in Data Efficiency

With the release of SQL Server 2025 Preview, we are introducing a powerful new feature that promises to significantly enhance data storage efficiency: support for the Zstandard (ZSTD) compression algorithm. This open-source compression technology, originally developed by Yann Collet at Facebook, is now being integrated into SQL Server as a modern alternative to the long-standing MS_XPRESS compression algorithm.

What is ZSTD?

ZSTD is a fast, lossless compression algorithm that offers a compelling balance between compression ratio and speed. It has gained widespread adoption across the tech industry due to its:

- High compression ratios — often outperforming legacy algorithms like MS_XPRESS.
- Fast decompression speeds — ideal for read-heavy workloads.
- Scalability — tunable compression levels to balance CPU usage and storage savings.

Compression in SQL Server: Then and Now

SQL Server has long supported data compression to reduce storage costs and improve I/O performance. The MS_XPRESS algorithm, used for backup compression, has served well for many years. However, as data volumes grow and performance demands increase, a more efficient solution is needed. Enter ZSTD.

Key Benefits of ZSTD in SQL Server 2025

- Improved compression ratios: In internal benchmarks, ZSTD has shown up to 30–50% better compression compared to MS_XPRESS, depending on the data type and structure. This translates to significant storage savings, especially for large data warehouses and archival systems.
- Faster decompression: ZSTD’s optimized decompression path ensures that queries on compressed data remain fast, even as compression ratios improve.
- Flexible compression levels: SQL Server 2025 allows administrators to choose from multiple ZSTD compression levels, enabling fine-tuned control over the trade-off between CPU usage and compression efficiency.
- Seamless integration: ZSTD is integrated into the existing compression framework, meaning you can apply it using familiar T-SQL syntax and management tools.

How to Use ZSTD in SQL Server 2025

Using ZSTD is as simple as specifying it in your compression options. Examples:

Default compression of MS_XPRESS:

Specify ZSTD during BACKUP:

BACKUP DATABASE <database_name> TO {DISK|TAPE|URL}
WITH COMPRESSION (ALGORITHM = ZSTD)

Specify ZSTD and a compression level:

BACKUP DATABASE <database_name> TO {DISK|TAPE|URL}
WITH COMPRESSION (ALGORITHM = ZSTD, LEVEL = HIGH)

The default compression level is LOW. Allowed values are LOW, MEDIUM, and HIGH. As you would expect, specifying the HIGH compression level takes longer than the LOW compression level.

How to verify which compression algorithm was used: You can use the RESTORE HEADERONLY command to verify which compression algorithm was used during the backup. The CompressionAlgorithm column will display the name of the algorithm.

Comparing compression levels: A quick look at the file sizes shows that ZSTD does deliver some savings. Additionally, a higher level of compression is achieved when LEVEL = HIGH is specified.
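For reference, a minimal sketch of that verification step (the backup path is a placeholder):

-- Inspect backup metadata; check the CompressionAlgorithm column in the result set
RESTORE HEADERONLY FROM DISK = N'D:\Backups\MyDatabase_zstd.bak';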
Final Thoughts

While the actual savings from the new compression algorithm vary based on the nature of the data and the amount of resources available, in general we have seen significant improvements in compression. The integration of ZSTD into SQL Server 2025 marks a significant step forward in data compression technology. Whether you're managing terabytes of transactional data or optimizing your data lake for analytics, ZSTD offers a modern, efficient, and high-performance solution. Looking forward to hearing your feedback.

SQL Server 2025 brings a ton of new features and capabilities. Refer to What's new in SQL Server 2025 for all the excitement.

Cheers,
Dinakar