Mar 8

Azure AZ-104 Administrator Storage Solutions

Mindli Team

AI-Generated Content


Azure storage solutions are a critical component of the AZ-104 Microsoft Azure Administrator exam, as they form the backbone of data persistence and management in the cloud. Mastering these concepts ensures you can design cost-effective, secure, and highly available storage architectures that meet business requirements. For the exam, you'll need to demonstrate hands-on skills in configuring and managing various storage services, which are frequently tested in scenario-based questions.

Azure Storage Accounts: Foundation of Data Services

Every Azure storage service resides within a storage account, which is a unique namespace in Azure for your data. The exam requires you to understand the different account types, which dictate available services and performance characteristics. The primary type is General-purpose v2 (GPv2), which supports blobs, files, queues, tables, and offers standard performance tiers. For blob-only scenarios requiring premium performance, you would select BlockBlobStorage or FileStorage accounts. A common exam trap is confusing account types with access tiers, which are a separate cost-optimization feature for blob data. The three main tiers are Hot (for frequently accessed data), Cool (for infrequently accessed data with lower storage cost but higher access cost), and Archive (for rarely accessed data with the lowest storage cost but high retrieval latency and cost).
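As a sketch of how these choices come together, a GPv2 account with the Hot default access tier could be provisioned with the Azure CLI. The resource group and account names below are placeholders, and the commands assume an authenticated session (`az login`):

```shell
# Sketch: create a general-purpose v2 (StorageV2) account with standard
# locally redundant storage and Hot as the default blob access tier.
# "demo-rg" and "demostorage104" are hypothetical names.
az storage account create \
  --name demostorage104 \
  --resource-group demo-rg \
  --location eastus \
  --kind StorageV2 \
  --sku Standard_LRS \
  --access-tier Hot
```

Note that `--kind` selects the account type while `--access-tier` sets only the default tier for blob data; individual blobs can still be moved between Hot, Cool, and Archive independently.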

Equally important are redundancy options, which define how your data is replicated for durability and availability. You must know the trade-offs between Locally Redundant Storage (LRS), Zone-Redundant Storage (ZRS), Geo-Redundant Storage (GRS), and Read-Access Geo-Redundant Storage (RA-GRS). For instance, GRS replicates data to a secondary region, but the secondary copy is readable only if you use RA-GRS. Exam questions often present business continuity requirements, forcing you to choose the most cost-effective redundancy option that meets the Recovery Point Objective (RPO) and Recovery Time Objective (RTO).
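Redundancy can be changed after creation. As a hedged sketch (placeholder names, authenticated Azure CLI session assumed), converting an account from LRS to RA-GRS is a single SKU update:

```shell
# Sketch: upgrade replication from locally redundant (LRS) to
# read-access geo-redundant (RA-GRS). Names are hypothetical.
az storage account update \
  --name demostorage104 \
  --resource-group demo-rg \
  --sku Standard_RAGRS
```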

Core Storage Services: Blobs, Files, and Synchronization

Azure Blob Storage is the object storage service for massive amounts of unstructured data. Beyond basic containers and blobs, the blob lifecycle management feature is key for automating cost control. You can define rules that automatically transition blobs from Hot to Cool to Archive tiers, or delete them after a specified period, based on age or other filters. For example, a rule could move all .log files to Archive tier 30 days after creation. On the exam, expect scenario questions where you must configure such policies to meet data retention policies at minimum cost.
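A lifecycle rule like the log-file example above is expressed as a JSON policy document. The sketch below builds an illustrative policy (names, prefix, and day thresholds are assumptions) and shows, commented out, how it would be applied with the Azure CLI:

```shell
# Sketch of a lifecycle management policy: block blobs under the
# "logs/" prefix move to Cool after 30 days, to Archive after 90,
# and are deleted after 365. Rule name and thresholds are illustrative.
cat > policy.json <<'EOF'
{
  "rules": [
    {
      "enabled": true,
      "name": "tier-logs",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": {
            "tierToCool":    { "daysAfterModificationGreaterThan": 30 },
            "tierToArchive": { "daysAfterModificationGreaterThan": 90 },
            "delete":        { "daysAfterModificationGreaterThan": 365 }
          }
        },
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "logs/" ]
        }
      }
    }
  ]
}
EOF
# Apply it (requires an authenticated Azure CLI session):
# az storage account management-policy create \
#   --account-name demostorage104 --resource-group demo-rg \
#   --policy @policy.json
```

Because the policy engine evaluates rules roughly once a day, tier transitions defined here are not instantaneous.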

For file-based storage, Azure Files offers fully managed file shares accessible via the Server Message Block (SMB) or Network File System (NFS) protocols. SMB shares are commonly used for "lift and shift" of on-premises applications, while NFS shares are for Linux-based workloads. The critical configuration points are the protocol version (SMB 3.0+ is required for use over the internet) and the performance tier (Standard or Premium). To bridge on-premises environments with the cloud, Azure File Sync is a hybrid service. It synchronizes files between Windows Server file servers and an Azure file share, providing cloud tiering: the full dataset is stored in Azure while frequently accessed files stay cached on the local server, and infrequently used files are tiered to the cloud and recalled on demand. Exam scenarios often test your ability to choose between a pure Azure Files deployment and a File Sync hybrid model based on latency, caching, and central management requirements.
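As a minimal sketch (placeholder names, authenticated Azure CLI session assumed), an SMB file share with a 100 GiB quota can be created on an existing account:

```shell
# Sketch: create a 100 GiB SMB file share on an existing storage
# account via the management plane. Names are hypothetical.
az storage share-rm create \
  --resource-group demo-rg \
  --storage-account demostorage104 \
  --name projects \
  --quota 100
```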

Securing and Controlling Storage Access

Controlling who and what can access your storage data is a major exam domain. Shared access signatures (SAS) are crucial for granting limited, time-bound access to storage resources without sharing account keys. You must understand the differences between a service SAS (for a specific blob, file, queue, or table) and an account SAS (for multiple services). A SAS token includes permissions (like read, write, list) and a validity period. A frequent exam pitfall is creating a SAS with overly broad permissions or an excessively long expiry, creating security risks. Always follow the principle of least privilege.
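A least-privilege service SAS, as described above, pairs minimal permissions with a short expiry. The sketch below computes a one-hour expiry timestamp and shows, commented out, a read-only container SAS request; the account and container names are placeholders:

```shell
# Sketch: a read-only, one-hour service SAS for a single container,
# following least privilege. Names below are hypothetical.
EXPIRY=$(date -u -d '+1 hour' '+%Y-%m-%dT%H:%MZ')
echo "SAS will expire at: $EXPIRY"
# Requires the account key or an authenticated session:
# az storage container generate-sas \
#   --account-name demostorage104 --name mycontainer \
#   --permissions r --expiry "$EXPIRY" --https-only --auth-mode key
```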

For network-level security, storage firewalls and virtual networks allow you to restrict access to specific IP ranges or Azure Virtual Networks. You can configure network rules to deny all access by default and then allow only trusted networks. For private, secure connectivity without traversing the public internet, you use private endpoints. A private endpoint assigns a private IP address from your virtual network to the storage account, effectively bringing it into your VNet. Exam questions may present a scenario requiring compliance with data exfiltration protection, where you must combine firewall rules and private endpoints to ensure all traffic stays within the Azure backbone.
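The deny-by-default pattern described above might look like the following sketch (placeholder names; 203.0.113.0/24 is a documentation IP range standing in for your trusted network; authenticated Azure CLI session assumed):

```shell
# Sketch: deny public network access by default, then allow a single
# trusted IP range. All names and the range are hypothetical.
az storage account update \
  --name demostorage104 --resource-group demo-rg \
  --default-action Deny
az storage account network-rule add \
  --account-name demostorage104 --resource-group demo-rg \
  --ip-address 203.0.113.0/24
```

A private endpoint would then be added with `az network private-endpoint create`, targeting the storage account's `blob` sub-resource, to keep traffic on the Azure backbone entirely.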

Data Management Tools and Migration Strategies

Moving data into and out of Azure storage is a practical task tested on the AZ-104. For physically shipping large datasets (terabytes to petabytes), you use the Azure Import/Export service. You prepare hard drives, use the WAImportExport tool to encrypt and write data, ship them to an Azure datacenter, and Azure personnel upload the data. For network-based transfers, AzCopy is the high-performance command-line utility. You must be familiar with commands like azcopy copy and azcopy sync for moving data between local systems and Azure, or between storage accounts, using SAS tokens for authentication. For example, to upload a directory to a blob container: azcopy copy "C:\myFolder" "https://mystorage.blob.core.windows.net/mycontainer?<SAS_token>" --recursive.
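For recurring transfers, azcopy sync copies only files that have changed since the last run, rather than re-uploading everything. A sketch, with the same placeholder path, URL, and SAS token as the copy example:

```shell
# Sketch: one-way incremental sync from a local folder to a container.
# The path, URL, and SAS token are placeholders; requires AzCopy.
azcopy sync "C:\myFolder" \
  "https://mystorage.blob.core.windows.net/mycontainer?<SAS_token>" \
  --recursive --delete-destination=false
```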

For graphical management, Azure Storage Explorer is a standalone application that lets you visually manage storage accounts, containers, shares, and blobs. It's invaluable for quick operations like setting permissions, viewing properties, or generating SAS tokens. In exam scenarios, you might need to choose the right tool for a given data transfer job based on data size, network bandwidth, and time constraints. AzCopy is ideal for fast network transfers, Import/Export for offline bulk data, and Storage Explorer for ad-hoc administrative tasks.

Common Pitfalls

  1. Misconfiguring Access Tiers and Lifecycle Policies: A common mistake is moving blobs to the Archive tier without considering the high rehydration cost and time (several hours). For exam questions, carefully assess access patterns. If data needs to be available within minutes, Cool tier is better than Archive. Also, lifecycle rules only execute once per day, so immediate actions are not guaranteed.
  2. Overlooking SAS Security: Generating a SAS token with full permissions (read, write, delete, list) for a long duration is a security risk. The correct approach is to grant only the necessary permissions for the shortest viable time. Exam answers often include overly permissive SAS options as distractors.
  3. Confusing Redundancy with Backup: Storage redundancy (like GRS) protects against hardware failure or regional disaster, but it is not a backup solution against accidental deletion or corruption. If a blob is deleted or overwritten, that change replicates across all copies. The exam may test this distinction, emphasizing the need for separate backup solutions like soft delete or Azure Backup.
  4. Ignoring Network Configuration Steps for Azure Files: Simply creating an Azure Files SMB share does not make it accessible from on-premises. You must open port 445 (often blocked by ISPs) and configure authentication, either with Azure AD Domain Services or using the storage account key. Exam scenarios requiring hybrid access will test these prerequisite network and identity configurations.
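As a complement to redundancy (pitfall 3), blob soft delete retains deleted or overwritten blobs for a configurable window so they can be recovered. A sketch with placeholder names, assuming an authenticated Azure CLI session:

```shell
# Sketch: keep deleted blobs recoverable for 14 days. Account and
# resource group names are hypothetical; retention period illustrative.
az storage account blob-service-properties update \
  --account-name demostorage104 --resource-group demo-rg \
  --enable-delete-retention true \
  --delete-retention-days 14
```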

Summary

  • Storage accounts are the container for all Azure data services; you must select the correct type (GPv2, etc.), access tier (Hot, Cool, Archive), and redundancy option (LRS, ZRS, GRS) based on cost, performance, and durability needs.
  • Blob lifecycle management automates cost optimization by rule-based tier transitions, while Azure Files provides managed SMB/NFS shares and Azure File Sync enables hybrid cloud file architectures.
  • Secure access is layered: Use Shared Access Signatures (SAS) for granular, time-bound access, storage firewalls for IP/VNet restrictions, and private endpoints for secure, private network connectivity.
  • Data movement tools are scenario-specific: Employ Azure Import/Export for offline bulk transfers, AzCopy for high-performance command-line copies, and Storage Explorer for graphical management and exploration.
  • For the AZ-104 exam, focus on applying these services to business scenarios, prioritizing cost-effective, secure, and available solutions while avoiding common configuration errors related to permissions, networking, and data lifecycle.
