SAN Architecture Explained

Written By Amit Singh

I am a technology enthusiast with 15 years of experience in SAN and NAS Storage. 

In the world of information and communications technology (ICT), one of the most important components is the storage area network (SAN). But what exactly is SAN architecture, and how does it work? In this blog post, we will delve into the logical layout of a SAN infrastructure and explore its core components: hosts, fabric, and storage. By understanding the intricacies of SAN architecture, you will gain valuable insights into how this specialized network enables multiple devices to share storage resources efficiently and securely. Whether you’re an IT professional or simply interested in learning more about technology, this post will demystify SAN architecture and its role in modern storage systems. So, let’s dive in and explore the fascinating world of SAN architecture together.

I. Introduction

A brief overview of storage area network (SAN) architecture

A Storage Area Network (SAN) is a network architecture that allows storage devices to be connected to servers, enabling them to access the storage as if it were locally attached. There are several options for implementing storage on a network, including Direct Attached Storage (DAS), Network Attached Storage (NAS), and SANs.

DAS involves directly connecting storage devices to servers, typically using a Small Computer System Interface (SCSI). It provides block-level storage and is very efficient, but it lacks scalability and does not allow for easy sharing of storage resources.

NAS devices are connected to the network and accessed as file-level storage. They are ideal for sharing storage resources among multiple devices, but performance may be lower compared to block-level storage.

SANs, on the other hand, use Fibre Channel (FC) or Fibre Channel over Ethernet (FCoE) technology to provide high-speed block-level storage access. FC SANs offer high scalability and performance, making them suitable for large enterprise deployments. FCoE is a cost-effective alternative that combines the benefits of FC with the simplicity of Ethernet.

There are also iSCSI SANs, which transmit SCSI commands over IP networks. They provide block-level storage over IP and are a more affordable option for small to medium-sized deployments.

Here’s a table summarizing the different SAN architecture options:

| Architecture | Description |
| --- | --- |
| Direct Attached Storage (DAS) | Storage devices connected directly to servers. |
| Network Attached Storage (NAS) | Storage devices connected to the network and accessed as file-level storage. |
| Fibre Channel (FC) SAN | Uses FC technology for high-speed block-level storage access. |
| Fibre Channel over Ethernet (FCoE) SAN | Combines the benefits of FC with the simplicity of Ethernet. |
| iSCSI SAN | Transmits SCSI commands over IP networks, providing block-level storage over IP. |

II. Direct Attached Storage (DAS)

Explanation of DAS and its limitations

In contrast to storage area network (SAN) architecture, direct attached storage (DAS) refers to a configuration in which each server has its own direct connection to its own hard drives; the storage devices are locally attached to each server. DAS typically uses the Small Computer System Interface (SCSI) for communication between the server and the hard drives.

However, DAS has some limitations. Firstly, it lacks scalability. If a server needs more storage space, it cannot easily access the extra space available on another server’s hard drive. Additionally, DAS does not provide a centralized storage solution, making it difficult to manage and share data across multiple servers. It also results in a high degree of server dependency, as each server is responsible for managing its own storage.

Despite these limitations, DAS can still be a cost-effective solution for small-scale deployments or specific use cases where the storage requirements are relatively small and there is no need for shared storage or advanced data management features.

Overall, while DAS has its place in certain scenarios, many organizations opt for more advanced SAN architectures that provide improved scalability, performance, and centralized management capabilities.

III. Network Attached Storage (NAS)

Advantages and disadvantages of NAS

Network Attached Storage (NAS) offers several advantages and disadvantages for businesses:

Advantages:

  1. Easy setup and management: NAS systems are user-friendly and can be set up quickly. They offer a simple interface for managing storage, access permissions, and backups.
  2. File sharing and collaboration: NAS allows multiple users to access and share files simultaneously, fostering collaboration within an organization.
  3. Data backup and recovery: NAS systems often come with built-in backup and recovery features, providing reliable data protection.
  4. Scalability: NAS devices offer scalability options, allowing businesses to add more storage capacity as their needs grow.
  5. Cost-effective: NAS is generally more affordable compared to other storage solutions, making it suitable for small and medium-sized enterprises.

Disadvantages:

  1. Limited performance: NAS devices may not offer the same level of performance as other storage options, such as Direct Attached Storage (DAS) or Storage Area Network (SAN). This can be a disadvantage for businesses with high-performance requirements.
  2. Network dependency: NAS relies heavily on the network, which means that network issues or outages can affect accessibility and performance.
  3. Limited storage capacity: While NAS devices can be expanded, they may not have the same storage capacity as SAN solutions, making them less suitable for large-scale enterprises with extensive storage needs.
  4. Security concerns: NAS devices are vulnerable to network-based attacks, so businesses must implement appropriate security measures to protect their data.

Overall, NAS is a cost-effective and user-friendly storage solution suitable for small to medium-sized businesses that require basic file sharing and backup capabilities.

IV. Fibre Channel (FC) SAN Architecture

Key components and characteristics of FC SAN

A Fibre Channel (FC) SAN (Storage Area Network) consists of several key components and has certain characteristics that set it apart from other storage solutions.

Key components of an FC SAN include network cables, network adapters (also known as Host Bus Adapters or HBAs), and switches. These components are used to connect devices such as servers and storage arrays, enabling them to communicate and share data over the network.

Characteristics of an FC SAN include high performance, scalability, and reliability. FC SANs offer high-speed access to data, making them ideal for demanding applications that require fast and efficient data transfer. They can also scale easily to accommodate growing storage needs, supporting large amounts of data and multiple devices. FC SANs are known for their reliability and robustness, ensuring data integrity and availability.

Here’s a summary of the key components and characteristics of an FC SAN:

| Component / Characteristic | Description |
| --- | --- |
| Network cables | Used to physically connect devices in the FC SAN, typically using fibre optic cables. |
| Network adapters (HBAs) | Allow devices to connect to the FC SAN, converting data into a format the SAN can use. |
| Switches | Used to interconnect devices in the FC SAN, enabling data transfer between them. |
| High performance | Provides fast and efficient data transfer capabilities. |
| Scalability | Can easily accommodate growing storage needs and multiple devices. |
| Reliability | Ensures data integrity and availability. |

By utilizing these key components and characteristics, an FC SAN offers a powerful and reliable solution for storing and accessing data in enterprise environments.

V. Point-to-Point Configuration

Detailed explanation of the point-to-point FC SAN architecture

The point-to-point FC SAN architecture is a configuration in which two nodes are directly connected to each other, providing a dedicated connection for data transmission. This architecture is commonly used in Direct Attached Storage (DAS) environments. In a point-to-point configuration, each node has its own dedicated connection to the storage device, ensuring efficient data transfer without contention.

However, there are limitations to this architecture, including limited scalability and connectivity. Point-to-point configuration is not suitable for large-scale deployments where multiple nodes need to access the same storage device. Additionally, adding or removing devices in this configuration requires reconfiguration and can cause momentary pauses in data traffic.

Here’s a summary of the characteristics of point-to-point FC SAN architecture:

  • Direct connection between two nodes
  • Provides dedicated data transmission without contention
  • Used in DAS environments
  • Limited scalability and connectivity
  • Requires reconfiguration for device addition or removal

Overall, the point-to-point FC SAN architecture is ideal for small-scale setups where dedicated connections are required between nodes and storage devices. However, for larger and more complex deployments, other architectures like FC-AL or FC-SW provide better scalability and flexibility.

VI. Fibre Channel Arbitrated Loop (FC-AL) Configuration

Working principle and drawbacks of FC-AL configuration

One of the storage area network (SAN) architectures is the Fibre Channel Arbitrated Loop (FC-AL) configuration. In this configuration, devices are connected to a shared loop, where each device contends with other devices to perform I/O operations. However, FC-AL has some drawbacks:

  1. Performance: Due to the shared nature of the loop, devices must wait their turn to process I/O requests, resulting in lower overall performance compared to other SAN architectures.
  2. Scalability: FC-AL has limited scalability, as adding or removing a device on the loop requires loop re-initialization, causing a momentary pause in loop traffic.
  3. Connectivity: The loop configuration offers limited connectivity, making it suitable for direct-attached storage (DAS) environments but not ideal for large-scale SAN deployments.

Despite its drawbacks, FC-AL can be implemented without any interconnecting devices, either by directly connecting the devices in a ring through cables or by using FC hubs in a star topology. However, both implementations still suffer from the same performance and scalability limitations.
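A toy simulation can make the loop contention described above visible. This is an illustrative model only: the AL_PA values and I/O counts are invented, and the loop's access-fairness algorithm is ignored. In FC-AL, each device holds an Arbitrated Loop Physical Address (AL_PA), and the device with the lower AL_PA value wins arbitration, so every other device must wait its turn:

```python
# Toy model of FC-AL arbitration; AL_PAs and I/O counts are made up.
# Only one arbitration winner may transmit at a time, which is why
# shared-loop throughput degrades as more devices contend.

def arbitrate(requesters):
    """Return the AL_PA that wins the loop (lower value = higher priority)."""
    return min(requesters) if requesters else None

def run_loop(io_requests):
    """Serialize I/O one winner at a time, modelling the shared loop."""
    order = []
    pending = dict(io_requests)  # AL_PA -> number of I/Os still to issue
    while pending:
        winner = arbitrate(pending.keys())
        order.append(winner)
        pending[winner] -= 1
        if pending[winner] == 0:
            del pending[winner]
    return order

# Three devices contend; the lowest AL_PA monopolizes the loop first.
print(run_loop({0x01: 2, 0xE0: 1, 0x23: 1}))  # [1, 1, 35, 224]
```

Note how the highest-priority device finishes all of its I/O before the others get a turn; without the fairness algorithm (omitted here), low-priority devices can be starved.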


VII. Fibre Channel Switched Fabric (FC-SW) Configuration

Overview of FC-SW architecture and its benefits

FC-SW architecture, also known as fabric connect, is a logical space in which nodes in a storage area network (SAN) communicate with one another. This architecture involves the use of one or more FC switches or directors to interconnect the nodes. With FC-SW, each node is connected through a dedicated path, preventing contention and allowing for high scalability. It provides a reliable and efficient method of data transfer between nodes in the SAN.

Some of the benefits of FC-SW architecture include:

  1. High scalability: FC-SW architecture allows for the addition or removal of nodes in the fabric without significantly disrupting ongoing traffic between other nodes. This scalability makes it ideal for growing SAN environments.
  2. Enhanced performance: FC-SW offers dedicated paths for data transfer between nodes, ensuring higher speeds and low latency compared to shared loop configurations like FC-AL.
  3. Simplified management: The use of FC switches in a fabric allows for more efficient management of the SAN. Administrators can configure zoning and access controls, monitor the fabric, and troubleshoot any issues more easily.
  4. Flexibility: FC-SW supports a wide range of fabric services, including the FC name server, zoning database, and time synchronization. It can accommodate different types of devices and configurations, making it suitable for various SAN setups.

Overall, FC-SW architecture provides a reliable, scalable, and high-performance solution for SAN environments, enabling efficient and secure data transfer between nodes.
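The zoning mentioned above is the fabric's basic access-control mechanism: two ports may talk only if they share a zone in the active zone set. The following is a minimal sketch of that check; the zone names and WWPNs are fabricated for illustration and do not reflect any particular switch vendor's configuration syntax:

```python
# Minimal sketch of FC zoning enforcement; zone names and WWPNs are
# invented. A fabric permits traffic between two ports only if they
# appear together in at least one zone of the active zone set.

zones = {
    "zone_db_servers": {"10:00:00:00:c9:aa:bb:01", "50:06:01:60:3c:e0:11:22"},
    "zone_backup":     {"10:00:00:00:c9:aa:bb:02", "50:06:01:60:3c:e0:11:22"},
}

def can_communicate(wwpn_a, wwpn_b):
    """True if both WWPNs are members of any common zone."""
    return any(wwpn_a in members and wwpn_b in members
               for members in zones.values())

# The host and the storage port share zone_db_servers:
print(can_communicate("10:00:00:00:c9:aa:bb:01", "50:06:01:60:3c:e0:11:22"))  # True
# The two hosts share no zone, so the fabric blocks them:
print(can_communicate("10:00:00:00:c9:aa:bb:01", "10:00:00:00:c9:aa:bb:02"))  # False
```

This is why single-initiator/single-target zoning is a common practice: it keeps hosts isolated from each other even though they share the same fabric.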

VIII. Inter-Switch Link (ISL)

Importance and functionality of ISL in a FC-SW configuration

In a Fibre Channel (FC) switched fabric (FC-SW) configuration, an Inter-Switch Link (ISL) plays a crucial role in connecting multiple switches together to form a single, larger fabric. It enables the transfer of both storage traffic and fabric management traffic from one switch to another, allowing nodes in the fabric to communicate with each other.

The ISL provides several benefits and functionalities in a FC-SW configuration:

  1. Scalability: ISLs allow switches to be connected together, enabling the creation of a larger fabric. This allows for easy expansion of the storage network to accommodate growing storage needs.
  2. Traffic isolation: ISLs segregate traffic between switches, preventing unnecessary data traffic from congesting the entire fabric. This ensures optimal performance and reduces potential bottlenecks.
  3. Load balancing: ISLs can be configured to distribute traffic evenly across multiple paths, enhancing performance and increasing overall throughput.
  4. High availability: By connecting switches through ISLs, redundancy can be achieved. If one ISL fails, traffic can be automatically rerouted through alternate paths, ensuring continuous availability of storage resources.
  5. Management simplicity: ISLs simplify the management of a storage network by enabling fabric-wide management. Administrators can perform management tasks, such as zoning and firmware upgrades, from any switch within the fabric.

In summary, ISLs are a critical component in FC-SW configurations, providing the necessary connectivity, scalability, and management capabilities for a robust and efficient storage area network.
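The high-availability point above can be sketched as a simple graph problem: switches are nodes, ISLs are edges, and traffic survives an ISL failure as long as some path still exists between the two switches. The switch names below are hypothetical, and real fabrics use FSPF (Fabric Shortest Path First) rather than this plain breadth-first search:

```python
# Illustrative model of ISL redundancy; switch names are hypothetical.
# Each edge is an ISL; if one fails, traffic is rerouted over any
# remaining path between the two switches.
from collections import deque

def has_path(isls, src, dst):
    """Breadth-first search across ISLs to see if src can still reach dst."""
    adj = {}
    for a, b in isls:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

isls = [("sw1", "sw2"), ("sw2", "sw3"), ("sw1", "sw3")]  # redundant triangle
print(has_path(isls, "sw1", "sw3"))                       # True (direct ISL)
failed = [l for l in isls if l != ("sw1", "sw3")]         # direct ISL fails
print(has_path(failed, "sw1", "sw3"))                     # True, rerouted via sw2
```

With only a single ISL between two switches, the same check would fail on an outage, which is why production fabrics provision redundant ISLs.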

IX. Features of FC Ports

Explanation of different types of FC ports and their roles

In a Fibre Channel (FC) Storage Area Network (SAN), there are different types of FC ports, each with its own role and functionality.

  1. N_Port (Node Port): An endpoint in the fabric. It is typically a compute system port, such as an FC HBA port, or a storage system port that connects to a switch in the network.
  2. E_Port (Expansion Port): A port that connects two FC switches. It enables switch-to-switch communication and forms the inter-switch link (ISL) in the fabric.
  3. F_Port (Fabric Port): A port on a switch that connects to an N_Port, providing the node's attachment point to the fabric.
  4. G_Port (Generic Port): A port on a switch that can operate as either an E_Port or an F_Port. It determines its role automatically during initialization, based on the device connected to it.

Each type of port plays a crucial role in establishing and maintaining the communication within the FC SAN. The N_Port connects the compute systems and storage devices to the fabric, while the E_Port enables communication between switches. The F_Port connects the fabric to the nodes, and the G_Port provides flexibility with its dual functionality.
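The pairing rules implied by the list above can be captured in a few lines. This is a simplified sketch (real G_Port negotiation involves fabric login exchanges, not a one-step lookup): a valid link is either N_Port-to-F_Port (node into fabric) or E_Port-to-E_Port (an ISL), and a G_Port settles into whichever role its peer requires:

```python
# Sketch of FC port-type pairing rules; the G_Port resolution here is
# deliberately simplified. Valid links: N_Port <-> F_Port (node to
# fabric) and E_Port <-> E_Port (the ISL between two switches).

VALID_LINKS = {frozenset({"N_Port", "F_Port"}), frozenset({"E_Port"})}

def resolve_g_port(peer):
    """A G_Port takes on whatever role its peer requires."""
    return "E_Port" if peer == "E_Port" else "F_Port"

def link_ok(a, b):
    if a == "G_Port":
        a = resolve_g_port(b)
    if b == "G_Port":
        b = resolve_g_port(a)
    return frozenset({a, b}) in VALID_LINKS

print(link_ok("N_Port", "F_Port"))  # True: host HBA into a switch
print(link_ok("E_Port", "E_Port"))  # True: an ISL
print(link_ok("N_Port", "E_Port"))  # False: invalid pairing
print(link_ok("N_Port", "G_Port"))  # True: the G_Port becomes an F_Port
```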

For more detailed information on FC SAN architecture and the different types of FC ports, you can refer to IBM's FC SAN documentation.

X. World Wide Name (WWN)

Definition and significance of WWN in FC SAN configuration

WWN, or World Wide Name, is a unique 64-bit identifier assigned to each device in a Fibre Channel (FC) SAN configuration. It plays a crucial role in the FC SAN, as it is used to uniquely identify FC network adapters and FC adapter ports. WWNs are static names burned into the hardware or assigned through software, similar to MAC addresses in IP networking. They are critical to FC SAN configuration because each node port must be registered by its WWN for the FC SAN to recognize it. WWNs identify storage systems and FC HBAs in the SAN environment.

The significance of WWNs lies in their ability to ensure proper communication and identification within the FC SAN. They enable devices to be recognized and registered in the SAN, allowing for the establishment of connections and data transfer. WWNs are used in various configuration definitions, such as zoning and LUN masking, which regulate access to specific resources in the SAN. Additionally, WWNs play a crucial role in enabling N_Port Virtualization (NPV) and N_Port ID Virtualization (NPIV) functionalities, which allow for efficient use of resources and optimization of storage access for virtual machines.

Overall, the correct implementation and understanding of WWNs are vital for the proper functioning and management of FC SAN configurations. They ensure that devices are identified and registered correctly, enabling efficient data transfer and resource allocation within the SAN.
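As a concrete illustration of the structure described above: a WWN is eight bytes, usually written as colon-separated hex, and its top nibble is the NAA (Network Address Authority) field, which determines how the rest of the name is laid out. The example WWN below is fabricated, and the parser only decodes the OUI position for NAA format 1 (the common `10:00:...` HBA style):

```python
# Minimal WWN parser; the example WWN is fabricated. The top nibble is
# the NAA field; for NAA format 1 (IEEE 802.1a), the vendor's IEEE OUI
# occupies bytes 2-4 of the eight-byte name.

def parse_wwn(wwn):
    raw = bytes.fromhex(wwn.replace(":", ""))
    if len(raw) != 8:
        raise ValueError("a WWN is exactly 8 bytes")
    naa = raw[0] >> 4
    oui = raw[2:5].hex(":") if naa == 1 else None
    return {"naa": naa, "oui": oui, "raw": raw.hex(":")}

hba_port = parse_wwn("10:00:00:00:c9:aa:bb:01")
print(hba_port["naa"])  # 1  (IEEE 802.1a format)
print(hba_port["oui"])  # 00:00:c9  (the OUI portion of this fabricated WWPN)
```

Because the OUI is embedded in the name, administrators can often tell which vendor's HBA a WWPN belongs to just by looking at it, which is handy when building zoning configurations.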

XI. Virtualization in SAN

Understanding the concepts of N_Port Virtualization (NPV) and N_Port ID Virtualization (NPIV)

In a SAN architecture, N_Port Virtualization (NPV) and N_Port ID Virtualization (NPIV) are important concepts to understand.

NPV is a feature that reduces the number of domain IDs in a fabric, especially in environments with a large number of edge switches. Edge switches that support NPV do not require a domain ID and forward all fabric activity to the core switch. NPV-enabled edge switches do not perform fabric services and allow traffic to flow between the core switch and compute systems without registering themselves in the fabric.

NPIV enables a single N_Port to function as multiple virtual N_Ports, each with a unique World Wide Port Name (WWPN). This allows a single physical N_Port to obtain multiple FC addresses. NPIV is commonly used by hypervisors like VMware to create virtual N_Ports on FC HBAs and assign them to virtual machines (VMs). Each virtual N_Port acts as a separate FC HBA port and enables VMs to directly access assigned LUNs.

These virtual N_Ports are essential for VM-level access control, using zoning and LUN masking to restrict specific LUNs to specific VMs. NPIV requires support from both the FC HBAs and the FC switches in the environment.
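The NPIV idea above can be sketched as one physical N_Port presenting several virtual N_Ports, each with its own WWPN. The WWPN allocation scheme below is invented purely for illustration; real hypervisors assign virtual WWPNs from vendor-specific ranges via FDISC logins to the fabric:

```python
# Hedged sketch of NPIV: one physical N_Port spawning virtual N_Ports,
# each with a unique WWPN. The WWPN scheme is made up for illustration.
import itertools

class PhysicalNPort:
    def __init__(self, wwpn):
        self.wwpn = wwpn
        self.virtual_ports = {}          # VM name -> virtual WWPN
        self._serial = itertools.count(1)

    def create_virtual_port(self, vm_name):
        """Mimics an additional fabric login: a new virtual WWPN on the same link."""
        vwwpn = f"28:00:00:00:c9:aa:bb:{next(self._serial):02x}"
        self.virtual_ports[vm_name] = vwwpn
        return vwwpn

hba = PhysicalNPort("10:00:00:00:c9:aa:bb:01")
print(hba.create_virtual_port("vm-database"))  # 28:00:00:00:c9:aa:bb:01
print(hba.create_virtual_port("vm-web"))       # 28:00:00:00:c9:aa:bb:02
# Zoning and LUN masking can now target each virtual WWPN individually,
# giving per-VM access control over a single physical HBA port.
```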

Here’s a summary of NPV and NPIV:

| | NPV | NPIV |
| --- | --- | --- |
| Definition | Reduces the number of domain IDs in a fabric by forwarding fabric activity from edge switches to a core switch | Enables a single N_Port to function as multiple virtual N_Ports, each with a unique WWPN |
| Purpose | Reduces domain ID limitations and improves scalability in large SANs | Provides VM-level access control and assignment of LUNs to individual VMs |
| Implementation | Edge switches support NPV mode without requiring domain IDs | Both the FC HBAs and the FC switches must support NPIV |
| Fabric services | NPV-enabled edge switches do not perform fabric services | NPIV switches support fabric services such as name server registration and zoning |
| Functionality | Passes traffic between the core switch and compute systems | Reduces the number of physical FC HBAs required and enables VMs to directly access assigned LUNs |

Understanding NPV and NPIV is crucial in designing and managing complex SAN architectures. These concepts help optimize and streamline the connectivity and access to storage resources in a SAN environment.

I work with one of the Fortune 500 companies as a SAN Storage Architect.
