What Fiber Patch Cords are available?

by http://www.fiber-mart.com

A fiber optic patch cord is a fiber optic cable with connectors at both ends, which allow it to be quickly and easily connected to an optical transceiver in a switch, router, or other telecommunications equipment such as an optical line termination (OLT) or optical network terminal (ONT).
A fiber optic patch cord is built around a core with a high refractive index, surrounded by a lower-refractive-index coating called the cladding. The cladding is in turn reinforced and surrounded by a protective jacket. The core carries optical signals over great distances with very little loss. Because the cladding's refractive index is lower than the core's, light striking the boundary is reflected back into the core, a phenomenon called total internal reflection. The protective jacket over the cladding shields the core and cladding from physical damage.
Standard fiber cladding measures 125 µm in diameter. As shown in the figure, the core (inner diameter) measures 9 µm in single-mode cables and 50 or 62.5 µm in multimode cables. Fiber cords can be categorized by transmission medium (shorter or longer distance) and by connector construction. Single-mode fibers are generally yellow, have blue connectors, and achieve longer transmission distances, whereas multimode fibers are usually orange, have cream-colored connectors, and cover shorter transmission distances.
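The total internal reflection described above can be quantified from the two refractive indices. Below is a minimal Python sketch; the index values n_core and n_clad are illustrative assumptions (typical for silica fiber) and are not quoted in this article:

```python
import math

# Assumed illustrative refractive indices (not from this article):
n_core = 1.48  # higher index, carries the signal
n_clad = 1.46  # lower index, reflects light back into the core

# Critical angle: light hitting the core-cladding boundary at less than
# this angle (measured from the normal) escapes into the cladding instead
# of reflecting back into the core.
critical_angle_deg = math.degrees(math.asin(n_clad / n_core))

# Numerical aperture: how wide a cone of incoming light the fiber accepts.
numerical_aperture = math.sqrt(n_core**2 - n_clad**2)

print(f"critical angle: {critical_angle_deg:.1f} degrees")
print(f"numerical aperture: {numerical_aperture:.3f}")
```

The small index difference is what makes the acceptance cone narrow and keeps the light confined to the core over long distances.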
Connector types:
Standard connector designs include LC, ST, SC, FC, MTRJ, MPO, MU, SMA, FDDI, E2000, DIN4, and D4. Fiber patch cords are often classified by the connectors on the cable; some of the most common combinations include FC-FC, SC-SC, FC-LC, FC-SC, FC-ST, and SC-ST.
LC (Lucent Connector) connectors are small in size and widely used in SFP transceivers. ST (Straight Tip) connectors are similar to BNC connectors and are widely used on fiber ODFs. SC (Subscriber Connector) connectors are larger than LC and widely used in GBIC transceivers. MTRJ connectors are about the same size as RJ45 connectors. The MU fiber optic connector has a push-pull mechanism and a plastic housing; it is almost half the size of an SC connector. The E2000 connector also has a push-pull mechanism, with an automatic shutter inside to protect against dust. Fiber optic patch cords are assembled with different connector combinations: SC-LC and SC-FC cables are commonly used to connect SFP transceivers in routers or switches to the fiber ODF, while SC-SC, FC-FC, and LC-LC simplex cables can be used to provide physical-layer optical loops. There are also several specialized types of fiber optic patch cords, some of which are described below.
Armored Fiber Patch Cord:
Flexible stainless steel tubing inside the outer jacket serves as armor to protect the fiber in an armored fiber optic patch cord. It retains all the features of a typical fiber patch cord but is considerably stronger. These cables are widely used in long-distance transmission systems. Direct-buried, aerial, and undersea fiber optic cables are all examples of armored fiber optic cables, each with extra protection suited to its application.
Bend Insensitive Fiber Optic Patch Cord:
Bend-insensitive fiber patch cords are widely used in FTTH. This type of fiber is not sensitive to pressure or bending, so it can easily be run through cable ducts or inside cable covers along walls. Bend-insensitive fiber patch cords are subdivided into two categories: category A includes G.657.A1 and G.657.A2 fibers, and category B includes G.657.B2 and G.657.B3 fibers. The bending radius for G.657.A1 fibers can be as low as 10 mm; for G.657.A2 and G.657.B2 it is 7.5 mm, and G.657.B3 can work at a bending radius of 5 mm. Note that the G.657 series fibers are single-mode fibers.
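The bend-radius figures above lend themselves to a simple lookup. A hedged Python sketch using only the values stated in this section (the table name and helper function are illustrative, not any standard API):

```python
# Minimum bend radii (mm) for G.657 bend-insensitive single-mode fibers,
# taken from the figures quoted in the article above.
MIN_BEND_RADIUS_MM = {
    "G.657.A1": 10.0,
    "G.657.A2": 7.5,
    "G.657.B2": 7.5,
    "G.657.B3": 5.0,
}

def bend_ok(fiber_type: str, bend_radius_mm: float) -> bool:
    """Return True if the planned bend radius is safe for the fiber type."""
    return bend_radius_mm >= MIN_BEND_RADIUS_MM[fiber_type]

print(bend_ok("G.657.A1", 12.0))  # a 12 mm bend is fine for G.657.A1
print(bend_ok("G.657.B3", 4.0))   # a 4 mm bend is too tight even for B3
```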
Mode Conditioning Patch Cord:
Mode conditioning patch cords are necessary where Gigabit Ethernet 1000BASE-LX switches and routers are installed into existing multimode cable plants. When a single-mode signal is launched into multimode fiber, a phenomenon called Differential Mode Delay (DMD) can create multiple signals within the multimode fiber. This effect can confuse the receiver and produce errors. These multiple signals, caused by DMD, severely limit the cable distances over which Gigabit Ethernet can operate. A mode conditioning patch cord eliminates these multiple signals by offsetting the single-mode launch away from the center of the multimode fiber. This offset creates a launch similar to a typical multimode LED launch, allowing the use of 1000BASE-LX over an existing multimode cable system.
Fiber optic patch cords are used everywhere from telecommunication networks to cable TV, from Local Area Networks (LANs) to Wide Area Networks (WANs), and from transmission networks to data centers. As long as the right type of fiber optic patch cord is used, they have a vast number of applications.

Singlemode vs Multimode Fiber Optic Cable

by http://www.fiber-mart.com

Fiber optic cables are widely used in telecommunication and data networks around the world. Small networks such as branch offices and large corporations with multiple campuses alike make use of fiber optic technologies to provide their users with a reliable and efficient network.
Fiber optic cables use light as the medium to transfer data signals from one end to the other. Unlike copper or coaxial cables, no electric pulse or current is involved in the transmission of a signal through a fiber optic cable. Fiber optic cables are available in two main categories: single-mode fiber and multimode fiber. This article will look into the details of the two types of fiber optic cables and outline the differences, benefits, and use cases for both.
Single-mode Fiber Optic Cable
Single-mode fiber optic cables are designed so that light travels straight down the fiber core with minimal diffraction and reflection. The light travels from source to destination in a straight line. The core of a single-mode fiber optic cable is very thin, usually in the range of 8.0-10.5 micrometers. Because of their thin core and low-reflection characteristics, single-mode fibers can carry signals over longer distances and achieve much higher data transfer rates than multimode fiber optic cables.
These characteristics are beneficial for transmission networks that cover a very large geographical area; however, they demand greater precision from the transceivers. Usually, a very precise, high-intensity laser is used as the light source in single-mode fiber optic transceivers, which raises transceiver costs. On the other hand, the thin core proves economical as far as the cost of the fiber optic cable itself is concerned.
From these arguments it can be inferred that single-mode fiber is useful for networks that require high bandwidths (typically in the range of 10-100 Gbps) and longer-distance links. In those cases, the cost of installing a single-mode fiber optic network is justified.
Multimode Fiber Optic Cable
Multimode fiber optic cables are constructed so that light can travel through different paths inside the core. This is because the core of a multimode fiber optic cable is thicker than that of a single-mode cable, typically in the range of 50-100 micrometers. The thicker core allows the light to reflect and refract inside the core and create multiple "modes" of light.
The larger core of multimode fiber also allows light-emitting diodes (LEDs) to be used as the light source for transmission. This results in lower costs for the associated electronics and transceivers.
The limitations of multimode fiber are distance and bandwidth. Due to the less precise electronics and the losses from reflection and refraction, multimode fiber optic cable cannot carry data over longer-distance links and cannot provide the highest bandwidths. Several grades of multimode fiber optic cable are available, such as OM1, OM2, OM3, and OM4. The widely used 10 Gbps bandwidth is supported by OM4 fiber optic cable up to a distance of only 400 meters.
In light of the above, it can be concluded that single-mode and multimode fiber optic cables are equally useful and beneficial when deployed in their relevant use cases. Single-mode fiber optic cable is beneficial for larger networks, while multimode fiber optic cable is useful in smaller office networks where the maximum link distance is not a limiting factor. For such networks, multimode fiber is the economical choice and presents an excellent use case.

What is the difference between OS1 and OS2 Single-Mode fibers?

by http://www.fiber-mart.com

In fiber optic network infrastructures, the high-demand, long-reach needs of customers are fulfilled mainly by the deployed transceivers and fiber optic cables. Optical transceivers are the modules that convert an electrical signal into an optical light signal and, with the help of lasers, send it down the optical cable. At the receiving end of the connection, another optical transceiver converts the optical light back into electrical signals so the device can read the data received. Even though optical transceivers do the more complex job in a fiber optic network, optical cables are the most important part of the whole network infrastructure; without them, the fiber optic connection would not be possible.
Fiber optic cables come in many shapes and sizes depending on the type of project they are needed for. However, the two main categories into which they are divided are multi-mode and single-mode fibers, known in short as MMF and SMF.
As we already know, multi-mode fibers are optical cables used in fiber optic networks for short-range connections, most commonly within a particular building such as a Datacenter. Four types of multi-mode fiber exist on the market, each with different capabilities: OM1, OM2, OM3, and OM4.
The key difference between multi-mode and single-mode fibers is their reach. This difference arises mainly from the larger core of multi-mode fiber optic cables. Because of their large core, around 50-100 micrometers, they carry light in many propagation modes at once. As this light bounces around inside the cable, dispersion and power loss increase. Single-mode fibers have a much smaller core, generally around 9 micrometers, which supports only a single mode: the transceiver launches the light directly into the core, and during its travel it does not bounce around the cable. This ultimately ensures longer reach with low power loss, also known as attenuation.
However, just like with Multi-mode fibers, Single-mode fibers are organized in categories, OS1 and OS2. Depending on the network infrastructure, knowing the difference between these two categories is vital.
OS1 single-mode fibers are compliant with the ITU-T G.652 standard and its specifications, while OS2 single-mode fibers are compliant with the ITU-T G.652C, G.652D, or G.657.A1 standards. Another big difference between the two categories is their cable construction. Because OS1 is most commonly used for indoor applications, OS1 cables are tight-buffered, meaning they are manufactured as a solid medium. OS2 cables, on the other hand, are loose-tube constructed and are mainly designed for outdoor use. This is the main reason why OS1 cables have greater loss per kilometer than OS2 fibers. Generally, the maximum attenuation allowed for OS1 cable is 1.0 dB/km, and for OS2 it is 0.4 dB/km. The maximum distance an OS1 cable can reach is 2 kilometers, while OS2 can reach 10 kilometers. This is why OS2 cables are generally much more expensive to produce and purchase than OS1 cables.
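The attenuation figures above translate directly into a simple link-loss budget. A minimal Python sketch; the per-connector loss and connector count are illustrative assumptions, not figures from this article:

```python
def link_loss_db(length_km: float, fiber_loss_db_per_km: float,
                 connector_loss_db: float = 0.75, n_connectors: int = 2) -> float:
    """Total loss: per-km fiber loss plus fixed losses at each connector.

    connector_loss_db and n_connectors are assumed example values.
    """
    return length_km * fiber_loss_db_per_km + n_connectors * connector_loss_db

# Attenuation ceilings quoted in the article: OS1 = 1.0 dB/km, OS2 = 0.4 dB/km.
print(f"OS1 over 2 km:  {link_loss_db(2, 1.0):.2f} dB")
print(f"OS2 over 10 km: {link_loss_db(10, 0.4):.2f} dB")
```

Even at five times the distance, the OS2 run's total loss stays in the same ballpark as the short OS1 run, which is why the loose-tube OS2 construction is the outdoor, long-reach choice.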
When choosing and purchasing the correct cable for your project, it is vital to understand the intended environment; in both cases, great care must be taken. If a single-mode cable is needed for an indoor network infrastructure, OS1 is the way to go. If a single-mode cable is needed for an outdoor network infrastructure, OS2 is the way to go.

Juniper Networks QFX10000 Modular Ethernet Switches Overview

by http://www.fiber-mart.com

The ultra-high-density Juniper QFX10000 Modular switches provide the ultimate support and solid ground for today's most demanding network operations and applications. Their scalability options and stable performance make them optimal for deployment in medium to large Datacenters as well as in private and public clouds. With custom-built ASICs, a Juniper QFX10000 switch can deliver from 3 to 96 Tbps of throughput in your network, making it a safe, long-term investment. The leading network architects at Juniper are also looking to increase this capability to as much as 200 Tbps in the near future. With the option to use up to 480 100GbE ports in a single chassis, the Juniper QFX10000 is the industry-leading switch in its class. This enables you to evolve your network infrastructure and boost performance by upgrading to 100GbE, leaving 40GbE and 10GbE in the past. This will make your clients extremely happy and motivate them toward even greater creativity. Some of the key features of this switch series are listed below:
The high port density of QFX10000 Modular switches redefines per-slot economics, enabling customers to do more with less while simplifying network design and reducing OpEx (operational expenditure)
The custom Juniper-built ASICs in each QFX10000 switch deliver unmatched, unparalleled intelligence and analytics
Deep buffer support ensures even greater quality-of-service options
With its large number of 100GbE ports, this switch series is the optimal solution for future upgrades
The Juniper QFX10000 Modular Switch series gives customers the ultimate architectural flexibility. With both Layer 2 and Layer 3 support, customers can deploy this switch in every part of their network infrastructure. For networks evolving to become software-defined (SDN), the QFX10000 can integrate with VMware NSX SDN controllers and can act as a Virtual Extensible LAN (VXLAN) gateway at both Layer 2 and Layer 3. You can select from two available modular chassis:
The QFX10008 Ethernet Switch with an 8-slot, 13 U chassis that supports up to eight line cards
The QFX10016 Ethernet Switch with a 16-slot, 21 U chassis that supports up to 16 line cards
Optionally you can choose some of the following line cards to boost the chassis’ performance:
QFX10000-36Q which can be a 36-port 40GbE quad small form-factor pluggable plus transceiver (QSFP+) or a 12-port 100GbE QSFP28 line card
QFX10000-30C which is a 30-port 100GbE QSFP28/40GbE QSFP+ line card
QFX10000-60S-6Q which is a 60-port 1GbE/10GbE SFP/SFP+ line card with six-port 40GbE QSFP+ or two-port 100GbE QSFP28
Whatever road you take, you can't go wrong with the QFX10000. When fully equipped and configured, a single QFX10016 chassis can support up to 480 100GbE ports, making it an industry leader.
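The 480-port figure follows directly from the chassis slot counts and the densest 100GbE line card listed above, as a quick sanity check shows:

```python
# Chassis slot counts and the densest 100GbE line card from the article.
SLOTS = {"QFX10008": 8, "QFX10016": 16}
PORTS_PER_30C_CARD = 30  # QFX10000-30C: 30 x 100GbE QSFP28

for chassis, slots in SLOTS.items():
    # Maximum 100GbE density: every slot filled with a 30C card.
    print(f"{chassis}: {slots * PORTS_PER_30C_CARD} x 100GbE ports max")
```

Sixteen slots times 30 ports per card gives exactly the 480 100GbE ports quoted for the QFX10016.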
Key components of the QFX10000
Both versions of the QFX10000 modular switches share some common architectural elements. Both run the JunOS operating system, which handles the Layer 2 and Layer 3 protocols, while the Switch Fabric modules manage the chassis and provide switching functionality for data traffic coming from the line cards. The line cards mentioned above include Packet Forwarding Engines (PFEs) and independent line-card processors. Thanks to its Virtual Output Queue (VOQ)-based architecture, the QFX10000 can scale up to very large deployments with no head-of-line blocking, a single-tier low-latency switch fabric, and deep buffering to ensure high performance throughout the whole network. One neat feature is the option for easy future upgrades thanks to the direct connection between the horizontal line cards in the front of the chassis and the vertical switch fabric cards in the rear. They are connected to each other through so-called orthogonal interconnects, which eliminate the need for a midplane. This makes the platform extremely popular with customers looking to invest in 100GbE connections, and even 400GbE in the near future. Redundant, variable-speed fan trays provide the line cards with an uninterrupted supply of cool air. The power supplies convert external power to the internal voltage needed for safe and stable operation. Each and every component of the QFX10000 switches is hot-swappable and interchangeable.
QFX10000 Line Cards key features
Every set of line cards supported by the QFX10000 provides an extensive set of features to complement your network. They can be deployed in any combination of Layer 2 and Layer 3 networks. They have a unique ability for seamless transitions between the supported speeds thanks to tri-speed support for 10GbE, 40GbE, and 100GbE connections. Each line card is built by Juniper Networks, making it a recommended component for your QFX Modular switch. The line cards support many technologies, including 802.1Q VLAN and VXLAN, link aggregation, VRRP, and L2-to-L3 mapping. In addition, the line cards support filtering, sampling, load balancing, rate limiting, CoS, MPLS, Fibre Channel over Ethernet (FCoE) transit functionality, and other key features needed to deploy a high-performance, stable Ethernet infrastructure.
For redundancy in deployments where power is not stable, the QFX10008 contains six power supply bays while the QFX10016 has ten. This offers much-needed flexibility and redundancy. Each power supply has its own internal fan for cooling. In addition, all QFX10000 chassis support both AC and DC power supplies; however, AC and DC supplies cannot be mixed in the same chassis. The QFX10000 chassis has front-to-back cooling, with hot air exhausted through the fan trays placed in front of the line cards.
Deploying the QFX10000 switch will benefit and boost your network's performance. Among the many features this switch provides, the few below are especially worth mentioning:
Each QFX10000 chassis comes with a very important redundancy feature: an extra slot to accommodate a redundant RE module that serves as a backup in hot-standby mode, ready to take over in the event of a master RE failure. Failover to the backup module is seamless thanks to the integrated Layer 2 and Layer 3 graceful Routing Engine switchover feature implemented in JunOS, working in conjunction with the nonstop active routing and nonstop bridging features
Support for virtual output queues for deployment in large Datacenters. This feature allows packets to be queued and dropped on ingress during congestion, with no head-of-line blocking
The QFX10000 provides many MPLS features to suit your needs. Among the most important are L3 VPN, IPv6 provider edge router (6PE, 6VPE), RSVP traffic engineering, and LDP for segmentation and virtualization
The QFX10000 offers support for Fibre Channel over Ethernet (FCoE) together with priority-based flow control (PFC) and Data Center Bridging Capability Exchange (DCBX). All of these features are included in the default software that ships with the switch
The QFX10000 Switch Series offers industry-leading scale and high performance, with a design capable of seamlessly upgrading your Datacenter to 100GbE operation. The QFX10000 Series switches have been designed with future 400GbE Ethernet deployments in mind. By deploying this switch series, you will help your cloud and Datacenter operators extract maximum value and intelligence from their network infrastructure.

What is InfiniBand and what is it used for?

by http://www.fiber-mart.com

INTRODUCTION:
InfiniBand (IB) is a trademarked term that has been in use since 1999; the technology was formerly called System I/O. Surprisingly, the InfiniBand name was coined when two dueling designs on the market merged. This happened after the parties realized that merging was the right approach to prevent future limitations in the industry, because the existing designs would no longer meet the needs of future servers.
The two competing designs were Future I/O, developed by IBM, Compaq, and Hewlett-Packard, and Next Generation I/O, developed by Microsoft, Intel, and Sun Microsystems. Confident that both the industry and end users would benefit from the merger, they formed the InfiniBand Trade Association (IBTA), which currently has over 220 members.
Future I/O and Next Generation I/O were input/output architectures expected to replace the traditional PCI (Peripheral Component Interconnect) system. Why was there a need to replace the PCI bus? Mainly because the PCI bus had become the bottleneck limiting the performance of high-speed data servers, being restricted to roughly 500 MB/s of shared bandwidth. PCI dominated the industry from the early 1990s with one major upgrade during that period: from 32-bit/33 MHz to 64-bit/66 MHz. PCI-X, which advanced the technology one step further to 133 MHz, was projected to prolong the use of the PCI architecture in the industry. However, the Internet became so popular globally that demand kept increasing toward almost no downtime at all. The need for constantly accessible, dependable, high-performance, fail-safe systems (the services provided by the web, Internet data storage, applications, database servers, enterprise computing software systems, and so on) changed the game plan of the market players. Moreover, moving storage out of the server to isolated storage networks and distributing data across fault-tolerant storage systems is now a trend in the industry. Such demands require more bandwidth, and bus systems have reached the level that the PCI interconnect architecture can no longer serve.
So the IBTA came up with the so-called InfiniBand. What is InfiniBand?
InfiniBand is a switch-based, point-to-point serial I/O interconnect architecture developed for today's systems, with the ability to scale to next-generation system requirements. It operates at a four-wire 2.5 Gb/s base speed per individual port link in each direction, with wider links reaching 10 Gb/s. It is a low-pin-count serial architecture that connects devices on the PCB as a component-to-component interconnect, and it enables "Bandwidth Out of the Box" as a chassis-to-chassis interconnect, traversing distances of up to 17 m over common twisted-pair copper wires. Over ordinary fiber cable, it can span several kilometers or more. Its architecture defines a layered hardware protocol (Physical, Link, Network, and Transport layers) as well as a software layer to manage initialization and communication between devices.
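The 2.5 Gb/s base rate and the 4X/12X link widths mentioned later in this article combine by simple multiplication. A small sketch, assuming the SDR-generation figures this article describes:

```python
# InfiniBand aggregates serial lanes into wider links. The base lane speed
# is the 2.5 Gb/s figure quoted in this article (SDR generation).
BASE_LANE_GBPS = 2.5

def link_rate_gbps(width: int) -> float:
    """Raw signaling rate, per direction, for a 1X/4X/12X link."""
    return width * BASE_LANE_GBPS

for width in (1, 4, 12):
    print(f"{width}X link: {link_rate_gbps(width):.1f} Gb/s raw per direction")
```

This is the raw signaling rate; the usable data rate is lower once line encoding overhead is accounted for.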
Different Uses of InfiniBand
RAS (Reliability, Availability, Serviceability) Provider
RAS (Reliability, Availability, Serviceability) capabilities are designed into the InfiniBand architecture. RAS here refers to a fabric that works both in-the-box and allows Bandwidth Out of the Box. Because of this RAS feature, the InfiniBand architecture is projected to serve as the common I/O infrastructure for the next generation of computer server and storage systems at the heart of the Internet. Hence, it will fundamentally alter the systems and interconnects of the Internet infrastructure.
Supports Application Service Providers or ASP
The Internet has grown from a simple online search engine into a platform supporting numerous applications, creating international markets for media streaming, business-to-business solutions, e-commerce, and interactive portal sites. The reliability demanded of each application puts tremendous pressure on service providers. Application Service Providers (ASPs) stepped in: groups offering quality services with the capacity to scale rapidly in a short period of time to accommodate the drastic growth of the Internet despite possible congestion, using clusters to support these requirements. A cluster is a group of servers connected by load-balancing switches and working in parallel to serve a particular application. InfiniBand simplifies application cluster connections by fusing the network with a feature-rich managed architecture. It delivers native cluster connectivity, so devices can be attached and multiple paths can be used by adding switches to the fabric.
QoS or Quality of Service
InfiniBand can at the same time deliver and process high-priority transactions between devices, prioritizing them over less significant items through built-in QoS (Quality of Service) mechanisms.
Scalability for IPC or Inter-Processor Communications
The switched nature of InfiniBand offers connection reliability for IPC (Inter-Processor Communications) systems by allowing multiple paths between systems. Scalability is sustained through flexible connections managed by a single unit, the subnet manager. With the multicast support feature, single transactions can be sent to multiple destinations. Consequently, InfiniBand serves as a backbone for IPC clusters, allowing multiple servers to work together on a single application without the need for a secondary I/O interconnect, thanks to the higher-bandwidth (4X/12X) connections it can provide.
Storage Area Networks (SAN) simplified
Storage area networks are groups of complex storage systems linked together through managed switches, allowing vast volumes of data to be stored from multiple servers. They provide the dependable connections to large databases of information that the Internet Data Center requires. SANs are traditionally built using Fibre Channel switches, servers, and hubs attached through Fibre Channel Host Bus Adapters (HBAs). InfiniBand removes the need for the Fibre Channel network and lets servers connect directly to a storage area network, eliminating the pricey HBA. With features such as Remote DMA (RDMA) support, simultaneous peer-to-peer communication, and end-to-end flow control, InfiniBand's fabric topology overcomes the deficiencies of Fibre Channel, such as restricting the data that individual servers can access (handled in Fibre Channel by a "partitioning mechanism", sometimes termed zoning or fencing), without the aid of a costly and complex HBA.

CAN THE 10G SFP+ RJ COPPER TRANSCEIVER BE A GAME CHANGER IN 10GBASE-T?

by http://www.fiber-mart.com

Right from the unveiling of the then-new IEEE standard for 10 Gigabit Ethernet (10GbE), known as IEEE Standard 802.3ae, large corporations started preparing their network infrastructures for the much-needed performance boost. Almost immediately, they began deploying the new standard in their backbones, Datacenters, and server farms with a single, very important goal in mind: to evolve their networks to support the growing number and demands of business- and mission-critical applications. Today we can safely say that the 10GbE standard has evolved into a main contender for achieving a reliable, affordable, and simple network architecture.
Even though the 10GbE standard is significantly cheaper to deploy today than when it first appeared, many leading corporations are still trying to find ways to reduce costs while gaining performance. They focus mainly on the copper side of 10GbE, relying on the proven track record of copper transceivers over the past couple of years.
When it comes to transceiver and cabling options, 10GbE has you covered in every single aspect of your network. It can work with either copper or fiber solutions, and it offers a wide range of distances for your convenience. With the latest trends in the networking world and noticeably improving switching technologies, copper 10GbE solutions are gaining speed and popularity. The most important current 10GbE copper technologies are shown in the table below:
10GBase-CX4 was the first 10GbE copper standard, introduced in 2004. Even though it offered low latency at a very low cost, its main disadvantage was an unusually large form factor, which made high-density configurations almost impossible.
The CX4 standard has since been replaced by the newer SFP+ standard, which offers the same latency characteristics over longer distances. Together with its small form factor, these characteristics make it one of the favorite transceivers in today's demanding networks.
The 10GBase SFP+ copper transceiver has been developed for greatness. It offers high-performance bidirectional communication over cheaper, widely deployed standard copper cables. To achieve maximum performance, the use of Cat 6a or Cat 7 copper cable is a must. One of its crucial advantages is low power consumption: properly deployed and maintained, an SFP+ copper transceiver can save 0.5 W per port compared to an embedded 10GBASE-T RJ45 port. This is especially noticeable at distances up to 30 meters. In addition, because the technology is copper-based, you need not worry about performance loss if the cable is not run perfectly straight.
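The quoted 0.5 W-per-port saving adds up quickly at Datacenter scale. A rough Python sketch; the port count and electricity price are illustrative assumptions, with only the per-port figure taken from this article:

```python
# From the article: 0.5 W saved per port vs. an embedded 10GBASE-T port.
watts_saved_per_port = 0.5

# Assumed example deployment (not from the article):
ports = 480            # e.g. a fully loaded large switch deployment
hours_per_year = 24 * 365
price_per_kwh = 0.12   # assumed electricity price in USD

# Convert watt-hours to kilowatt-hours over a year of continuous operation.
kwh_per_year = watts_saved_per_port * ports * hours_per_year / 1000
print(f"Energy saved: {kwh_per_year:.0f} kWh/year")
print(f"Cost saved:   ${kwh_per_year * price_per_kwh:.2f}/year")
```

Under these assumptions, a few hundred ports save on the order of two megawatt-hours a year, before counting the reduced cooling load.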
When planning your network infrastructure, it is important to make sure that the physical infrastructure will support future application needs and future technology developments. This is proving to be the main challenge for 10GbE copper transceivers, even though they use the traditional RJ45 connector, the most widely used and best-known connector in the world. New dynamics in Datacenters and among Service Providers mandate that the cable infrastructure handle latency-sensitive applications anywhere in the network architecture. Comparing 10GBase-T with the alternative SFP+ technology, it is evident that SFP+ is the right choice to ensure optimal performance with the lowest latency in the Datacenter, and it will surely become the leading transceiver for deploying a high-performance network architecture.