Sunday, December 15, 2013

How Data Communications work in the Industrial Environment



Our attention now turns to data communications in the industrial environment: the manufacturing facility, the factory floor, the process control plant. Data communications in these premises can benefit significantly from using fiber optic cable as the Transmission Medium.

Let us begin by describing the industrial environment from a data communications perspective.

What type of data communications is going on here? Typically, the situation is the one illustrated in Figure 1-1. There is a Master Computer located somewhere in the manufacturing facility. In the past this was usually a mini-computer; today it is either a workstation or a PC. The Master Computer communicates with any of a number of data devices. For example, it may be controlling automated tools and sensors. It may also be exercising control by querying and receiving data from different monitors. These data devices are located throughout the facility. Figure 1-1 shows a machine tool, but in actuality the number of different automated tool types, sensors and monitors may be very large. By way of example, it may extend to well over 100 in a semiconductor fabrication facility.

The control procedure exercised by the Master Computer usually consists of sending a message out and receiving a message back. It may send an automated tool or sensor an instruction and then receive back either an acknowledgement of receipt or a status update of some sort. In like manner, the Master Computer may send queries to a monitor and receive back status updates.

 
Figure 1-1: Data Communications in the industrial environment



As is readily evident, the whole control procedure is executed using data communications, with appropriate signaling devices (modems) and other needed equipment located at both the Master Computer and the data device locations. Required data transmission rates need not be large. On the other hand, reliability requirements in the industrial environment are quite stringent, regardless of whether reliability is measured by BER, link up-time or some other parameter. The consequences of an unreliable data communications link may be a mere annoyance in office communications, but they can be catastrophic in a manufacturing operation. An unreliable link could literally shut down a whole plant.

Generally, the type of situation described above leads data communications in the industrial environment to follow an inherently hierarchical architecture. This type of architecture is shown in Figure 1-2. The Master Computer is located near a communications closet. The modems and/or other communications equipment (e.g., surge suppressors, isolators, interface converters) needed by the Master Computer to effect links to the data devices are usually rack-mounted in a card cage placed in the communications closet. Cabling then extends out from the card cage to the individual data devices. At the data device end the matching communications equipment may be either stand-alone or DIN Rail mounted. With the latter, the communications equipment snaps onto a rail mounted on a wall or on some convenient cabinet near the data device. DIN Rail mounting will be discussed in greater detail toward the end of this chapter.

 
Figure 1-2: Data communications architecture usually found in the industrial environment



It is important to note that this is the general case, not the absolute case. If the Master Computer has just one or a few ports there may be no need for a card cage. All data communications equipment may then be of the stand-alone type.

There are several topologies associated with this type of hierarchical architecture. The topology could be a star, with a cable extending out from the card cage hub to each data device; each ray of the star operates as a separate data communications link. The topology could be a multi-dropped daisy chain using the RS-485 interface standard. This is particularly suited to a polling, query-response data communications scheme, the type of communications being carried out by the Master Computer. The topology could even be a broadcast bus, the type used by an Ethernet LAN.
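To make the polling, query-response scheme concrete, here is a minimal sketch of the kind of loop a Master Computer might run over a multi-dropped RS-485 line, written in Python with the pyserial library. The device addresses, framing bytes and timeout are purely illustrative assumptions, not part of any particular industrial protocol.

```python
# A minimal sketch of the Master Computer's polling (query-response) loop,
# assuming a half-duplex RS-485 multi-drop line reached through pyserial.
# Device addresses, framing bytes and the 1-second timeout are illustrative
# assumptions, not taken from any real protocol.
import serial

DEVICE_ADDRESSES = [0x01, 0x02, 0x03]   # hypothetical drops on the daisy chain

def poll_devices(port="/dev/ttyS0", baudrate=9600):
    link = serial.Serial(port, baudrate, timeout=1.0)  # 1 s reply timeout
    try:
        for address in DEVICE_ADDRESSES:
            query = bytes([0x02, address, 0x05, 0x03])  # STX, addr, ENQ, ETX
            link.write(query)                           # send the query
            reply = link.read(64)                       # wait for ack/status
            if reply:
                print(f"device 0x{address:02X} replied: {reply.hex()}")
            else:
                print(f"device 0x{address:02X} timed out")  # flag for maintenance
    finally:
        link.close()

if __name__ == "__main__":
    poll_devices()
```

In practice each automated tool, sensor or monitor would implement a vendor-defined frame format, but the send-a-query, wait-for-a-reply rhythm is the same.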

Exploit the Bandwidth of Fiber Optic Cable


You are the network manager of a company. You have been given a Source-User link requirement. In response you install a premises fiber optic data link. The situation is just like that illustrated in Figure 1-1. However, the bandwidth required by the particular Source-User pair, the bandwidth needed to accommodate the Source-User speed requirement, is much, much less than what is available from the fiber optic data link. The tremendous bandwidth of the installed fiber optic cable is being wasted. On the face of it, this is not an economically efficient installation.

You would like to justify the installation of the link to the Controller of your company, the person who reviews your budget. The Controller doesn't understand the attenuation benefits of fiber optic cable. The Controller doesn't understand the interference benefits of fiber optic cable. The Controller hates waste. He just wants to see most of the bandwidth of the fiber optic cable used, not wasted. There is a solution to this problem. Don't just dedicate the tremendous bandwidth of the fiber optic cable to a single, particular Source-User communication requirement. Instead, allow it to be shared by a multiplicity of Source-User requirements. This carves a multiplicity of fiber optic data links out of the same fiber optic cable.

The technique used to bring about this sharing of the fiber optic cable among a multiplicity of Source-User transmission requirements is called multiplexing. It is not particular to fiber optic cable. It occurs with any transmission medium (e.g., wire, microwave) where the available bandwidth far surpasses any individual Source-User requirement. However, multiplexing is particularly attractive when the transmission medium is fiber optic cable. Why? Because the tremendous bandwidth of fiber optic cable offers the greatest opportunity for sharing among different Source-User pairs.

Conceptually, multiplexing is illustrated in Figure 1-1. The figure shows 'N' Source-User pairs, indexed 1 through N. There is a multiplexer provided at each end of the fiber optic cable. The multiplexer on the left takes the data provided by each of the Sources. It combines these data streams together and sends the resultant stream out on the fiber optic cable. In this way the individual Source-generated data streams share the fiber optic cable. The multiplexer on the left performs what is called a multiplexing or combining function. The multiplexer on the right takes the combined stream put out by the fiber optic cable. It separates the combined stream into the individual Source streams composing it. It directs each of these component streams to the corresponding User. The multiplexer on the right performs what is called a de-multiplexing function.
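As a toy illustration of the combining and de-multiplexing functions just described, the following Python sketch interleaves several equal-length Source byte streams into one composite stream and then separates them again. It is only a conceptual model; a real multiplexer works on framed bits with clocking, as discussed later.

```python
# Toy illustration of the combining and de-multiplexing functions:
# N Source byte streams are interleaved into one composite stream and
# then split apart again. Real multiplexers work on framed bits with
# clocking; this sketch only shows the combine/split idea.

def multiplex(sources: list[bytes]) -> bytes:
    """Interleave equal-length Source streams into one composite stream."""
    return bytes(b for chunk in zip(*sources) for b in chunk)

def demultiplex(composite: bytes, n: int) -> list[bytes]:
    """Separate the composite stream back into its n component streams."""
    return [composite[i::n] for i in range(n)]

streams = [b"AAAA", b"BBBB", b"CCCC"]         # three Source streams
combined = multiplex(streams)                  # b'ABCABCABCABC' on the fiber
recovered = demultiplex(combined, len(streams))
assert recovered == streams                    # each User gets its own stream
```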

A few things should be noted about the illustration in Figure 1-1.



Figure 1-1: Conceptual view of Multiplexing. A single fiber optic cable is "carved" into a multiplicity of fiber optic data links.



First, the Transmitter and Receiver are still present even though they are not shown. The Transmitter is considered part of the multiplexer on the left and the Receiver is considered part of the multiplexer on the right.

Secondly, the Sources and Users are shown close to the multiplexer. For multiplexing to make sense this is usually the case. The connection from Source-to-multiplexer and multiplexer-to-User is called a tail circuit. If the tail circuit is too long a separate data link may be needed just to bring data from the Source to the multiplexer or from the multiplexer to the User. The cost of this separate data link may counter any savings effected by multiplexing.

Thirdly, the link between the multiplexers, realized in this case by the fiber optic cable, is termed the composite link. This is the link where the traffic is composed of all the separate Source streams.

Finally, separate Users are shown in Figure 1-1. However, it may be that there is just one User with separate ports and all Sources are communicating with this common User. There may be variations upon this. The Source-User pairs need not all be of the same type. They may be totally different types of data equipment serving different applications and with different speed requirements.

Within the context of premises data communications, a typical situation where the need for multiplexing arises is illustrated in Figure 1-2. This shows a cluster of terminals, in this case six. All of these terminals are fairly close to one another. All are at a distance from, and want to communicate with, a multi-user computer. This may be either a multi-user PC or a mini-computer. This situation may arise when all of the terminals are co-located on the same floor of an office building and the multi-user computer is in a computer room on another floor of the building.

The communication connection of each of these terminals could be effected by the approach illustrated in Figure 1-3. Here each of the terminals is connected to a dedicated port at the computer by a separate cable. The cable could be a twisted pair cable or a fiber optic cable. Of course, six cables are required and the bandwidth of each cable may far exceed the terminal-to-computer speed requirements.

 
Figure 1-2: Terminal cluster isolated from multi-user computer

 

 
Figure 1-3: Terminals in cluster. Each connected by dedicated cables to multi-user computer

 

 
Figure 1-4: Terminals sharing a single cable to multi-user computer by multiplexing



A more economically efficient way of realizing the communication connection is shown in Figure 1-4. Here each of the six terminals is connected to a multiplexer. The data streams from these terminals are collected by the multiplexer. The streams are combined and then sent on a single cable to another multiplexer located near the multi-user computer. This second multiplexer separates out the individual terminal data streams and provides each to its dedicated port. The connection going from the computer to the terminals is handled in the same way. The six cables shown in Figure 1-3 have been replaced by the single composite link cable shown in Figure 1-4. Cable cost has been significantly reduced. Of course, this comes at the cost of two multiplexers. Yet, if the terminals are in a cluster, the tradeoff is in the direction of a net decrease in cost.
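The tradeoff can be seen with a back-of-the-envelope calculation like the one below. The cable-run and multiplexer prices are made-up placeholders, purely to show how the comparison works; substitute your own installed costs.

```python
# Back-of-the-envelope cost comparison for Figures 1-3 and 1-4.
# All prices are placeholder assumptions used only for illustration.
terminals = 6
cost_per_cable_run = 400.0      # assumed installed cost of one long run
cost_per_multiplexer = 900.0    # assumed cost of one multiplexer

dedicated = terminals * cost_per_cable_run                    # Figure 1-3
multiplexed = cost_per_cable_run + 2 * cost_per_multiplexer   # Figure 1-4

print(f"dedicated cables : ${dedicated:,.0f}")
print(f"multiplexed link : ${multiplexed:,.0f}")
print("multiplexing wins" if multiplexed < dedicated else "dedicated wins")
```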

There are two techniques for carrying out multiplexing on fiber optic cable in the premises environment: Time Division Multiplexing (TDM) and Wavelength Division Multiplexing (WDM). These techniques are described below, along with examples of specific products for realizing them, readily available from Optoroute. TDM and WDM are then compared.

Comparison between TDM and WDM

It is best to compare TDM and WDM on the basis of link design flexibility, speed and impact on BER.

Link Design Flexibility - TDM can be engineered to accommodate different link types. In other words, a TDM scheme can be designed to carve a given fiber optic cable into a multiplicity of links carrying different types of traffic at different transmission rates. TDM can also be engineered to have different time slot assignment strategies. Slots may be permanently assigned. Slots may be assigned upon demand (Demand Assignment Multiple Access - DAMA). Slots may vary depending upon the type of link being configured. Slots may even be dispensed with altogether, with data instead being encapsulated in a packet with Source and User addresses (statistical multiplexing). However, within the premises environment there is strong anecdotal evidence that TDM works best when it is used to configure a multiplicity of links all of the same traffic type, with time slots all of the same duration and permanently assigned. This simplest version of TDM is easiest to design and manage in premises data communications. The more complex versions are really meant for the WAN environment.

On the other hand, in the premises environment WDM generally has much greater flexibility. WDM is essentially an analog technique. As a result, with WDM it is much easier to carve a fiber optic cable into a multiplicity of links of quite different types. The character of the traffic and the data rates can be quite different without posing any real difficulties for WDM. You can mix 10Base-T Ethernet LAN traffic with 100Base-T Ethernet LAN traffic, with digital video, with out-of-band test signals and so on. With WDM it is much easier to accommodate analog traffic. It is much easier to add new links onto an existing architecture. With TDM the addition of new links with different traffic requirements may require revisiting the design of all the time slots, a major effort.

With respect to flexibility, the one drawback that WDM has relative to TDM in the premises environment is in the number of simultaneous links it can handle. This is usually much smaller with WDM than with TDM. Nonetheless, advances in DWDM for the WAN environment may filter down to the premises environment and reverse this drawback.

Speed - The design of TDM implicitly depends upon digital components. Digital circuitry is required to take data in from the various Sources. Digital components are needed to store the data, to load the data into the corresponding time slots, and to unload it and deliver it to the respective Users. How fast must these digital components operate? Roughly, they must operate at the speed of the composite link of the multiplexer. With a fiber optic cable transmission medium, depending upon cable length, a composite link of multiple Gb/s could be accommodated. However, commercially available, electrically based digital logic speeds today are of the order of 1 billion operations per second. This can and probably will change in the future as device technology continues to progress. But let us talk in terms of today. TDM is really speed limited when it comes to fiber optic cable. It cannot provide a composite link speed that takes full advantage of the tremendous bandwidth presented by fiber optic cable. This is not particular to the premises environment; it also applies to the WAN environment.
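A quick way to see the constraint is to add up the tributary rates and compare the required composite rate against the roughly 1 Gb/s electronics figure cited above. The tributary speeds and the 5% framing overhead in this sketch are assumptions for illustration.

```python
# For fixed-slot TDM the composite link must run at least as fast as the
# sum of the tributary rates plus framing overhead. The tributary rates
# and 5% overhead are illustrative assumptions; the ~1 Gb/s limit is the
# electronics figure cited in the paragraph above.
tributaries_mbps = [10, 10, 100, 100, 155]   # assumed Source speeds
framing_overhead = 0.05                      # assumed framing/guard overhead

composite_mbps = sum(tributaries_mbps) * (1 + framing_overhead)
electronics_limit_mbps = 1000                # ~1 Gb/s digital logic today

print(f"required composite rate: {composite_mbps:.0f} Mb/s")
print("within today's TDM electronics" if composite_mbps <= electronics_limit_mbps
      else "exceeds today's TDM electronics")
```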

On the other hand, WDM does not have this speed constraint. It is an analog technique. Its operation does not depend upon the speed of digital circuitry. It can provide composite link speeds that are in line with the enormous bandwidth presented by fiber optic cable.

Impact on BER - Both TDM and WDM carve a multiplicity of links from a given fiber optic cable. However, there may be cross-talk between the links created. This cross-talk is interference that can impact the BER and affect the performance of the application underlying the need for communication.

With TDM, cross-talk arises when some of the data assigned to one time slot slides into an adjacent time slot. How does this happen? TDM depends upon accurate clocking. The multiplexer at the Source end depends upon time slot boundaries being where they are supposed to be so that the correct Source data is loaded into the correct time slot. The multiplexer at the User end depends upon time slot boundaries being where they are supposed to be so that the correct User gets data from the correct time slot. Accurate clocks are supposed to indicate to the multiplexer where the time slot boundaries are. However, clocks drift, chiefly in response to variations in environmental conditions like temperature. What is more, the entire transmitted data stream, the composite link, may shift small amounts back and forth in time, an effect called jitter. This may make it difficult for the multiplexer at the User end to place time slot boundaries accurately. Protection against TDM cross-talk is achieved by putting guard times in the slots. Data is not packed end-to-end in a time slot. Rather, there is either dead space, dummy bits or some other mechanism built into the TDM protocol so that if data slides from one slot to another its impact on BER is minimal.
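The guard time is sized to absorb the worst-case drift plus jitter, as in the rough calculation below. The slot duration, clock accuracy and jitter figures are illustrative assumptions, not values from any particular TDM product.

```python
# Sizing the guard time in a TDM slot: the guard interval must absorb the
# worst-case clock drift plus jitter so data from one slot cannot slide
# into its neighbour. All numbers here are illustrative assumptions.
slot_duration_s = 125e-6          # assumed 125 microsecond time slot
clock_accuracy_ppm = 50           # assumed +/-50 ppm clock accuracy
jitter_s = 0.5e-6                 # assumed 0.5 microsecond peak jitter

worst_case_drift_s = slot_duration_s * clock_accuracy_ppm * 1e-6
required_guard_s = worst_case_drift_s + jitter_s

print(f"worst-case drift per slot: {worst_case_drift_s*1e9:.1f} ns")
print(f"guard time needed        : {required_guard_s*1e6:.3f} us")
print(f"guard overhead           : {required_guard_s/slot_duration_s:.2%}")
```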

With WDM, cross-talk arises because the optical signal spectrum for a given link placed upon one particular (center) wavelength is not strictly bounded in wavelength (equivalently, frequency). This is a consequence of it being a physical signal that can actually be generated. The optical signal spectrum will spill over onto the optical signal spectrum of another link placed at another (center) wavelength. The amount of spillage depends upon how close the wavelengths are and how much optical filtering is built into the WDM to buffer it. The protection against cross-talk here is measured by a parameter called isolation. This is the attenuation (dB) of the optical signal placed at one (center) wavelength as measured at another (center) wavelength. The greater the attenuation, the less the effective spillage and the smaller the impact on BER.
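Because isolation is quoted in dB, it is easy to translate into the crosstalk power that actually lands on a neighboring wavelength. In the sketch below the -10 dBm signal power is an assumed value; the isolation figures span the 16 dB to 50 dB range quoted in the next paragraph.

```python
# Converting a WDM isolation figure (dB) into the crosstalk power that
# spills over from a neighbouring wavelength. The -10 dBm signal power is
# an assumed value; the isolation values match the 16-50 dB range quoted
# in the next paragraph.
def dbm_to_milliwatts(dbm: float) -> float:
    return 10 ** (dbm / 10)

neighbour_power_dbm = -10.0                  # assumed adjacent-wavelength power
for isolation_db in (16, 30, 50):
    crosstalk_dbm = neighbour_power_dbm - isolation_db
    crosstalk_uw = dbm_to_milliwatts(crosstalk_dbm) * 1000
    print(f"isolation {isolation_db:2d} dB -> crosstalk {crosstalk_dbm:6.1f} dBm"
          f" ({crosstalk_uw:.3f} uW)")
```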

At the present time, clock stability for digital circuitry is such that TDM cross-talk presents no real impact on BER in the context of premises data communications, at the composite link speeds that can be accommodated. The TDM cross-talk situation may be different when considering WANs, but this is the case in the premises environment. The situation is not as good for WDM. Here, depending upon the specific WDM design, the amount of isolation may vary from a low value of 16 dB all the way to 50 dB. A low value of isolation means that the impact upon BER could be significant. In such situations WDM is limited to communications applications that can tolerate a high BER. Digital voice and video would be in this group; LAN traffic would not. From the perspective of BER generated by cross-talk, TDM is more favorable than WDM.

Brief Introduction to Connectors


The Connector is a mechanical device mounted on the end of a fiber optic cable, light source, Receiver or housing. It allows the cable to be mated to a similar device. The Transmitter provides the Information-bearing light to the fiber optic cable through a connector. The Receiver gets the Information-bearing light from the fiber optic cable through a connector. The connector must direct light and collect light. It must also be easily attached to and detached from equipment. This is a key point: the connector can be disconnected. This feature distinguishes it from a splice, which will be discussed in the next sub-chapter.

A connector marks a place in the premises fiber optic data link where signal power can be lost and the BER can be affected. It marks a place in the premises fiber optic cable link where reliability can be affected by a mechanical connection.

There are many different connector types. The ones for glass fiber optic cable are briefly described below and put in perspective. This is followed by a discussion of connectors for plastic fiber optic cable. It must be noted, however, that the ST connector is the most widely used connector for premises data communications.

Connectors to be used with glass fiber optic cable are listed below in alphabetical order.

Biconic - One of the earliest connector types used in fiber optic data links. It has a tapered sleeve that is fixed to the fiber optic cable. When this plug is inserted into its receptacle the tapered end is a means for locating the fiber optic cable in the proper position. With this connector, caps fit over the ferrules, rest against guided rings and screw onto the threaded sleeve to secure the connection. This connector is in little use today.

D4 - It is very similar to the FC connector with its threaded coupling, keying and PC end finish. The main difference is its 2.0mm diameter ferrule. Designed originally by the Nippon Electric Corp.

FC/PC - Used for single-mode fiber optic cable. It offers extremely precise positioning of the single-mode fiber optic cable with respect to the Transmitter's optical source emitter and the Receiver's optical detector. It features a position locatable notch and a threaded receptacle. Once installed the position is maintained with absolute accuracy.

SC - Used primarily with single-mode fiber optic cables. It offers low cost, simplicity and durability. It provides for accurate alignment via its ceramic ferrule. It is a push on-pull off connector with a locking tab.

SMA - The predecessor of the ST connector. It features a threaded cap and housing. The use of this connector has decreased markedly in recent years being replaced by ST and SC connectors.

ST - A keyed bayonet type similar to a BNC connector. It is used for both multi-mode and single-mode fiber optic cables. Its use is widespread. It can be inserted and removed quickly and easily, and positioning it is also easy. There are two versions, ST and ST-II. These are keyed and spring loaded, and are push-in and twist types.

Photographs of several of these connectors are provided in Figure 1-1.


Figure 1-1: Common connectors for glass fiber optic cable (Courtesy of AMP Incorporated)


Plastic Fiber Optic Cable Connectors - Connectors that are used exclusively for plastic fiber optic cable stress very low cost and easy application, often requiring no polishing or epoxy. Figure 1-2 illustrates such a connector. Connectors for plastic fiber optic cable include both proprietary designs and standard designs. Connectors used for glass fiber optic cable, such as the ST or SMA, are also available for use with plastic fiber optic cable. As plastic fiber optic cable gains in popularity in the data communications world there will undoubtedly be greater standardization.





 

Figure 1-2: Plastic fiber optic cable connector (Illustration courtesy of AMP Incorporated)

Brief Introduction to the Receiver


The Receiver component serves two functions, as shown in Figure 1-1. First, it must sense or detect the light coupled out of the fiber optic cable and then convert that light into an electrical signal. Secondly, it must demodulate this signal to determine the identity of the binary data it represents. In total, it must detect light and then measure the relevant Information-bearing lightwave parameter (in the premises fiber optic data link context, intensity) in order to retrieve the Source's binary data.
              
                 Figure 1-1: Example of Receiver block diagram - first stage


Within the realm of interest in this book, the fiber optic cable provides the data to the Receiver as an optical signal. The Receiver then translates it into its best estimate of the binary data. It then provides this data to the User in the form of an electrical signal. The Receiver can therefore be thought of as an Opto-Electrical (OE) transducer.

A Receiver is generally designed with a Transmitter. Both are modules within the same package. The very heart of the Receiver is the means for sensing the light output of the fiber optic cable. Light is detected and then converted to an electrical signal. The demodulation decision process is carried out on the resulting electrical signal. The light detection is carried out by a photodiode. This senses light and converts it into an electrical current. However, the optical signal from the fiber optic cable and the resulting electrical current will have small amplitudes. Consequently, the photodiode circuitry must be followed by one or more amplification stages. There may even be filters and equalizers to shape and improve the Information bearing electrical signal.

All of this active circuitry in the Receiver presents a source of noise. This is a source of noise whose origin is not the clean fiber optic cable. Yet, this noise can affect the demodulation process.

The very heart of the Receiver is illustrated in Figure 1-2. This shows a photodiode, a bias resistor and a low-noise pre-amp. The output of the pre-amp is an electrical waveform version of the original Information put out by the Source. To the right of this pre-amp would be additional amplification, filters and equalizers. All of these components may be on a single integrated circuit, a hybrid, or even a printed circuit board.


The complete Receiver may incorporate a number of other functions. If the data link is supporting synchronous communications, this will include clock recovery. Other functions may include decoding (e.g. 4B/5B encoded information), error detection and recovery.

The complete Receiver must have high detectability, high bandwidth and low noise. It must have high detectability so that it can detect low-level optical signals coming out of the fiber optic cable. The higher the sensitivity, the more attenuated the signals it can detect. It must have high bandwidth, or a fast rise time, so that it can respond fast enough to demodulate high speed digital data. It must have low noise so that it does not significantly impact the BER of the link and counter the interference resistance of the fiber optic cable Transmission Medium.
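Two common engineering rules of thumb, which are assumptions not taken from this text, give a feel for the bandwidth requirement: a receiver bandwidth of roughly 0.7 times the NRZ bit rate, and a rise time of roughly 0.35 divided by that bandwidth. The sketch below applies them to a few illustrative premises data rates.

```python
# Rule-of-thumb check of the "high bandwidth / fast rise time" requirement.
# Both approximations (0.7 x bit rate for NRZ bandwidth, 0.35 / bandwidth
# for rise time) are common engineering assumptions, not values from the text.
def receiver_requirements(bit_rate_bps: float) -> tuple[float, float]:
    bandwidth_hz = 0.7 * bit_rate_bps          # rule-of-thumb NRZ bandwidth
    rise_time_s = 0.35 / bandwidth_hz          # classic rise-time relation
    return bandwidth_hz, rise_time_s

for rate in (155e6, 622e6, 1.25e9):            # illustrative premises data rates
    bw, tr = receiver_requirements(rate)
    print(f"{rate/1e6:6.0f} Mb/s -> ~{bw/1e6:6.0f} MHz bandwidth, "
          f"rise time ~{tr*1e12:5.0f} ps")
```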

There are two types of photodiode structures: the Positive Intrinsic Negative (PIN) and the Avalanche Photo Diode (APD). In most premises applications the PIN is the preferred element in the Receiver. This is mainly due to the fact that it can be operated from a standard power supply, typically between 5 and 15 V. APD devices have much better sensitivity, in fact 5 to 10 dB more, and they have twice the bandwidth. However, they cannot be used on a 5 V printed circuit board and they require a stable power supply, which makes their cost higher. APD devices are usually found in long haul communications links.

The demodulation performance of the Receiver is characterized by the BER that it delivers to the User. This is determined by the modulation scheme (in premises applications, intensity modulation), the received optical signal power, the noise in the Receiver and the processing bandwidth.

Receiver performance is generally characterized by a parameter called the sensitivity. This is usually a curve indicating the minimum optical power that the Receiver can detect at a given data rate in order to achieve a particular BER. The sensitivity curve varies from Receiver to Receiver. It subsumes within it the signal-to-noise ratio parameter that generally drives all communications link performance. The sensitivity depends upon the type of photodiode employed and the wavelength of operation. Typical examples of sensitivity curves are illustrated in Figure 1-2.
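In practice the sensitivity figure read off a curve like Figure 1-2 is used in a simple margin check against the received optical power. The launch power, link loss, sensitivity and 3 dB safety margin below are assumed values for illustration; real numbers come from the component data sheets.

```python
# Checking received power against a Receiver sensitivity figure read off a
# curve like Figure 1-2. All values below are assumptions for illustration;
# use the numbers from your own component data sheets.
transmit_power_dbm = -14.0       # assumed Transmitter launch power
link_loss_db = 6.5               # assumed fiber + connector + splice loss
sensitivity_dbm = -28.0          # assumed sensitivity at the data rate, BER 1e-9

received_power_dbm = transmit_power_dbm - link_loss_db
margin_db = received_power_dbm - sensitivity_dbm

print(f"received power: {received_power_dbm:.1f} dBm")
print(f"margin over sensitivity: {margin_db:.1f} dB")
if margin_db < 3.0:              # a commonly used safety margin (assumption)
    print("warning: less than 3 dB of margin - BER target may not be met")
```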

In examining the specification of any Receiver you need to look at the sensitivity parameter. In a sense it represents optimum performance on the part of the photodiode in the Receiver. That is, performance where there is 100% efficiency in converting light from the fiber optic cable into an electric current for demodulation.
 
Figure 1-2: Receiver sensitivities for BER = 10^-9, with different devices.
 



Tuesday, December 10, 2013

The Evolution of Fibre Optic Communication Technology

The Present Status of Optical Communication
In 2012 about 236 million core-km of optical fiber was deployed worldwide, bringing the cumulative total to more than 1.8 billion core-km. China accounted for about 116 million core-km in 2012; its cumulative installed fiber optic cable length has reached 14.81 million km, representing more than 600 million core-km of installed fiber. Open-wire and legacy cable circuits have essentially all been retired, leaving only about 200,000 km of microwave circuits and a small number of satellite links. Global broadband subscribers number more than 648 million, of which 128 million are FTTx users; China's broadband users number 180 million, of which about 19.48 million are fiber broadband users, with fiber access covering 94 million. Global mobile subscribers exceed 6 billion (including 51 million LTE subscribers, expected to reach 200 million in 2013), while China's mobile subscribers have reached 1.165 billion and its mobile Internet users 813 million. All of this rests on optical communications; today more than 95% of the world's information is transmitted over optical communication links.
The foundations were laid in the 1960s (the invention of the laser and of optical fiber). The 1970s brought early R&D, field trials and commercialization (the birth of short-wavelength multimode systems). In the 1980s single-mode and long-wavelength systems came into use, and in the 1990s optical amplification and WDM transmission and networking appeared. In the 2000s, large-capacity (multi-Tb/s) technologies, systems and networks began to appear, and development moved toward ultra-high-speed, large-capacity, ultra-long-reach optical fiber communication and multi-service bearing. China completed R&D on its first 40 Gb/s system in 2005; also in 2005, China deployed its first 80x40 Gb/s commercial DWDM system.

After five decades of development, 100 Gb/s systems have entered commercial service and PTN/IPRAN has seen large-scale application. In 2011 a 10.7 Tb/s system achieved a maximum transmission distance of 10,608 km, and in 2012 a single fiber reached a maximum system capacity of 102.3 Tb/s over 240 km.

The development trend of Optical Fiber Communication

Regarding the development trend of optical fiber communications, Mao Qian said that the PEOTN era is coming. He believes that packet capability and large-capacity OTN are the future direction of development and that, driven by demand, commercial PEOTN can be expected soon. PEOTN has four major technical requirements: it is positioned for large-granularity service scheduling and needs more than 10T of switching capacity; non-blocking access for all services requires unified cross-connection; China Mobile has selected 100G, so PEOTN should be based on a 100G platform; and, in response to the trend of OTN moving toward the network edge, POTN needs to be offered as a product series.
Software-defined optical transport networks (SDTN/SDON) bring centralized control, unified hardware and open interfaces for interoperability, making full use of resources and improving the capacity for innovation in delivering flexible services.
Two current obstacles to all-optical switching networks were mentioned: optical storage and optical logic devices. Optical storage is still a relatively difficult hurdle to cross, and since optical logic devices are built on optical storage, they are an even more difficult one.


Micro Fiber Optic Cables Jump on the Bandwagon

In recent years, the industry has been focusing on reducing the footprint of fiber optic networks. Around 2005, as fiber suppliers developed small bending radius (FBR) fiber, the trend toward smaller cable and hardware had already begun to appear. Soon after these new optical waveguide designs appeared, international standards were developed to regulate them. Subsequently, fiber tolerance to macro- and micro-bending was gradually increased, and these fibers, which can almost be tied in a knot, began to enable smaller cable designs.

The High Efficiency of Small Bending Radius Fiber
The macro-bending phenomenon is simple and easy to understand: ITU-T G.657 states macro-bending performance requirements as the optical loss allowed at specified bend radii. However, some say that improved micro-bending performance is the main way small bend radius fiber achieves smaller, higher-performance cabling. One way to appreciate the difference between macro-bending and micro-bending is to imagine winding a single fiber around your finger and measuring the fiber loss (macro-bend loss), then pressing a piece of sandpaper against the fiber and measuring the corresponding loss (micro-bending loss), and comparing the two.
The underlying optical signal loss mechanisms in the two cases are very different. When a fiber optic cable is exposed to a low-temperature environment, the cable materials tend to shrink, applying a force along the length of the fiber; this force can cause micro-bending in the fiber optic cable. Even a modest improvement in micro-bending performance from improved bend radius fiber will therefore undoubtedly help the cable withstand large temperature variations.
Fiber optic cable manufacturers are taking advantage of this small bending radius feature. Their goal is a fiber cable that can be handled like copper cable: rugged, small, practical, easy for anyone to work with, and resistant to damage. To achieve this, they have also innovated in the materials and manufacturing processes used for the optical cable. The enhanced bending performance of small bend radius fiber has encouraged the use of new materials and manufacturing techniques, making cables smaller and lighter. Solving these problems together has yielded a new generation of optical fiber cable that is smaller and more flexible.
A major application of small bend radius fiber is in patch cords and other direct-connect cables. Beyond the obvious benefit of fitting more fiber optic cables into the same space, smaller cable size also improves airflow, because the cables occupy less duct space. As suppliers of active electronic components push the miniaturization of electronic enclosures, the importance of this advantage will become more apparent. In such electronic cabinets, heat is gradually becoming an important issue. Typically one thinks about airflow around copper cabling (copper itself generates heat), but as equipment cabinets become smaller, every aspect of heat and airflow becomes very important.
Smaller direct-connect optical cables and fiber jumpers have emerged
Smaller than you might imagine. The effect may not be obvious at first, but when the diameter of a round cable is reduced, the space it occupies (its circular cross-sectional area) shrinks much faster. Comparing a typical 2.0 mm cable with a 1.2 mm cable makes this clear: even though the diameter is not quite halved, almost three times as many cables can be installed in the same space (one square inch).
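The arithmetic behind the "almost 3 times" figure is just the ratio of circular cross-sectional areas, as the short sketch below shows for the 2.0 mm and 1.2 mm diameters mentioned above.

```python
# Cross-sectional area comparison behind the "almost 3 times" claim:
# shrinking the diameter from 2.0 mm to 1.2 mm reduces the area, and
# hence the duct space per cable, by the square of the ratio.
import math

def cable_area_mm2(diameter_mm: float) -> float:
    return math.pi * (diameter_mm / 2) ** 2

area_2_0 = cable_area_mm2(2.0)
area_1_2 = cable_area_mm2(1.2)

print(f"2.0 mm cable area: {area_2_0:.2f} mm^2")
print(f"1.2 mm cable area: {area_1_2:.2f} mm^2")
print(f"cables per unit of duct space: {area_2_0 / area_1_2:.2f}x more")
```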
Late in the first decade of the 21st century, Telcordia released Revision 2 of its widely used GR-409 standard for indoor optical cable. Revision 2 includes a sub-category called "mini" cable, which allows cable built to GR-409 to have lower tensile strength. It reduces the tensile strength provisions for these so-called small-package cables, allowing the cable to withstand an installation load of 9 pounds (40 N) rather than the standard 22 pounds (100 N). At the time it was widely believed that reduced strength was necessary to produce smaller cable sizes. Compared with a cable rated at 22 pounds, a cable with a 9-pound rated tensile load requires installation personnel to be more careful to avoid damaging the cable.
At present, however, some optical cables based on small bend radius fiber use materials, designs and methods that make the cable smaller while still exceeding the original 22-pound GR-409 tensile load. For example, 1.2 mm direct-connect optical cables are now on the market that support a 30-pound rated installation load. Compared with a 2.0 mm cable carrying the smaller "mini" rating, this means the new 1.2 mm cable has roughly three times the strength while taking up only about a third of the space.

So, before long, data center managers and other staff will be able to install cables much smaller than before without having to settle for the GR-409 small-package rating and its reduced cable strength. In the near future we can look forward to hardware of ever smaller size, enabling higher density and more compact cabling management while ensuring the reliability of the network.

Plastic Optical Fiber: Utilization Rises, but Market Competitiveness Remains Insufficient

With the rapid development of the optical communication industry, optical fiber, as the transmission medium for optical signals, plays the role of an information superhighway. In recent years, with the rapid development of short-distance, large-capacity data communication and of the lighting industry, the use of plastic optical fiber has been growing. Yet although plastic fiber offers excellent tensile strength, high durability and a small footprint, overall its competitiveness is still insufficient.
Industry analysts point out that increased utilization of plastic optical fiber does not mean it has an absolute advantage. Although plastic optical fiber has advantages over quartz (silica) fiber in short-distance communication and fiber sensing, its temperature stability and high-temperature tolerance fall short, its price-performance is not competitive, and it therefore lacks a strong competitive edge.
In terms of loss, plastic optical fiber is at a clear disadvantage: quartz fiber loss is only 0.18 dB/km, while plastic optical fiber loss is 0.5 dB/km, so the loss per kilometer of plastic fiber is too large. Plastic fiber has not yet entered the home, or is only in pilot deployments; there are relatively few domestic plastic optical fiber manufacturers, and large gaps in transmission bandwidth remain compared with producers abroad. In terms of price, foreign quartz fiber has dropped to 40 yuan, while domestic plastic optical fiber still costs 600 to 700 yuan, which is simply not comparable.

But there is no denying that plastic fiber has a place in future data transmission for the smart home, office automation, industrial networking, automotive and airborne data communication networks, and military communication networks. China should vigorously develop a plastic fiber industry alliance and establish technical standards to reduce the risk and cost of standardization and to resolve conflicts over technical standards and intellectual property. Enterprises should also strengthen cooperation with the national plastic optical fiber engineering laboratory, and companies across the plastic optical fiber communication industry chain should unite to solve technical challenges. In addition, they should jointly cope with international trade barriers: domestic optical fiber enterprises should unite and cooperate, under the organization of the industry association, to actively protect their legitimate rights and interests. Only in this way can the potential of plastic fiber be brought into full play.

Although the utilization of plastic fiber has improved in recent years, constraints on consumption and price mean that it has not formed a distinct competitive advantage over conventional silica fiber. Reducing loss, increasing bandwidth and improving heat resistance are therefore the keys to its competitiveness.

Monday, December 9, 2013

Extended EPON Elbows a Profitable Way for Operators

A market-driven standard for extended EPON, developed under an open process, is key to helping operators cost-effectively provide higher-density optical access, addressing their current and future needs.

The IEEE P802.3bk Extended EPON Task Force of the IEEE 802.3™ Working Group is an excellent example of how IEEE fosters a market-driven, open process to solve real-life problems challenging the evolution of networks around the world.

In broad terms, the P802.3bk Extended EPON Task Force addresses the need operators have to build optical access networks that can support more customers at a longer distance from the local hub. More specifically, operators are primarily looking for cost-effective solutions to address mobile backhaul in dense urban areas, as well as delivering services to geographically scattered customers.

The mobile backhaul is a very hot topic right now. With the advent of 3G and 4G networks, the problem is only further aggravated, because mobile carriers need a higher density of base towers in the given area to provide good coverage to their customers. As the evolution to faster mobile networks continues, the mobile backhaul network has to also be ready for increasing data rates and base station densities. Operators need to migrate from the current point-to-point fiber backhaul solution to more cost-effective point-to-multipoint architectures. This migration substantially improves the density of connected base stations per hub, lowering the cost of a single connection and making the resulting infrastructure much more scalable and future-proof.

The other challenge in front of operators is serving geographically scattered customers, who cannot be reached in a cost-effective manner today using point-to-point solutions. Right now an operator either has to run a really long point-to-point fiber or use just as costly microwave transport. With the new standard developed by the IEEE P802.3bk Extended EPON Task Force, it is possible to connect such customers using longer-reach Ethernet Passive Optical Network solutions, maintaining the advantages of a passive outside plant while still providing gigabit speeds to end subscribers.

Open process means that anyone can participate in the development of the standard, and people who choose to participate in such efforts are very passionate about pushing the networking technology forward, making Internet access faster, better, more cost-effective, and in general – more accessible. But just as important, as a community of technical experts, we create standards to address the actual need for specific networking solutions, making them much more successful in the competitive market of today.

The first step for the P802.3bk Task Force was to gather requirements from those who are most affected by the problem at hand, i.e., individual network operators and carriers. This step took six months and allowed the Task Force to better understand the technical challenges at hand. Bringing together both operators and suppliers is a critical part of the process, guaranteeing that the end product of the standardization effort addresses the actual technical problem, and does not become a technical solution without a problem.

It is pretty amazing that over just a few years Ethernet has evolved to the point where we’re discussing the development of solutions operating at 400 Gbit/s. The evolution of optical access based on Ethernet does not stop either. Just 4 years after the completion of the P802.3av™ project which delivered EPON operating at the symmetric rates of 10 Gbit/s, we are going to start looking at the next generation of EPON systems. Such next generation EPON is likely going to support data rates in excess of 40 Gbit/s, providing the necessary scalability into the foreseeable future, but also provide backward compatibility to guarantee that operators deploying EPON today have a clear evolution path into the future, future-proofing their current investments into optical access networks.

Prospects for 40G and 100G in the Data Center Seem Improved

Forty-gigabit/second interconnects in the data center are poised to take off as all the various server, switch, cabling, and transceiver parts have finally come together.

After about a six-month delay, Intel finally released its next-generation "Romley" architecture that offers 10 cores per microprocessor and a PCI Express 3.0 bus that supports faster I/O. With the servers and switches therefore ready to go, 40G interconnects using direct attach copper (DAC), active optical cables (AOCs), and optical transceivers should be in demand as data center infrastructures begin a major upgrade cycle.

10G faces many issues
Once the servers are upgraded, faster uplinks to top-of-rack switches are needed. But the 1G-to-10G transition is fraught with issues. In the past, server suppliers included Gigabit Ethernet (GbE) RJ-45 LAN-on-motherboard (LOM) for "free" -- but a dual-port 10GBase-T today costs far too much for such largesse. Meanwhile, with Cat5e almost free as well, the interconnect was never a serious cost issue. Now it is.

Server companies offer 10G ports on pluggable "daughter cards" that block out aftermarket competitors and ensure high prices. Daughter cards come in different flavors of 1G and 10GBase-T, two to four SFP+ ports, or dual QSFP with a path to 100G CXP and CFP/2 in the future. As server manufacturers are making a lot of money on the 10G/40G upgrades, this begs the question, "Will server companies ever return to the LOM model where buyers consider it a freebee?"

Meanwhile, 10GBase-T has had problems with high power consumption, size, and cost. This has left the door open for SFP+ DAC cabling to move in while 10GBase-T suppliers build 28-nm parts. This event changed the entire industry. But DAC has its share of issues too, as it "electrically" connects two different systems together, and not all SFP+ ports are alike.

2012 will show about 1 million 10GBase-T ports actually filled, representing about 500,000 links -- almost what can be found in a single large data center today with 1GBase-T! SFP+ DAC demand is shaping up to be about 2.5-3.5 million ports filled, mostly to link servers to top-of-rack switches at less than 7 m.

On the optical end, SFP+ AOCs are on the near-term horizon, and optical transceivers are typically being used to link switches together over reaches greater than 7 m. It is forecast that about 6 million 10G SFP+ short-reach (SR) and long-reach (LR) optical transceivers will ship in 2012.

40G the "next big thing"
Upgrading server-switch links from 1G to 10G forces switch uplinks that connect top-of-rack to end-of-row and aggregation switch layers to jump to 40G. However, as data center operators emerge from the economic recession, budgets are still very tight and "incremental upgrades" are the way operators are buying. Adding 10G/40G links "as needed" is the current buying practice.

While 100G seems to get all the trade show and press coverage, 40G is where the money is for the next two to three years. Data centers are just hitting the need for about 4G to 6G, never mind 10G; so many data centers are in a transitional, upgrade-as-needed state. The so called Mega Data Centers at Google, Facebook, Microsoft, etc., at $1 billion a piece, do not represent the mainstream data center -- although they garner a lot of attention and awe.

Chasing the transceiver opportunity 40G will present, multiple transceiver suppliers have jumped at offering 40G QSFP SR transceivers and Ethernet AOCs for applications of less than 50 m. Over 10 transceiver companies have announced transceivers and/or AOCs, and more suppliers are coming! Technical barriers to entry are low, and cost-sensitive Internet data centers (especially in China) are likely to gobble these up in volume.
40/100G transceivers: An introduction
Optical modules for 40G and 100G have two main "flavors" in the data center: short reach (SR) for ~100 m using multimode fiber and Long Reach (LR) for 100 m to 10 km using singlemode fiber.

SR transceivers are typically used to connect computer clusters and various switch layers in data centers. Several SR transceivers can reach ~300 m with OM4 fiber, but somewhere between 125 and 200 m the economics of the fibers and transceivers justify converting to singlemode optics -- and at even shorter distances for 25G signaling. Data rates of 40G are typically deployed as four 10G lanes using QSFP or CFP MSA transceivers. SR modules use eight multimode fibers (four for each direction), VCSEL lasers, and typically a QSFP MSA form factor. LR4 uses edge-emitting lasers and multiplexes the four 10G lanes onto two singlemode fibers capable of 10 km reach within either QSFP or CFP form factors. Both SR and LR4 QSFPs can be used in the same switch port without any issues -- just plug and play, 1 m to 10 km, no problem.

But this is not so for 100G. Modules for 100G SR applications use 20 multimode fibers, VCSELs, and typically the CXP MSA form factor. Although specified to 100 m, these modules are typically used to link large aggregation and core switches at less than 50 m, as 20 multimode fibers become very expensive, very fast as the reach gets longer; multimode fiber is about 3X more expensive than singlemode fiber.

Only in 2012 have multiple transceiver companies started to unveil CXP 100G SR transceivers, whereas the 40G QSFP transceivers and AOCs have been available since about 2008.
The transceiver industry will do its traditional pricing act of "Let's all cut our own throats on price and see who bleeds to death last." As a result, 40G SR parts are likely to see a very rapid price drop from about $250 today to under $190 for fully compliant OEM offerings next year. We even have seen "plug and hope they play" parts at $65. (But you get what you pay for!) OEM prices for Ethernet AOCs can be found below $190 -- and that is for a complete link with both ends and fiber!

The 40G QSFP MSA uniquely supports SR at approximately 100 m with multimode fiber or 10 km with duplex, singlemode fiber - all in the same QSFP switch port. Companies such as ColorChip, Sumitomo, and a few others offer QSFP parts and Oclaro (via its merger with Opnext), NeoPhotonics, Finisar, InnoLight, etc., offer larger CFP devices.

Implementing 100G is much more complex
Much noise has been made at industry conferences about the imminent need for tens of thousands of 100G medium-reach links in the data center to support the upcoming "exa-flood" of traffic from server virtualization, big data, smartphones, tablets, and even software-defined networking. Ten-channel CXPs are used for multimode primarily by the large core switching companies in both transceivers and AOCs. At 25G signaling for 4x25G, multimode noise spikes threaten to decrease the reach of multimode transceivers to 50-70 m; therefore, these modules may require forward error correction (FEC) and/or equalization to reach 125 m.

The 100G 2-km problem
Sending 100G more than 100 m has proven frustratingly hard to implement and longer to develop than first expected. The IEEE 40/100G High Speed Study Group met in July and extended its study another six months to deal with the technical issues.

For longer reaches, engineers are wrestling with trying to fit all the optics and electronics into new MSA packages and hit all the power, size, electrical, and optical specs required. This goal is achievable -- but at what cost and power is still an open issue. CFP/2 is not a given! Much debate still centers on zCXP vs CFP/2 for the next MSA, with Molex and TE Connectivity backing zCXP.

Today, there's no economically viable option for 100 to 600 m, and as data centers become bigger, this is a hot area and the center of debates within the IEEE community. One extra meter can bump the transceiver OEM cost from a CXP at $1,000 to a telecom-centric CFP at $14,000! Often referred to as 2 km, the reach for this application actually translates to between 400 and 600 m with an optical budget of about 4-5 dB in a lossy data center environment with patch panels and dirty connectors. To go 10 km, the link will need 6 dB. Next-generation lasers and CMOS electronics instead of SiGe are on the way, but Mother Nature just keeps getting in the way of our industry PowerPoint slides!
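A rough loss tally shows how quickly that 4 to 5 dB budget disappears in a lossy data center plant. The per-kilometer fiber loss, connector count, per-connector loss and aging allowance below are assumptions chosen only for illustration.

```python
# A rough loss tally for the "2 km problem" discussed above, using the
# 4-5 dB budget figure from the text. The per-element losses are assumed
# values meant only to illustrate how patch panels and dirty connectors
# eat the budget.
fiber_loss_db_per_km = 0.4       # assumed singlemode loss near 1310 nm
reach_km = 0.6                   # the 400-600 m case from the text
connectors = 4                   # assumed patch-panel connections end to end
loss_per_connector_db = 0.75     # assumed (dirty) connector loss
dirt_and_aging_margin_db = 1.0   # assumed extra allowance

total_loss_db = (fiber_loss_db_per_km * reach_km
                 + connectors * loss_per_connector_db
                 + dirt_and_aging_margin_db)
budget_db = 4.5                  # middle of the 4-5 dB budget quoted above

print(f"estimated link loss: {total_loss_db:.2f} dB (budget {budget_db} dB)")
print("fits the budget" if total_loss_db <= budget_db else "exceeds the budget")
```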

Conclusion
The next few years will involve Intel's Romley server architecture and subsequent silicon shrink, PCI Express 3.0, 10G uplinks to the top-of-rack switches, and 40G uplinks in the switching infrastructure. Supporting links at 40G will be where the money is for the next three years, but everyone can see that 100G will be the next stop -- with mid-board optics as well. IEEE will sort out the technical issues, and 100G infrastructure technology should kick in with volume in late 2014. It is important for the community to get this right as 100G will be around for a very long time.

Forecast on 2014 Annual Telecommunication Technology

It's not that we can't wait for 2013 to end, but here's a look at what optical communications technologies should be hot in 2014.
What can we say about 2013 that hasn't been said already? It seemed the optical communications space spent much of the year waiting for the economic turmoil of 2012 to blow over. That finally seemed to start happening in the middle of the year, if recent revenue estimates by market research firms can be believed.

The deployment of 100-Gbps technology kicked into high gear – except in the data center, where 40 Gbps is just getting established. Nevertheless, the IEEE determined that it's not too early to begin thinking about what comes next and decided it's 400 Gigabit Ethernet. In carrier networks, a small handful of companies offered 400 Gbps, but carriers started to talk about breaking that in half and deploying 200 Gbps.
The fact that we've reached the last two months of 2013 means the time has come to determine which of the current hot topics will continue to burn brightly in the next 12 months and which ones will cool off. Our discussion will encompass the following application areas:
1.  Networking (except for the access)
2.  Fiber to the home
3.  Cable-operator applications
4.  Equipment design
5.  Test and measurement

Optical networks in the abstract, quickly
If you thought you couldn't get away from SDN and NFV last year, you may find 2014 even worse. Many of the efforts toward carrier-friendly SDN and NFV that began this year will reach fruition over the next 12 months, which may lead these concepts out of the petri dish and into optical networks.
And those efforts are numerous. The most significant is that of the Open Networking Foundation's Optical Transport Working Group, which is examining use cases and specifications for fiber-optic networks. It should complete its task toward the middle of 2014; with luck, the working group will provide a guidepost others will follow toward the realization of provisioning and orchestration of multivendor network resources between the routing and transport layers. Meanwhile, we'll continue to see parallel efforts such as OpenDaylight as well as ecosystems and demonstrations generated by individual vendors.
Focusing on the actual optical transport layer rather than its abstraction, we'll see more 100-Gbps offerings customized for the metro in the second half of next year, thanks in large part to new coherent optical transceivers (about which we have more to say in our discussion of equipment design trends below). We also should expect more efforts to promote interoperability among vendors' systems, with the recent interoperability announcement between Acacia Communications and NEL a step in that direction.
We may also see the first deployments of 200-Gbps technology, which Verizon already has taken for a spin. The interest in 200G indicates that while some carriers may have already outgrown 100G on certain links, they're not quite ready to make the leap to 400G right away. That said, with the first 400-Gbps deployment announced this year, we'll likely see a few more in 2014, with more vendors signaling their ability to support such data rates when their customers require them.

The maturation of FTTH
Overall, 2013 was a quiet year for fiber to the home (FTTH) technology. The next 12 months likely will be more of the same. That's because the major building blocks for FTTH networks are now firmly in place.
A carrier has a choice among EPON, GPON, and point-to-point Ethernet architectures, with multiple vendors available for each option. There is a wide variety of CPE to meet the services expected on the subscriber end (even if that end is a tower). And when the carrier runs out of capacity on their chosen architecture, there's 10-Gbps versions of PON and point-to-point ready to be deployed.
The same could be said for the fiber, with dozens of offerings that promise some degree of bend tolerance. There's specialized cabling for multiple-dwelling-unit (MDU) applications that make running fiber inside the building more efficient and less intrusive.
So in 2014 expect incremental improvements to these major pieces, aimed at lowering costs and streamlining deployments. The innovation will come at the intersection of copper and fiber.
In the most obvious sense, that means the copper coming after the fiber in fiber to the node (FTTN) and fiber to the cabinet (FTTC) architectures. We saw vectoring roll out in 2013, and this noise-cancellation technology will become more ubiquitous next year. We'll also hear more about G.fast, the vectoring successor that will take copper into data rates of 200 Mbps and above – perhaps even 1 Gbps, if the nodes are close enough to the subscriber. Expect systems houses to aid carriers to deploy VDSL2 with vectoring now and an upgrade path to G.fast.
But even carriers intent on wringing the last megabit out of their copper lines know that fiber will be necessary in the future. So we also can expect to see more emphasis placed on platforms that support the migration from copper to fiber when recalcitrant carriers finally see the light.

Cable MSOs find more use for coax
The use of FTTH technology is more widespread among U.S. cable operators than they're willing to admit. But new technologies promise to enable U.S. cable MSOs – and, eventually, operators around the world – to meet competitive requirements over their existing hybrid fiber/coax (HFC) networks.
DOCSIS 3.1 holds the most promise along these lines. The technology, now in the specification development process within CableLabs, targets 10 Gbps downstream and 1 Gbps upstream, much like the asymmetrical variant of 10G EPON. CableLabs has targeted this year for completion of the specifications (the organization was expected to report its progress at SCTE Cable-Tec Expo), which means chipset samples sometime in 2014. First hardware prototypes could follow within the next 12–18 months as well. DOCSIS 3.1 is expected to be backward compatible with DOCSIS 3.0 to the point that signals using both variants could travel together on the same plant. DOCSIS 3.1 will use orthogonal frequency-division multiplexing (OFDM), while DOCSIS 3.0 relies on single-carrier quadrature amplitude modulation (QAM).
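To make the modulation change concrete, here's a minimal numpy sketch of the OFDM idea: data is mapped onto many QAM subcarriers and combined with an inverse FFT. The FFT size, QPSK mapping, and cyclic-prefix length are illustrative placeholders, not DOCSIS 3.1 parameters.

```python
# A minimal sketch of OFDM: instead of one wideband single-carrier QAM signal,
# bits are mapped to QAM points on many narrow subcarriers and combined into
# one time-domain symbol with an inverse FFT. Sizes here are toy values; real
# DOCSIS 3.1 channels use far larger FFTs.
import numpy as np

rng = np.random.default_rng(0)
n_subcarriers = 64
bits = rng.integers(0, 2, size=2 * n_subcarriers)

# Map bit pairs to QPSK (4-QAM) points, one per subcarrier.
qam = (1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])

# One OFDM symbol: the IFFT turns per-subcarrier QAM values into a waveform;
# a cyclic prefix guards against echoes in the plant.
ofdm_symbol = np.fft.ifft(qam)
cp = ofdm_symbol[-8:]                       # illustrative cyclic-prefix length
tx = np.concatenate([cp, ofdm_symbol])

# Receiver: drop the prefix and FFT back to recover the QAM points.
recovered = np.fft.fft(tx[8:])
print(np.allclose(recovered, qam))          # True on an ideal channel
```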
If that weren't bad enough for FTTH proponents, the IEEE has embarked on development of a standard that would enable EPON capabilities over coax. In-building use of existing coax, particularly in MDUs, would be a natural application of the technology. However, there's a chance that cable operators could use EPON over coax in the outside plant as well.
By the time this technology reaches the field in a few years, cable operators may have more widely deployed EPON in their networks, thanks to the fact that DOCSIS Provisioning of EPON (DPoE) has finally matured to the point where it's approaching deployment. Bright House Networks recently announced it will use Alcatel-Lucent's DPoE offering to support business services. The systems vendor says it has at least one more North American DPoE customer. With this success, we can expect more DPoE offerings.

Integration dominates equipment design
Many of the technology design advances we'll see in 2014 will have integration as a foundation. Wafer-level, hybrid, photonic, digital, analog, vertical – it's just a question of which kind.
Another integration question that developers will debate in 2014 is whether upcoming coherent CFP2 transceivers should integrate the necessary DSP inside the module or whether those chips should remain on the host board. There are points in favor of both options. An integrated design is simpler. Leaving the DSP outside the module, on the other hand, opens the door to sharing a large DSP device among multiple modules; it also enables companies that use ASICs developed in-house to continue to leverage that investment while enjoying the benefits of a smaller module and a wider ecosystem, since companies that have so far sat out the coherent module business can be expected to jump in if they don't have to worry about supplying the DSP. Both approaches will have adherents; it may turn out that one finds more favor in long-haul applications at certain systems houses than the other.
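The shape of that trade-off can be captured in a few lines. The numbers below are hypothetical placeholders, but the structure is the point: an integrated DSP is paid for on every module, while a host-board DSP can be amortized across several ports.

```python
# A rough back-of-the-envelope sketch of the CFP2 DSP debate. All values are
# hypothetical placeholders chosen for illustration only; the structure of the
# trade-off (per-module DSP vs. a DSP shared across ports) is what matters.
def per_port_cost(module_cost, dsp_cost, ports_sharing_dsp):
    """Cost per line-side port when one DSP serves `ports_sharing_dsp` ports."""
    return module_cost + dsp_cost / ports_sharing_dsp

integrated  = per_port_cost(module_cost=4000, dsp_cost=1500, ports_sharing_dsp=1)
host_shared = per_port_cost(module_cost=3000, dsp_cost=2500, ports_sharing_dsp=4)

print(f"integrated DSP:  ${integrated:.0f}/port")   # every module carries its own DSP
print(f"host-board DSP: ${host_shared:.0f}/port")   # one DSP amortized over four ports
```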
Meanwhile, as this topic and some of the earlier sections of this article indicate, the importance of electronics and software to optical communications technology will continue to grow.

Putting simplicity to the test
As optical communications technology developers have tackled demands for higher bit rates, the avenues they've pursued have become increasingly complex. This fact presents a pair of challenges for test-and-measurement instrument suppliers. First, they have to stay ahead of current requirements, which means coming up with instruments that can handle the growing complexity of optical communications techniques. Second, while the technology within these test instruments arguably is at least as complex as the elements under test (if not more so), the test sets must perform their functions as simply as possible.
In field applications, the drive for simplicity has begun to translate into an increased use of automation – what could be thought of as smart test sets. The ability not only to perform measurements but also to analyze the results has become increasingly important as carriers transition from copper to fiber faster than installers can be retrained. We see this trend most strongly in test instruments for FTTx applications, but a similar case can be made for mobile backhaul as fiber to the tower (FTTT) becomes more prevalent, as well as within data centers. So in 2014 we will continue to see instruments reach the field that automate test functions, ease collaboration, and leverage platforms with which technicians are becoming familiar outside of the job, such as mobile phones and tablets.

One question is whether you still need to create dispersion maps, particularly when you plan to run wavelengths other than 100G down the same fiber, and how the answer changes if you're using Raman amplification. This question will continue to be asked in 2014, but I believe you'll see more carriers abandon dispersion mapping on greenfield links, particularly if they're not using Raman.
The other debate point is whether you need a new class of field test instruments designed to examine signals at the phase and amplitude level. That debate also will continue in 2014, but increasingly it appears the conclusion will be that you don't. Most carriers don't want to deal with issues at the phase or amplitude level, partly because the transmission schemes their systems vendors use are highly proprietary and therefore something of a mystery. So the consensus approach has become to place Ethernet or OTN test sets on the client side of the coherent systems, inject a signal at one end, and determine whether it comes out intact at the other. If there's an issue in between, it's the systems vendor's problem.
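In code form, that consensus approach amounts to little more than the following sketch. The tester objects and their methods are hypothetical stand-ins, since each Ethernet/OTN test set exposes its own automation interface.

```python
# A minimal sketch of client-side, black-box testing of a coherent line system:
# inject framed traffic at the near end, soak, and count errors at the far end.
# `tx_tester` and `rx_tester` (and their methods) are hypothetical stand-ins
# for whatever automation API a given Ethernet/OTN test set provides.
import time

def end_to_end_check(tx_tester, rx_tester, duration_s=300, max_errored_frames=0):
    """Return (pass/fail, counters) for a simple injected-traffic soak test."""
    tx_tester.start_traffic(pattern="PRBS31", rate_gbps=100)   # hypothetical call
    time.sleep(duration_s)                                     # soak period
    tx_tester.stop_traffic()

    stats = rx_tester.read_counters()    # hypothetical, e.g. {'frames': ..., 'errored': ...}
    passed = stats["errored"] <= max_errored_frames
    return passed, stats
```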

And systems vendors are stepping up to that responsibility – and helping with the simplification of fiber-optic network test – by incorporating test functions into their hardware. Much, if not all, coherent transmission equipment now comes with functions that support acceptance testing that previously would have required a separate piece of test gear. Similarly, we're seeing the systems houses incorporate OTDR functionality into their equipment, particularly for FTTH applications.
Do these trends spell disaster for test equipment vendors? Probably not. An executive at one test and measurement vendor told me that the number of OTDR sales the integrated FTTH systems were costing his company was hardly worth noting. But it does show that the landscape of field testing is changing rapidly.

Meanwhile, back in the lab, the complexity problem is somewhat different. The technicians here know how to do the tests but often need several pieces of test equipment to get the job done. That makes test setups cumbersome, particularly if instruments from multiple vendors are involved, whether by desire or necessity. These setups also are extremely expensive – a coherent modulation analysis capability, including the necessary oscilloscope, can run to several hundred thousand dollars on its own.
So the emphasis in 2014 will continue to be placed on simplification as a vehicle toward reduced cost. That will mean companies broadening their product lines so they can meet all aspects of a specific requirement, adding functions to existing systems to lower the number of instruments needed, and making it simpler to share equipment and capabilities to spread out the costs.
Meanwhile, with work underway toward a 400 Gigabit Ethernet specification, we'll see more test equipment either introduced or upgraded with this requirement in mind. The 400G interfaces likely will be based initially on 25-Gbps lanes. The good news is that 100G is moving toward 25-Gbps lanes, so the test capabilities are already in place; the bad news is that 400G will require 16 such lanes, which means testing that many lanes will have to become more efficient.
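The arithmetic is simple – 16 lanes at 25 Gbps versus 4 for 100G – but it has practical consequences for test time, as the hypothetical sketch below suggests: exercising lanes in parallel rather than sequentially becomes the obvious efficiency play.

```python
# A minimal sketch of per-lane testing for a multi-lane interface. `test_lane`
# is a hypothetical stand-in for a per-lane PRBS/BER measurement; the point is
# that 16 lanes tested one after another takes four times as long as a 4-lane
# 100G port unless the lanes are exercised in parallel.
from concurrent.futures import ThreadPoolExecutor

def test_lane(lane_id, dwell_s=60):
    # ... run a per-lane PRBS/BER measurement for `dwell_s` seconds here ...
    return lane_id, "pass"

def test_interface(n_lanes=16, parallel=True):
    lanes = range(n_lanes)
    if parallel:
        with ThreadPoolExecutor(max_workers=n_lanes) as pool:
            return dict(pool.map(test_lane, lanes))
    return dict(test_lane(lane) for lane in lanes)

results = test_interface(n_lanes=16)   # 400G: 16 x 25-Gbps lanes; 100G would be 4
```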


There also has been talk about applying multilevel modulation formats to 400 Gigabit Ethernet signals, which means we'll see a continuation of the recent string of announcements around PAM4 test equipment.
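For reference, here's a minimal sketch of what PAM4 actually is: two bits per symbol mapped to four amplitude levels, so the bit rate doubles for a given symbol rate compared with two-level NRZ. The Gray mapping below is a common convention, used here purely for illustration.

```python
# A minimal sketch of PAM4 modulation: each pair of bits selects one of four
# amplitude levels, doubling bits per symbol relative to NRZ. The Gray-coded
# mapping keeps adjacent levels one bit apart, limiting bit errors when a
# symbol is misread as its neighbor.
import numpy as np

GRAY_PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_modulate(bits):
    pairs = zip(bits[0::2], bits[1::2])
    return np.array([GRAY_PAM4[tuple(int(b) for b in p)] for p in pairs], dtype=float)

bits = np.array([0, 0, 0, 1, 1, 1, 1, 0])
print(pam4_modulate(bits))   # [-3. -1.  1.  3.]
```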