What can we say about 2013 that hasn't
been said already? It seemed the optical communications space spent much of the
year waiting for the economic turmoil of 2012 to blow over. That finally seemed
to start happening in the middle of the year, if recent revenue estimates by
market research firms can be believed.
The deployment of 100-Gbps technology kicked into high gear – except in the data center, where
40 Gbps is just getting established. Nevertheless, the IEEE determined that
it's not too early to begin thinking about what comes next and decided it's 400 Gigabit Ethernet. In carrier networks, a small handful
of companies offered 400 Gbps, but carriers started to talk about breaking that
in half and deploying 200 Gbps.
The fact that we've reached the last two
months of 2013 means the time has come to determine which of the current hot
topics will continue to burn brightly in the next 12 months and which ones will
cool off. Our discussion will encompass the following application areas:
1. SDN/NFV and optical transport
2. Fiber to the home
3. Cable-operator applications
4. Equipment design
5. Test and measurement
SDN, NFV, and faster transport
If you thought you couldn't get away from SDN and NFV last year, you may find 2014 even worse. Many of the efforts
toward carrier-friendly
SDN and NFV that began this year will
reach fruition over the next 12 months, which may lead these concepts out of
the petri dish and into optical networks.
And those efforts are numerous. The most
significant is that of the Open Networking Foundation's
Optical Transport Working Group, which is examining use cases and
specifications for fiber-optic networks. It should complete its task toward the
middle of 2014; with luck, the working group will provide a guidepost others
will follow toward the realization of provisioning and orchestration of
multivendor network resources between the routing and transport layers.
Meanwhile, we'll continue to see parallel efforts such as OpenDaylight as well
as ecosystems and demonstrations generated by individual vendors.
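To make that multilayer provisioning goal concrete, here's a hypothetical sketch of the kind of request an orchestrator might hand to an SDN controller sitting above the transport layer. The endpoint, node names, and payload schema are all invented for illustration – the ONF working group's actual interfaces were still being specified at this writing.

```python
import json

# Hypothetical illustration of multilayer provisioning through an SDN
# controller. The schema and node names below are invented for this sketch;
# they are not part of any ONF specification.
request = {
    "service": "optical-channel",
    "endpoints": [
        {"node": "roadm-nyc-01", "port": "line-1"},
        {"node": "roadm-bos-02", "port": "line-3"},
    ],
    "rate_gbps": 100,
    "constraints": {"max_latency_ms": 10},
}

# A controller exposing a REST API might receive this as the body of a call
# like POST /api/optical-paths (again, a made-up endpoint).
print(json.dumps(request, indent=2))
```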
Focusing on the actual optical transport
layer rather than its abstraction, we'll see more 100-Gbps offerings customized
for the metro in the second half of next year, thanks in large part to new
coherent optical transceivers (about which we have more to say in our
discussion of equipment design trends below). We also should expect more
efforts to promote interoperability among vendors' systems, with the recent
interoperability announcement between Acacia Communications and NEL a step in
that direction.
We may also see the first deployments of
200-Gbps technology, which Verizon already has taken for a spin. The interest
in 200G indicates that while some carriers may have already outgrown 100G on
certain links, they're not quite ready to make the leap to 400G right away.
That said, with the first 400-Gbps deployment announced this year, we'll likely
see a few more in 2014, with more vendors signaling their ability to support
such data rates when their customers require them.
The maturation of FTTH
Overall, 2013 was a quiet year for fiber
to the home (FTTH) technology. The next 12 months likely will be more of the
same. That's because the major building blocks for FTTH networks are now firmly
in place.
A carrier has a choice among EPON, GPON, and point-to-point Ethernet
architectures, with multiple vendors available for each option. There is a
wide variety of CPE to meet the services expected on the subscriber end (even
if that end is a tower). And when carriers run out of capacity on their
chosen architecture, there are 10-Gbps versions of PON and point-to-point
Ethernet ready to deploy.
The same could be said for the fiber,
with dozens of offerings that promise some degree of bend tolerance. There's
specialized cabling for multiple-dwelling-unit (MDU) applications that makes
running fiber inside the building more efficient and less intrusive.
So in 2014 expect incremental
improvements to these major pieces, aimed at lowering costs and streamlining
deployments. The innovation will come at the intersection of copper and fiber.
In the most obvious sense, that means
the copper coming after the fiber in fiber to the node (FTTN) and fiber to the
cabinet (FTTC) architectures. We saw vectoring roll out in 2013, and this
noise-cancellation technology will become more ubiquitous next year. We'll also
hear more about G.fast, the successor to vectored VDSL2 that will take copper into data
rates of 200 Mbps and above – perhaps even 1 Gbps, if the nodes are close
enough to the subscriber. Expect systems houses to help carriers deploy VDSL2
with vectoring now while providing an upgrade path to G.fast.
But even carriers intent on wringing the
last megabit out of their copper lines know that fiber will be necessary in the
future. So we also can expect to see more emphasis placed on platforms that support
the migration from copper to fiber when recalcitrant carriers finally see the
light.
Cable MSOs find more use for coax
The use of FTTH technology is more
widespread among U.S. cable operators than they're willing to admit. But new
technologies promise to enable U.S. cable MSOs – and, eventually, others
around the world – to meet future competitive requirements via their existing
hybrid fiber/coax (HFC) networks.
DOCSIS 3.1 holds the most promise along
these lines. The technology, now in the specification development process
within CableLabs, targets 10 Gbps downstream and 1 Gbps upstream, much like the
asymmetrical variant of 10G EPON. CableLabs has targeted this year for the
completion of specifications (the company was expected to report its progress
at SCTE Cable-Tec), which means chipset samples sometime in 2014. First
hardware prototypes could follow within the next 12–18 months as well. DOCSIS
3.1 is expected to be backward compatible with DOCSIS 3.0 to the point that
signals using both DOCSIS variants could travel together. DOCSIS 3.1 signals
will leverage orthogonal frequency-division multiplexing, while DOCSIS 3.0 uses
quadrature amplitude modulation.
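To make the distinction concrete, here's a toy Python sketch of OFDM symbol construction; the subcarrier count and QPSK mapping are arbitrary illustration values, not DOCSIS 3.1 parameters. Where single-carrier QAM sends one symbol at a time on a wide channel, OFDM modulates many narrow subcarriers at once via an inverse FFT.

```python
import numpy as np

# Toy OFDM symbol: map bits onto many narrow subcarriers, then combine them
# with an inverse FFT. Subcarrier count and mapping are illustration values,
# not DOCSIS 3.1 parameters.
N_SUBCARRIERS = 64
CP_LEN = 16  # cyclic prefix guards against echoes/micro-reflections

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=2 * N_SUBCARRIERS)

# QPSK-map bit pairs: one complex constellation point per subcarrier.
subcarriers = (1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])

# Single-carrier QAM (DOCSIS 3.0) would transmit such points one at a time;
# OFDM (DOCSIS 3.1) transmits them all at once on parallel subcarriers.
ofdm_symbol = np.fft.ifft(subcarriers)
tx = np.concatenate([ofdm_symbol[-CP_LEN:], ofdm_symbol])
print(len(tx))  # 80 complex samples for this toy symbol
```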
If that wasn't bad enough for FTTH
proponents, the IEEE has embarked on development of a standard that would
enable EPON capabilities over coax. In-building use of existing coax,
particularly in MDUs, would be a natural application of the technology.
However, there's a chance that cable operators could use EPON over coax in the outside plant as well.
By the time this technology reaches the
field in a few years, cable operators may have more widely deployed EPON in their networks, thanks to the
fact that DOCSIS Provisioning of EPON (DPoE) has finally matured to the point
where it's approaching deployment. Bright House Networks recently announced it
will use Alcatel-Lucent's DPoE offering to support business services. The
systems vendor says it has at least one more North American DPoE customer. With
this success, we can expect more DPoE offerings.
Integration dominates equipment design
Many of the technology design advances
we'll see in 2014 will have integration as a foundation. Wafer-level, hybrid,
photonic, digital, analog, vertical – it's just a question of which kind.
Another integration question that
developers will debate in 2014 is whether upcoming coherent CFP2 transceivers
should integrate the necessary DSP inside the module or whether those chips
should remain on the host board. There are points in favor of both options. An
integrated design is simpler. Leaving the DSP outside the module, meanwhile,
opens the door to sharing a large DSP device among multiple modules, and it
lets companies that use ASICs developed in-house continue to leverage that
investment while enjoying the benefits of a smaller module and a wider
ecosystem – we can expect companies that have so far sat out the coherent
module business to jump in if they don't have to worry about supplying the
DSP. Both approaches will have adherents; it may turn out that one approach
finds more favor than the other for long-haul applications at certain systems
houses.
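The sharing argument is essentially arithmetic. Here's a toy comparison with entirely made-up placeholder costs – none of these figures come from any vendor – showing why one host-board DSP serving several modules can look attractive:

```python
# Toy arithmetic with made-up placeholder costs -- not vendor figures --
# comparing one shared host-board DSP against per-module integrated DSPs.
DSP_COST = 1000.0           # hypothetical multi-channel host-board DSP
MODULE_WITHOUT_DSP = 800.0  # hypothetical coherent CFP2, DSP on host board
MODULE_WITH_DSP = 1500.0    # hypothetical coherent CFP2 with integrated DSP
PORTS = 4                   # one shared DSP serving four modules

shared = DSP_COST + PORTS * MODULE_WITHOUT_DSP  # 4200.0
integrated = PORTS * MODULE_WITH_DSP            # 6000.0
print(f"shared-DSP line card: {shared}, integrated modules: {integrated}")
```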
Meanwhile, as this topic and some of the
earlier sections of this article indicate, the importance of electronics and
software to optical communications technology will continue to grow.
Putting simplicity to the test
As optical communications technology
developers have tackled demands for higher bit rates, the avenues they've
pursued have become increasingly complex. This fact presents a pair of
challenges for test-and-measurement instrument suppliers. First, they have to
stay ahead of current requirements, which means coming up with instruments that
can handle the growing complexity of optical communications techniques. Second,
while the technology within these test instruments arguably is at least as
complex as the elements under test (if not more), the test sets must perform
their functions as simply as possible.
In field applications, the drive for
simplicity has begun to translate into an increased use of automation – what
could be thought of as smart test sets. The ability not only to perform
measurements but also to analyze the results has become increasingly important as
carriers transition from copper to fiber faster than installers can be
retrained. We see this trend most strongly in test instruments for FTTx
applications, but a similar case can be made for mobile backhaul as fiber to
the tower (FTTT) becomes more prevalent as well as within data centers. So in
2014 we will continue to see instruments reach the field that automate test functions,
ease collaboration, and leverage platforms with which technicians are becoming
familiar outside of the job, such as mobile phones and tablets.
One question is whether you still need
to create dispersion maps, particularly if you plan to run wavelengths other
than 100G down the same fiber or if you're using Raman amplification. This
question will continue to be asked in 2014, but I believe you'll see more
carriers abandon dispersion mapping on greenfield links, particularly if
they're not using Raman.
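Part of the reason the map can be skipped is that coherent 100G receivers compensate accumulated chromatic dispersion electronically in their DSPs. A back-of-the-envelope calculation shows the quantity a dispersion map would otherwise track; the fiber coefficient is a typical value for standard single-mode fiber near 1550 nm, and the link length is an arbitrary example:

```python
# Accumulated chromatic dispersion: the span-by-span total a dispersion map
# records. D = 17 ps/(nm*km) is typical for standard G.652 fiber near
# 1550 nm; the link length is an arbitrary example.
D_PS_PER_NM_KM = 17.0
LINK_KM = 500

accumulated = D_PS_PER_NM_KM * LINK_KM
print(f"{accumulated:.0f} ps/nm over {LINK_KM} km")  # 8500 ps/nm
```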
The other debate point is whether you
need a new class of field test instruments designed to examine signals at the
phase and amplitude level. That debate also will continue in 2014, but
increasingly it appears that the conclusion will be that you don't. It seems
that most carriers don't want to deal with issues at the phase or amplitude
level, partly because the transmission schemes their systems vendors use are
highly proprietary and therefore something of a mystery. So the consensus
approach has become to place Ethernet or OTN test sets on the client side of
the coherent systems, inject a signal at one end, and determine whether it
comes out okay at the other. If there's an issue in between, it's the systems
vendor's problem.
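In code terms, the client-side approach boils down to something like the sketch below. The Loopback class is a hypothetical stand-in for the coherent line system between two client ports; a real Ethernet/OTN test set would inject a PRBS or framed test pattern and count errored bits or frames.

```python
import random

# Minimal sketch of the client-side "inject and compare" approach. Loopback
# is a hypothetical stand-in for the line system under test.
class Loopback:
    def carry(self, bits, error_prob=0.0):
        # Flip each bit with probability error_prob to mimic line errors.
        return [b ^ (random.random() < error_prob) for b in bits]

pattern = [random.randint(0, 1) for _ in range(10_000)]
received = Loopback().carry(pattern)
errors = sum(a != b for a, b in zip(pattern, received))
print("pass" if errors == 0 else f"fail: {errors} errored bits")
```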
And systems vendors are stepping up to
the responsibility – and helping with the simplification of fiber-optic-network
test – by incorporating test functions into their hardware. Much, if not all,
coherent transmission equipment now comes with functions that aid the
acceptance testing that previously would have required a separate piece of
test gear.
Similarly, we're seeing the systems houses incorporate OTDR functionality into
their equipment, particularly for FTTH applications.
Do these trends spell disaster for test
equipment vendors? Probably not. An executive at one test and measurement
vendor told me that the number of OTDR sales the integrated FTTH systems were
costing his company was hardly worth noting. But it does show that the
landscape of field testing is changing rapidly.
Meanwhile, back in the lab, the
complexity problem is somewhat different. The technicians here know how to do
the tests, but often need several pieces of test equipment to get the job done.
That makes test setups cumbersome, particularly if instruments from multiple
vendors are involved, either by desire or necessity. These setups also are
extremely expensive – a coherent modulation analysis capability, including the
necessary oscilloscope, can run in the multiple hundreds of thousands of
dollars on its own.
So the emphasis in 2014 will continue to
be placed on simplification as a vehicle toward reduced cost. That will mean
companies broadening their product lines to meet all aspects of a specific
requirement, adding functions to existing systems to reduce the number of
instruments needed, and making it simpler to share equipment and capabilities
to spread out the costs.
Meanwhile, with work underway toward a
400 Gigabit Ethernet specification, we'll see more
test equipment either introduced or upgraded with this requirement in mind. The
400G interfaces likely will be based initially on 25-Gbps lanes. The good news
is that 100G is moving toward 25-Gbps lanes, so the test capabilities are
already in place; the bad news is that 400G will require 16 such lanes, which
means finding ways to test that many lanes more efficiently.
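The lane arithmetic behind that concern is simple enough to sketch:

```python
# Lane counts at a common 25-Gbps electrical lane rate: testing 400G means
# four times as many lanes to verify as 100G.
LANE_RATE_GBPS = 25

for total_gbps in (100, 400):
    lanes = total_gbps // LANE_RATE_GBPS
    print(f"{total_gbps}G -> {lanes} x {LANE_RATE_GBPS}-Gbps lanes")
# 100G -> 4 lanes; 400G -> 16 lanes
```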
There also has been talk about applying
multilevel modulation formats to 400 Gigabit Ethernet
signals, which means we'll see a continuation of the recent string of
announcements around PAM4 test equipment.
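As a final illustration of what "multilevel" means here, consider a toy PAM4 mapper. Packing 2 bits into each of four amplitude levels doubles the bit rate at a given symbol rate – 25 GBd of PAM4 carries 50 Gbps. The Gray-coded level assignment below is a common convention, used here purely for illustration:

```python
# Toy PAM4 mapper: two bits per symbol, Gray-coded onto four amplitude
# levels so adjacent levels differ by one bit. Illustrative convention only.
GRAY_PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_map(bits):
    """Map an even-length bit sequence to a list of PAM4 levels."""
    return [GRAY_PAM4[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

print(pam4_map([0, 0, 1, 0, 1, 1, 0, 1]))  # [-3, 3, 1, -1]
```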