Citrix HDX Adaptive Transport and EDT

The members of the Citrix HDX team have been wanting to write a blog post about HDX Adaptive Transport and Enlightened Data Transport (EDT); ICA (HDX) is Citrix's main protocol, and these are some of its biggest improvements ever. We will clear up misconceptions around EDT and explain what we are doing, why we are doing it, and where we are headed. In XenApp & XenDesktop 7.16, HDX Adaptive Transport is set to "preferred" by default, so Receivers will use EDT whenever possible. Therefore, we want to give you a thorough background on the enhancements. We always like to start with a little bit of history, because it is essential to understanding the present.

TCP: de facto transport protocol of the internet

If you are ~40 or older, then you know that TCP wasn’t always king. In fact, TCP only started to take off when ARPANET (the precursor to the modern internet) switched to TCP/IP in 1983 (replacing the earlier NCP protocol). When the ICA protocol was commercially launched (1991), it did not even support TCP until an engineer named Jeff Muir decided to give it a try in 1994 (make sure you read his blog post about it; it’s a fascinating journey back in time).

The original 1494 TCP port assignment letter from USC-ISI/IANA to Citrix engineer John Richardson!

The internet boom changed the technology landscape for everyone, and the conservative mechanics of TCP made a lot of sense for an emerging "network of networks." There are two main pillars to TCP: flow control and congestion control (with many supporting mechanisms like selective acknowledgements, fast retransmit, fast recovery, the RFC 1323 extensions such as window scaling, and more).

Flow control, a window-based mechanism ("rwnd") between sender and receiver, is designed to prevent the receiver from being overwhelmed by incoming packets. (A separate rwnd window is maintained independently by each receiving side and advertised to the sender in every TCP segment.) We would say congestion control (and avoidance) is the most difficult problem by far, and the main differentiator among the TCP flavors: Tahoe, Reno, Vegas, New Reno, Compound TCP (default on Windows), BIC, and CUBIC (default on Linux, experimental on Windows 10 1703), among others. Its goal is to utilize the network bandwidth effectively and back off when congestion is detected. It is a window-based mechanism ("cwnd") between the sender and the network, maintained by the sender but never advertised on the wire. The algorithm used in TCP for congestion control/avoidance is called AIMD (Additive Increase of the sending rate up to the estimated bandwidth, Multiplicative Decrease to cope with loss), which unfortunately is inefficient over high-speed WANs: TCP takes a long time to ramp up to the available bandwidth, and each time a loss occurs it backs off drastically, so it never saturates the link.

These two inefficiencies are detrimental to any remoting technology with use cases ranging from bulk data transfer (file transfer or USB traffic) to interactivity with user input and display remoting (Thinwire).
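The AIMD behavior described above can be sketched in a few lines. This is a deliberately simplified toy model (the parameters `a` and `b` and the loss schedule are illustrative, not any real TCP stack), but it shows why a single loss event costs so many round trips of rebuilding:

```python
def aimd(capacity, rounds, loss_rounds, cwnd=1, a=1, b=0.5):
    """Toy additive-increase/multiplicative-decrease simulation.
    cwnd grows by `a` segments per round-trip; on loss (or when the
    estimated capacity is exceeded) it is cut by the factor `b`."""
    history = []
    for r in range(rounds):
        if r in loss_rounds or cwnd > capacity:
            cwnd = max(1, cwnd * b)   # multiplicative decrease on loss
        else:
            cwnd += a                 # additive increase while loss-free
        history.append(cwnd)
    return history

# A single loss at round 5 halves the window; on a long-RTT WAN each
# "round" here is a full round trip, so recovery is painfully slow.
history = aimd(capacity=100, rounds=10, loss_rounds={5})
```

Note how the window climbs by only one segment per round trip: on a 250 ms RTT link, rebuilding a halved window of 50 segments costs over 6 seconds.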

UDP: the Null Protocol

But rewriting TCP is impractical for both technical and commercial reasons: operating system kernels would take a long, long time to update their TCP stacks. So, we started looking at its cousin, the User Datagram Protocol (UDP).

UDP was added to the core internet protocol suite in 1980, right around the time the TCP/IP specifications were being split into two separate RFCs. UDP, at its core, wraps application data into a datagram carried over IP, adding just four header fields: source port, destination port, length, and a checksum. While IP can establish communication between hosts, UDP does it between processes or applications on those hosts. And that's pretty much it (hence the "null" reference).
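Those four fields are the entire UDP header (RFC 768), which a short sketch makes concrete. The port numbers below are illustrative (2598 is the CGP port mentioned later in this post), and the checksum computation over the IP pseudo-header is omitted for brevity:

```python
import struct

def build_udp_header(src_port, dst_port, payload, checksum=0):
    """Pack the four UDP header fields (RFC 768): source port,
    destination port, length (header + payload), and checksum."""
    length = 8 + len(payload)                 # the UDP header is 8 bytes
    return struct.pack("!HHHH", src_port, dst_port, length, checksum) + payload

def parse_udp_header(datagram):
    """Unpack the 8-byte header and return the fields plus payload."""
    src, dst, length, checksum = struct.unpack("!HHHH", datagram[:8])
    return {"src_port": src, "dst_port": dst, "length": length,
            "checksum": checksum, "payload": datagram[8:]}

datagram = build_udp_header(40000, 2598, b"ICA")
fields = parse_udp_header(datagram)   # dst_port 2598, length 11
```

Everything else that TCP provides on top of this (ordering, retransmission, congestion control) is simply absent, which is exactly the blank slate EDT builds on.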

UDP is not something new to ICA. Framehawk, RTP Audio, and the HDX Real-time Optimization Pack for Skype for Business all rely on it. For these scenarios, an unreliable “best effort” protocol works very well.

The best aspect of UDP is not what extra features it introduces, but rather what features it omits from TCP: no guaranteed delivery, no ordered delivery, and no congestion control.

Enlightened Data Transport: a reliable UDP solution

A challenge with our previous UDP solutions was that they were restricted to graphics and audio. So, Georgy, our HDX Architect, decided to devise a UDP-based reliable protocol solution that would unlock the value of UDP for all of HDX and serve as a solid foundation for future enhancements.

The main goals were to:

  • Build a transparent transport layer for all HDX that works in all network scenarios
  • Seamlessly seek UDP first with fallback to TCP, just like a hybrid car, which is electric first but falls back to gasoline
  • Leverage custom congestion and flow control algorithms for performance far better than TCP on WAN and on par with TCP on LAN
  • Improve performance for both interactive and bulk data transfer scenarios
  • Achieve faster convergence to the network’s bandwidth capacity
  • Take into account modern networks, where packet loss is mostly due to end-point interference (stochastic loss) rather than congestion; some packet loss can be tolerated without 'throughput panicking'
  • Fair-share available bandwidth with other EDT or TCP streams
  • Avoid affecting Single Server Scalability
  • Avoid any impact to connection time

We introduced EDT in XenApp/XenDesktop 7.13 and have been improving it steadily ever since. In reality, we introduced HDX Adaptive Transport, which is a superset of EDT, orchestrating the fallback from EDT to TCP if the network does not allow UDP or if EDT fails for any reason.

The scenarios where HDX Adaptive Transport performs an evaluation of the use of UDP vs. TCP transport are:

  • Initial connection
  • Roaming from one Receiver device to another (same as initial connection)
  • Session Reliability (SR) Reconnect and Auto Client Reconnect (ACR)
  • High-availability failover from one Gateway instance to another
  • Proactive seeking of UDP even when the TCP transport is not broken (Q4 2017 Receivers only)

EDT and ICA: show me the Stack

A common misconception is that EDT is a graphics protocol. The confusion might stem from the word "adaptive" in Adaptive Transport, which sounds similar to Adaptive Display, the latter being a suite of encoding technologies that selectively uses H.264 for transient regions of the screen and lossless encoding for text (HDX is pixel-perfect where it matters). Adaptive Display lives in Thinwire+, our graphics virtual channel, the one in charge of display remoting through encoding/decoding.

From Figure 1 above, you can clearly see that EDT is part of our Transport Driver, and it can be leveraged by every virtual channel: Thinwire, printing, CDM, multimedia, USB, and so on.

Exceptions: Framehawk (our "big guns" graphics solution for mobile workers on broadband wireless connections) has its own UDP data transport layer based on network gearing (the ability to monitor network conditions and immediately react to changes), so it does not use EDT or TCP. In the future we plan on cross-pollinating Thinwire and Framehawk and converging on a single Display Remoting mode over a unified ICA stack with EDT.

Also, VoIP still performs better over pure UDP/RTP Audio (outside ICA). In the future we plan to merge real-time audio into the unified ICA stack with EDT.

Key Takeaways

We would like to close this Part I by listing a few key takeaways.

  • Is EDT the same as Adaptive Transport? Almost! Adaptive Transport = EDT (UDP) + TCP (Fallback).
  • Is Adaptive Transport the same as Adaptive Display? No, Adaptive Display is a Graphics stack suite.
  • EDT is not designed to necessarily save bandwidth:
    • It is not a compression protocol nor a more efficient encoder.
    • It just gets the data where it needs to go faster (it’s a transport protocol).
    • Might actually use more bandwidth if it is available (but this is good for interactivity and bulk data transfer speed).
  • That said, EDT is fair to other streams: EDT, TCP, HTTP, etc.
  • EDT works on a LAN too, but it really shines on the WAN: up to 2.5x interactivity and up to 10x faster file transfer, depending on the nature of the load and network conditions.
  • Framehawk is not EDT:
    • Framehawk is a Graphics stack, with a Receiver-side Intent Engine and a tightly coupled UDP-based transport featuring network gearing.
    • EDT is just a transport protocol, available to every Virtual Channel (except Framehawk and RTP Audio today).

Common Gateway Protocol (CGP)

In the figure below you will notice that the TCP and UDP stacks share one common component: Common Gateway Protocol (CGP).

Our old friend CGP has been with us since the days of Citrix Secure Gateway and Citrix Presentation Server. CGP is a general-purpose tunneling protocol with its own handshake and commands. CGP is the protocol upon which Session Reliability — “session recoverability” in case of broken transport — is built, but is more than just that.

CGP is also used as an authorization protocol via NetScaler: it carries the Secure Ticket Authority (STA) ticket. CGP is also critical for supporting high-availability failover from one Gateway instance to another.

Therefore, CGP is required for EDT connections via NetScaler Gateway. But CGP is optional on direct EDT connections between Receiver and VDA, e.g. over a corporate MPLS network. So, if you have a NetScaler Gateway, EDT requires the Session Reliability policy to be enabled, which, in turn, enables CGP (since currently CGP and Session Reliability are coupled).

Insider Tip: Session Reliability is configured in Studio and StoreFront, and it is on by default.

Do not confuse Auto Client Reconnect (ACR) with Session Reliability (SR). Both allow users to automatically reconnect to their sessions after recovering from a network disruption. The difference is that ACR does not resume the ICA connection as seamlessly as SR does. ACR performs an automatic full reconnection to a disconnected session, whereas SR only reestablishes the transport beneath ICA without disrupting the session or the UX (as much as possible).

For example, when reconnecting with ACR, a file copy over Client Drive Mapping (CDM), which was ongoing when the transport was broken, would fail. With SR, the file transfer would succeed because each side is buffering every byte sent in each direction. Upon reconnection, the sequence-numbers of the last CGP packets received at both ends are exchanged, buffers flushed, and the file transfer is resumed where it was left off. The virtual channels are never aware of the disconnect. ACR and SR are by default used in sequence: SR is the first line of defense, ACR is the last resort fallback.
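The buffering-and-resume mechanism described above can be sketched in miniature. This is an illustrative model of the idea only, not Citrix's CGP implementation (class and method names are hypothetical): every packet is retained with a sequence number until acknowledged, so after a reconnect the sender can retransmit exactly what the peer missed.

```python
from collections import deque

class ReliableBuffer:
    """Toy sketch of Session Reliability-style buffering: every packet
    sent is retained with a sequence number until the peer confirms it,
    so a broken transport can be resumed without the virtual channels
    ever noticing the disconnect."""

    def __init__(self):
        self.next_seq = 0
        self.unacked = deque()   # (seq, packet) pairs not yet confirmed

    def send(self, packet):
        self.unacked.append((self.next_seq, packet))
        self.next_seq += 1

    def acknowledge(self, last_seq_received):
        # The peer reports the last sequence number it received;
        # everything up to that point can be flushed from the buffer.
        while self.unacked and self.unacked[0][0] <= last_seq_received:
            self.unacked.popleft()

    def resume(self, last_seq_received):
        # On reconnection, retransmit everything after the peer's
        # last received packet; the file transfer picks up mid-stream.
        self.acknowledge(last_seq_received)
        return [pkt for _, pkt in self.unacked]

sender = ReliableBuffer()
for i in range(5):
    sender.send(f"cdm-chunk-{i}")
# Transport breaks; on reconnect the peer says it last saw seq 2,
# so only chunks 3 and 4 are retransmitted.
pending = sender.resume(2)
```

Each side of a CGP connection keeps such a buffer, which is why the exchange of last-received sequence numbers at reconnect time is all that is needed to resume cleanly.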

Security: DTLS

Security is critical for every organization today. Network-level encryption with DTLS is an obvious choice with UDP. EDT with DTLS has been supported with NetScaler on the front-end (Receiver to NetScaler) for several builds now, but we strongly recommend using the latest builds, as they contain some important DTLS fixes. You still need to manually enable DTLS on the NSG front-end VPN vServer. Newer NetScaler 12.x builds in Q4 2017 will have DTLS on by default for the front-end.

DTLS is a requirement for the front-end EDT connection to NetScaler. In the Q4 2017 releases of XenApp/XenDesktop and NetScaler, DTLS is supported end-to-end. In other words, the back end connection between NetScaler and the VDA could optionally use DTLS. In addition, Receiver could optionally use DTLS in direct connection to the VDA. The required configurations for the back-end are the same as the existing TLS security policies in StoreFront and VDA.  Receivers for Windows (4.7, 4.8, 4.9), MAC (12.5, 12.6, 12.7), iOS (7.2, 7.3.x) and Linux (13.7) all support DTLS 1.0. Android Receiver 3.12.3 is limited to direct connections to the VDA (DTLS in Tech Preview).

You will have to work with your networking team to get UDP 443 opened between your first DMZ firewall and the NSG front-end vServer, and UDP 2598 between the back-end vServer (SNIP) and the VDA through the second DMZ firewall (confirm with them that no global UDP rate-limiting policy is applied, or EDT will be affected). On the VDA's Windows Firewall, the VDA MetaInstaller should have opened UDP ports 1494 and 2598, unless you chose to do it manually later. CLI installs require /enable_hdx_udp_ports to be passed.

Troubleshooting Tips

  • Open a command prompt on the VDA and type 'netstat -a -p udp'; it will tell you whether UDP sockets are listening.
    Do you see the UDP 1494 and 2598 listeners under 'Local Address'? If not, restart the 'Citrix Desktop Service' or reboot the VDA.
  • The best way to test EDT is to launch an app from the internal network directly to StoreFront, bypassing NSG. First inspect the ICA file. There should be an entry that reads ‘HDXoverUDP = Preferred’. If it says ‘Off’ or the entry is missing, then HDX Adaptive Transport is not set to Preferred in the Studio policy, or the GP update has not been applied at the VDA yet. Note that in the Q4 release of XenApp/XenDesktop, HDX Adaptive Transport is Preferred by default and there is no explicit requirement to configure the Studio policy.
  • After connecting to the VDA, run ‘ctxsession’ on the VDA command prompt and verify your session is using UDP. If that works, your VDA is ready for EDT connections from the outside, too.
  • Now try to launch a session through NSG. First inspect the ICA file just like in the direct connection case.
  • After you launch an app/desktop via NSG, run ‘ctxsession’ on the VDA command prompt again and verify your session is using UDP. If it says TCP, then most likely something went wrong between Receiver and NSG and the connection fell back to TCP.  Then a Wireshark trace on NSG is your best troubleshooting tool: Are UDP packets reaching/leaving the NSG frontend and backend vServers (VIP and SnIP)?  Wireshark Dissectors will misinterpret EDT as ‘QUIC’.
  • In addition, you can check Director → Session Details → Protocol → UDP.
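As a scripted alternative to eyeballing netstat output, you can probe whether anything is bound to the EDT ports. This is a heuristic sketch, not a Citrix tool: it tries to bind each port itself, and an "address in use" error suggests a listener (such as the VDA) already holds it. Error-code mapping can vary by platform, so both the POSIX and Winsock codes are checked:

```python
import errno
import socket

def udp_port_in_use(port, host="0.0.0.0"):
    """Heuristic check: attempt to bind the UDP port; failure with
    'address in use' suggests another process already has a listener."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.bind((host, port))
        return False                      # bind succeeded: nothing listening
    except OSError as e:
        # 10048 is WSAEADDRINUSE on Windows; errno.EADDRINUSE elsewhere.
        return e.errno in (errno.EADDRINUSE, 10048)
    finally:
        s.close()

for port in (1494, 2598):                 # the ICA and CGP ports
    status = "listening" if udp_port_in_use(port) else "NOT listening"
    print(f"UDP {port}: {status}")
```

If both ports report NOT listening on the VDA, that points to the same fix as above: restart the 'Citrix Desktop Service' or reboot.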

Citrix Receivers kick-starting EDT

Existing Receivers supporting EDT use a sequential logic for HDX Adaptive Transport: if the policy is Preferred, Receiver attempts EDT first and, if it fails or times out, Receiver falls back to TCP. As explained in Part 1 of this series, the assessment of UDP vs. TCP use happens only during the initial connection and after a scenario involving a transport break. With these Receivers, a transport switch does not occur during the lifetime of the HDX session unless the transport breaks. The upcoming Q4 2017 Receivers for Windows, iOS and Mac attempt EDT and TCP in parallel, always favoring EDT if both are able to connect, but still falling back to TCP if EDT is not available. In addition, if the transport happens to be TCP, Receivers proactively continue to seek UDP in the background, and if it becomes available, a seamless switch to EDT is performed without affecting user experience. In this case there is no network disruption that triggers the switch to UDP, only a forced termination of the TCP connection by Receiver. The CGP protocol is required for the parallel connect and proactive switch to UDP to work.
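The parallel-attempt logic can be sketched as a race that favors EDT. This is an illustrative model only (function names, the grace period, and the use of threads are assumptions, not Receiver internals): both attempts start at once, EDT gets a short head-start window, and TCP is used only if EDT cannot connect.

```python
from concurrent.futures import ThreadPoolExecutor

def choose_transport(try_edt, try_tcp, edt_grace=0.25):
    """Race EDT and TCP connection attempts in parallel, preferring EDT:
    wait up to `edt_grace` seconds for EDT even if TCP is already up."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        edt = pool.submit(try_edt)        # each callable returns True on
        tcp = pool.submit(try_tcp)        # success, False/raises on failure
        try:
            if edt.result(timeout=edt_grace):
                return "EDT"              # favor EDT whenever it connects
        except Exception:
            pass                          # EDT slow or failed: consider TCP
        try:
            if tcp.result():
                return "TCP"              # fall back to TCP
        except Exception:
            pass
        try:
            if edt.result():              # EDT may still finish after the
                return "EDT"              # grace period if TCP also failed
        except Exception:
            pass
        raise ConnectionError("neither transport is reachable")
```

Because TCP is attempted concurrently rather than after an EDT timeout, a UDP-hostile network no longer adds seconds to the connect time, which is exactly the first benefit listed below.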

The benefits are:

  • Fast connect time: no timeouts when UDP is unavailable before falling back to TCP
  • Always use UDP whenever it becomes available: any time in the lifetime of an HDX session. For example, when switching from data plan to WiFi, or between network subnets with different access policies, etc.

Fallback to TCP or a proactive switch to EDT is never triggered by a network metric such as high round-trip time (RTT) or high packet loss, but rather by the mere availability of TCP or UDP. Varying network conditions are handled by EDT's congestion control algorithms. In short, if HDX Adaptive Transport is Preferred, the use of EDT vs. TCP is driven by Receiver.

Performance metrics

So, what performance benefits can you expect from EDT?

If you have users connecting to your Site over the public internet, or branch offices across the country or world, then without a doubt EDT will improve the user experience significantly! We have seen customers already consolidating Sites and serving distant offices from a single data center. Do you have a challenging 3D app that requires fluidity? Put EDT to work, and give yourself extra breathing room by enabling hardware encoding on the Desktop OS VDAs with the HDX 3D Pro policy. In WAN environments (50 to 250 ms RTT, 0 to 1% packet loss), we have measured:

  • Client Drive Mapping: Up to 10x improvements
  • Thinwire Interactivity: Up to 2.5x improvements
  • Printing: Up to 2x improvements
  • Generic USB: Up to 35% improvements

How to configure EDT

Adaptive Transport for XenApp and XenDesktop optimizes data transport by leveraging a new Citrix protocol called Enlightened Data Transport (EDT) in preference to TCP whenever possible. Compared to TCP and UDP, EDT delivers a superior user experience on challenging long-haul WAN and internet connections, dynamically responding to changing network conditions while maintaining high server scalability and efficient use of bandwidth. EDT is built on top of UDP and improves data throughput for all ICA virtual channels, including Thinwire display remoting, file transfer (Client Drive Mapping), printing, and multimedia redirection. (For audio we still recommend the 'Audio over UDP real-time transport' policy, since EDT is a reliable protocol; setting that policy pulls audio out of EDT and lets it run over a separate UDP port with no reliability overhead.) When UDP is not available, Adaptive Transport automatically reverts to TCP. Step-by-step guidance on how to enable/configure EDT in Citrix environments can be found at:
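Once configured, a quick way to verify what the broker handed to Receiver is to inspect the ICA launch file for the HDXoverUDP entry mentioned in the troubleshooting tips. A minimal sketch (the line-scan parsing and sample content are illustrative; real ICA files are larger INI-style documents):

```python
def hdx_over_udp_setting(ica_text):
    """Scan ICA launch-file text for the HDXoverUDP entry and return its
    value ('Preferred', 'Off', ...) or None if the entry is missing."""
    for line in ica_text.splitlines():
        key, _, value = line.partition("=")
        if key.strip().lower() == "hdxoverudp":
            return value.strip()
    return None

sample = "[Application]\nAddress=10.0.0.5\nHDXoverUDP=Preferred\n"
setting = hdx_over_udp_setting(sample)   # 'Preferred'
```

A return value of None or 'Off' means Adaptive Transport is not set to Preferred in the Studio policy, or the policy update has not reached the VDA yet.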

Closing words: ICA’s Future

Last but not least … where are we heading? We have the best engineers working on EDT today, across HDX, Receivers and NetScaler, and the roadmap is looking better than ever. But we don’t want to get into trouble with our legal department, so we will give you only a teaser: Enlightened Virtual Channels!
