As Internet use and user expectations grow, it is natural that network and service providers, as well as software developers, all look to provide the best possible experience for their users and customers. However, performance issues, especially those related to transient congestion, tend to have collateral effects: local optimization strategies may not lead to globally optimal network performance for a given activity. Indeed, assumptions that server or client software developers make about network conditions can lead to disastrously wrong choices in managing network traffic if software elsewhere in the network is making different, countervailing assumptions and choices.
This panel will explore some of the different approaches being developed by website, network transport, and server developers, their assumptions about network performance, and the potential for their strategies to collide. Panelists will also elaborate on existing work in measuring, developing, and deploying standards-based transport layer strategies for robustly improving overall performance. Questions to be discussed include:
As the diversity of key pieces of the stack (for example, TCP congestion control algorithms) grows across the Internet, how does this affect reliability and the delivery of consistent performance outcomes for end-user applications?
What can the pressures of working with constrained devices, and the protocol adaptations and revisions being developed for the 'Internet of Things', teach us about improving performance for the Internet in general?
Given the challenges of modeling or simulating the complexity of the Internet, how can we robustly develop and deploy new mechanisms that improve performance and end-user experience?